An Introduction to Artificial Intelligence

When we hear the term Artificial Intelligence, we tend to think first of robots, but AI is broader than that. AI endows the machines we use in our day-to-day lives with intelligence, in the sense that they no longer have to depend entirely on humans for every small decision and acquire some degree of independent decision-making.

The term Artificial Intelligence was coined in 1956 at a conference at Dartmouth College, New Hampshire, convened to discuss the possibility of simulating human intelligence and thinking in computers. Even today there is no single agreed definition of AI, but it is a well-established and steadily growing branch of computer science.

The three main categories of problems AI deals with are mundane, formal and expert problems. Mundane tasks have turned out to be the hardest to simulate because they require common sense and hence a great deal of knowledge; expert tasks, on the other hand, are confined to a specific domain and body of knowledge and are therefore easier for machines to accomplish. These AI problems, and some non-AI problems, are solved using AI techniques such as search, use of knowledge and abstraction.

The basic issues relating to AI technically are Knowledge representation, Reasoning Techniques and Learning.

Knowledge Representation

Intelligence in machines is said to be drawn from knowledge. Knowledge must be acquired, stored and used by the machine in order for it to act intelligently and solve real-world problems. Because this knowledge exists in huge amounts, suitable structures are required to represent, store and search it.

Some well-known knowledge representation techniques are: predicate logic, weak and strong slot-and-filler structures, semantic nets, frames and scripts.

Knowledge is essentially the large body of data the machine is given initially to reach a basic level of intelligence. As the machine works, it must be able to acquire further intelligence from experience (inference), learn from its mistakes, and store these “life lessons” as knowledge so that it can reuse them later when required.

e.g., the following sentences are stored, in no particular order, in the knowledge base:

                           1) Neil Armstrong was the first man on the Moon.

                           2) All men are Mortal.

When represented in predicate logic they look like:

                            1) First-on (Neil Armstrong, Moon)

                            2) ∀x: Man(x) → Mortal(x)
                                
Various heuristic search techniques, such as generate-and-test, hill climbing, best-first search, problem reduction, constraint satisfaction and means-ends analysis, are used to search for the specific information needed in a knowledge base built with any of these representation techniques.
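As a small illustration of one of these techniques, here is a minimal greedy best-first search sketch in Python; the toy graph, heuristic values and goal node are invented for this example and only stand in for a real knowledge base.

    # Greedy best-first search over a toy graph: always expand the node whose
    # heuristic estimate h() looks closest to the goal.
    import heapq

    graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["G"], "E": ["G"], "G": []}
    h = {"A": 4, "B": 3, "C": 2, "D": 1, "E": 1, "G": 0}   # assumed heuristic estimates

    def best_first(start, goal):
        frontier = [(h[start], start, [start])]            # priority queue ordered by h
        visited = set()
        while frontier:
            _, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in visited:
                continue
            visited.add(node)
            for nxt in graph[node]:
                if nxt not in visited:
                    heapq.heappush(frontier, (h[nxt], nxt, path + [nxt]))
        return None

    print(best_first("A", "G"))   # ['A', 'C', 'D', 'G']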

Reasoning Techniques

Reasoning is required to draw inferences from what is already known. It is needed whenever a machine or knowledge system must do something it has not been explicitly told to do. In this case the machine must reason from all the information it holds as knowledge and attempt the task for the first time, thinking entirely by itself. This is truly the first step in displaying intelligence.

e.g. when given the following statements:

                     1) Neil Armstrong was the first man on the Moon.

                     2) All men are Mortal.

Now if we ask: Is Neil Armstrong mortal?

The machine should reason and answer “Yes”, since it is assumed that a specific answer to this question is not already stored in the knowledge base.
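A minimal forward-chaining sketch of that inference in Python; the predicate names and the rule that the first man on the Moon is a man are assumptions made for this illustration, not part of any standard system.

    # Forward chaining: keep applying rules until no new facts can be derived.
    facts = {("First-on", "NeilArmstrong", "Moon")}
    rules = [("First-on", "Man"),     # the first man on the Moon is a man (assumed rule)
             ("Man", "Mortal")]       # all men are mortal

    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for fact in list(facts):
                if fact[0] == premise:
                    new_fact = (conclusion, fact[1])
                    if new_fact not in facts:
                        facts.add(new_fact)
                        changed = True

    print(("Mortal", "NeilArmstrong") in facts)   # True: the answer is inferred, not stored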

Many reasoning techniques are in use today, such as formal reasoning, procedural reasoning, reasoning by analogy, generalization and abstraction, meta-level reasoning, uncertain reasoning and non-monotonic reasoning. One frequently used technique is default reasoning, a form of non-monotonic reasoning.

Reasoning also makes use of many mathematical concepts, such as probability, Bayes' theorem and certainty factors, together with the various search techniques described earlier.

Learning

A machine cannot be called intelligent until it is able to learn to do new things and to adapt to new situations. Machines learn and store this information for future reference, but the main issue is that they must be provided with adequate mechanisms to learn new things from what they already know. The process of learning produces and increases knowledge and improves the behavior and performance of a machine.

Defining learning technically: “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks T, as measured by P, improves with experience E.”

Various learning techniques are: Rote learning, Learning by taking advice, Learning by problem solving, Learning from examples, Decision trees, Reinforcement learning.
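As a small illustration of the last of these, here is a minimal Q-learning (reinforcement learning) sketch on an invented one-dimensional corridor task; the environment, reward and hyperparameters are arbitrary assumptions, chosen only to show performance improving with experience.

    # Q-learning on a 5-state corridor: the agent starts at state 0 and is
    # rewarded for reaching state 4.  Over many episodes the learned policy
    # settles on always moving right.
    import random

    n_states, actions = 5, [-1, +1]                     # move left or right
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    alpha, gamma, epsilon = 0.5, 0.9, 0.2               # learning rate, discount, exploration

    for episode in range(200):
        s = 0
        while s != n_states - 1:
            if random.random() < epsilon:               # occasionally explore
                a = random.choice(actions)
            else:                                       # otherwise act greedily
                a = max(actions, key=lambda act: Q[(s, act)])
            s2 = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
            s = s2

    print([max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)])
    # typically [1, 1, 1, 1]: the agent has learned to head for the reward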

Learning tasks cover a wide range of phenomena: skill refinement, knowledge acquisition, taking advice, problem solving, discovery and so on. Computers (like humans) require vision, image processing, speech recognition, sensorimotor control and natural language understanding in order to learn, and adequate hardware and software for these must be provided.

e.g., a robot that must move from one room to another requires vision to see, image processing to locate the door, and sensorimotor co-ordination to move through it.

Applications of AI

1. Game playing:
Game playing is a field which demonstrates several aspects of intelligence, particularly the ability to plan (at both the immediate tactical level and the long-term strategic level) and the ability to learn. Perhaps the best example is IBM's chess supercomputer Deep Blue, which defeated the then world chess champion Garry Kasparov in 1997.

2. Technological advancements:
Present-day computers are very fast and are on the way to the next (i.e., fifth) generation of computing. AI is the doorway to that new technological world, in which computers no longer need humans for the minutest details and free humans for the jobs that really need their attention, reducing the technical burden on people.

3. Medical improvements:
Computers that take care of human health, from both inside and outside the body, have already been developed and are constantly evolving. These systems assess the patient's condition, diagnose it and even suggest a prescription, e.g., MYCIN.

4. Many more:
Computers that determine molecular structures (e.g., DENDRAL); computers that help in editing, space programs, real-life graphics and military simulations; and work in neural networks and fuzzy logic, natural language processing, image processing, computer vision and speech understanding.

Conclusion

The anticipation of fifth-generation computers, whose domain is Artificial Intelligence, leads us to contemplate a future in which mankind and intelligent machines work side by side with comparable intelligence. We should not ignore the possibility that such machines could one day think for themselves, outside human control; this is far-fetched at present, but it cannot be ruled out completely.

On the other side of the coin, AI machines are becoming increasingly helpful in many crucial areas where people have lacked advice or help. They share the human workload in both its physical and mental aspects, lighten the work of human experts, and help bridge the gap between people and their machines. There is hardly any area where AI is not applicable, and it is rapidly gathering pace.

Either way, AI is perhaps the most researched and most awaited generation of computing, and the demand for intelligence in machines, for both human convenience and human need, is here to stay.


 

Technical Paper on ATM Networks

Asynchronous Transfer Mode (ATM) networks are true multi-service networks with the potential to offer broadband services. This transfer mode is considered to be the ground on which BISDN is built. Public ATM networks will mainly be used as broadband backbone networks to support leased lines, IP traffic, transit telephony and so on, since a common network infrastructure for the provision of multiple services is a very cost-effective solution. This paper mainly highlights the transmission and switching of information in a network; ATM switching combines aspects of time-division multiplexing and packet switching.

Introduction

The asynchronous transfer mode is considered to be the ground on which BISDN is built; ATM is the transfer mode for implementing BISDN. The term "transfer" comprises both transmission and switching aspects, so a transfer mode is a specific way of transmitting and switching information in a network. The service of an ATM network is the transport and routing of ATM cells, that is, their multiplexing, transmission and switching. ATM implementations serve as backbone networks, mainly for data communication. The merit of an ATM backbone for the network operator is that a single common network infrastructure can be deployed flexibly to support all existing and future services.

One problem with other protocols that implement virtual connections is that some time slots are wasted if no data is being transmitted. ATM avoids this by dynamically allocating bandwidth to traffic on demand, which means greater utilization of bandwidth and better capacity to handle heavy-load situations. When an ATM connection is requested, details concerning the connection are specified, which allow decisions to be made about the route and the handling of the data. Typical details are the type of traffic (video requires higher priority), the destination, peak and average bandwidth requirements (which the network can use to estimate resources and cost structures), a cost factor (which allows the network to choose a route that fits within the cost structure) and other parameters.

ATM is the technique used in BISDN (Broadband Integrated Services Digital Network). BISDN is an ITU term denoting that ATM-based networks should be embedded in the ISDN environment. ISDN refers to a form of network which integrates voice, data and images in digitized form; the ISDN connection is digital end to end.

Asynchronous Systems

Asynchronous systems send data bytes between the sender and receiver by packaging the data in an envelope. This envelope carries each character across the transmission link that separates the sender and receiver: the transmitter creates the envelope, and the receiver uses it to extract the data. Each character (data byte) the sender transmits is preceded by a start bit and followed by a stop bit; these extra bits serve to synchronize the receiver with the sender. This section briefly discusses the differences between the two main methods of serial transmission, asynchronous and synchronous. A protocol establishes a means of communication between two systems; as long as the sender and receiver use the same protocol, information can be exchanged reliably between them.

We shall look at two common protocols used in serial data communications: the first is known as asynchronous, the second as synchronous. In asynchronous serial transmission, each character is packaged in an envelope and sent across a single wire, bit by bit, to the receiver. Because no signal lines are used to convey clock (timing) information, this method groups the data into a sequence of bits (five to eight), prefixed with a start bit and followed by a stop bit. The receiver and sender are re-synchronized each time a character arrives. This method of transmission is suitable for low speeds, less than roughly 32,000 bits per second. The transmitted signal carries no information that can be used to verify it was received without modification; in other words, the method includes no error-detection information and is susceptible to errors.

In addition, two extra bits are sent for every character. The asynchronous protocol evolved early in the history of telecommunications and became popular with the invention of the early teletypewriters used to send telegrams around the world. In asynchronous systems, each data byte sent between sender and receiver is preceded by a start bit and followed by a stop bit; these extra bits serve to synchronize the receiver with the sender, and their transmission (two per byte) reduces data throughput. Synchronization is achieved for each character only: when the sender has no data to transmit, the line is idle and the sender and receiver are not in synchronization. Asynchronous protocols are therefore suited to low-speed data communications.
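A quick back-of-the-envelope sketch of the framing overhead just described; the 9600 bit/s line rate is an arbitrary assumption.

    # Asynchronous framing: 1 start bit + 8 data bits + 1 stop bit = 10 line bits per character.
    line_rate = 9600                           # bits per second on the wire (assumed)
    bits_per_char = 1 + 8 + 1                  # start + data + stop
    print(line_rate / bits_per_char)           # 960.0 characters per second
    print(8 / bits_per_char)                   # 0.8 efficiency, i.e. 20% framing overhead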

ATM Transmission

ATM breaks data into small, fixed-size cells (48 bytes of data plus a 5-byte header). ATM is designed for handling large amounts of data across long distances using a high-speed backbone approach. ATM cell header error control (HEC) is a physical-layer function: every ATM cell transmitter calculates the HEC value across the first four octets of the cell header, and the HEC code is capable of correcting single-bit errors.
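A sketch of how a transmitter might compute the HEC byte, assuming the standard CRC-8 generator polynomial x^8 + x^2 + x + 1 and the final XOR with 0x55 specified in ITU-T I.432; the example header bytes are made up.

    # HEC: CRC-8 over the first four header octets, then XORed with 0x55.
    def atm_hec(header4):
        crc = 0
        for byte in header4:
            crc ^= byte
            for _ in range(8):                 # process the octet bit by bit, MSB first
                crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
        return crc ^ 0x55

    print(hex(atm_hec([0x00, 0x00, 0x00, 0x10])))   # HEC for an invented 4-octet header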

The receiver is initialized in correction mode; when a single-bit error is detected it is corrected, otherwise the cell is discarded. This receiver behaviour was chosen to match the error characteristics of fibre-based transmission systems. When designing the architecture of an internetwork, it is important to take the communications requirements into account; this is not just a matter of total traffic, but also of instantaneous demand and user response requirements. ATM technology enables the same lines to be used for voice, data or video communications without pre-allocating exclusive portions of the network to each application. ATM-layer operation and maintenance functions include monitoring the availability of virtual paths and virtual channels and performance monitoring at the VP and VC levels.

The data exchange interface (DXI) is defined by the ATM Forum for connecting non-ATM-capable devices to an ATM network, and the frame-based user-to-network interface (FUNI) was defined for ATM access rates of n×64 kbit/s. The DXI allows data terminal equipment (DTE), such as a router, to be connected via data circuit-terminating equipment (DCE) to an ATM switch; the DXI header contains fields carrying cell loss priority (CLP) and congestion notification (CN) information. The transfer of cells through an ATM network is supported by the generation of cells in a packetizer and by the multiplexing and switching of cells, as described under ATM switching.

ATM was originally conceived by researchers as a hybrid switching technique combining the merits of circuit and packet switching, optimized for real-time traffic. ITU-T originally defined two options for the user-network interface: one based on pure cell multiplexing and the other on SDH (synchronous digital hierarchy).

Switching techniques

Rather than allocating a dedicated circuit for the duration of each call, data is assembled into small packets and statistically multiplexed according to its traffic characteristics. ATM also provides automatic protection switching (APS). In packet switching, connection is made to the public carrier's packet network, a special network that connects users who send data grouped into packets. Packet technology is suited to medium-speed, medium-volume data requirements; it is cheaper than a Datel circuit, but for large volumes it is more expensive than a leased circuit. Special hardware and software are required to packetize the data before transmission and depacketize it on arrival. Packet-switched circuits exist for the duration of the call.

Datel circuits are closely related to leased lines. Here the multiplexing takes place by dividing time between the channels, hence it is known as time-division multiplexing (TDM); the same process is also used as a bus switching technique. TDM comes under the central-memory switching element, and the interconnection network can be realized by a high-speed TDM bus. A multiplexer is a device which shares a communications link between a number of users, by time or frequency division. It is costly to provide a single circuit for each device (terminal): imagine having 200 remote terminals and supplying 200 physical lines, one for each terminal.

Rather than provide a separate circuit for each device, the multiplexer combines each low speed circuit onto a single high speed link. The cost of the single high speed link is less than the required number of low speed links.

In time division, the communications link is subdivided in terms of time. Each sub-circuit is given the channel for a limited amount of time, before it is switched over to the next user, and so on.

Here it can be seen that each sub-channel occupies the entire bandwidth of the channel, but only for a portion of the time. In frequency division multiplexing, each sub-channel is separated by frequency (each channel is allocated part of the bandwidth of the channel).
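As a toy illustration of the time-division idea, the sketch below interleaves three sub-channels onto one link, one unit per time slot; the channel names and contents are invented.

    # Round-robin time-division multiplexing of three sub-channels.
    channels = {"A": list("aaaa"), "B": list("bbbb"), "C": list("cccc")}
    link = []
    for slot in range(4):                      # four TDM frames
        for name in channels:                  # one time slot per channel in each frame
            link.append(channels[name][slot])
    print("".join(link))                       # abcabcabcabc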

The speed or bandwidth of the main link is the sum of the individual channel speeds or bandwidths, so the multiplexer can be thought of as a many-to-one device. An ATM cell packetizer is used in customer networks to convert non-ATM signals to the ATM cell format; this arises especially during the introductory phase of ATM networks, and to this end the terminals have to be provided with the network clock via the access line.

Summary:

Asynchronous Transfer Mode networks are true multi-service networks with the potential to offer broadband services. Asynchronous systems send data bytes between the sender and receiver by packaging the data in an envelope. Another interesting aspect of ATM network implementation is the architectural concept of virtual paths and virtual channels, since leased-line services can then be realized via the VP network and switched connections.

ATM is, in effect, a hybrid switching technique combining the merits of circuit and packet switching, optimized for real-time traffic. The multiplexer combines the low-speed circuits onto a single high-speed link, and in TDM each channel is given a certain amount of time before the link shifts to another user.




    

Technical Paper on Quantum Cryptography

The contest between code-makers and code-breakers has been going on for thousands of years. The purpose of cryptography is to transmit information in such a way that access to it is restricted entirely to the intended recipient, even if the transmission itself is received by others. Key distribution is the main problem in conventional cryptography. Recently, quantum mechanics has made a remarkable entry into this field in the form of quantum cryptography, in which key distribution is carried out using the laws of physics. This paper briefly discusses conventional cryptography.

It also covers the fundamentals of quantum cryptography, illustrates how it works, and discusses its applications and implementations. Quantum cryptography not only ensures secure communication (privacy through uncertainty) but also detects an eavesdropper's presence. This science is of increasing importance with the advent of broadcast and network communication such as electronic transactions, the Internet, e-mail and cell phones. Within a few years this technique could begin encrypting some of the most valuable secrets of government and industry.

Introduction:-

WHAT IS CRYPTOGRAPHY:- Cryptography is a science whose purpose is to transmit information in such a way that access to it is restricted entirely to the intended recipient, even if the transmission itself is received by others.

BACKGROUND:- The concept of cryptography dates back at least as far as the Roman Empire (Julius Caesar). Before the digital age it was used mainly by governments, especially for military purposes.

ITS IMPORTANCE:- Today this science is of increasing importance with the advent of broadcast and network communication, such as business transactions, the Internet, e-mail and mobile phones, where sensitive monetary, business, political and personal communications are transmitted over public channels.

HOW IS IT DONE:- Cryptography operates by the sender scrambling, or encrypting, the original message (the plaintext) in a systematic way that obscures its meaning. The encrypted message, or ciphertext, is transmitted, and the receiver recovers the message by unscrambling, or decrypting, the transmission. In modern cryptography the encryption algorithm itself is public information, and the security lies in the users' knowledge of a secret string of information known as the key. Anyone can make copies of the encrypted message, but only the intended recipient, who possesses the correct key, can unlock the original message from it.

Conventional Cryptography:-

 Existing cryptographic techniques are usually identified as "traditional" or "modern."

  • Traditional techniques date back for centuries, and use operations of coding (use of alternative words or phrases), transposition (reordering of plaintext), and substitution (alteration of plaintext characters). Traditional techniques were designed to be simple, for hand encoding and decoding. By contrast, modern techniques use computers, and rely on extremely long keys, convoluted algorithms, and intractable problems to achieve assurances of security.

  • Most computer encryption systems (Modern) belong in one of two categories:
    • Secret-key encryption
    • Public-key encryption

SECRET KEY ENCRYPTION:- Also referred to as symmetric-key encryption. In symmetric-key encryption each computer has a secret key (code). Symmetric keys require that you know which computers will be talking to each other so that you can install the key on each one; the encryption is essentially a secret code that each of the two computers must know in order to decode the information, and the code provides the key to decoding the message. A k-bit secret key is shared by the two users. To make unauthorized decipherment more difficult, the transformation algorithm can be designed so that each bit of output depends on every bit of the input. With such an arrangement, a 128-bit key gives a choice of about 10^38 possible keys (2^128 ≈ 3.4 × 10^38). Examples: DES, 3DES, RC4, RC5.

Disadvantages:-
  • A large bit key is required for secure communication.
  • The key is subject to interception by hackers.
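The size of the key space mentioned above for a 128-bit key can be checked directly; Python's arbitrary-precision integers make it a one-liner.

    # Number of possible 128-bit keys.
    print(2 ** 128)    # 340282366920938463463374607431768211456, roughly 3.4 x 10^38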

PUBLIC KEY ENCRYPTION:- Public key encryption uses a combination of a private key and a public key. The private key is known only to your computer, while the public key is given by your computer to any computer that wants to communicate securely with it. To decode an encrypted message, a computer must use the public key, provided by the originating computer, and its own private key. Examples: RSA, ECC.

Disadvantages:-
  • Much slower compared to secret key encryption.
  • Ciphertext is much larger than the plaintext.

RSA:-The widely used RSA algorithm is one example of PKC. Anyone wanting to receive a message publishes a key, which contains two numbers. A sender converts a message into a series of digits, and performs a simple mathematical calculation on the series using the publicly available numbers. Messages are deciphered by the recipient by performing another operation, known only to him.

Please refer to the section on cryptographic algorithms later in this document for a more detailed study of the various types.

Key Distribution Problem:-

The main practical problem with secret key encryption is exchanging a secret key. In principle any two users who wished to communicate could first meet to agree on a key in advance, but in practice this could be inconvenient. Other methods for establishing a key, such as the use of secure courier or private knowledge, could be impractical for routine communication between many users. But any discussion of how the key is to be chosen that takes place on a public communication channel could in principle be intercepted and used by an eavesdropper.

One proposed method for solving this key distribution problem is the appointment of a central key distribution server. Every potential communicating party registers with the server and establishes a secret key; the server then relays secure communications between users, but the server itself is vulnerable to attack. This is where quantum cryptography comes into play: quantum key distribution provides a way of agreeing on a secret key without relying on such a trusted intermediary.

Communication at the quantum level changes many of the conventions of both classical secret key and public key communication described above. For example, it is not necessarily possible for messages to be perfectly copied by anyone with access to them, nor for messages to be relayed without changing them in some respect, nor for an eavesdropper to passively monitor communications without being detected .

INTRODUCTION TO QUANTUM CRYPTOGRAPHY:- 

Quantum cryptography is a new field based on quantum mechanics. A quantum cryptography system is a key distribution system that links the security of the system to the correctness of the uncertainty principle of quantum mechanics.

Heisenberg's uncertainty principle:- “The mere act of observing or measuring a particle will ultimately change its behaviour.” The essence of the uncertainty principle of quantum mechanics is twofold. First, any measurement made on a physical system that extracts some information about that system will necessarily disturb that system, albeit possibly in a very small way. Second, any measurement made on a physical system that extracts some information about a certain quantity, call it x, necessarily precludes obtaining information about a conjugate quantity of the same system, call it p.

FUNDAMENTALS OF QUANTUM CRYPTOGRAPHY:- 

To understand the ideas of Quantum Cryptography, we must first discuss some underlying physics.
  • Electromagnetic waves such as light waves can exhibit the phenomenon of polarization, in which the direction of the electric field vibrations is constant or varies in some definite way. A polarization filter is a material that allows only light of a specified polarization direction to pass. If the light is randomly polarized, only half of it will pass a perfect filter.
  • According to quantum theory, light waves are propagated as discrete particles known as photons. A photon is a massless particle, the quantum of the electromagnetic field, carrying energy, momentum, and angular momentum. The polarization of the light is carried by the direction of the angular momentum or spin of the photons. A photon either will or will not pass through a polarization filter, but if it emerges it will be aligned with the filter regardless of its initial state; there are no partial photons.
  • The foundation of quantum cryptography lies in the Heisenberg uncertainty principle, which states that certain pairs of physical properties are related in such a way that measuring one property prevents the observer from simultaneously knowing the value of the other. In particular, when measuring the polarization of a photon, the choice of what direction to measure affects all subsequent measurements. For instance, if one measures the polarization of a photon by noting that it passes through a vertically oriented filter, the photon emerges as vertically polarized regardless of its initial direction of polarization. If one places a second filter oriented at some angle θ to the vertical, there is a certain probability that the photon will pass through the second filter as well, and this probability depends on the angle θ. As θ increases, the probability of the photon passing through the second filter decreases until it reaches 0 at θ = 90 deg (i.e., the second filter is horizontal). When θ = 45 deg, the chance of the photon passing through the second filter is precisely 1/2. This is the same result as a stream of randomly polarized photons impinging on the second filter, so the first filter is said to randomize the measurements of the second.

Polarization by a filter: Unpolarized light enters a vertically aligned filter, which absorbs some of the light and polarizes the remainder in the vertical direction. A second filter tilted at some angle θ absorbs some of the polarized light and transmits the rest, giving it a new polarization.
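The angular behaviour described above matches Malus's law applied to single photons, in which the probability of passing the second filter is cos^2(θ); this small numerical check is an illustration added here, not part of the original text.

    # Transmission probability through a second polarizer at angle theta (degrees).
    import math
    for theta in (0, 30, 45, 60, 90):
        print(theta, round(math.cos(math.radians(theta)) ** 2, 3))
    # 0 -> 1.0, 45 -> 0.5 and 90 -> 0.0, as described above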

  • A pair of orthogonal (perpendicular) polarization states used to describe the polarization of photons, such as horizontal/vertical, is referred to as a basis. A pair of bases are said to be conjugate bases if the measurement of the polarization in the first basis completely randomizes the measurement in the second basis, as in the above example with θ = 45 deg. It is a fundamental consequence of the Heisenberg uncertainty principle that such conjugate pairs of states must exist for a quantum system.

Working:- If a sender, typically designated Alice in the literature, uses a filter in the 0-deg/90-deg basis to give the photon an initial polarization (either horizontal or vertical, but she doesn't reveal which), a receiver Bob can determine this by using a filter aligned to the same basis. However if Bob uses a filter in the 45-deg/135-deg basis to measure the photon, he cannot determine any information about the initial polarization of the photon.

Alice and Bob are equipped with two polarizers each, one aligned with the rectilinear 0-deg/90-deg (or +) basis that will emit - or | polarized photons and one aligned with the diagonal 45-deg/135-deg (or X) basis that will emit \ or / polarized photons. Alice and Bob can communicate via a quantum channel over which Alice can send photons, and a public channel over which they can discuss results. An eavesdropper Eve is assumed to have unlimited computing power and access to both these channels, though she cannot alter messages on the public channel .

Alice begins to send photons to Bob, each one polarized at random in one of the four directions: 0, 45, 90, or 135 deg. As Bob receives each photon, he measures it with one of his polarizers chosen at random. Since he does not know which direction Alice chose for her polarizer, his choice may not match hers. If it does match the basis, Bob will measure the same polarization as Alice sent, but if it doesn't match, Bob's measurement will be completely random. For instance, if Alice sends a photon | and Bob measures with his + polarizer oriented either - or |, he will correctly deduce Alice sent a | photon, but if he measures with his X polarizer, he will deduce (with equal probability) either \ or /, neither of which is what Alice actually sent. Furthermore, his measurement will have destroyed the original polarization.

To eliminate the false measurements from the sequence, Alice and Bob begin a public discussion after the entire sequence of photons has been sent. Bob tells Alice which basis he used to measure each photon, and Alice tells him whether or not it was the correct one. Neither Alice nor Bob announces the actual measurements, only the bases in which they were made. They discard all data for which their polarizers didn't match, leaving (in theory) two perfectly matching strings. They can then convert these into bit strings by agreeing on which photon directions should be 0 and which should be 1.
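A minimal simulation of the sifting procedure just described, assuming an ideal, noise-free channel and no eavesdropper; the photon count and basis labels are arbitrary.

    # BB84 sifting: keep only the positions where Alice's and Bob's bases matched.
    import random

    n = 20
    alice_bits  = [random.randint(0, 1) for _ in range(n)]
    alice_bases = [random.choice("+x") for _ in range(n)]      # rectilinear or diagonal
    bob_bases   = [random.choice("+x") for _ in range(n)]

    # Bob reads the correct bit when the bases match, a random bit otherwise.
    bob_bits = [bit if ab == bb else random.randint(0, 1)
                for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

    key_alice = [bit for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
    key_bob   = [bit for bit, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
    print(key_alice == key_bob)    # True: about half the photons yield shared key bits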

These characteristics provide the principles behind quantum cryptography. If an eavesdropper Eve uses a filter aligned with Alice's filter, she can recover the original polarization of the photon. But if she uses a misaligned filter she will not only receive no information, but will have influenced the original photon so that she will be unable to reliably retransmit one with the original polarization. Bob will either receive no message or a garbled one, and in either case will be able to deduce Eve's presence. A user can suggest a key by sending a series of photons with random polarizations.

This sequence can then be used to generate a sequence of numbers. The process is known as quantum key distribution. If the key is intercepted by an eavesdropper, this can be detected and it is of no consequence, since it is only a set of random bits and can be discarded. The sender can then transmit another key. Once a key has been securely received, it can be used to encrypt a message that can be transmitted by conventional means: telephone, e-mail, or regular postal mail

Illustration of Quantum Key Distribution:- 

A quantum cryptography system allows two people; say Alice and Bob, to exchange a secret key. Alice uses a transmitter to send photons in one of four polarizations: 0, 45, 90 or 135 degrees. Bob uses a receiver to measure each polarization in either the rectilinear basis (0 and 90) or the diagonal basis (45 and 135).



The BB84 system is now one of several types of quantum cryptosystems for key distribution. The basic idea of those cryptosystems is as follows. A sequence of correlated particle pairs is generated, with one member of each pair being detected by each party. An eavesdropper on this communication would have to detect a particle to read the signal, and retransmit it in order for his presence to remain unknown. However, the act of detection of one particle of a pair destroys its quantum correlation with the other, and the two parties can easily verify whether this has been done, without revealing the results of their own measurements, by communication over an open channel. 


Quantum Cryptography Applications:-

  • The genius of quantum cryptography is that it solves the problem of key distribution. Sending a message using photons is straightforward in principle, since one of their quantum properties, namely polarization, can be used to represent a 0 or a 1. Each photon therefore carries one bit of quantum information, which physicists call a qubit. The sender and receiver can easily spot the alterations to these measurements caused by an eavesdropper. Cryptographers cannot use this idea by itself to send private messages, but they can determine, in retrospect, whether the security of a transmission was compromised.
  • Provides absolute security where it is needed. For example:
    • Financial institutions and trading exchanges :- QKD can secure most critical communications
    • Ultra secure point-to-point links :- Generally where a high secure point-to-point communication is needed
  • Using these principles research is being done for high-speed free-space and fiber-optic quantum cryptography implemented via ground-ground, ground-satellite, aircraft-satellite and satellite-satellite links.

Drawbacks:-

  • Distance is limited to only tens of kilometers
  • Since optical fibres are used to transmit the photons, losses occur along the fibre
  • Amplifiers cannot be used as they destroy the qubit state.

Conclusion:-

Quantum cryptography promises to revolutionize secure communication by providing security based on the fundamental laws of physics, instead of the current state of mathematical algorithms or computing technology. 

The advantage of quantum cryptography over traditional key exchange methods is that the exchange of information can be shown to be secure in a very strong sense, without making assumptions about the intractability of certain mathematical problems. The devices for implementing such methods exist and the performance of demonstration systems is being continuously improved. 

Within the next few years, if not months, such systems could start encrypting some of the most valuable secrets of government and industry.


 

A complete guide to understanding Cryptography

What is Cryptography?

Everyone has secrets; some have more than others. When it becomes necessary to transmit those secrets from one point to another, it's important to protect the information while it's in transit. Cryptography presents various methods for taking legible, readable data, and transforming it into unreadable data for the purpose of secure transmission, and then using a key to transform it back into readable data when it reaches its destination.

Predating computers by thousands of years, cryptography has its roots in basic substitution ciphers, which assign each letter of the alphabet a particular value. A simple example is to assign each letter a progressively higher number, where A=1, B=2, and so forth. Using this formula, for example, the word "wiseGEEK", once encrypted, would read "23 9 19 5 7 5 5 11". During World War Two, machines were invented that made the ciphers more complicated and difficult to break, and today computers have made cryptography stronger still.
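The letter-for-number scheme in that example can be reproduced in a couple of lines; it is a toy encoding for illustration, not a usable cipher.

    # A = 1, B = 2, ... applied to each letter of the input.
    def encode(text):
        return " ".join(str(ord(c.upper()) - ord("A") + 1) for c in text if c.isalpha())

    print(encode("wiseGEEK"))    # 23 9 19 5 7 5 5 11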

In data and telecommunications, cryptography is necessary when communicating over any untrusted medium, which includes just about any network, particularly the Internet.

Encryption:

Encryption refers to algorithmic schemes that encode plain text into non-readable form or cyphertext, providing privacy. The receiver of the encrypted text uses a “key” to decrypt the message, returning it to its original plain text form. The key is the trigger mechanism to the algorithm.

There are many types of encryption, and not all of them are reliable. The same computer power that yields strong encryption can be used to break weak encryption schemes. Initially, 64-bit encryption was thought to be quite strong, but today 128-bit encryption is the standard, and this will undoubtedly change again in the future.

Encryption schemes are categorized as being symmetric or asymmetric. Symmetric key algorithms such as Blowfish, AES and DES, work with a single, prearranged key that is shared between sender and receiver. This key both encrypts and decrypts text.

In asymmetric encryption schemes, such as RSA and Diffie-Hellman, the scheme creates a “key pair” for the user: a public key and a private key. The public key can be published online for senders to use to encrypt text that will be sent to the owner of the public key. Once encrypted, the cyphertext cannot be decrypted except by the one who holds the private key of that key pair. This algorithm is based around the two keys working in conjunction with each other. Asymmetric encryption is considered one step more secure than symmetric encryption, because the decryption key can be kept private.

Some methods of cryptography used a "secret key" to allow the recipient to decrypt the message. The most common secret key cryptosystem is the Data Encryption Standard (DES), or the more secure Triple-DES which encrypts the data three times.

Conventional methods to secure data:

  • Controlling access to the computer system or media. For instance, through 'logon' authentication (e.g. via passwords).
  • Employing an access control mechanism (such as profiling)
  • Restricting physical access (e.g. keeping media locked away or preventing access to the computer itself).  

Shortcomings:

  • Conventional access control mechanisms can often be bypassed (for instance via hacking).
  • What if data has to be transmitted, or if the data media (e.g.: floppy disk) has to be moved outside the secure environment?
  • What if a number of people are sharing the computer environment? 

Cryptography (encryption and decryption) is a technique designed to protect your information in ALL such situations.

The Purpose Of Cryptography

Within the context of any application-to-application communication, there are some specific security requirements, including:
  • Authentication: The process of proving one's identity. (The primary forms of host-to-host authentication on the Internet today are name-based or address-based, both of which are notoriously weak.)
  • Privacy/confidentiality: Ensuring that no one can read the message except the intended receiver.
  • Integrity: Assuring the receiver that the received message has not been altered in any way from the original.
  • Non-repudiation: A mechanism to prove that the sender really sent this message.

There are 3 types of Cryptographic Algorithms

Cryptographic Algorithms

1. Secret Key Cryptography

With secret key cryptography, a single key is used for both encryption and decryption. The sender uses the key (or some set of rules) to encrypt the plaintext and sends the ciphertext to the receiver. The receiver applies the same key (or rule set) to decrypt the message and recover the plaintext. Because a single key is used for both functions, secret key cryptography is also called symmetric encryption. Here it is obvious that the key must be known to both the sender and the receiver; that, in fact, is the secret.

Secret key cryptography schemes are generally categorized as being either stream ciphers or block ciphers.

Stream ciphers operate on a single bit (byte or computer word) at a time and implement some form of feedback mechanism so that the key is constantly changing.

A block cipher is so-called because the scheme encrypts one block of data at a time using the same key on each block.

In general, the same plaintext block will always encrypt to the same ciphertext when using the same key in a block cipher whereas the same plaintext will encrypt to different ciphertext in a stream cipher.
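A toy contrast between the two behaviours; both "ciphers" below are deliberately trivial and insecure, chosen only to make the repetition effect visible.

    import itertools

    plaintext = b"ATTACKATDAWNATTACKATDAWN"      # the same 12-byte text twice

    # Stream cipher: XOR with an ever-changing keystream, so repeats disappear.
    keystream = itertools.cycle(b"\x13\x37\x5a\xa5\x0f")
    stream_ct = bytes(p ^ k for p, k in zip(plaintext, keystream))

    # Block "cipher" in ECB style: the same block under the same key always
    # encrypts to the same ciphertext block.
    key = 0x2A
    blocks = [plaintext[i:i + 6] for i in range(0, len(plaintext), 6)]
    block_ct = [bytes(b ^ key for b in blk) for blk in blocks]

    print(stream_ct[:6] == stream_ct[12:18])     # False: identical text, different output
    print(block_ct[0] == block_ct[2])            # True: identical blocks, identical output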

Types of Secret key cryptography algorithms: Secret key cryptography algorithms that are in use today include:
  • Data Encryption Standard (DES): The most common SKC scheme used today, DES was designed by IBM in the 1970s and adopted by the National Bureau of Standards (NBS). DES is a block cipher employing a 56-bit key that operates on 64-bit blocks. DES has a complex set of rules and transformations that were designed specifically to yield fast hardware implementations and slow software implementations.
  • Advanced Encryption Standard (AES): In 1997, NIST initiated a very public, 4-1/2 year process to develop a new secure cryptosystem for U.S. government applications. The Advanced Encryption Standard became the official successor to DES in December 2001. AES uses an SKC scheme called Rijndael, a block cipher. The Rijndael algorithm can use a variable block length and key length, allowing any combination of key lengths of 128, 192 or 256 bits and block lengths of 128, 192 or 256 bits; the AES standard itself fixes the block length at 128 bits.
  • International Data Encryption Algorithm (IDEA): Secret-key cryptosystem written by Xuejia Lai and James Massey, in 1992 and patented by Ascom; a 64-bit SKC block cipher using a 128-bit key. Also available internationally.
  • Blowfish: A symmetric 64-bit block cipher invented by Bruce Schneier; optimized for 32-bit processors with large data caches, it is significantly faster than DES on a Pentium/PowerPC-class machine. Key lengths can vary from 32 to 448 bits in length. Blowfish, available freely and intended as a substitute for DES or IDEA, is in use in over 80 products.
  • Twofish: A 128-bit block cipher using 128-, 192-, or 256-bit keys. Designed to be highly secure and highly flexible, well-suited for large microprocessors, 8-bit smart card microprocessors, and dedicated hardware

 

2. Public-Key Cryptography

Let me give you two simple examples:
  • Multiplication vs. factorization: Suppose I tell you that I have two numbers, 9 and 16, and that I want to calculate the product; it should take almost no time to calculate the product, 144. Suppose instead that I tell you that I have a number, 144, and I need you to tell me which pair of integers I multiplied together to obtain that number. You will eventually come up with the solution, but whereas calculating the product took milliseconds, factoring will take longer, because you first need to find the 8 pairs of integer factors and then determine which one is the correct pair.
  • Exponentiation vs. logarithms: Suppose I tell you that I want to take the number 3 to the 6th power; again, it is easy to calculate 3^6 = 729. But if I tell you that I have the number 729 and want you to tell me the two integers x and y that I used, so that log_x 729 = y, it will take you longer to find all possible solutions and select the pair that I used. (A small brute-force sketch of this asymmetry follows this list.)
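A brute-force sketch of the asymmetry in the first example; the helper function is purely illustrative.

    # Multiplying is one step; recovering the factors means searching.
    def factor_pairs(n):
        return [(a, n // a) for a in range(1, int(n ** 0.5) + 1) if n % a == 0]

    print(9 * 16)              # 144, the easy direction
    print(factor_pairs(144))   # all 8 factor pairs, from which (9, 16) must be picked out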

PKC depends upon the existence of so-called one-way functions, or mathematical functions that are easy to compute whereas their inverse functions are relatively difficult to compute.

Because a pair of keys is required, this approach is also called asymmetric cryptography.

In PKC, one of the keys is designated the public key and may be advertised as widely as the owner wants. The other key is designated the private key and is never revealed to another party. It is straightforward to send messages under this scheme. Suppose Alice wants to send Bob a message: Alice encrypts some information using Bob's public key, and Bob decrypts the ciphertext using his private key. This method can also be used to prove who sent a message. Alice, for example, could encrypt some plaintext with her private key; when Bob decrypts it using Alice's public key, he knows that Alice sent the message, and Alice cannot deny having sent it (non-repudiation).

Public-key cryptography algorithms that are in use today for key exchange or digital signatures include the following (a toy numerical RSA example follows this list):
  • RSA: The first, and still most common, PKC implementation, named for the three MIT mathematicians who developed it — Ronald Rivest, Adi Shamir, and Leonard Adleman. RSA today is used in hundreds of software products and can be used for key exchange, digital signatures, or encryption of small blocks of data. RSA uses a variable size encryption block and a variable size key. The key-pair is derived from a very large number, n, that is the product of two prime numbers chosen according to special rules; these primes may be 100 or more digits in length each, yielding an n with roughly twice as many digits as the prime factors. The public key information includes n and a derivative of one of the factors of n; an attacker cannot determine the prime factors of n (and, therefore, the private key) from this information alone and that is what makes the RSA algorithm so secure. (Some descriptions of PKC erroneously state that RSA's safety is due to the difficulty in factoring large prime numbers. In fact, large prime numbers, like small prime numbers, only have two factors!) The ability for computers to factor large numbers, and therefore attack schemes such as RSA, is rapidly improving and systems today can find the prime factors of numbers with more than 140 digits. The presumed protection of RSA, however, is that users can easily increase the key size to always stay ahead of the computer processing curve. As an aside, the patent for RSA expired in September 2000 which does not appear to have affected RSA's popularity one way or the other. 
  •  Digital Signature Algorithm (DSA): The algorithm specified in NIST's Digital Signature Standard (DSS), provides digital signature capability for the authentication of messages.
  • Public-Key Cryptography Standards (PKCS): A set of interoperable standards and guidelines for public-key cryptography, designed by RSA Data Security Inc.
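As noted above, here is a toy walk-through of the RSA mechanics using the classic small primes 61 and 53; real keys use primes hundreds of digits long, so this is purely illustrative (it needs Python 3.8+ for the modular-inverse form of pow).

    # Toy RSA key generation, encryption and decryption.
    p, q = 61, 53
    n = p * q                      # 3233, part of the public key
    phi = (p - 1) * (q - 1)        # 3120
    e = 17                         # public exponent, coprime with phi
    d = pow(e, -1, phi)            # private exponent (2753), Python 3.8+

    m = 65                         # the message as a number smaller than n
    c = pow(m, e, n)               # encrypt with the public key (e, n)
    print(pow(c, d, n))            # 65: decrypting with d recovers the message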

 

3. Hash Functions:

Hash functions, also called message digests or one-way encryption, are algorithms that, in some sense, use no key. Instead, a fixed-length hash value is computed from the plaintext in such a way that neither the contents nor the length of the plaintext can be recovered from it. Hash algorithms are typically used to provide a digital fingerprint of a file's contents, often to ensure that the file has not been altered by an intruder or virus. Hash functions are also commonly employed by many operating systems to encrypt passwords. Hash functions thus provide a measure of the integrity of a file.
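For instance, a fingerprint can be computed with Python's standard hashlib module; the input strings are arbitrary.

    # A one-byte change in the input produces a completely different digest.
    import hashlib
    print(hashlib.sha256(b"hello world").hexdigest())
    print(hashlib.sha256(b"hello worlD").hexdigest())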

Hash algorithms that are in common use today include:
  • Message Digest (MD) algorithms: A series of byte-oriented algorithms that produce a 128-bit hash value from an arbitrary-length message.
  • Secure Hash Algorithm (SHA): Algorithm for NIST's Secure Hash Standard (SHS). SHA-1 produces a 160-bit hash value. Five algorithms in the SHS: SHA-1 plus SHA-224, SHA-256, SHA-384, and SHA-512 which can produce hash values that are 224, 256, 384, or 512 bits in length, respectively.
  • HAVAL (HAsh of VAriable Length): Designed by Y. Zheng, J. Pieprzyk and J. Seberry, a hash algorithm with many levels of security. HAVAL can create hash values that are 128, 160, 192, 224, or 256 bits in length.
Hash functions are sometimes misunderstood: some sources claim that no two files can have the same hash value, but this is not correct. Consider a hash function that provides a 128-bit hash value. There are, obviously, 2^128 possible hash values, but there are far more than 2^128 possible files; therefore multiple files (in fact, an infinite number of files) can have the same 128-bit hash value.

The difficulty is finding two files with the same hash! What is, indeed, very hard to do is to try to create a file that has a given hash value so as to force a hash value collision — which is the reason that hash functions are used extensively for information security and computer forensics applications.

Applications of Cryptography 


Cryptography is extremely useful; there is a multitude of applications, many of which are currently in use. A typical application of cryptography is a system built out of the basic techniques. Such systems can be of various levels of complexity. Some of the more simple applications are secure communication, identification, authentication, and secret sharing. More complicated applications include systems for electronic commerce, certification, secure electronic mail, key recovery, and secure computer access.

Secure Communication

Secure communication is the most straightforward use of cryptography. Two people may communicate securely by encrypting the messages sent between them. This can be done in such a way that a third party eavesdropping may never be able to decipher the messages. While secure communication has existed for centuries, the key management problem has prevented it from becoming commonplace. Thanks to the development of public-key cryptography, the tools exist to create a large-scale network of people who can communicate securely with one another even if they had never communicated before.

Identification and Authentication

Identification and authentication are two widely used applications of cryptography. Identification is the process of verifying someone's or something's identity. For example, when withdrawing money from a bank, a teller asks to see identification (for example, a driver's license) to verify the identity of the owner of the account. This same process can be done electronically using cryptography. Every automatic teller machine (ATM) card is associated with a ``secret'' personal identification number (PIN), which binds the owner to the card and thus to the account. When the card is inserted into the ATM, the machine prompts the cardholder for the PIN. If the correct PIN is entered, the machine identifies that person as the rightful owner and grants access. Another important application of cryptography is authentication. Authentication is similar to identification, in that both allow an entity access to resources (such as an Internet account), but authentication is broader because it does not necessarily involve identifying a person or entity.

Secret Sharing

Another application of cryptography, called secret sharing, allows the trust of a secret to be distributed among a group of people. For example, in a (k, n)-threshold scheme, information about a secret is distributed in such a way that any k out of the n people (k ≤ n) have enough information to determine the secret, but any set of k-1 people do not. In any secret sharing scheme, there are designated sets of people whose cumulative information suffices to determine the secret. In some implementations of secret sharing schemes, each participant receives the secret after it has been generated. In other implementations, the actual secret is never made visible to the participants, although the purpose for which they sought the secret (for example, access to a building or permission to execute a process) is allowed.
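A minimal sketch of the simplest special case, an n-of-n XOR scheme in which every share is required to recover the secret; a true (k, n) threshold scheme such as Shamir's uses polynomial interpolation instead, and the secret string here is an arbitrary example.

    # n-of-n secret sharing by XOR: n-1 random shares plus one share that
    # XORs with them back to the secret.
    import secrets

    def split(secret, n):
        shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
        last = secret
        for share in shares:
            last = bytes(a ^ b for a, b in zip(last, share))
        return shares + [last]

    def combine(shares):
        out = bytes(len(shares[0]))
        for share in shares:
            out = bytes(a ^ b for a, b in zip(out, share))
        return out

    shares = split(b"launch code", 3)
    print(combine(shares))         # b'launch code'
    print(combine(shares[:2]))     # missing a share: random-looking bytes, not the secret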

Electronic Commerce

Over the past few years there has been a growing amount of business conducted over the Internet - this form of business is called electronic commerce or e-commerce. E-commerce is comprised of online banking, online brokerage accounts, and Internet shopping, to name a few of the many applications. One can book plane tickets, make hotel reservations, rent a car, transfer money from one account to another, buy compact disks (CDs), clothes, books and so on all while sitting in front of a computer. However, simply entering a credit card number on the Internet leaves one open to fraud. One cryptographic solution to this problem is to encrypt the credit card number (or other private information) when it is entered online, another is to secure the entire session. When a computer encrypts this information and sends it out on the Internet, it is incomprehensible to a third party viewer. The web server ("Internet shopping center") receives the encrypted information, decrypts it, and proceeds with the sale without fear that the credit card number (or other personal information) slipped into the wrong hands. As more and more business is conducted over the Internet, the need for protection against fraud, theft, and corruption of vital information increases.

Certification

Another application of cryptography is certification; certification is a scheme by which trusted agents such as certifying authorities vouch for unknown agents, such as users. The trusted agents issue vouchers called certificates which each have some inherent meaning. Certification technology was developed to make identification and authentication possible on a large scale

Key Recovery

Key recovery is a technology that allows a key to be revealed under certain circumstances without the owner of the key revealing it. This is useful for two main reasons: first of all, if a user loses or accidentally deletes his or her key, key recovery could prevent a disaster. Secondly, if a law enforcement agency wishes to eavesdrop on a suspected criminal without the suspect's knowledge (akin to a wiretap), the agency must be able to recover the key. Key recovery techniques are in use in some instances; however, the use of key recovery as a law enforcement technique is somewhat controversial.

Remote Access

Secure remote access is another important application of cryptography. The basic system of passwords certainly gives a level of security for secure access, but it may not be enough in some cases. For instance, passwords can be eavesdropped, forgotten, stolen, or guessed. Many products supply cryptographic methods for remote access with a higher degree of security.


Technical Paper on Brain Machine Interface

“No technology is superior if it tends to overrule human faculty. In fact, it should be the other way around.”

Imagine that you have to control a machine located in a remote area where a human cannot survive for long. In such a situation we can turn to the Brain-Machine Interface (BMI). It is similar to robotics, but it is not exactly a robot: in a robot the interface is between a sensor and a controller, whereas here the interface is between a human and a machine. In present-day wheelchairs, movement is controlled by the patient through a joystick, with only forward, reverse, left and right motions possible. If the patient is paralyzed, however, it is difficult for them to make even these movements. This approach can overcome such a condition.

The main objective of this paper is to interface the human and the machine; by doing this, several objects can be controlled. The paper describes how a human and a machine can be interfaced and surveys research on helping paralyzed people regain control through their minds.

Introduction:

The core idea of this paper is to operate machines from a remote area. In the BMI development system described here, the brain is connected to a client interface node through neural interface nodes. The client interface node is connected to a BMI server, which controls remote robots through a host control.
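e.g. the paper names only the stages of this pipeline, so the following hypothetical Python sketch is just one way to make the data flow concrete; every class name and threshold below is invented.

# Hypothetical sketch of the flow: neural interface -> client node -> BMI server -> host control.
from dataclasses import dataclass

@dataclass
class NeuralSample:
    channel: int        # which microelectrode the spike count came from
    spike_count: int    # activity recorded in one time bin

class ClientInterfaceNode:
    def collect(self, samples):            # gathers raw electrode data
        return [s.spike_count for s in samples]

class BMIServer:
    def decode(self, spike_counts):        # turns activity into a robot command
        return "MOVE_ARM" if sum(spike_counts) > 10 else "HOLD"

class HostControl:
    def drive_robot(self, command):        # forwards the command to the remote robot
        print(f"robot executes: {command}")

samples = [NeuralSample(channel=i, spike_count=i % 4) for i in range(8)]
command = BMIServer().decode(ClientInterfaceNode().collect(samples))
HostControl().drive_robot(command)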

Brain Study:

In the previous research, it has been shown that a rat wired into an artificial neural system can make a robotic water feeder move just by willing it. But the latest work sets new benchmarks because it shows how to process more neural information at a faster speed to produce more sophisticated robotic movements. That the system can be made to work using a primate is also an important proof of principle.

Scientists have used the brain signals from a monkey to drive a robotic arm. As the animal stuck out its hand to pick up some food off a tray, an artificial neural system linked into the animal's head mimicked activity in the mechanical limb.

It was an amazing sight to see the robot in the lab move, knowing that it was being driven by signals from a monkey's brain; it was as if the monkey had a 600-mile- (950-km-) long virtual arm. The rhesus monkeys consciously controlled the movement of a robot arm in real time, using only signals from their brains and visual feedback on a video screen. The animals appeared to operate the robot arm as if it were their own limb. This achievement represents an important step toward technology that could enable paralyzed people to control "neuroprosthetic" limbs, and even free-roaming "neurorobots", using brain signals. Importantly, the technology developed for analyzing brain signals from behaving animals could also greatly improve rehabilitation of people with brain and spinal cord damage from stroke, disease or trauma.

By understanding the biological factors that control the brain's adaptability, clinicians could develop improved drugs and rehabilitation methods for people with such damage. The latest work is the first to demonstrate that monkeys can learn to use only visual feedback and brain signals, without resorting to any muscle movement, to control a mechanical robot arm, including both reaching and grasping movements.

Signal Analysis using Electrodes:

The researchers built a brain-signal recording and analysis system that enabled them to decipher brain signals from monkeys in order to control the movement of a robot arm. In the experiments, arrays of microelectrodes, each smaller than the diameter of a human hair, were implanted into the frontal and parietal lobes of the brains of two female rhesus macaque monkeys: 96 electrodes in one animal and 320 in the other. The researchers reported their technique for implanting arrays of hundreds of electrodes and recording from them over long periods.

The frontal and parietal areas of the brain are chosen because they are known to be involved in producing multiple output commands to control complex muscle movement.

The faint signals from the electrode arrays were detected and analyzed by a computer system trained to recognize the patterns of activity that represented particular movements of an animal's arm.
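e.g. the paper does not say which decoding algorithm was used; a common approach in this literature is a linear filter fitted by least squares, illustrated below in Python on synthetic data (all numbers are made up).

# Hedged illustration: fit a linear map from binned spike counts to 2-D arm position.
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_electrodes = 500, 96

spikes = rng.poisson(5.0, size=(n_bins, n_electrodes)).astype(float)
true_weights = rng.normal(size=(n_electrodes, 2))              # synthetic ground truth
arm_position = spikes @ true_weights + rng.normal(scale=0.5, size=(n_bins, 2))

# Fit the decoder on the first 400 bins, then predict the remaining bins.
W, *_ = np.linalg.lstsq(spikes[:400], arm_position[:400], rcond=None)
predicted = spikes[400:] @ W

error = np.mean(np.linalg.norm(predicted - arm_position[400:], axis=1))
print(f"mean decoding error: {error:.3f}")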

Experiments:

The experiments conducted for Brain-Machine Interface are:

Monkey Experiment:

The goal of the project is to control a hexapod robot (RHEX) using neural signals from monkeys at a remote location. To explore the optimal mapping of cortical signals to Rhex's movement parameters, a model of Rhex's movements has been generated and human arm control is used to approximate cortical control. In preliminary investigations, the objective was to explore different possible mappings, or control strategies, for Rhex. Both kinematic (position, velocity) and dynamic (force, torque) mappings from hand space were explored and optimal control strategies were determined. These mappings will be tested in the next phases of the experiment to ascertain the maximal control capabilities of the prefrontal and parietal cortices.

In the initial phase, output signals from the monkeys' brains were analyzed and recorded as the animals were taught to use a joystick both to position a cursor over a target on a video screen and to grasp the joystick with a specified force. After the animals' initial training, however, the cursor was made more than a simple display – it now incorporated into its movement the dynamics, such as inertia and momentum, of a robot arm functioning in another room. Although the animals' performance initially declined when the robot arm was included in the feedback loop, they quickly learned to allow for these dynamics and became proficient in manipulating the robot-reflecting cursor. The joystick was then removed, after which the monkeys continued to move their arms in mid-air to manipulate and "grab" the cursor, thus controlling the robot arm.

After a series of psychometric tests on human volunteers, the strategy of controlling a model of Rhex using the human hand, described above, was determined to be the easiest to use and the fastest to learn. The flexion/extension of the wrist is mapped to angular velocity, and the linear translation of the hand is mapped to linear (fore/aft) velocity. The monkeys are being trained to use this technique to control a virtual model of Rhex, as sketched below.
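e.g. a hypothetical Python sketch of the mapping just described; only its structure (wrist flexion/extension to angular velocity, hand translation to fore/aft velocity) comes from the text, while the gains and units are invented.

# Hypothetical hand-to-Rhex mapping; gains and units are illustrative only.
def hand_to_rhex_command(wrist_angle_rad: float, hand_translation_m: float):
    ANGULAR_GAIN = 2.0    # rad/s of turning per radian of wrist flexion
    LINEAR_GAIN = 1.5     # m/s of fore/aft speed per metre of hand translation
    angular_velocity = ANGULAR_GAIN * wrist_angle_rad
    linear_velocity = LINEAR_GAIN * hand_translation_m
    return angular_velocity, linear_velocity

print(hand_to_rhex_command(0.3, 0.1))   # wrist flexed, hand pushed forward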

The most amazing result, though, was that after only a few days of playing with the robot in this way, the monkey suddenly realized that it did not need to move its arm at all: the arm muscles went completely quiet, the animal kept its arm at its side, and it controlled the robot arm using only its brain and visual feedback.

Analyses of the brain signals showed that the animal learned to assimilate the robot arm into its brain as if it were its own arm. Importantly, the experiments included both reaching and grasping movements, derived from the same sets of electrodes.

The neurons being recorded could encode different kinds of information. It was surprising to see that the animal could learn to time the activity of these neurons to control different types of parameters sequentially. For example, after using a group of neurons to move the robot to a certain point, the same cells would then produce the force output the animal needed to hold an object.

Analysis of the signals from the animals' brains as they learned revealed that the brain circuitry was actively reorganizing itself to adapt.

Analysis of Outputs:

It was extraordinary to see that when the animal was switched from joystick control to brain control, the physiological properties of the brain cells changed immediately; and when the animal was switched back to joystick control the very next day, the properties changed again.

Such findings tell us that the brain is so adaptable that it can incorporate an external device into its own 'neuronal space' as a natural extension of the body. We see this every day when we use any tool, from a pencil to a car: the brain incorporates the properties of that tool, which makes us proficient in using it. Such evidence of brain plasticity in mature animals and humans stands in sharp contrast to the traditional view that only in childhood is the brain plastic enough to allow for such adaptation.

The finding that their brain-machine interface system can work in animals will have direct application to clinical development of neuroprosthetic devices for paralyzed people.

There is certainly a great deal of science and engineering to be done to develop this technology and to create systems that can be used safely in humans. However, the results so far lead us to believe that these brain-machine interfaces hold enormous promise for restoring function to paralyzed people.

The researchers are already conducting preliminary studies of human subjects, analyzing brain signals to determine whether they correlate with those seen in the animal models. They are also exploring techniques to extend the longevity of the electrodes beyond the two years currently achieved in animal studies, to miniaturize the components, to create wireless interfaces, and to develop the grippers, wrists and other mechanical components of a neuroprosthetic device.

In their animal studies, the researchers are also adding another source of feedback to the system: a small vibrating device placed on the animal's side that will convey a further property of the robot. Beyond the promise of neuroprosthetic devices, the technology for recording and analyzing signals from large electrode arrays in the brain will offer unprecedented insight into brain function and plasticity.

The studies so far show that this approach offers important insights into how the large-scale circuitry of the brain works. Since the researchers have total control of the system, they can, for example, change the properties of the robot arm and watch in real time how the brain adapts.

Brain Machine Interface in Human beings:

The approach of this paper is to control the operations of a robot by means of a human brain, without any physical link.

The brain signals are picked up by electrodes from the frontal and parietal lobes. The signals are conveyed by these electrodes and processed by a unit containing a BMI development system. The brain is connected (i.e., the microelectrodes are attached to the frontal and parietal lobes) to the client interface through neural interface nodes, which in turn are linked to the BMI server that controls the host device.

In present-day wheelchairs, movement is controlled by the patient through a joystick, with only forward, reverse, left and right motions possible. A paralyzed patient, however, may be unable to operate the joystick at all, making even these movements difficult. This technology is therefore a marvellous gift to help them.

Conclusion:

Thus this technology is a boon to the world. Through this adaptation many biomedical difficulties can be overcome, and many of our dreams will come true.
