Technical paper on Cryptography

Introduction

People mean different things when they talk about cryptography. Children play with toy ciphers and secret languages, but these have little to do with real security and strong encryption. Strong encryption is the kind of encryption that can be used to protect information of real value against organized criminals, multinational corporations, and major governments. Strong encryption used to be the exclusive domain of the military; in the information society, however, it has become one of the central tools for maintaining privacy and confidentiality. As we move into an information society, the technological means for global surveillance of millions of individual people are becoming available to major governments. Cryptography has become one of the main tools for privacy, trust, access control, electronic payments, corporate security, and countless other fields. Cryptography is no longer a military matter that should not be meddled with. It is time to de-mystify cryptography and make full use of the advantages it provides for modern society.

There are two kinds of cryptosystems: symmetric and asymmetric. Symmetric cryptosystems use the same key (the secret key) to encrypt and decrypt a message, while asymmetric cryptosystems use one key (the public key) to encrypt a message and a different key (the private key) to decrypt it. Asymmetric cryptosystems are also called public key cryptosystems.

Basic terminology

Suppose that someone wants to send a message to a receiver and wants to be sure that no one else can read it. There is, however, the possibility that someone else opens the letter or eavesdrops on the electronic communication.

In cryptographic terminology, the message is called plaintext or clear text. Encoding the message in such a way that its contents are hidden from outsiders is called encryption. The encrypted message is called the cipher text. The process of retrieving the plaintext from the cipher text is called decryption. Encryption and decryption usually make use of a key, and the coding method is such that decryption can be performed only by someone who knows the proper key.

Cryptography is the art or science of keeping messages secret. Cryptanalysis is the art of breaking ciphers, i.e. retrieving the plaintext without knowing the proper key. People who do cryptography are cryptographers, and practitioners of cryptanalysis are cryptanalysts.

Cryptography deals with all aspects of secure messaging, authentication, digital signatures, electronic money, and other applications. Cryptology is the branch of mathematics that studies the mathematical foundations of cryptographic methods.


Digital signature

Some public-key algorithms can be used to generate digital signatures. A digital signature is a small amount of data that was created using some secret key, and there is a public key that can be used to verify that the signature was really generated using the corresponding private key. The algorithm used to generate the signature must be such that without knowing the secret key it is not possible to create a signature that would verify as valid.

Digital signatures are used to verify that a message really comes from the claimed sender (assuming only the sender knows the secret key corresponding to his/her public key). They can also be used to timestamp documents: a trusted party signs the document and its timestamp with his/her secret key, thus testifying that the document existed at the stated time.

Digital signatures can also be used to testify (or certify) that a public key belongs to a particular person. This is done by signing the combination of the key and the information about its owner with a trusted key. The resulting data (the digital signature by the third party, the public key, and the information about the owner of the public key) is often called a certificate.

The reason for trusting that third party key may again be that it was signed by another trusted key. Eventually some key must be a root of the trust hierarchy (that is, it is not trusted because it was signed by somebody, but because you believe a priori that the key can be trusted). In a centralized key infrastructure there are very few roots in the trust network (e.g., trusted government agencies; such roots are also called certification authorities). In a distributed infrastructure there need not be any universally accepted roots, and each party may have different trusted roots (such as the party's own key and any keys signed by it). This is the web of trust concept used in e.g. PGP.

A digital signature of an arbitrary document is typically created by computing a message digest from the document and concatenating it with information about the signer, a timestamp, and so on. The resulting string is then encrypted with the private key of the signer using a suitable algorithm. The resulting encrypted block of bits is the signature. It is often distributed together with information about the public key that was used to sign it. To verify a signature, the recipient first determines whether it trusts that the key belongs to the person it is supposed to belong to (using the web of trust or a priori knowledge), and then decrypts the signature using the public key of the person. If the signature decrypts properly and the information matches that of the message (proper message digest etc.), the signature is accepted as valid.
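
For illustration only (not part of the original text), the following Python sketch shows this sign-then-verify flow using the third-party "cryptography" package; the 2048-bit RSA key, SHA-256 digest and PSS padding are assumptions chosen for the example.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.exceptions import InvalidSignature

    # Generate a key pair for the signer (2048-bit RSA is an assumption for this sketch).
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    document = b"An arbitrary document to be signed."

    # Sign: the library hashes the document (SHA-256 here) and transforms the padded
    # digest with the private key.
    signature = private_key.sign(
        document,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

    # Verify: anyone holding the public key can check the signature; a tampered
    # document or the wrong key raises InvalidSignature.
    try:
        public_key.verify(
            signature,
            document,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        print("signature valid")
    except InvalidSignature:
        print("signature INVALID")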

Cryptographic hash functions

Cryptographic hash functions are used in various contexts, for example to compute the message digest when making a digital signature. A hash function compresses the bits of a message to a fixed-size hash value in a way that distributes the possible messages evenly among the possible hash values. A cryptographic hash function does this in a way that makes it extremely difficult to come up with a message that would hash to a particular hash value.

Cryptographic hash functions typically produce hash values of 128 or more bits. This number (2^128 possible values) is vastly larger than the number of different messages likely to ever be exchanged in the world. The reason for requiring more than 128 bits is based on the birthday paradox. The birthday paradox roughly states that given a hash function mapping any message to a 128-bit hash digest, we can expect that the same digest will be computed twice when 2^64 randomly selected messages have been hashed. As cheaper memory chips for computers become available, it may become necessary to require larger than 128-bit message digests (such as 160 bits, which has become standard recently).
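
As a quick sanity check of these numbers, a couple of lines of Python reproduce the birthday bound (illustrative arithmetic only):

    digest_bits = 128
    # Birthday bound: a collision is expected after roughly 2^(n/2) random messages.
    print(2 ** (digest_bits // 2))   # 18446744073709551616, i.e. about 1.8e19 messages

    # With a 160-bit digest the bound rises to about 2^80 hashed messages.
    print(2 ** (160 // 2))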


Cryptographic random number generator

Cryptographic random number generators generate random numbers for use in cryptographic applications, such as for keys. Conventional random number generators available in most programming languages or programming environments are not suitable for use in cryptographic applications (they are designed for statistical randomness, not to resist prediction by cryptanalysts).

In the optimal case, random numbers are based on true physical sources of randomness that cannot be predicted. Such sources may include the noise from a semiconductor device, the least significant bits of an audio input, or the intervals between device interrupts or user keystrokes. The noise obtained from a physical source is then "distilled" by a cryptographic hash function to make every bit depend on every other bit. Quite often a large pool (several thousand bits) is used to contain randomness, and every bit of the pool is made to depend on every bit of input noise and every other bit of the pool in a cryptographically strong way.

When true physical randomness is not available, pseudo-random numbers must be used. This situation is undesirable, but often arises on general-purpose computers. It is always desirable to obtain some environmental noise - even from device latencies, resource utilization statistics, network statistics, keyboard interrupts, or whatever. The point is that the data must be unpredictable for any external observer; to achieve this, the random pool must contain at least 128 bits of true entropy.

Cryptographic pseudo-random number generators typically have a large pool ("seed value") containing randomness. Bits are returned from this pool by taking data from the pool, optionally running the data through a cryptographic hash function to avoid revealing the contents of the pool. When more bits are needed, the pool is stirred by encrypting its contents with a suitable cipher using a random key (which may be taken from an unreturned part of the pool) in a mode that makes every bit of the pool depend on every other bit of the pool. New environmental noise should be mixed into the pool before stirring, to make predicting previous or future values even harder.
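
A minimal Python sketch of such a hash-based pool is given below. It is purely illustrative (real applications should rely on the operating system's generator, e.g. os.urandom or the secrets module); the SHA-256 "stirring" and rekeying scheme are assumptions made for this example.

    import hashlib
    import os
    import time

    class ToyEntropyPool:
        # Illustrative hash-based randomness pool; NOT for production use.
        def __init__(self):
            self.pool = os.urandom(64)              # initial seed from the operating system

        def mix(self, noise: bytes):
            # Stir noise into the pool; every pool bit ends up depending on all input so far.
            self.pool = hashlib.sha256(self.pool + noise).digest()

        def get_bytes(self, n: int) -> bytes:
            out = b""
            counter = 0
            while len(out) < n:
                # Hash the pool with a counter so the pool contents are never revealed directly.
                out += hashlib.sha256(self.pool + counter.to_bytes(8, "big")).digest()
                counter += 1
            # Re-stir so earlier outputs cannot be reconstructed from a later pool state.
            self.mix(b"rekey")
            return out[:n]

    pool = ToyEntropyPool()
    pool.mix(str(time.perf_counter_ns()).encode())  # example of environmental noise
    key_material = pool.get_bytes(16)               # 128 bits of key material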

Even though cryptographically strong random number generators are not very difficult to build if designed properly, they are often overlooked. The importance of the random number generator must thus be emphasized: if done badly, it will easily become the weakest point of the system.

Strength of cryptographic algorithms

Good cryptographic systems should always be designed so that they are as difficult to break as possible. It is possible to build systems that cannot be broken in practice (though this cannot usually be proved). This does not significantly increase system implementation effort; however, some care and expertise is required. There is no excuse for a system designer to leave the system breakable. Any mechanisms that can be used to circumvent security must be made explicit, documented, and brought to the attention of the end users.

In theory, any cryptographic method with a key can be broken by trying all possible keys in sequence. If using brute force to try all keys is the only option, the required computing power increases exponentially with the length of the key. A 32-bit key takes 2^32 (about 10^9) steps; this is something anyone can do on a home computer. A system with 40-bit keys takes 2^40 steps; this kind of computation requires something like a week (depending on the efficiency of the algorithm) on a modern home computer. A system with 56-bit keys takes a substantial effort (with a large number of home computers in a distributed effort, it has been shown to take just a few months), but is easily breakable with special hardware. The cost of the special hardware is substantial but easily within reach of organized criminals, major companies, and governments. Keys with 64 bits are probably breakable now by major governments, and will be within reach of organized criminals, major companies, and lesser governments in a few years. Keys with 80 bits appear good for a few years, and keys with 128 bits will probably remain unbreakable by brute force for the foreseeable future. Even larger keys are sometimes used.

However, key length is not the only relevant issue. Many ciphers can be broken without trying all possible keys. In general, it is very difficult to design ciphers that could not be broken more effectively using other methods. Designing your own ciphers may be fun, but it is not recommended for real applications unless you are a true expert and know exactly what you are doing.
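
To put these numbers in perspective, the following illustrative Python snippet estimates exhaustive-search times under the purely hypothetical assumption of one billion key trials per second:

    # Rough brute-force estimates, assuming (hypothetically) 1e9 key trials per second.
    TRIALS_PER_SECOND = 1e9
    SECONDS_PER_YEAR = 3600 * 24 * 365

    for bits in (32, 40, 56, 64, 80, 128):
        years = 2 ** bits / TRIALS_PER_SECOND / SECONDS_PER_YEAR
        print(f"{bits}-bit key: about {years:.1e} years to search the full key space")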

One should generally be very wary of unpublished or secret algorithms. Quite often the designer is then not sure of the security of the algorithm, or its security depends on the secrecy of the algorithm. Generally, no algorithm that depends on the secrecy of the algorithm is secure. Particularly in software, anyone can hire someone to disassemble and reverse-engineer the algorithm. Experience has shown that the vast majority of secret algorithms that have become public knowledge later have been pitifully weak in reality.

The key lengths used in public-key cryptography are usually much longer than those used in symmetric ciphers. This is caused by the extra structure that is available to the cryptanalyst: the problem is not that of guessing the right key, but of deriving the matching private key from the public key. In the case of RSA, this could be done by factoring a large integer that has two large prime factors. In the case of some other cryptosystems it is equivalent to computing the discrete logarithm modulo a large integer (which is believed to be roughly comparable to factoring when the modulus is a large prime number). There are public key cryptosystems based on yet other problems.

To give some idea of the complexity, for the RSA cryptosystem a 256-bit modulus is easily factored at home, and university research groups can break 512-bit keys within a few months. Keys with 768 bits are probably not secure in the long term. Keys with 1024 bits and more should be safe for now unless major cryptanalytic advances are made against RSA; keys of 2048 bits are considered by many to be secure for decades.


Analysis of and attacks on cryptographic systems

Cryptanalysis is the art of deciphering encrypted communications without knowing the proper keys. There are many cryptanalytic techniques. Some of the more important ones for a system implementer are described below.

Ciphertext-only attack: This is the situation where the attacker does not know anything about the contents of the message, and must work from the cipher text alone. In practice it is quite often possible to make guesses about the plaintext, as many types of messages have fixed-format headers. Even ordinary letters and documents begin in a very predictable way. Many classical attacks use frequency analysis of the cipher text; however, this does not work well against modern ciphers.

Known-plaintext attack: The attacker knows or can guess the plaintext for some parts of the cipher text. The task is to decrypt the rest of the cipher text blocks using this information. This may be done by determining the key used to encrypt the data, or via some shortcut.

One of the best-known modern known-plaintext attacks is linear cryptanalysis against block ciphers.

Chosen-plaintext attack: The attacker is able to have any text he likes encrypted with the unknown key. The task is to determine the key used for encryption.

A good example of this attack is differential cryptanalysis, which can be applied against block ciphers (and in some cases also against hash functions).

Man-in-the-middle attack: This attack is relevant for cryptographic communication and key exchange protocols. The idea is that when two parties, A and B, are exchanging keys for secure communication, an adversary positions himself between A and B on the communication line. The adversary then intercepts the signals that A and B send to each other, and performs a key exchange with A and B separately. A and B will end up using different keys, each of which is known to the adversary. The adversary can then decrypt any communication from A with the key he shares with A, and then resend the communication to B by encrypting it again with the key he shares with B. Both A and B will think that they are communicating securely, but in fact the adversary is hearing everything.

The usual way to prevent the man-in-the-middle attack is to use a public key cryptosystem capable of providing digital signatures. For this to work, the parties must know each other's public keys in advance. After the shared secret has been generated, the parties send digital signatures of it to each other. The man-in-the-middle can attempt to forge these signatures, but fails because he cannot fake them without knowing the corresponding private keys.

Correlation between the secret key and the output of the cryptosystem is the main source of information to the cryptanalyst. In the easiest case, the information about the secret key is directly leaked by the cryptosystem. More complicated cases require studying the correlation (basically, any relation that would not be expected on the basis of chance alone) between the observed (or measured) information about the cryptosystem and the guessed key information.

For example, in linear attacks against block ciphers the cryptanalyst studies the known plaintext and the observed cipher text. By guessing some of the key bits of the cryptosystem, the analyst uses the correlation between the plaintext and the cipher text to determine whether the guess was correct. This can be repeated, and has many variations.

Differential cryptanalysis, introduced by Eli Biham and Adi Shamir in the late 1980s, was the first attack that fully utilized this idea against block ciphers (especially against DES). Later Mitsuru Matsui came up with linear cryptanalysis, which was even more effective against DES. More recently, new attacks using similar ideas have been developed.

Perhaps the best introduction to this material is the proceedings of EUROCRYPT and CRYPTO throughout the 1990s, where Mitsuru Matsui's discussion of linear cryptanalysis of DES and Lars Knudsen's ideas on truncated differentials (used, for example, in IDEA cryptanalysis) can be found. The book by Eli Biham and Adi Shamir about the differential cryptanalysis of DES is the "classical" work on this subject.

The correlation idea is fundamental to cryptography, and several researchers have tried to construct cryptosystems that are provably secure against such attacks. For example, Knudsen and Nyberg have studied provable security against differential cryptanalysis.

Attacks against or using the underlying hardware: In the last few years, as more and more small mobile crypto devices have come into widespread use, a new category of attacks has become relevant: attacks that aim directly at the hardware implementation of the cryptosystem.

These attacks use very fine measurements of the crypto device while it is, say, encrypting, and compute key information from these measurements. The basic ideas are closely related to those in other correlation attacks: the attacker guesses some key bits and attempts to verify the correctness of the guess by studying correlation against her measurements.

Several such attacks have been proposed, using careful timings of the device, fine measurements of its power consumption, and radiation patterns. These measurements can be used to obtain the secret key or other kinds of information stored on the device.

Faults in cryptosystems can lead to cryptanalysis and even the discovery of the secret key. Interest in cryptographic devices led to the discovery that some algorithms behave very badly when small faults are introduced into the internal computation.

For example, the usual implementation of RSA private-key operations is very susceptible to fault attacks. It has been shown that causing one bit of error at a suitable point can reveal the factorization of the modulus (i.e. it reveals the private key).

Quantum computing: Peter Shor's paper on polynomial-time factoring and discrete logarithm algorithms with quantum computers has caused growing interest in quantum computing. Quantum computing is a recent field of research that uses quantum mechanics to build computers that are, in theory, more powerful than modern serial computers. The power is derived from the inherent parallelism of quantum mechanics. So instead of doing tasks one at a time, as serial machines do, quantum computers can perform them all at once. Thus it is hoped that with quantum computers we can solve problems infeasible with serial machines.

Shor's results imply that if quantum computers could be implemented effectively, then most of public key cryptography would become history. However, they are much less effective against secret key cryptography.

The current state of the art in quantum computing does not appear alarming, as only very small machines have been implemented. The theory of quantum computation promises much better performance than serial computers; however, whether it will be realized in practice is an open question.

Quantum mechanics is also a source for new ways of data hiding and secure communication with the potential of offering unbreakable security; this is the field of quantum cryptography. Unlike quantum computing, many successful experimental implementations of quantum cryptography have been already achieved. However, quantum cryptography is still some way off from being realized in commercial applications.

Algorithms


Public key

Public key cryptosystems were invented in the late 1970s, with some help from the development of complexity theory around that time. It was observed that, by basing a cryptosystem on a computational problem so difficult that it would need thousands of years to solve, one could, with some luck, develop a cryptosystem with two keys, a private key and a public key. With the public key one could encrypt messages, and decrypt them with the private key. Thus the owner of the private key would be the only one who could decrypt the messages, but anyone knowing the public key could send them in privacy.

Another idea that was observed was that of key exchange: in a two-party communication it would be useful to generate a common secret key for bulk encryption using a secret key cryptosystem (e.g. some block cipher).

Indeed, Whitfield Diffie and Martin Hellman used ideas from number theory to construct a key exchange protocol that started the era of public key cryptosystems. Shortly after that, Ron Rivest, Adi Shamir and Leonard Adleman developed a cryptosystem that was the first real public key cryptosystem capable of encryption and digital signatures.
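
The following toy Python sketch shows the shape of such a Diffie-Hellman style key exchange with textbook-sized parameters; it is illustrative only, since real deployments use primes of 2048 bits or more.

    import secrets

    # Textbook-sized public parameters, chosen only for illustration.
    p = 23   # public prime modulus
    g = 5    # public generator

    a = secrets.randbelow(p - 2) + 1     # A's secret exponent
    b = secrets.randbelow(p - 2) + 1     # B's secret exponent

    A = pow(g, a, p)                     # A sends g^a mod p over the open channel
    B = pow(g, b, p)                     # B sends g^b mod p over the open channel

    shared_a = pow(B, a, p)              # A computes (g^b)^a mod p
    shared_b = pow(A, b, p)              # B computes (g^a)^b mod p
    assert shared_a == shared_b          # both now hold the same shared secret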

Later, several public key cryptosystems followed, using many different underlying ideas (e.g. knapsack problems, different groups over finite fields, and lattices). Many of them were soon proven insecure. However, the Diffie-Hellman protocol and RSA appear to have remained two of the strongest up to now.

Terminology

The basic ingredient in any public key cryptosystem is a difficult computational problem. The security of the cryptosystem is based on the fact that the private key can be computed from the public key only by solving this difficult problem. We now introduce some relevant terminology used in public key cryptography.

Practical cryptosystems

The wide interest in public key cryptography has produced several practically important cryptosystems. In the following they are listed according to the underlying problem. As a basic guideline, a public key cryptosystem is built from a difficult problem as follows: take a difficult problem (for example, an NP-hard one) for which you can find an instance that can be solved in polynomial time. To encrypt a message, convert the message into such an easy instance of the difficult problem, and then use the public key to convert the easy problem into a difficult one. The result is then sent to the recipient through an insecure channel. To decrypt, use the private key to convert the difficult problem into the easy one and solve it to regain the message. All public key systems use this principle, although they differ significantly in the details (such as the underlying problem or the structure of public and private keys).

Factorization: RSA, Rabin

RSA (Rivest-Shamir-Adleman) is the most commonly used public key algorithm. It can be used both for encryption and for digital signatures. The security of RSA is generally considered equivalent to factoring, although this has not been proved.

RSA computation takes place with integers modulo n = p * q, for two large secret primes p and q. To encrypt a message m, it is exponentiated with a small public exponent e. For decryption, the recipient of the ciphertext c = m^e (mod n) computes the multiplicative inverse d = e^(-1) (mod (p-1)*(q-1)) (we require that e is selected suitably for it to exist) and obtains c^d = m^(e*d) = m (mod n). The private key consists of n, p, q, e, d (where p and q can be forgotten); the public key consists only of n and e. The problem for the attacker is that computing the inverse d of e is assumed to be no easier than factoring n.
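
A toy Python example with textbook-sized primes makes the arithmetic concrete (illustrative only; real moduli are 1024 bits or more):

    # Toy RSA with tiny primes, for illustration only.
    p, q = 61, 53
    n = p * q                      # 3233, the public modulus
    phi = (p - 1) * (q - 1)        # 3120
    e = 17                         # public exponent, chosen coprime to phi
    d = pow(e, -1, phi)            # private exponent: inverse of e mod phi (2753, Python 3.8+)

    m = 65                         # a "message" encoded as an integer smaller than n
    c = pow(m, e, n)               # encryption: c = m^e mod n
    assert pow(c, d, n) == m       # decryption: c^d = m^(e*d) = m mod n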

The key size (the size of the modulus) should be greater than 1024 bits (i.e. it should be of magnitude 10^300) for a reasonable margin of security. Keys of, say, 2048 bits should give security for decades. RSA is currently the most important public key algorithm. It was patented in the United States (the patent expired in the year 2000).

The Rabin cryptosystem may be seen as a relative of RSA, although it has a quite different decoding process. What makes it interesting is that breaking Rabin is provably equivalent to factoring.

Rabin uses the exponent 2 (or any even integer) instead of odd integers like RSA. This has two consequences. First, the Rabin cryptosystem can be proven to be equivalent to factoring; second, the decryption becomes more difficult - at least in some sense. The latter is due to problems in deciding which of the possible outcomes of the decryption process is correct.

As breaking Rabin is equivalent to factoring the modulus, the size of the modulus is the most important security parameter. A modulus of more than 1024 bits is assumed to be secure.

Secret key

Secret key algorithms use the same key for both encryption and decryption (or one is easily derivable from the other). This is the more straightforward approach to data encryption; it is mathematically less complicated than public key cryptography and has been used for many centuries.

The following terminology is often used when symmetric ciphers are examined.

S-boxes: lookup tables that map n bits to m bits (where n and m often are equal).

There are several ways of constructing good S-boxes for ciphers, as well as several ways of measuring them. Some designers use a rigorous mathematical approach by using bent functions (or related) to create S-boxes, which can be proven to be resistant against some particular attacks. Other designers use heuristic approaches, which may lead to S-boxes that are more difficult to handle in mathematical proofs, but can have additional benefits (such as several different implementation options).

The S-box may even be the only non-linear part of the cipher. This is the case in the block cipher DES, where the S-boxes may thus be regarded as the single most important part of the algorithm. In fact, many consider DES's S-boxes so good that they use them in their own designs (for example, Serpent).

Feistel networks: a Feistel network is a general method of constructing block ciphers from simple functions. The original idea was used in the block cipher Lucifer, invented by Horst Feistel. Several variations have been devised from the original version.

Put simply, the standard Feistel network takes a function from n bits to n bits and produces an invertible function from 2n bits to 2n bits. The basic function upon which the structure is based is often called the round function. The essential property of Feistel networks that makes them so useful in cipher design is that the round function need not be invertible, but the resulting function always is.
If the round function depends on, say, k bits of a key, then the Feistel cipher requires rk bits of the key where r is the number of rounds used.
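
The following Python sketch illustrates the construction with a deliberately non-invertible round function (a keyed hash, chosen only for this example); decryption still works because the Feistel structure itself is invertible.

    import hashlib

    def round_function(half: int, round_key: bytes) -> int:
        # A non-invertible round function: hash the 32-bit half-block with the round key.
        digest = hashlib.sha256(half.to_bytes(4, "big") + round_key).digest()
        return int.from_bytes(digest[:4], "big")

    def feistel_encrypt(block: int, round_keys) -> int:
        left, right = block >> 32, block & 0xFFFFFFFF
        for k in round_keys:
            left, right = right, left ^ round_function(right, k)
        return (left << 32) | right

    def feistel_decrypt(block: int, round_keys) -> int:
        left, right = block >> 32, block & 0xFFFFFFFF
        for k in reversed(round_keys):
            left, right = right ^ round_function(left, k), left
        return (left << 32) | right

    keys = [b"k1", b"k2", b"k3", b"k4"]   # toy round keys; a real key schedule derives these
    ct = feistel_encrypt(0x0123456789ABCDEF, keys)
    assert feistel_decrypt(ct, keys) == 0x0123456789ABCDEF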

The security of the Feistel structure is not obvious, but analysis of DES has shown that it is a good way to construct ciphers. A Feistel cipher must have enough rounds, but just adding more rounds does not always guarantee security.

The operation of taking the user key and expanding it into rk bits for the Feistel rounds is called key scheduling. This is often a non-linear operation, so that finding out any of the rk bits of the round keys does not directly provide any information about the actual user key. Many ciphers have this basic structure: Lucifer, DES, and Twofish, just to name a few.

Expansion, Permutation: these are common tools in mixing bits in a round function. They are linear operations, and thus not sufficient to guarantee security. However, when used with good non-linear S-boxes (as in DES) they are vital for the security because they propagate the non-linearity uniformly over all bits.

Bit slice operations (bitwise logic operations XOR, AND, OR, NOT and bit permutations): The idea of bit slice implementations of block ciphers is due to Eli Biham. Parallel operation of this kind is common practice on vector machines, but Biham applied it to serial machines by using the large registers available in modern computers. The term "bit slice" is due to Matthew Kwan.

Basically all block ciphers can be written in a bit slice manner, but operations such as addition and multiplication may become very slow. On the other hand, permutations are almost free, as they just require renaming the registers, which can be done at the coding level. Thus, for example, in an exhaustive DES key search using bit slice techniques, one can increment the current key in a fraction of the time usually needed for key scheduling.

The AES finalist Serpent is designed to be implemented using bit slice operations only. This makes it particularly efficient on modern architectures with many registers.


The One-Time Pad

The one-time pad (OTP) is the only cipher that has been proven to be unconditionally secure, i.e. unbreakable regardless of the attacker's computing power. It has also been proven that any unbreakable, unconditionally secure cipher must in principle be a one-time pad.

The Vernam cipher (invented by G. Vernam in 1917) is a famous instance of an OTP. This cipher is very simple: take a stream of bits that contains the plaintext message, and a secret random bit-stream of the same length as the plaintext, which is considered the key. To encrypt the plaintext with the key, sequentially exclusive-or each pair of key bit and plaintext bit to obtain the ciphertext bit. If the key is truly random, it can be proven that the attacker has no means of deciding whether some guessed plaintext is more likely than any other when having only the ciphertext and no knowledge of the plaintext.
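
A few lines of Python illustrate the Vernam construction; this is illustrative only, and the key (drawn here from the operating system's generator) must be as long as the message and never reused.

    import secrets

    plaintext = b"ATTACK AT DAWN"
    key = secrets.token_bytes(len(plaintext))              # one-time key, same length as message

    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    recovered  = bytes(c ^ k for c, k in zip(ciphertext, key))
    assert recovered == plaintext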

The practical problem is that the key does not have a small constant length, but rather the same length as the message, and no part of a key should ever be used twice (or the cipher can be broken). So we have just traded the problem of exchanging secret data for the problem of exchanging secret random keys of the same length. However, this cipher has allegedly been in widespread use since its invention, and even more so since the security proof by C. Shannon in 1949. Although admittedly the security of this cipher had been conjectured earlier, it was Shannon who actually found a formal proof for it.

DES: The Data Encryption Standard (DES) is an algorithm developed in the mid-1970s. It was turned into a standard by the US National Institute of Standards and Technology (NIST), and was also adopted by several other governments worldwide. It was and still is widely used, especially in the financial industry.

DES is a block cipher with 64-bit block size. It uses 56-bit keys. This makes it susceptible to exhaustive key search with modern computers and special-purpose hardware. DES is still strong enough to keep most random hackers and individuals out, but it is easily breakable with special hardware by government, criminal organizations, or major corporations. DES is getting too weak, and should not be used in new applications.

A variant of DES, Triple-DES (also 3DES), is based on using DES three times (normally in an encrypt-decrypt-encrypt sequence with three different, unrelated keys). Triple-DES is arguably much stronger than (single) DES; however, it is rather slow compared to some newer block ciphers.

Nevertheless, even though DES seems to be of little interest for today's applications, there are many reasons for still considering it important. It was the first block cipher that was widely deployed in the public sector, and it thus played an important role in making strong cryptography available to the public.

Also, the design was exceptionally good for a cipher that was meant to be used for only a few years. DES proved to be a very strong cipher, and it took over a decade for any interesting cryptanalytic attacks against it to develop (not to underestimate the pioneering efforts that led to this breakthrough). The development of differential cryptanalysis and linear cryptanalysis opened ways to really understand the design of block ciphers.

Although at the time of DES's introduction its design philosophy was held secret, this did not discourage its analysis; quite the contrary. Some information has been published about its design, and one of the original designers, Don Coppersmith, has commented that the team discovered ideas similar to differential cryptanalysis already while designing DES in 1974. It was, however, just a matter of time before these fundamental ideas were rediscovered.

Even today, when DES is no longer considered a practical solution, it is often used to describe new cryptanalytic techniques. It is remarkable that even now there are no cryptanalytic techniques that would completely break DES in a structural way; indeed, the only real weakness known is the short key size (and perhaps the small block size).

AES: In response to the growing feasibility of attacks against DES, NIST launched a call for proposals for an official successor that would meet 21st century security needs. This successor is called the Advanced Encryption Standard (AES). Five algorithms made it into the second round, from which Rijndael was selected to be the final standard. We will now have a quick look at each of them and their cryptographic peculiarities.

All five ciphers have a 128-bit block size and support 128, 192 and 256 bit keys. The rather large key sizes are probably required to provide a basis for the construction of efficient hash functions.

The AES: Rijndael, the design of two Belgian cryptographers, Joan Daemen and Vincent Rijmen.

Rijndael follows the tradition of square ciphers (it is based on ideas similar to the Square cipher). NIST gave as its reasons for selecting Rijndael that it performs very well in hardware and software across a wide range of environments in all possible modes. It has excellent key setup time and low memory requirements; in addition, its operations are easy to defend against power and timing attacks.

NIST stated that all five finalists had adequate security and that there was nothing wrong with the other four ciphers. After all analysis and received comments were considered, NIST considered Rijndael the best choice for the AES. The other four finalists are mentioned below.

MARS by Zunic et al., IBM.

This is an interesting new design (using a special type of a Feistel network), which depends heavily on the instruction sets available on modern 32-bit processors. This has the benefit that on these target machines it is efficient, but it may lead to implementation difficulties in cheaper architectures like smart cards.


RC6 by Rivest, Robshaw and Yin, RSA Laboratories.

RC6 follows the ideas of RC5 - but with many improvements. For example, it attempts to avoid some of the differential attacks against RC5's data dependent rotations. However, there are some attacks that get quite far, and it is unclear whether RC6 is well enough analyzed yet.

Serpent: by Anderson, Biham and Knudsen.

Serpent has a basically conservative but in many ways innovative design. It may be implemented by bit slice (or vector) gate logic throughout. This makes it rather complicated to implement from scratch, and writing it in a non-bit slice way involves an efficiency penalty.

The 32 rounds give Serpent probably the highest security margin of all AES candidates, while it is still fast enough for all purposes.

Twofish by Schneier et al., Counterpane Security.

Twofish is a block cipher designed by Counterpane (whose CEO is Bruce Schneier). The design is intricate, with many alternative ways of implementation. It has been cryptanalysed in much detail by the very authoritative "extended Twofish team". It is basically a Feistel cipher, but utilizes many different ideas.

This cipher has key dependent S-boxes like Blowfish (another cipher by Bruce Schneier).

Blowfish

Blowfish was designed by Bruce Schneier. It is a block cipher with a 64-bit block size and variable-length keys (up to 448 bits). It has gained a fair amount of acceptance in a number of applications, including Nautilus and PGPfone.

Blowfish utilizes the idea of randomized S-boxes: while doing key scheduling, it generates large pseudo-random look-up tables by performing several encryptions. The tables depend on the user-supplied key in a very complex way. This approach has proven to be highly resistant against many attacks such as differential and linear cryptanalysis. Unfortunately, it also means that Blowfish is not the algorithm of choice for environments where a large memory space (something like 4096 bytes) is not available.

The only known attacks against Blowfish are based on its weak key classes.

IDEA: IDEA (International Data Encryption Algorithm) is an algorithm developed at ETH Zurich in Switzerland by Xuejia Lai. It uses a 128-bit key, and it is generally considered to be very secure. It was one of the best publicly known algorithms for some time (before the AES standard started its second round, see above). It has been around for several years, and no practical attacks on it have been published despite numerous attempts to analyze it.

One of the best attacks uses the impossible differential idea of Biham, Shamir and Biryukov. They can attack only 4.5 rounds of IDEA and this poses no threat to the total of 8.5 rounds used in IDEA.

RC4: RC4 is a stream cipher designed by Ron Rivest at RSA Data Security, Inc. It used to be a trade secret, until someone posted source code for an algorithm on Usenet, claiming it to be equivalent to RC4. There is very strong evidence that the posted algorithm is indeed equivalent to RC4. The algorithm is very fast. Its security is unknown, but breaking it does not seem trivial either. Because of its speed, it may have uses in certain applications. It accepts keys of arbitrary length.

RC4 is essentially a pseudo-random number generator, and the output of the generator is exclusive-ored with the data stream. For this reason, it is very important that the same RC4 key never be used to encrypt two different data streams.
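
For illustration, the widely published "alleged RC4" algorithm can be sketched in a few lines of Python; the key and plaintext below are placeholders.

    def rc4_stream(key: bytes):
        # Key-scheduling algorithm (KSA): permute a 256-entry table under the key.
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # Pseudo-random generation algorithm (PRGA): yield one keystream byte at a time.
        i = j = 0
        while True:
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            yield S[(S[i] + S[j]) % 256]

    def rc4_crypt(key: bytes, data: bytes) -> bytes:
        # XOR the keystream onto the data; the same call decrypts.
        return bytes(b ^ k for b, k in zip(data, rc4_stream(key)))

    ct = rc4_crypt(b"example key", b"some plaintext")
    assert rc4_crypt(b"example key", ct) == b"some plaintext"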

Modes Of Operation

Many commonly used ciphers are block ciphers. Block ciphers transform a fixed-size block of data (usually 64 bits) into another fixed-size block (possibly 64 bits wide again) using a function selected by the key. If the input and output blocks are both n bits wide, each key of a block cipher effectively selects a one-to-one mapping (a permutation) of the set of n-bit blocks.

If the same block is encrypted twice with the same key, the resulting ciphertext blocks are also the same (this mode of encryption is called electronic code book, or ECB). This information could be useful for an attacker. To make identical plaintext blocks encrypt to different ciphertext blocks, two standard modes are commonly used:

CBC (cipher block chaining): a ciphertext block is obtained by first XORing the plaintext block with the previous ciphertext block and then encrypting the resulting value. In this way leading blocks influence all trailing blocks, which increases the number of plaintext bits one ciphertext bit depends on, but also leads to synchronization problems if one block is lost.
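
The following toy Python sketch contrasts ECB with CBC chaining; the "block cipher" here is a stand-in keyed hash used only to show the chaining rule (it is not invertible, so decryption is not shown).

    import hashlib

    BLOCK = 8                                   # toy 64-bit block size

    def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
        # Stand-in for a real block cipher, for illustration of the modes only.
        return hashlib.sha256(key + block).digest()[:BLOCK]

    def ecb_encrypt(key, blocks):
        return [toy_block_encrypt(key, p) for p in blocks]

    def cbc_encrypt(key, iv, blocks):
        prev, out = iv, []
        for p in blocks:
            x = bytes(a ^ b for a, b in zip(p, prev))   # XOR plaintext with previous ciphertext
            c = toy_block_encrypt(key, x)
            out.append(c)
            prev = c                                    # chaining: this block feeds the next
        return out

    key, iv = b"toy key", b"\x00" * BLOCK
    blocks = [b"SAMEDATA", b"SAMEDATA"]                 # two identical plaintext blocks
    print(ecb_encrypt(key, blocks))                     # identical ciphertext blocks (leaks structure)
    print(cbc_encrypt(key, iv, blocks))                 # different ciphertext blocks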

CFB (cipher feedback): the kth ciphertext block is obtained by encrypting the (k-1)th ciphertext block and XORing the result onto the plaintext. Interestingly, a CFB feedback loop can also be used as a pseudo-random number generator if one simply feeds one block of true random data followed by blocks of zeroes into the encryption routine (although the expected period of this PRNG would be only about 2^(n/2), where n is the block size of the cipher). Block ciphers also have interesting relationships to other classes of ciphers. For example:

Stream ciphers. A stream cipher consists of a state machine that outputs one bit of information at each state transition. This stream of output bits is commonly called the running key. Encryption can be implemented by simply exclusive-oring the running key with the plaintext message.

The state machine is nothing more than a pseudo-random number generator. For example, we can build one from a block cipher by repeatedly encrypting its own output. Typically, more elaborate constructions are used for stream ciphers to obtain high speed.

Some of the more well known stream ciphers are RC4 and SEAL. Several stream ciphers are based on linear-feedback shift registers (LFSR), such as A5/1 used in GSM. These have the benefit of being very fast (several times faster than usual block ciphers).

SSSC (self-synchronizing stream ciphers): The class of self-synchronizing stream ciphers has the convenient property that it corrects the output stream after bit flips or even dropped bits after a short time (say, a few hundred bits).

SSSCs can be constructed using block ciphers in a CFB-related way. Assume that we have already encrypted n bits of the message and know that much of the ciphertext (where n denotes the block length of the cipher). Then we produce a new running key bit (see above) by encrypting those n ciphertext bits and taking one bit of the cipher's output as the new running key bit. Moving one bit further, we can iterate this procedure for the whole length of the message.

It is easy to see that a one-bit error in a ciphertext cannot affect the decrypted plaintext after n bits. This makes the cipher self-synchronizing.

The block cipher used should have sufficiently large block size to avoid substitution attacks, for example.

Cryptography before the 1970's

In this section some of the famous ciphers of the past are listed, with links to more complete information where possible.

Fish was used by the German army in WWII to encipher high-command communications. The traffic was produced by a stream cipher, the Lorenz machine; Fish was the name given to it by British cryptanalysts. It was important because it caused difficulties for British analysts, who finally developed a machine called Colossus, which was arguably the first, or one of the first, digital computers.

The Colossus machine may have been an important factor in the planning and success of the Allied attack on Normandy. Given the intelligence produced by cryptanalysis of Fish, Allied forces knew the positions of pretty much every German division.

Enigma was another cipher used by the Germans in World War II. The machine used several rotors and looked like a typewriter. However, first Polish and later British mathematicians were able to keep up with the development of the machine. British analysts at Bletchley Park deciphered most communications using the basic version of Enigma within a few hours of interception. One of the strongest Enigma variants was used in submarine communication, but British analysts managed to break it as well, with great implications for the Battle of the Atlantic.

There are several good books about Enigma and Bletchley Park. In addition, the work of the major figure of British cryptanalysis, Alan Turing, has been explained in many articles and books, and recently his original notes about cryptanalysis from that time have been released to the public. This cipher, or a variant of it, is used by the UNIX crypt(1) program. It is unlikely that any variant of Enigma could be considered very secure by today's standards.

Substitution cipher. This is one of the basic pencil-and-paper methods. Make a table by first listing your alphabet in order on the first row, and then making a random permutation of the alphabet on the second row. You can then encrypt any character of the alphabet by looking it up on the first row and writing down the character below it on the second row. The key of this method is the permutation of the alphabet on the second row. Decryption works in reverse.
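
A small Python sketch of this table-based method follows; the permutation here is generated with a non-cryptographic random generator, which is acceptable for a pencil-and-paper illustration.

    import random
    import string

    alphabet = string.ascii_uppercase
    permuted = list(alphabet)
    random.shuffle(permuted)                    # the secret key is this permutation
    enc_table = dict(zip(alphabet, permuted))
    dec_table = {v: k for k, v in enc_table.items()}

    message = "MEETATNOON"
    ciphertext = "".join(enc_table[ch] for ch in message)
    recovered  = "".join(dec_table[ch] for ch in ciphertext)
    assert recovered == message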

This method is susceptible to frequency analysis, as long as the alphabet size is small. Modern block ciphers can be seen as a variant of this idea, in the sense that they try to hide the message under a very large alphabet that depends on the key. In the block cipher case the permutation is generated by the secret key, and the key space might not cover all possible permutations.

Vigenere. This cipher uses clock arithmetic to add together the key and the message. The difference between the OTP and Vigenere is that in Vigenere we explicitly reuse the short key several times for one message. Methods for attacking Vigenere ciphers include the Kasiski test, the index of coincidence, and so on. These lead to effective methods that break even rather short messages (relative to the key size, of course).
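
A minimal Python sketch of the Vigenere "clock arithmetic" on capital letters (illustrative only):

    def vigenere(text: str, key: str, decrypt: bool = False) -> str:
        # Add (or subtract) the repeating key to the message, letter by letter, modulo 26.
        out = []
        for i, ch in enumerate(text):
            shift = ord(key[i % len(key)]) - ord("A")
            if decrypt:
                shift = -shift
            out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
        return "".join(out)

    ct = vigenere("ATTACKATDAWN", "LEMON")
    assert vigenere(ct, "LEMON", decrypt=True) == "ATTACKATDAWN"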

Hill cipher. The Hill cipher uses matrices in clock arithmetic, and is highly susceptible to known plaintext attacks.

Hash functions:
Cryptographic hash functions were introduced earlier in connection with message digests and digital signatures: they compress the bits of a message to a fixed-size hash value in a way that makes it extremely difficult to come up with a message hashing to a particular value. Some of the best known and most widely used hash functions are briefly described below.

SHA-1 (Secure Hash Algorithm) (also SHS, Secure Hash Standard): This is a cryptographic hash algorithm published by the United States Government. It produces a 160 bit hash value from an arbitrary length string. It is considered to be very good.

MD5 (Message Digest Algorithm 5) is a cryptographic hash algorithm developed at RSA Laboratories. It can be used to hash an arbitrary length byte string into a 128 bit value.
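
Both functions are available in Python's standard hashlib module, which makes the digest sizes easy to see (illustrative snippet):

    import hashlib

    data = b"message to be signed or checked"

    print(hashlib.sha1(data).hexdigest())      # 160-bit digest (40 hex digits)
    print(hashlib.md5(data).hexdigest())       # 128-bit digest (32 hex digits)

    # Any change to the input, however small, yields a completely different digest.
    print(hashlib.sha1(data + b"!").hexdigest())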

MD5's ancestor, MD4 has been broken, and there are some concerns about the safety of MD5 as well. In 1996 a collision of the MD5 compression function was found by Hans Dobbertin. Although this result does not directly compromise its security, as a precaution the use of MD5 is not recommended in new applications.

Tiger is a recent hash algorithm developed by Anderson and Biham.

MD2, MD4: These are older hash algorithms from RSA Data Security. They have known flaws (Hans Dobbertin, FSE'96, LNCS 1039), and their use is not recommended.

Random number generator

Cryptographic systems need cryptographically strong (pseudo) random numbers that cannot be guessed by an attacker. Random numbers are typically used to generate session keys, and their quality is critical for the quality of the resulting systems. The random number generator is easily overlooked, and can easily become the weakest point of the cryptosystem.

Some machines may have special-purpose hardware noise generators. Noise from the leak current of a diode or transistor, the least significant bits of audio inputs, times between interrupts, etc. are all good sources of randomness when processed with a suitable cryptographic hash function. It is a good idea to acquire true environmental noise whenever possible.

One cryptographically strong random number generator is Yarrow by Counterpane. A good page to search for further material on (statistical) pseudo-random number generators is the plab site. Any cryptographically good pseudo-random number generator should pass all the basic tests for statistical randomness.



Technical paper on Nanotech NASA applications


Laboratories throughout the world are rapidly gaining atomically precise control over matter. As this control extends to an ever wider variety of materials, processes and devices, opportunities for applications relevant to NASA's missions will be created. This document surveys a number of future molecular nanotechnology capabilities of aerospace interest. Computer applications, launch vehicle improvements, and active materials appear to be of particular interest. We also list a number of applications for each of NASA's enterprises. If advanced molecular nanotechnology can be developed, almost all of NASA's endeavors will be radically improved. In particular, a sufficiently advanced molecular nanotechnology can arguably bring large scale space colonization within our grasp.


Introduction:

This document describes potential aerospace applications of molecular nanotechnology, defined as the thorough three-dimensional structural control of materials, processes and devices at the atomic scale. The inspiration for molecular nanotechnology comes from Richard P. Feynman's 1959 visionary talk at Caltech in which he said, "The problems of chemistry and biology can be greatly helped if our ability to see what we are doing, and to do things on an atomic level, is ultimately developed---a development which I think cannot be avoided." Indeed, scanning probe microscopes (SPMs) have already given us this ability in limited domains. See the IBM Almaden STM Gallery for some beautiful examples. Synthetic chemistry, biotechnology, "laser tweezers" and other developments are also bringing atomic precision to our endeavors.

[Drexler 92a], an expanded version of Drexler's MIT Ph.D. thesis, examines one vision of molecular nanotechnology in considerable technical detail. [Drexler 92a] proposes the development of programmable molecular assembler/replicators. These are atomically precise machines that can make and break chemical bonds using mechanosynthesis to produce a wide variety of products under software control, including copies of themselves. Interestingly, living cells exhibit many properties of assembler/replicators. Cells make a wide variety of products, including copies of themselves, and can be programmed with DNA. Replication is one approach to building large systems, such as human rated launch vehicles, from molecular machines manipulating matter one or a few atoms at a time. Note that biological replication is responsible for systems as large as redwood trees and whales.

Today, extremely precise atomic and molecular manipulation is common in many laboratories around the world and our abilities are rapidly approaching Feynman's dream. The implications for aerospace development are profound and ubiquitous. A number of applications are mentioned here and a few are described in some detail with references. From this sample of applications it should be clear that although molecular nanotechnology is a long term, high risk project, the payoff is potentially enormous -- vastly superior computers, aerospace transportation, sensors and other technologies; technologies that may enable large scale space exploration and colonization.

This document is organized into two sections. In the first, we examine three technologies -- computers, aerospace transportation, and active materials -- useful to nearly all NASA missions. In the second, we investigate some potential molecular nanotechnology payoffs for each area identified in NASA's strategic plan. Some of these applications are under investigation by nanotechnology researchers at NASA Ames. Some of the applications described below have relatively near-term potential, and working prototypes may be realized within three to five years. This is certainly not true in other cases. Indeed, many of the possible applications of nanotechnology that we describe here are, at the present time, rather speculative and futuristic. However, each of these ideas has been examined at least cursorily by competent scientists, and as far as we know all of them are within the bounds of known physical laws. We are not suggesting that their achievement will be easy, cheap or near-term. Some may take decades to realize; other ideas may be scrapped in the coming years as insuperable barriers are identified. But we feel that they are worth mentioning here as illustrations of the potential future impact of nanotechnology.

Technology:


Computer Technology

The applicability of manufacturing at an ever smaller scale is nowhere more self-evident than in computer technology. Indeed, Moore's law [Moore 75] (an observation, not a physical law) says that computer chip feature size decreases exponentially with time, a trend that predicts atomically precise computers by about 2010-2015. This capability is being approached from many directions. Here we will concentrate on those under development by NASA Ames and its partners. For a review of many other approaches see [Goldhaber-Gordon 97].

Carbon Nanotube Electronic Components

As mentioned before, carbon nanotubes can be described as rolled-up sheets of graphite. Different tubes can have different helical windings depending on how the graphite sheet is connected to itself. Theory [Dresselhaus 95, pp. 802-814] suggests that single-walled carbon nanotubes can have metallic or semiconductor properties depending on the helical winding of the tube. [Chico 96], [Han 97b], [Menon 97a], [Menon 97b], and others have computationally examined the properties of some hypothetical devices that might be made by connecting tubes with different electrical properties. Such devices are only a few nanometers across -- 100 times smaller than current computer chip features. For a number of references on fullerene nanotechnology see [Globus 97].


Molecular Electronic Components

Several authors, including [Tour 96], have described methods to produce conjugated macromolecules of precise length and composition. This technique was used to produce molecular electronic devices in mole quantities [Wu 96]. The resultant single molecular wires were tested experimentally and found to be conducting [Bumm 96]. The three and four terminal devices have been examined computationally and look promising [Tour 97]. The features of these components are approximately 3 angstroms wide, about 750 times smaller than current silicon technology can produce.


Aerospace Transportation:

Launch Vehicles

[Drexler 92a] proposed a nanotechnology based on diamond and investigated its potential properties. In particular, he examined applications for materials with a strength similar to that of diamond (69 times strength/mass of titanium). This would require a very mature nanotechnology constructing systems by placing atoms on diamond surfaces one or a few at a time in parallel. Assuming diamondoid materials, [McKendree 95] predicted the performance of several existing single-stage-to-orbit (SSTO) vehicle designs. The predicted payload to dry mass ratio for these vehicles using titanium as a structural material varied from <>

[Drexler 92b] used a more speculative methodology to estimate that a four passenger SSTO weighing three tons including fuel could be built using a mature nanotechnology. Using McKendree's cost model, such a vehicle would cost about $60,000 to purchase -- the cost of today's high-end luxury automobiles.

These studies assumed a fairly advanced nanotechnology capable of building diamondoid materials. In the nearer term, it may be possible to develop excellent structural materials using carbon nanotubes. Carbon nanotubes have a Young's modulus of approximately one terapascal -- comparable to diamond.

Space Elevator

[Issacs 66] and [Pearson 75] proposed a space elevator -- a cable extending from the Earth's surface into space with a center of mass at geosynchronous altitude. If such a system could be built, it should be mechanically stable and vehicles could ascend and descend along the cable at almost any reasonable speed using electric power (actually generating power on the way down). The first incredibly difficult problem with building a space elevator is strength of materials. Maximum stress is at geosynchronous altitude so the cable must be thickest there and taper exponentially as it approaches Earth. Any potential material may be characterized by the taper factor -- the ratio between the cable's radius at geosynchronous altitude and at the Earth's surface. For steel the taper factor is tens of thousands -- clearly impossible. For diamond, the taper factor is 21.9 [McKendree 95] including a safety factor. Diamond is, however, brittle. Carbon nanotubes have a strength in tension similar to diamond, but bundles of these nanometer-scale radius tubes shouldn't propagate cracks nearly as well as the diamond tetrahedral lattice. Thus, if the considerable problems of developing a molecular nanotechnology capable of making nearly perfect carbon nanotube systems approximately 70,000 kilometers long can be overcome, the first serious problem of a transportation system capable of truly large scale transfers of mass to orbit can be solved. The next immense problem with space elevators is safety -- how to avoid dropping thousands of kilometers of cable on Earth if the cable breaks. Active materials may help by monitoring and repairing small flaws in the cable and/or detecting a major failure and disassembling the cable into small elements.


Active Materials

Today, the smallest feature size in production systems is about 250 nanometers -- the smallest feature size in computer chips. Since atoms are an angstrom or so across and carbon nanotubes have diameters as small as 0.7 nanometers, atomically precise molecular machines can be smaller than current MEMS devices by two to three orders of magnitude in each dimension, or six to nine orders of magnitude smaller in volume (and mass). For example, the kinesin motor, which transports material within cells, is about 12 nm across. [Han 97a] computationally demonstrated that molecular gears fashioned from single-walled carbon nanotubes with benzyne teeth should operate well at 50-100 gigahertz. These gears are about two nanometers across. [Han 97c] computationally demonstrated cooling the gears with an inert atmosphere, and [Srivastava 97c] simulated powering them with alternating electric fields generated by a simulated laser; in this case, charges were added to opposite sides of the tube to form a dipole. For an examination of the state of the art in small machines, see the 1997 Conference on Biomolecular Motors and Nanomachines.
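The volume comparison is simply the cube of the linear comparison; a one-line illustration using the stated range of linear factors:

    # Linear size advantage cubed gives the volume (and mass) advantage.
    for linear_factor in (1e2, 1e3):          # two to three orders of magnitude
        print(f"{linear_factor:.0e} smaller per dimension "
              f"-> {linear_factor**3:.0e} smaller in volume")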

To make active materials, a material might be filled with nano-scale sensors, computers, and actuators so that it can probe its environment, compute a response, and act. Although this document is concerned with relatively simple artificial systems, living tissue may be thought of as an active material: it is filled with protein machines that give it properties (adaptability, growth, self-repair, etc.) unimaginable in conventional materials.

Swarms

Active materials can theoretically be made entirely of machines. These are sometimes called swarms, since they consist of large numbers of identical simple machines that grasp and release each other and exchange power and information to achieve complex goals. Swarms change shape and exert force on their environment under software control. Although some physical prototypes have been built, at least one patent issued, and many simulations run, the potential capabilities of swarms are not well analyzed or understood.

NASA Missions

NASA's mission is divided into five enterprises: Mission to Planet Earth, Aeronautics, Human Exploration and Development of Space, Space Science, and Space Technology. We will examine some potential nanotechnology applications in each area.


Mission to Planet Earth:


EOS Data System

The Earth Observing System (EOS) will use satellites and other systems to gather data on the Earth's environment. The EOS data system will need to process and archive more than a terabyte of data per day for the indefinite future. Simply storing this quantity of data is a significant challenge -- each day's data would fill about 1,000 DVD discs. With projected write-once nanomemory densities of 10^15 bytes/cm² [Bauschlicher 97a], a year's worth of EOS data could be stored on a small piece of diamond. With projected nanocomputer processing speeds of 10^18 MIPS [Drexler 92a], a million calculations on each byte of one day's data would take about one second on the desktop.
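As a sanity check, the arithmetic behind these estimates can be sketched in a few lines of Python. The data rate (taken here as roughly one terabyte per day) and the effective desktop throughput (taken as ~10^18 operations per second, consistent with the one-second figure above) are assumptions for illustration, not values drawn from the cited sources.

    # Back-of-the-envelope check of the EOS storage and processing estimates.
    DAY_BYTES      = 1e12     # assumed: roughly one terabyte of EOS data per day
    YEAR_BYTES     = 365 * DAY_BYTES
    MEMORY_DENSITY = 1e15     # projected write-once nanomemory, bytes/cm^2
    OPS_PER_BYTE   = 1e6      # "a million calculations on each byte"
    THROUGHPUT     = 1e18     # assumed effective desktop rate, operations/second

    storage_cm2 = YEAR_BYTES / MEMORY_DENSITY    # ~0.4 cm^2 for a year of data
    total_ops   = DAY_BYTES * OPS_PER_BYTE       # 1e18 operations for one day
    seconds     = total_ops / THROUGHPUT         # ~1 second

    print(f"area for one year of data: {storage_cm2:.2f} cm^2")
    print(f"time for one day's data  : {seconds:.1f} s")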


Aeronautics and Space Transportation Technology

The strength of materials and computational capabilities previously discussed for space transportation should also allow much more advanced aircraft. Stronger, lighter materials can obviously make aircraft with greater lift and range. More powerful computers are invaluable in the design stage and of great utility in advanced avionics.


Active surfaces for aeronautic control

MEMS technology has been used to replace traditional large control structures on aircraft with large numbers of small MEMS-controlled surfaces, and such a control system has been used to operate a model airplane in a wind tunnel. Nanotechnology should allow even finer control -- finer than that exhibited by birds, some of which can hover in a light breeze with very little wing motion. Nanotechnology should also enable extremely small aircraft.


Complex Shapes

A reasonably advanced nanotechnology should be able to make simple atomically precise materials under software control. If the control is at the atomic level, then the full range of shapes possible with a given material should be achievable. Aircraft construction requires complex shapes to accommodate aerodynamic requirements. With molecular nanotechnology, strong complex-shaped components might be manufactured by general purpose machines under software control.


Payload Handling

The aeronautics mission is responsible for launch vehicle development. Payload handling is an important function. Very efficient payload handling might be accomplished by a very advanced swarm. The sequence begins by placing each payload on a single large swarm located next to the shuttle orbiter. The swarm forms itself around the payloads and then moves them into the payload bay, arranging the payloads to optimize the center of gravity and other considerations. The swarm holds the payload in place during launch and may even damp out some launch vibrations. On orbit, satellites can be launched from the payload bay by having the swarm give them a gentle push. The swarm can then be left in orbit, perhaps at a space station, and used for orbital operations.

This scenario requires a very advanced swarm that can operate both in the atmosphere and in the vacuum of orbit. Beyond the many and obvious difficulties of developing a swarm for a single environment, operating across both adds further challenges. Note that a simpler swarm might be used for aircraft payload handling.

Vehicle Checkout

Aerospace vehicles often require complex checkout procedures to ensure safety and reliability. This is particularly true of reusable launch vehicles. A very advanced swarm with special-purpose appendages might be placed on a vehicle; it could then spread out over the vehicle and into every crevice to examine the vehicle's state in great detail.


Human Exploration and Development of Space

Nanotechnology-enabled Earth-to-orbit transportation has the greatest potential to revolutionize human access to space by reducing the current launch cost of roughly $10,000 per pound, but this was discussed above. Other, less dramatic, technologies include:


High Strength and Reliability Materials

Space structures with a long design life (such as space station modules) need high-reliability materials that do not degrade. Active materials might help: embedded machines could monitor structural integrity at the sub-micrometer scale, and when a portion of the material becomes defective it could be disassembled and then correctly reassembled. Bone works somewhat along these lines -- it is constantly being removed and rebuilt by specialized cells.


Waste Recycling

An advanced nanotechnology might be able to build filters that dynamically modify themselves to attract the contaminant molecules detected by air and water quality sensors. Once a contaminant is captured, the filter could in principle move the offending molecule to a molecular laboratory for conversion into useful, or at least inert, products. A swarm might implement such an active filter if it were able to dynamically manufacture proteins that bind contaminant molecules; the protein and bound contaminant could then be manipulated by the swarm for transport.

With a sufficiently advanced nanotechnology it might even be possible to generate food directly by non-biological means. Agricultural waste in a self-sufficient space colony could then be converted directly into useful nutrition. Making this food attractive will be a major challenge.

Spacecraft Docking

For resupply, spacecraft docking is a frequent necessity in space station operations. When two spacecraft are within a few meters of each other, a swarm could extend from each, meet in the middle, and form a stable connection before gradually drawing the spacecraft together.


Zero and Partial G Astronaut Training

A swarm could support space-suited astronauts in simulated partial-gravity environments by holding them up appropriately. The swarm would move in response to the astronaut's motion, providing the appropriate simulation of partial or zero gravity. Tools and other objects could also be manipulated by the swarm to simulate non-standard gravity.


Smart Space Suits

Active nanotechnology materials (see Active Materials above) might enable construction of a skin-tight space suit covering the entire body except the head (which would be in a more conventional helmet). The material would sense the astronaut's motions and change shape to accommodate them. This should eliminate or substantially reduce the limitations current systems place on astronaut range of motion.


Small Asteroid Retrieval

In situ resource utilization is undoubtedly necessary for large-scale colonization of the solar system. Asteroids are particularly promising for orbital use since many are in near-Earth orbits. Moving asteroids into low Earth orbit for utilization poses a safety problem should an asteroid get out of control and enter the atmosphere, because even very small asteroids can cause significant destruction. The 1908 Tunguska explosion, which [Chyba 93] calculated to have been a 60 meter diameter stony asteroid, leveled 2,200 km² of forest. [Hills 93] calculated that 4 meter diameter iron asteroids are near the threshold for ground damage. Both of these calculations assumed high collision speeds. At a density of 7.7 g/cm³ [Babadzhanov 93], a 3 meter diameter asteroid should have a mass of about 110 tons. [Rabinowitz 97] estimates that there are about one billion ten meter diameter near-Earth asteroids, and there should be far more smaller objects.
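The quoted mass follows directly from the stated density and diameter; a minimal check, assuming a spherical asteroid:

    # Mass of a small asteroid from its diameter and density (sphere assumed).
    import math

    density  = 7.7e3     # kg/m^3 [Babadzhanov 93]
    diameter = 3.0       # m
    volume   = (math.pi / 6.0) * diameter**3     # ~14.1 m^3
    mass_t   = density * volume / 1000.0         # metric tons

    print(f"{mass_t:.0f} t")                     # ~109 t, i.e. about 110 tons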

For colonization applications one would ideally provide the same radiation protection available on Earth. Each square meter of the Earth's surface is protected by about 10 tons of atmosphere, so structures orbiting below the Van Allen belts would need roughly 10 tons/m² of shielding mass over their surface area. This would dominate the mass requirements of any system and require one small asteroid for each 11 m² of colony exterior surface area. A 10,000 person cylindrical space colony such as Lewis One [Globus 91], with a diameter of almost 500 meters and a length of nearly 2000 meters, would require a minimum of about 90,000 retrieval missions to provide the shielding mass. The large number of missions required suggests that a fully automated, replicating nanotechnology may be essential to build large low Earth orbit colonies from small asteroids.
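The mission count follows from the shielding requirement and the per-asteroid mass. The sketch below parameterizes the shielded exterior area; the ~10^6 m² figure in the example is an assumed round number that reproduces the roughly 90,000-mission estimate, not a value taken from [Globus 91].

    # Retrieval missions needed to shield a colony hull -- a rough sketch.
    ASTEROID_MASS_T    = 110.0    # tons per retrieved 3 m asteroid (see above)
    SHIELDING_T_PER_M2 = 10.0     # tons of shielding per square meter

    area_per_asteroid = ASTEROID_MASS_T / SHIELDING_T_PER_M2   # ~11 m^2

    def missions(shielded_area_m2):
        return shielded_area_m2 / area_per_asteroid

    # Assumed shielded hull area of ~1e6 m^2; yields roughly 90,000 missions.
    print(f"{missions(1.0e6):,.0f} missions")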

Medical Applications

Several authors, including [Freitas 98], have speculated that a sufficiently advanced nanotechnology could examine and repair cells at the molecular level. Should this capability become available -- presumably driven by terrestrial applications -- the small size and advanced capabilities of such systems could be of great utility on long-duration space flights and in self-sufficient colonies.


Space Science:


Space Telescopes

Molecular manufacturing should enable the creation of very precise mirrors. Unlike lightsail applications, telescope mirrors require a very precise and somewhat complicated shape. A swarm with special purpose appendages capable of bonding to the mirror might be able to achieve and maintain the desired shape.


Virtual Sample Return

A very advanced nanotechnology might be capable of imaging and then removing the surface atoms of an extraterrestrial sample. By removing successive surface layers, the location of each atom in the sample might be recorded, destroying the sample in the process. This data could then be sent to Earth. Besides requiring a very advanced nanotechnology, there is a more fundamental -- but not necessarily fatal -- problem: as the outside layer of atoms is removed, the next layer may rearrange itself, so the sample is not necessarily recorded perfectly.



Space Technology:


Solar Power

Low Earth orbit spacecraft generally depend on solar cells and batteries for power. According to [Drexler 92b]: "For energy collection, molecular manufacturing can be used to make solar photovoltaic cells at least as efficient as those made in the laboratory today. Efficiencies can therefore be > 30%. In space applications, a reflective optical concentrator need consist of little more than a curved aluminum shell <>2 with a temperature differential from base to tip of <>

Accordingly, solar collectors can consist of arrays of photovoltaic cells several microns in thickness and diameter, each at the focus of a mirror of ~100 micron diameter, the back surface of which serves as a ~100 micron diameter radiator. If the mean thickness of this system is ~1 micron, the mass is ~10^-3 kg/m² and the power per unit mass, at Earth's distance from the Sun, where the solar constant is ~1.4 kW/m², is > 10^5 W/kg."
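The specific-power figure in the quotation follows from the stated efficiency, solar constant, and areal mass; a minimal check:

    # Specific power of the thin-film solar collector described above.
    efficiency     = 0.30      # "> 30%" cell efficiency
    solar_constant = 1.4e3     # W/m^2 at Earth's distance from the Sun
    areal_mass     = 1.0e-3    # kg/m^2 for ~1 micron mean system thickness

    specific_power = efficiency * solar_constant / areal_mass
    print(f"{specific_power:.1e} W/kg")          # ~4e5 W/kg, i.e. > 10^5 W/kg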


Conclusion:

Many of the applications discussed here are speculative to say the least. However, they do not appear to violate the laws of physics. Something similar to these applications at these performance levels should be feasible if we can gain complete control of the three-dimensional structure of materials, processes and devices at the atomic scale.

How to gain such control is a major, unresolved issue. However, it is clear that computation will play a major role regardless of which approach -- positional control with replication, self-assembly, or some other means -- is ultimately successful. Computation has already played a major role in many advances in chemistry, SPM manipulation, and biochemistry. As we design and fabricate more complex atomically precise structures, modeling and computer-aided design will inevitably play a critical role. Not only is computation critical to all paths to nanotechnology, but largely the same computational chemistry software and expertise supports each of them. Thus, even if NASA's computational molecular nanotechnology efforts pursue an unproductive path, the expertise and capabilities can be quickly refocused on more promising avenues as they become apparent.

As nanotechnology progresses we may expect applications to become feasible at a slowly increasing rate. However, if and when a general purpose programmable assembler/replicator can be built and operated, we may expect an explosion of applications. From this point, building new devices will become a matter of developing the software to instruct the assembler/replicators. Development of a practical swarm is another potential turning point. Once an operational swarm that can grow and divide has been built, a large number of applications become software projects. It is also important to note that the software for swarms and assembler/replicators can be developed using simulators -- even before operational devices are available.

Nanotechnology advocates and detractors are often preoccupied with the question "When?" There are three interrelated answers to this question (see also [Merkle 97] and [Drexler 91]):

  • Nobody knows. There are far too many variables and unknowns. Beware of those who have excessive confidence in any date.

  • The time-to-nanotechnology will be measured in decades, not years. While a few applications will become feasible in the next few years, programmable assembler/replicators and swarms will be extremely difficult to develop.

  • The time-to-nanotechnology is very sensitive to the level of effort expended. Resources allocated to developing nanotechnology are likely to be richly rewarded, particularly in the long term.


References:

  • [Drexler 92a] K. Eric Drexler, Nanosystems: Molecular Machinery, Manufacturing, and Computation, John Wiley & Sons, 1992.
  • [Bauschlicher 97b] Charles W. Bauschlicher and M. Rosi, "Differentiating between hydrogen and fluorine on a diamond surface," Nanotechnology.
  • Forrest Bishop, "The Construction and Utilization of Space Filling Polyhedra for Active Mesostructures," Nanotechnology.
  • [Freitas 98] Robert A. Freitas Jr., Nanomedicine, Volume I: Basic Capabilities, Landes Bioscience.
  • [Hills 93] Jack G. Hills and M. Patrick Goda, "The fragmentation of small asteroids in the atmosphere," The Astronomical Journal, volume 105, March 1993.