Quantum Cryptography


Introduction

Today we take security for granted everywhere we go. Keeping secrets is a way of life and, in some cases, required. We don't leave identifiable information in the clear; we lock it up. Everything from credit card information to birth certificates and social security numbers: fear of losing our identities to a complete stranger instills a sense of responsibility within us. But it was not always like this. Security is not a concept of the 21st or 20th century; it was born out of military necessity, where protecting land and lives was the order of the day. In fact, secret communication can be traced back to early societies in Mesopotamia, Egypt, India, Japan and China, but it was not until the rise of the Greek empire that we began to see what we now refer to as encryption.

Around 400 B.C. the Greeks pioneered a method of sending secret military communiqués. The Scytale was a long piece of parchment wrapped around a wooden staff. Once secured, a message would be written lengthwise on the parchment, sometimes with extra letters added wherever a blank was left. When unwrapped, the parchment read from top to bottom as nothing more than a jumble of meaningless letters. What made this encryption work was that a staff or rod of the correct size was required to properly view the message; the rod is, in effect, the key to the cipher. Even if the enemy did manage to capture the parchment, deciphering the message could take days, by which time the orders it carried might well be useless.

The Caesar Cipher is the most famous and most common means of encryption. It is named for Julius Caesar, who used a substitution algorithm to protect messages of military importance. His algorithm shifted each letter three places in the alphabet to make its substitutions. This was a relatively safe method, as most of the enemy commanders he faced were illiterate, let alone cryptanalysts.

The improved Caesar Cipher introduces keys more complex than a single shift. Instead, a variable-length key is created, say 7, 8, 3. If the string we are encoding is NIGHT AND DAY, then N is shifted by 7 letters, I by 8, and G by 3. Having reached the end of the key, we start again: H is shifted by 7, T by 8 and A by 3. The outcome is UQJOB DUL GHG, something that is orders of magnitude more difficult to decipher without the key.
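The scheme above can be sketched in a few lines of Python. This is a minimal illustration of the variable-key shift described in the text (the key 7, 8, 3 and the sample string are taken from the example); non-letter characters are passed through unchanged, which matches how the sample output keeps its spaces.

```python
def shift_encrypt(plaintext: str, key: list[int]) -> str:
    """Shift each letter by the next key value, cycling through the key."""
    out = []
    i = 0  # position in the key, advanced only when a letter is consumed
    for ch in plaintext:
        if ch.isalpha():
            base = ord('A')
            out.append(chr((ord(ch) - base + key[i % len(key)]) % 26 + base))
            i += 1
        else:
            out.append(ch)  # leave spaces and punctuation alone
    return ''.join(out)

print(shift_encrypt("NIGHT AND DAY", [7, 8, 3]))  # UQJOB DUL GHG
```

Decryption is the same walk with each shift negated.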

The two weaknesses of substitution encryption are brute force attacks and frequency analysis. The brute force method applies to single-shift ciphers. Since there are only twenty-five possible non-trivial shifts, a table is generated from the cipher text displaying every candidate message, shifting each letter by one through twenty-five positions. The naked eye can then pick out the original text.

Frequency analysis can be used against both single-shift and variable-length keys. Each letter of a language's alphabet appears with a characteristic frequency in a given block of text; in English, the letter "e" is the most common. This inherent property can be exploited to decrypt the cipher. In both cases, the objective is to scan the cipher text, determine how often each letter appears, and match the results against the frequency table for that language. This works on both algorithms, but pattern detection is also needed if the key is of variable length.
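For a single-shift cipher, the attack above reduces to a few lines: count the letters and assume the most frequent one stands in for "e". This is a sketch under that simplifying assumption (real frequency analysis compares the whole distribution, and the sample sentence here is deliberately E-heavy so the heuristic works):

```python
from collections import Counter

def caesar(text: str, k: int) -> str:
    """Helper: single-shift encryption, for demonstration."""
    return ''.join(chr((ord(c) - 65 + k) % 26 + 65) if c.isalpha() else c
                   for c in text.upper())

def guess_caesar_shift(ciphertext: str) -> int:
    """Assume the most frequent cipher letter stands in for 'E'."""
    counts = Counter(c for c in ciphertext if c.isalpha())
    top = counts.most_common(1)[0][0]
    return (ord(top) - ord('E')) % 26

ct = caesar("EVERY MESSAGE HERE REPEATS THE LETTER E ENDLESSLY", 3)
print(guess_caesar_shift(ct))  # 3
```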

Currently, only one algorithm exists that is unbreakable: the one-time pad. Once the plain text is available for encryption, a randomized key is created that matches the plain text in length. This ensures that patterns cannot be found using frequency analysis. The catch is that once the key has been used even once, it must be thrown away: if the key is reused, it can likely be reverse engineered.
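A minimal one-time pad sketch follows. The text describes the pad in terms of letters; the standard modern formulation below works on bytes with XOR, which has the same property: a fresh random key as long as the message, used once.

```python
import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """Generate a random key as long as the message and XOR the two."""
    key = secrets.token_bytes(len(plaintext))
    cipher = bytes(p ^ k for p, k in zip(plaintext, key))
    return cipher, key

def otp_decrypt(cipher: bytes, key: bytes) -> bytes:
    """XOR with the same key recovers the plain text."""
    return bytes(c ^ k for c, k in zip(cipher, key))

ct, key = otp_encrypt(b"NIGHT AND DAY")
assert otp_decrypt(ct, key) == b"NIGHT AND DAY"
```

The security argument is simple: since every key is equally likely, every plain text of the same length is an equally plausible decryption of the cipher text.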

Data Encryption Standard

The Data Encryption Standard (DES) became the government standard in 1976 for all unclassified documentation. A 64-bit key (56 effective bits plus 8 parity bits) and the plain text are sent through sixteen identical stages of processing known as a Feistel network. Before the main rounds, the block of plain text is divided into two 32-bit halves that are processed alternately; this crisscrossing is known as the Feistel scheme, and is analogous to making "passes" over the data. The Feistel structure ensures that decryption and encryption are very similar processes, the only difference being the order of processing. The similarity keeps the implementation simple and alleviates the need for separate encryption and decryption algorithms.
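The crisscrossing and the encrypt/decrypt symmetry can be shown with a generic Feistel network. This is a sketch, not DES itself: the round function `f` below is a deliberately toy function on 16-bit halves, standing in for DES's real round function.

```python
def feistel_encrypt(left: int, right: int, round_keys, f):
    """Run the two half-blocks through one Feistel round per round key."""
    for k in round_keys:
        left, right = right, left ^ f(right, k)
    return left, right

def feistel_decrypt(left: int, right: int, round_keys, f):
    # decryption is the same network with the round keys in reverse order
    for k in reversed(round_keys):
        right, left = left, right ^ f(left, k)
    return left, right

# toy round function on 16-bit halves (NOT the real DES f-function)
f = lambda half, key: (half * 31 + key) & 0xFFFF
keys = [0x3, 0x8D, 0x5C, 0x7]

l, r = feistel_encrypt(0x1234, 0x5678, keys, f)
assert feistel_decrypt(l, r, keys, f) == (0x1234, 0x5678)
```

Note that `f` is never inverted; the XOR structure undoes each round regardless of what `f` computes, which is why one implementation serves both directions.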

As strong as this algorithm was, it has become insecure and broken. Firstly, DES can only be safely used to encrypt about 32 gigabytes of data under a single key. The Birthday Paradox tells us that after accumulating a number of blocks equal to the square root of the total number possible, there will be an approximately 50% chance of two or more being the same, and information about the message contents will begin to leak. In practice, a 50% chance is already much too large.
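The 32-gigabyte figure falls out of the birthday bound directly: DES blocks are 64 bits (8 bytes), so collisions become likely after about the square root of 2^64 blocks.

```python
import math

block_bits = 64                       # DES block size
bytes_per_block = block_bits // 8     # 8 bytes
collision_point = math.isqrt(2 ** block_bits)   # sqrt(2**64) = 2**32 blocks
safe_bytes = collision_point * bytes_per_block
print(safe_bytes / 2 ** 30)  # 32.0 (gigabytes)
```

The same arithmetic with AES's 128-bit blocks gives 2^64 blocks of 16 bytes each, which is the 256-exabyte figure cited later for AES.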

Secondly, it is open to brute force attacks. Before DES was even adopted, the adequacy of its key size was questioned: the key was ultimately reduced from the 128 bits of the original Lucifer design to an effective 56 bits. Twenty years after DES was named a government standard, RSA Security sponsored a series of contests, offering a $10,000 prize to the first team to break a message encrypted with it. That contest was won by the DESCHALL Project, using idle cycles of thousands of computers across the Internet. Today, DES can be broken in nine days on average, for a cost of about $10,000 using commercially available parts.

Advanced Encryption Standard

In 2002, the Advanced Encryption Standard (AES) became the new government standard, effectively replacing DES, and even going so far as to be considered strong enough to protect the most classified data, or information regarded as Top Secret. The cipher was developed by two cryptographers, Joan Daemen and Vincent Rijmen, and submitted to the AES selection process under the name "Rijndael", a combination of the names of the inventors.

It should be noted that before AES was developed, the cheap way to improve DES was to invoke the DES algorithm three successive times with three different keys, creating Triple DES, a relatively secure cipher against which no known proof-of-concept attack exists. However, DES and Triple DES are at least six times slower than AES in both hardware and software.

AES has a fixed block size of 128 bits and a key size of 128, 192 or 256 bits. The government mandates a key size of 192 or 256 for any Top Secret information, although a 128 bit key is sufficient to protect classified information up to the Secret level. In terms of storage, AES can safely encrypt 256 exabytes of data, or 274,877,906,944 gigabytes.

The Rijndael algorithm uses a substitution-permutation network, as opposed to the Feistel network found in DES. Substitution boxes (S-boxes) are designed to resist cryptanalysis. An S-box takes a number of input bits, m, and transforms them into a number of output bits, n, analogous to a lookup table.
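An S-box lookup is literally a table index. The 4-bit table below is illustrative only (it is not the real AES S-box, which maps bytes to bytes); the point is the mechanism: m input bits index the table, the entry is the n output bits, and the table must be invertible so decryption can undo it.

```python
# Toy 4-bit S-box: 16 entries, one output nibble per input nibble.
# Illustrative values only — not the actual AES table.
SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]
INV_SBOX = [SBOX.index(i) for i in range(16)]

def substitute(nibble: int) -> int:
    """The m input bits simply index the table; the entry is the output."""
    return SBOX[nibble]

# an S-box must be a permutation so that decryption can invert it
assert all(INV_SBOX[substitute(x)] == x for x in range(16))
```

In a full substitution-permutation network, rounds of S-box substitutions alternate with bit permutations and key mixing.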

Although orders of magnitude stronger than DES, AES is not without cracking attempts. The algorithm was designed to use variable-length keys to protect against brute force attacks. Currently, the longest key in use is 256 bits; however, since 128-bit keys are used in so many public products, it is believed that the NSA suspects a fundamental weakness in that key size. The largest feasible brute force attack is against a 64-bit key (following Moore's Law, this could be roughly a 66-bit key as of 2005), so in theory 128 bits is unbreakable.

The real AES weaknesses reside in side channel attacks and cryptographic “breaks.” Side channel attacks are any attacks based on information gained from the physical implementation of a system, rather than theoretical weaknesses in the algorithms. Examples include timing information, power consumption, electromagnetic leaks or even sound. All can provide an extra source of data which can be exploited to break the system. However, many side-channel attacks require considerable technical knowledge of the system on which the cryptography is implemented. Other methods such as coercion or physically breaking into the system are not considered side channel attacks.

An older attack called Differential Fault Analysis looks to exploit computational errors to find the cryptographic keys. This attack requires a fault to happen at a bit in any of the registers during an intermediate stage, which is accomplished by exposing the machine that is running an AES process to physical phenomena such as microwave radiation. In testing, a 128-bit key was extracted by analyzing less than 50 ciphertexts.

A timing attack watches data movement into and out of the CPU and memory. By observing how long it takes to transfer key information, it is sometimes possible to determine how long the key is for that instance. Internal operational stages in many cipher implementations provide partial information about the plain text and key values; some of this information can be inferred from observed timings. However, a timing attack may just watch for the length of time a cryptographic algorithm requires, which is sometimes enough information to be useful in breaking the cipher. A power monitoring attack can provide similar information by observing the power lines to the hardware, especially the CPU.

Flowing currents heat the materials through which they flow. Those materials also continually lose heat to the environment due to other equally fundamental facts of thermodynamics, so there is a continually changing mechanical stress as a result of these heating and cooling effects. That stress appears to be the most significant contributor to low level noise emissions from operating CPUs. If the surface of the CPU chip can be observed, infrared images can also provide information about the code being executed on the CPU, known as a thermal imaging attack.

An XSL attack works by trying to express the entire algorithm as multivariate quadratic polynomials, then using an innovative technique to treat the terms of those polynomials as individual variables. This yields a system of linear equations in a quadratically large number of variables, which must then be solved. Several methods exist for solving such systems, and a solution can result in recovering the key. The attack is notable because it requires only a handful of known plaintexts, whereas previous methods of cryptanalysis often require unrealistically large numbers of them.

All modern encryption employs various mathematical techniques to transform plain text into cipher text, and these techniques are based on transposition and substitution, concepts from another age in human history. When DES was introduced, it was thought to be unbreakable. Advances in the microprocessor have given us computers more powerful than the supercomputers of the 1970s, computing power that was unimaginable at the time. It would be foolish to think that AES will remain unbreakable.

We are in the era of theoretical cryptanalysis. Cipher key lengths have gotten so long that attacks simply can't be implemented because their complexity is too great. Implementation is critical, however: some attacks reveal hidden problems when you try them out, while others prove more efficient than predicted. You can try an attack on simplified versions of a cipher, that is, fewer rounds or a smaller block size, but you can never be sure the attack scales as predicted. This is how differential cryptanalysis was developed: the attack was demonstrated on simpler variants of DES and then scaled to the full cipher. Many of the attacks we use to break algorithms are more often mathematical arguments than computer demonstrations. More testing of the general XSL technique is required, on weaker algorithms and simplified variants of real algorithms, before a conclusion can be drawn about the substance of the attack.

Should XSL be effective, it would affect not only AES but also Serpent, the runner-up in the AES selection process. Serpent shares many qualities with Rijndael, the winner of the contest, but is much more conservative in its approach: the designers specified 32 rounds as insurance against future discoveries.

Public and Private Key Encryption

There are two branches of modern cryptography: public-key encryption and secret-key encryption. In public-key cryptography, messages are exchanged using keys whose security depends on the difficulty of certain mathematical problems, such as factoring the product of two extremely large prime numbers, usually numbers over one hundred digits long. Each participant has a public key for encrypting messages and a private key to decrypt them.

In secret-key encryption, a key is shared by two users, who use it to encrypt plain text into cipher text. By carefully designing the transformation algorithms, every bit of output can be made to depend on every bit of the input. In this model, a 128-bit key gives roughly 10^38 possible key values. Assuming that brute force is employed, the encrypted message should be safe: a billion computers each performing a billion operations per second would require trillions of years to try them all. Even if a side channel attack or encryption "break" were to occur, the key size could always be increased.
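The brute-force estimate is easy to reproduce: 2^128 keys, divided by the assumed rate of a billion machines doing a billion tries per second each.

```python
keys = 2 ** 128                      # ≈ 3.4 × 10**38 possible 128-bit keys
ops_per_second = 10 ** 9 * 10 ** 9   # a billion computers, a billion ops/s each
seconds = keys / ops_per_second
years = seconds / (3600 * 24 * 365)
print(f"{years:.2e}")                # on the order of 10**13 years
```

Even under these generous assumptions the search takes over ten trillion years, far beyond the age of the universe.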

The problem facing secret-key encryption is determining the secret key. In theory, any two users who wish to communicate could agree on a key in advance, but in practice, for many users, this would require secure storage and organization of a large database of agreed-on keys. A possible solution is to agree on a key at the time of communication, but this raises its own problem: if a secure channel hasn't been established, it is difficult to come up with a key that will defeat eavesdroppers. This is known as the key distribution problem.

One solution is to appoint a central key distribution center with which every potential communicating party must register and establish a shared secret key. If Alice wishes to establish a secret key with Bob, her request is sent to the central server. The server can then inform Bob that Alice wishes to communicate, and re-encrypt and re-transmit a key she has sent. The Diffie-Hellman protocol, however, allows a secret key to be agreed upon even without the central server. Its security is based on the assumed difficulty of taking discrete logarithms modulo very large prime numbers. Quantum encryption takes this a step further and provides a way of agreeing on a secret key without making any such assumption.
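The Diffie-Hellman exchange mentioned above fits in a few lines. This is a toy sketch: the prime `p` here is a small 32-bit demo value chosen for illustration; real deployments use primes hundreds of digits long, precisely because the security rests on the difficulty of the discrete logarithm.

```python
import secrets

p = 0xFFFFFFFB   # a small prime (2**32 - 5), for illustration only
g = 5            # public generator

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)   # Alice sends A over the open channel
B = pow(g, b, p)   # Bob sends B over the open channel

# each side combines its own secret with the other's public value
# and arrives at the same shared key; an eavesdropper sees only g, p, A, B
assert pow(B, a, p) == pow(A, b, p)
```

Recovering `a` from `A = g^a mod p` is the discrete logarithm problem the paragraph refers to.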

Quantum Encryption

The difficulty of overcoming a public-key cipher may hold secret keys secure for a decade or more. But the advent of the quantum information era, and in particular the capability of quantum computers to rapidly perform astronomically challenging factorizations, may be a warning of the eventual demise of RSA and other cryptographic schemes.

Classical public-key cryptography relies on the computational difficulty of mathematical problems for key distribution to work; quantum cryptography instead relies on the laws of quantum mechanics. Public-key methods are often used to distribute the secret keys that encrypt and decode a full-length message. Quantum cryptographic devices typically employ individual photons of light and take advantage of either the Heisenberg uncertainty principle or quantum entanglement.

The challenge modern cryptographers face is for sender and receiver to share a key while ensuring that no third party has stolen a copy. The security of public-key cryptography depends on factorization or other hard mathematical problems: it is easy to compute the product of two large numbers but extremely hard to factor that product back into its primes. The popular RSA cipher algorithm, widely deployed in public-key cryptography, relies on factorization. The secret key being transferred between sender and receiver is encrypted with a publicly available key; it can be decrypted only with the private key owned by the receiver, which is derived from the two prime factors.

The most straightforward application of quantum cryptography is the distribution of secret keys. The amount of information that can be transmitted this way is not very large, but it is provably secure. To achieve secure transmission of large amounts of data, existing secret-key algorithms can then be used, with quantum key distribution taking the place of the Diffie-Hellman exchange.

The information exchange consists of observations of quantum states: photons are put into a particular state by the sender and then observed by the recipient. The information is encoded in quantum properties of the photon, and any effort to monitor them is guaranteed to disturb them in a detectable way. The effect arises because in quantum theory some pairs of physical properties are complementary: measuring one disturbs the other. This is the Heisenberg uncertainty principle. Because of this, certain quantum information occurs as conjugates that cannot be measured simultaneously.

Information is encoded by modifying the polarizations of photons, which can be expressed in any of three different bases: rectilinear, circular, and diagonal; observing in one basis randomizes the conjugates. Thus, if Bob and Alice do not agree on what basis of a quantum system they are using, the receiver may inadvertently destroy the sender's information without gaining anything useful. This, however, is also what makes the scheme so secure: a third party cannot read the information, because measuring the polarization of the photons changes the conjugates.

Attacks on Quantum Encryption

The attacks against quantum encryption are different from those we find in traditional encryption. Traditional man-in-the-middle attacks are impossible due to the observer effect. If Alice and Bob are using an entangled photon system and a third party attempts to intercept the stream of photons, the interception will alter the photons to a degree that is easily detected. The photons cannot be re-emitted back to Bob correctly, since the measurement has destroyed information about each photon's full state and its correlations.

Quantum cryptography is still vulnerable to a type of man-in-the-middle attack in which the interceptor, Eve, establishes herself as "Alice" to Bob, and as "Bob" to Alice. Eve then simply performs the negotiations on both sides simultaneously, obtaining two different keys. The Alice-side key is used to decrypt each incoming message, which is re-encrypted using the Bob-side key. This attack fails if both sides can verify each other's identity.

A denial of service attack is possible because a dedicated fiber optic line is required between the two points. The attack can be mounted by simply cutting the line or by attempting to tap it. If the equipment used in quantum cryptography can be tampered with, it could be made to generate keys that were not secure using a random number generator attack.

Lastly, a method of attack has been proposed in which Eve sends a large pulse of light back to Alice in between transmitted photons. Alice's equipment will reflect some of Eve's light. Even if the transmitting equipment is dead black it has some small reflectivity. When Eve's light comes back to her, it is polarized and Eve can gather information about Alice.

Unlike public-key cryptography, quantum cryptography should remain secure when quantum computers arrive on the scene. One way of sending a quantum key between sender and receiver requires that a laser transmit single photons that are polarized in one of two modes. In the first, photons are positioned vertically or horizontally (rectilinear mode); in the second, they are oriented 45 degrees to the left or right of vertical (diagonal mode). In either mode, the opposing positions of the photons represent either a digital 0 or a 1. Alice sends a string of bits, choosing randomly whether to send each photon in the rectilinear or the diagonal mode. Bob makes a similarly random decision about the mode in which to measure each incoming bit. The Heisenberg uncertainty principle dictates that he can measure the bits in only one mode, not both. Only the bits that Bob measured in the same mode as sent by Alice are guaranteed to be in the correct orientation, thus retaining the proper value.
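The sifting step described above can be sketched as a small simulation. This is an idealized model under simplifying assumptions: no eavesdropper, no channel noise, and the basis-mismatch case simply discards the bit (in reality Bob records a random value that the public basis comparison throws away).

```python
import random

def bb84_sift(n: int) -> list[int]:
    """Simulate BB84 sifting: keep only bits where the bases matched."""
    alice_bits  = [random.randint(0, 1) for _ in range(n)]
    alice_bases = [random.choice("RD") for _ in range(n)]  # Rectilinear/Diagonal
    bob_bases   = [random.choice("RD") for _ in range(n)]

    key = []
    for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases):
        if ab == bb:        # same mode: Bob's measurement preserves the bit
            key.append(bit)
        # different mode: the result is random, so the bit is discarded
    return key

key = bb84_sift(1000)
# roughly half the transmitted bits survive as shared key material
```

After sifting, Alice and Bob compare a sample of the surviving bits in public; any disagreement signals an eavesdropper.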

Risks Associated with Quantum Encryption

As quantum technology is still relatively new, some problems are currently being addressed. For example, a problem arises when more than a single photon is inadvertently sent at a time, a common occurrence since no perfect single-photon emitter exists. Pulses of laser light are sent through a series of filters until only one photon squeezes through, and that filtering process isn't perfect; sometimes more than one photon per pulse gets through. If two photons of the same polarization are sent, one of them can be picked up by an eavesdropper while the other goes through unchanged, as if nothing were amiss.

Professor Hoi-Kwong Lo, lead scientist of a University of Toronto study, has begun using the problematic extra photons to dupe eavesdroppers. The light is prepared in such a way that a small percentage of photons are decoys, containing no information at all about the key. The process works because Alice and Bob can compare notes: Alice announces in a separate message which photons are signals and which are decoys. The signal photons contain information about the key, but an eavesdropper has no way of knowing which photons those are.

Distance is also an important factor in quantum encryption. In order to send a quantum-encrypted key farther, the initial light from the laser must be more intense, which means there must be more photons to begin with, thus increasing the likelihood that more than one photon will make it through the filters. MagiQ Technologies is just one company working on this problem and more.

Commercialization of Quantum Encryption

Bob Gelfond was a Wall Street investor turned entrepreneur who realized there were no startups working on quantum computing that he could invest in. Drawing on his past investments, including Amazon.com, and his love of science and technology, he founded MagiQ Technologies, a company focused on quantum information systems, in 1999, and soon began building an expert team around the subject.

It seemed that a lot of [quantum encryption] was already proven in the lab, for example IBM did a lot of research. However, nobody wanted to go forward with anything. The whole thing looked like a good opportunity to transition a product from the lab over to early adopters. At the same time we wanted to be able to capture as much intellectual property as possible. While we are now selling to established customers we continue to work with early adopters, eventually we might move into licensing our technology as well. Right now, cryptography is just the start, in the future we want to be known as the quantum information company that is a leader in quantum cryptography, quantum computing and everything in between.

Bob Gelfond

MagiQ has also begun to solve one of the biggest challenges: distance. Quantum encryption keys are exchanged between two points that must be connected by a fiber-optic line, and optical networks need repeaters between each span of cable, typically 49 miles long, to keep the signal going. Those repeaters present an enormous roadblock: much like an eavesdropper, they have to observe the key in order to pass it down the line.

However, using Verizon's optical network, MagiQ showed that it could successfully keep the key intact over a distance of 87 miles. Gelfond hopes eventually to bring the technology to telco providers as well, and believes it could cover the east coast.

MagiQ already has competitors, including ID Quantique, a Swiss outfit, and Smart Quantum, a company in France. Japanese companies including Toshiba, NEC, Fujitsu, and Nippon Telegraph & Telephone are all said to be conducting their own research on the technology. A system from MagiQ costs between $70,000 and $100,000.

Applications of Quantum Encryption

In December 2005, Laszlo Kish of Texas A&M proposed securing a communications link, like a phone or computer line, with a pair of resistors. By adding electronic noise, or using the natural thermal noise of the resistors (called "Johnson noise"), Kish can prevent eavesdroppers from listening in.

In the field of quantum cryptography, the physics of the subatomic world is harnessed to create a secure, unbreakable communications channel between two points. Kish's research is intriguing because it uses the simpler properties of classical physics to achieve the same results. It works as follows:

Alice and Bob have a two-wire cable between them, and two resistors each (e.g. a 10-ohm and a 1,000-ohm). Alice connects a stochastic voltage generator and a resistor in series to each of the two wires. At each clock tick, both Alice and Bob randomly choose one of their two resistors and put it in the circuit. Then Alice and Bob both measure the current flowing through the circuit. The current is, roughly, inversely proportional to the sum of their two chosen resistances: 20 ohms, 1,010 ohms or 2,000 ohms. Of course, the eavesdropper can measure the same thing.

If Alice and Bob choose the same size resistor, then the eavesdropper knows what they have chosen, so that clock tick is useless for security. But if they choose a different size resistor, the eavesdropper cannot tell whether it is Alice choosing 10 ohms and Bob choosing 1,000 ohms, or the reverse. Of course, Alice and Bob know, because they know which resistor they're choosing. This happens 50% of the time.

Alice and Bob keep only the data from the clock ticks where they choose a different size resistor. From each such clock tick, they can derive one secret key bit, according to who chooses the 10-ohm resistor and who the 1,000-ohm. That's because they know who's choosing which and the eavesdropper does not. Do it enough times and you've got key material for a one-time pad to encrypt the communications link.
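The key-derivation logic above can be sketched as a simulation. This is an idealized model under the assumptions in the text (perfect current measurement, no noise modeling; the 10-ohm and 1,000-ohm values are from the example):

```python
import random

R_LOW, R_HIGH = 10, 1000   # resistor values from the text, in ohms

def kish_tick():
    """One idealized clock tick of the Kish resistor scheme."""
    alice = random.choice((R_LOW, R_HIGH))
    bob = random.choice((R_LOW, R_HIGH))
    # an eavesdropper can infer alice + bob from the measured current,
    # but when the sum is 1,010 ohms she cannot tell who chose which
    if alice == bob:
        return None                      # 20 or 2,000 ohms: tick discarded
    return 1 if alice == R_LOW else 0    # both parties know who chose what

bits = [b for b in (kish_tick() for _ in range(1000)) if b is not None]
# roughly half the ticks yield one usable secret key bit
```

Running many ticks accumulates the one-time-pad material the paragraph describes, at an average rate of one bit per two ticks.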

This system is reminiscent of a project that came out of Bell Labs in the 1940s and was subsequently forgotten. In the early 1970s, however, James Ellis described it as the inspiration for his invention of public-key cryptography.

The event which changed this view was the discovery of a wartime, Bell-Telephone report by an unknown author describing an ingenious idea for secure telephone speech. It proposed that the recipient should mask the sender's speech by adding noise to the line. He could subtract the noise afterwards since he had added it and therefore knew what it was.

There are practical problems with the system, though. The bandwidth the system can handle appears very limited. Over a 1-kilometer link, you can only send at 2,000 bps. Even more basic: it's vulnerable to man-in-the-middle attacks. Someone who can intercept and modify messages in transit can break the security. This means you need an authenticated channel to make it work, a link that guarantees you're talking to the person you think you're talking to. Generally, if you can eavesdrop you can also mount active attacks. But this scheme only defends against passive eavesdropping.

Quantum key distribution shares many of the same problems with this system. If Kish's scheme is secure, it is superior to quantum communication in every respect: price, maintenance, speed, vibration, thermal resistance and so on. The two also share another trait: both are solutions looking for a problem.

In the realm of security, encryption is the one thing we already do pretty well. Software security, network security, operating system security and user interface design are the hard security problems. Replacing AES with this kind of thing won't make anything more secure, because all the other parts of the security system are so much worse.

Conclusion

The only process currently at work to ensure that an encryption method such as AES is not broken is academic research in cryptography. It rests on the belief that if there is a flaw, it will be discovered. This works for simple flaws that many people can discover, for example in the free software community. It certainly does not work for advanced flaws that only experts can discover, and only if they are lucky and trying hard enough. Such flaws are usually discovered when it is too late, and whoever discovers them may choose not to publish. People working for a secret service, or for a multinational company for which it is systematically cheaper to hide an attack than to upgrade its products, will sit on the information as long as they can. Thus, the security of encryption schemes today rests only on the benevolence of people who happen to be smart and lucky enough to uncover a flaw, yet not corrupt enough to hide it.

Publication is usually done in the name of the public interest, against the interests of the people who employ the discoverer. Some will do it for fame, but that incentive fails for people who are already famous, or who dislike or cannot afford publicity. It also fails when years of research are required to achieve the result: everybody is convinced the result cannot be achieved, and nobody has any interest in financing research whose only outcome is a purely negative, even devastating, result.

Strong cryptography will only be possible if there are working mechanisms challenging its security. Yet for encryption such as AES, no such mechanism exists. On top of that, there is no AES cryptographic challenge as there was for DES: it is RSA Security's policy to offer only brute force challenges, and none for breaking the encryption itself. Since current technology is incapable of an exhaustive search of AES keys, we are left wondering whether the mighty AES is in fact already broken.
