
Quantum Computers


It was more than four decades ago, before Intel had developed its first microprocessor, that Gordon E. Moore predicted that the number of transistors per chip would double roughly every 18 months, and that the cost per function would therefore drop by half over the same period (Moore’s Law).
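
As a rough illustration of this doubling rule (an illustrative calculation of my own, not something taken from Moore's original paper), the transistor count after a given number of years can be estimated by multiplying a starting count by 2 raised to the number of 18-month periods that have passed:

    def transistors_after(years, initial_count, doubling_period_years=1.5):
        # Estimate transistor count assuming one doubling every 18 months (1.5 years).
        return initial_count * 2 ** (years / doubling_period_years)

    # Starting from the roughly 2,300 transistors of Intel's first microprocessor,
    # the rule predicts about 2.36 million transistors after 15 years of doubling.
    print(transistors_after(15, 2300))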

To date, Moore’s law has proved to be very accurate. It is generally accepted that the rate of improvement his law defines will continue until around the year 2012. However, it is clear that physical limitations will increasingly jeopardize such improvements.

Quantum computers are not just a new way of miniaturizing computers. The concept is something completely new, based on quantum mechanics (the way matter behaves at the atomic and subatomic levels). Quite possibly, this new type of computer will become so powerful that it will be able to break the cryptographic codes that presently protect financial and military secrets. On the other hand, information will be more protected than ever. This is why the technology is so interesting to security chiefs, bank directors, and the upper ranks of the military.

The revolution has just started. Quantum computers will boost existing technologies and at the same time give birth to new ones, changing the way we think about information.

Quantum Computers

Quantum theory was first formulated by Max Planck. According to his theory, electromagnetic radiation is emitted in small bundles of waves called “quanta”. The smallest possible unit of radiation, the photon, is also called a “quantum” (Quantum Theory). However, the idea of quantum computers did not appear until the late 1970s and early 1980s, when electronic engineers saw chips becoming smaller and smaller and started to wonder what would happen if a chip eventually became as small as an atom.

At everyday scales, the laws of classical physics are able to explain the world we see. But going deeper, things get strange and Newton’s laws are out of their depth. Quantum mechanics is often explained in terms of atomic particles that are able to be in two places at the same time. According to Niels Bohr, one of the first thinkers in this field, “Anyone who says they can contemplate quantum mechanics without becoming dizzy has not understood it.”

The promise lies in the strangeness. Quantum computers exploit superposition, an effect which essentially means that the same particle can be located in different places at the same time. To understand this concept, it is useful to think about how binary computers work. In a classical computer, the smallest unit of data is the bit. A bit is registered when a switch closes and an electrical current flows, or when the switch opens and the current is shut off. Under these conditions, current means 1 and no current means 0. Long combinations of 0s and 1s make up all the information stored and processed inside a computer (Greene).
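
As a small illustration of this point (my own example, not one from the sources cited above), even ordinary text reduces to such strings of 0s and 1s, with each character stored as a fixed-width bit pattern:

    # Print each character of a short string as an 8-bit pattern of 0s and 1s.
    for character in "Hi":
        print(character, format(ord(character), "08b"))
    # H 01001000
    # i 01101001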

A quantum computer, however, does not have electrical switches inside. Instead, there are many atoms that can be found in different states. The nuclei of these atoms may be spinning in one direction or another, and the axis of spin can point up, down, or in any direction in between. And here is the key point: a quantum bit (also called a qubit) can be 0, 1, or both at the same time (Quantum Computer).
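
One common way to make the qubit idea concrete (a simplified sketch using the standard textbook description, not code for any real quantum machine) is to describe a qubit by two amplitudes, one for 0 and one for 1, whose squared magnitudes sum to 1; measuring the qubit then yields 0 or 1 with those probabilities:

    import math
    import random

    # A qubit in equal superposition: equal amplitudes for reading 0 and for reading 1.
    alpha = 1 / math.sqrt(2)   # amplitude for the outcome 0
    beta = 1 / math.sqrt(2)    # amplitude for the outcome 1

    # The squared magnitudes must sum to 1 (they are probabilities).
    assert abs(alpha**2 + beta**2 - 1.0) < 1e-9

    def measure():
        # Collapse the superposition: return 0 or 1 with the corresponding probability.
        return 0 if random.random() < alpha**2 else 1

    print([measure() for _ in range(10)])   # a mix of 0s and 1s, roughly half each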

Let’s try to clarify this idea with an example. A three-bit classical computer can store three bits in one of the following eight combinations: 000, 001, 010, 011, 100, 101, 110, or 111. A three-qubit quantum computer, however, can hold all eight combinations at the same time, and as a result it is eight times more powerful. It is estimated that a 30-qubit quantum computer would be around three times more powerful than today’s fastest supercomputers, which can process trillions of operations per second (Quantum Computing).
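
The scaling behind this claim can be checked directly (a back-of-the-envelope sketch, assuming one amplitude per bit pattern): an n-qubit register is described by 2 to the power n amplitudes, one for each possible combination, and all of them are carried along at once.

    # Number of simultaneous bit patterns an n-qubit register describes.
    for n in (3, 10, 30):
        print(n, "qubits ->", 2 ** n, "combinations at once")
    # 3 qubits -> 8 combinations at once
    # 10 qubits -> 1024 combinations at once
    # 30 qubits -> 1073741824 combinations at once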

Once the idea of quantum computers was first floated, scientists continued working on the hard task of understanding and describing how this new type of computer might work. However, all of this work was performed at the theoretical level, and it was not until 1994 that Peter Shor, a computer scientist at AT&T, mathematically proved that a quantum computer, if one existed, would be capable of factoring very large numbers (numbers with 400 digits) in just a few days. Although that may seem like a long time for a computer, it is actually very little when compared to the billions of years it would take a conventional one.
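
To see why factoring is so hard classically (a toy sketch of naive trial division, not Shor’s algorithm itself, which requires quantum hardware), the obvious method must test on the order of the square root of N candidate divisors; for a 400-digit number that square root has about 200 digits, which is why conventional machines need astronomical amounts of time, whereas Shor showed a quantum computer could factor in time polynomial in the number of digits.

    def trial_division(n):
        # Naive classical factoring: try every divisor up to the square root of n.
        factors = []
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    print(trial_division(15))   # [3, 5] - instant for small numbers
    # For a 400-digit number there are roughly 10**200 candidate divisors, so even
    # checking a billion billion of them per second would take far longer than the
    # age of the universe; this is the gap Shor's quantum algorithm closes.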

This discovery set off mental explosions in research centres, universities, and organizations such as the U.S. National Security Agency all around the world, because the secret
