
Explaining the upside and downside of D-Wave’s new quantum computer

Chris Lee
D-Wave, a company based in British Columbia, has announced a new version of its quantum annealer: the D-Wave 2000Q. As the name suggests, the number of qubits has increased from about 1,000 to just over 2,000. This is, for D-Wave, an important step on its roadmap to world domination. D-Wave's approach is to increase the number of qubits come hell, high water, or lack of quantumness.
Luckily, hell has stayed south of the border, winter has prevented flooding, and the associated papers indicate that their new board preserves the quantum behavior of the previous generation. But under the hood, it appears that D-Wave has made some pretty significant changes to scale up.

Ice cold Ising models

The D-Wave computer is based on a process called annealing. Annealing involves a series of magnets that are arranged on a grid. The magnetic field of each magnet influences all the other magnets—together, they flip orientation to arrange themselves to minimize the amount of energy stored in the overall magnetic field. You can use the orientation of the magnets to solve problems by controlling how strongly the magnetic field from each magnet affects all the other magnets.
To obtain a solution, you start with lots of energy so the magnets can flip back and forth easily. As you slowly cool, the flipping magnets settle as the overall field reaches lower and lower energetic states, until you freeze the magnets into the lowest energy state. After that, you read the orientation of each magnet, and that is the solution to the problem. You may not believe me, but this works really well—so well that it's modeled using ordinary computers (where it is called simulated annealing) to solve a wide variety of problems.
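To make that concrete, here is a minimal sketch of simulated annealing on a tiny Ising grid. The grid size, coupling strength, and cooling schedule are illustrative choices of mine, not anything taken from D-Wave's hardware or software:

```python
import math
import random

# Minimal simulated annealing on a small Ising grid (illustrative only).
# Each site holds a "magnet" (spin) of +1 or -1; the coupling J favors alignment.

N = 6                      # 6x6 grid of spins
J = 1.0                    # ferromagnetic coupling between nearest neighbors

def energy(spins):
    """Total energy: -J times the sum of products of neighboring spins."""
    e = 0.0
    for i in range(N):
        for j in range(N):
            e -= J * spins[i][j] * spins[(i + 1) % N][j]   # neighbor below
            e -= J * spins[i][j] * spins[i][(j + 1) % N]   # neighbor to the right
    return e

def delta_energy(spins, i, j):
    """Energy change from flipping the spin at (i, j)."""
    nbrs = (spins[(i + 1) % N][j] + spins[(i - 1) % N][j]
            + spins[i][(j + 1) % N] + spins[i][(j - 1) % N])
    return 2.0 * J * spins[i][j] * nbrs

spins = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(N)]

# Slowly lower the temperature; high T lets spins flip freely, low T freezes them.
for T in [3.0, 2.0, 1.0, 0.5, 0.25, 0.1, 0.05]:
    for _ in range(2000):
        i, j = random.randrange(N), random.randrange(N)
        dE = delta_energy(spins, i, j)
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i][j] *= -1

print("final energy:", energy(spins))   # near the minimum, -2*J*N*N, if the anneal worked
```

The pattern is the one described above: at high temperature almost any flip is accepted, but as the temperature drops only flips that lower the energy survive, and the grid freezes into (or near) its lowest-energy arrangement.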
One issue with classical annealing is that the grid of magnets can become trapped in a low-energy valley on the way to the deep valley of the correct solution. Here, cooling becomes your enemy, because the magnets need energy to climb out of the valley and over the barrier, a necessary step for them to find the even lower-energy solution on the other side.
And this is where quantum mechanics plays a role. A classical system has to have the energy to go over an energy barrier, while quantum particles can simply go through an energy barrier. This has several effects: it means that a quantum annealing process can escape valleys more efficiently, making it more likely to find the correct solution. You can also cool the magnets faster, knowing that as long as the barriers are manageable, quantum tunneling will keep you on track.
Most importantly, the physics that allows tunneling also means that the quantum state of neighboring magnets may spatially overlap with each other. As a result, they mix with each other so that they can act collectively. An example: imagine that a group of four neighboring magnets are in a state such that if one, two, or three magnets flip, the energy goes up. But, if all four flip, then the energy goes down. Individual tunneling cannot help here, as all four have to flip at the same time. Classical physics plus tunneling makes this highly unlikely. But if the magnets are in a collective quantum state, this can occur with a much higher probability.
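It is worth seeing how such a configuration can arise. One concrete (and entirely illustrative) way to set it up is four mutually coupled magnets that all start aligned against a weak external field; the coupling and field strengths below are my own choices, picked so the numbers work out:

```python
from itertools import combinations

# Illustrative 4-spin cluster: every pair is ferromagnetically coupled (J), and a
# uniform field (h) favors the orientation opposite to the starting one.
J, h = 1.0, 0.5
spins0 = (+1, +1, +1, +1)          # starting state: all four magnets "up"

def energy(s):
    pair = -J * sum(s[a] * s[b] for a, b in combinations(range(4), 2))
    field = h * sum(s)             # the field pushes every spin toward -1
    return pair + field

e0 = energy(spins0)
for flipped in range(5):
    # lowest energy reachable by flipping exactly `flipped` of the four spins
    best = min(energy(tuple(-s if i in idx else s for i, s in enumerate(spins0)))
               for idx in combinations(range(4), flipped))
    print(f"flip {flipped} spins: energy change {best - e0:+.1f}")
# Flipping 1, 2, or 3 spins raises the energy; flipping all 4 lowers it.
```

Flipping any one, two, or three of the spins costs energy because it breaks the ferromagnetic bonds, but flipping all four keeps the bonds intact while satisfying the field, so only the collective move pays off.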

Too many wires

While quantum annealing sounds pretty simple, there is a hidden architectural challenge: you need a controllable connection between every pair of magnets for them to act collectively. In a grid of four magnets, there are six connections; for 16 magnets, there are 120 connections. I don't even want to think about how many interconnects a 2,000-qubit machine would require.
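For the curious, the count follows from the handshake formula: n magnets need n(n-1)/2 pairwise connections. A quick back-of-the-envelope check (my arithmetic, not a D-Wave figure):

```python
# Number of pairwise couplers needed if every qubit talked to every other one.
def all_to_all_couplers(n):
    return n * (n - 1) // 2

for n in (4, 16, 2000):
    print(n, "qubits ->", all_to_all_couplers(n), "couplers")
# 4 qubits -> 6 couplers, 16 qubits -> 120 couplers, 2000 qubits -> 1999000 couplers
```

That works out to just shy of two million couplers for 2,000 qubits, which is why the hardware does not even try to be fully connected.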
[Figure: Architecture of D-Wave's 2000Q]
D-Wave has chosen to scale qubit numbers at the expense of interconnectivity. And, the latest processor is no different. Qubits are arranged on what is called a chimera graph. Essentially, individual qubits are clustered at three levels, which are illustrated in the diagram above. The first level, which I'll call the inner node, is a group of four qubits connected in a loop, so each qubit has two connections to neighbors. The second level, which I'll call the outer node, is another set of four qubits that are also connected in a loop. The inner and outer nodes are connected to each other, with each qubit in the outer node connecting to two in the inner one. These groupings are connected to neighboring ones, with each qubit on the inner node connected to two qubits on each of the nearest neighboring inner nodes.
This is all arranged in a 12×12 block, giving 1,152 logical qubits, with (though it is not explicitly said) the remaining 880 qubits used to control coupling. There are not enough qubits left to control the coupling between every connected qubit, so I'd guess that there is a control qubit on the lines between neighboring nodes but no control over the coupling within and between inner and outer nodes.
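If you want to poke at a Chimera-style layout yourself, D-Wave's open-source dwave_networkx package can generate these graphs (assuming you have it and networkx installed). The 12×12 lattice of eight-qubit cells below reproduces the 1,152-qubit count mentioned above, though the package's standard cells are wired as complete bipartite K4,4 blocks, so treat this as a rough stand-in for the layout sketched here rather than a blueprint of the 2000Q:

```python
import dwave_networkx as dnx

# A 12x12 lattice of 8-qubit Chimera cells: 12 * 12 * 8 = 1,152 qubits.
G = dnx.chimera_graph(12, 12, 4)

print("qubits (nodes):   ", G.number_of_nodes())
print("couplers (edges): ", G.number_of_edges())
print("max connections per qubit:", max(dict(G.degree()).values()))
# Each qubit talks to at most six others, a far cry from the ~663,000 couplers
# an all-to-all 1,152-qubit machine would need.
```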
Compared to the older D-Wave architecture, the arrangement is different, but no sacrifices in connectivity have been made. In fact, things are better. In the old system, each qubit had four within-node connections and two between-node connections. This is unchanged in the new architecture. However, connections between next-to-nearest neighbors are improved. In the previous architecture, two qubits that are on nodes diagonally next to each other are, at best, three connections away from each other (at worst, they're five connections apart). In the new system, any two qubits that are on a diagonal to each other are only separated by three connections.

Reaping the benefits

In a pair of papers, D-Wave researchers have compared the new architecture to various simulated annealers, including annealers that incorporate quantum properties and make use of GPUs for additional speed-up. The take-home message that D-Wave wants you to hear is that this thing is a processing beast, around 1,000 times faster than a normal computer. This is just a comparison of the annealing time, though. The total time taken is only a factor of 30 better, and it's dominated by the time it takes to initialize the problem and read out the solution. These are also just demonstration problems that are not directly applicable to real-world applications.
The more important conclusion is that if you compare the scaling of time-to-solution with problem size, the D-Wave system scales in an identical manner to a quantum simulated annealing algorithm. There has been some evidence of this before, but the new architecture allows for bigger problems, which makes the scaling clearer.
A second problem that the D-Wave scientists addressed was more subtle: often the ground state is not unique. Many different arrangements of magnets can lead to the same energy, which also happens to be the lowest possible energy. Any particular run with this computer will only find a single ground state—does the processor always find the same ground state, some limited distribution of them, or does it sample them in an unbiased statistical fashion?
In several sets of experiments, D-Wave's people show that the annealer samples in a relatively unbiased fashion. What does this mean? Well, for many quantum problems—for instance, determining the structure and properties of molecules—it is not enough to find a solution. You need to find all of them, and you need to know the barriers between these different solutions before you can understand the stability of each molecular structure.
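To make "unbiased sampling" concrete, here is a toy version of that test. A frustrated three-spin loop has six distinct lowest-energy states, so a fair sampler should land on each of them about one time in six. The code below uses random restarts of a crude classical search purely as a stand-in for the hardware; it is my own illustration, not D-Wave's benchmark:

```python
import random
from collections import Counter
from itertools import product

# Frustrated triangle: three spins, each pair coupled antiferromagnetically (J > 0),
# so no assignment satisfies all three bonds. Six of the eight states tie for the
# lowest energy; a fair sampler should return each of them about 1/6 of the time.
J = 1.0

def energy(s):
    return J * (s[0] * s[1] + s[1] * s[2] + s[0] * s[2])

ground_energy = min(energy(s) for s in product((-1, 1), repeat=3))   # equals -J

counts = Counter()
for _ in range(30_000):
    # crude stand-in for one anneal: random start, then greedy single-spin flips
    s = [random.choice((-1, 1)) for _ in range(3)]
    improved = True
    while improved:
        improved = False
        for i in random.sample(range(3), 3):   # random order avoids a systematic bias
            old = energy(s)
            s[i] *= -1
            if energy(s) < old:
                improved = True                # keep the flip
            else:
                s[i] *= -1                     # undo it
    if energy(s) == ground_energy:
        counts[tuple(s)] += 1

for state, n in sorted(counts.items()):
    print(state, f"{n / sum(counts.values()):.3f}")   # roughly 1/6 ≈ 0.167 each
```

If the histogram were lopsided, with one ground state showing up far more often than the others, you would be mapping only a corner of the solution space; that is exactly the failure mode the D-Wave experiments were checking for.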
We now know that the D-Wave annealer is not inherently biased to find particular solutions. So, if such a problem can be put on a D-Wave annealer, you can be reasonably confident of finding a useful set of solutions.
In a final short white paper (PDF), D-Wave compares power consumption. The company shows that, even when you take into account the cost of cooling to below liquid-helium temperatures, it still gets better performance per watt than the best supercomputers. Furthermore, because the circuits are actually superconducting, scaling the number of qubits will not substantially increase the power consumption. The main cost is holding the board at a few millikelvin; as long as the boards remain relatively compact, that cost will not increase much.

And now for the bad news

I am hypocritically worried on two fronts. First, D-Wave's computers are starting to get large enough to be useful for real-world problems, but there is a substantial computational cost due to the lack of interconnection: problems must be rewritten to cope with the limitations of the architecture. This limitation is severe enough that not all problems can be rewritten to fit the architecture. In other words, this is not a universal computer. I expect this means that all of my favorite problems will remain untouched by D-Wave computers, and that makes me sad.
Unfortunately, I think the architecture is suitable for many problems that I wish would remain untouched. I actually like my secure connection to my bank, and I am discomforted by the fact that quantum computing appears to be getting here ahead of alternative encryption algorithms. I know that if it wasn't D-Wave, it would be someone else, but that thought is even less comforting.
arXiv:1611.04528 (2016)
arXiv:1701.04579 (2017)
