I gave this talk at PyCon Estonia recently, and thought it might be interesting to someone here. It’s an overview of what quantum computing is in principle, how a particular technology (superconducting QC) attempts to implement it, and what the full stack looks like (from chip fabrication to the high-level algorithm runtime). There’s a live demo at the end.
The presentation is intended for a general audience; no physics or science knowledge is required. For the CS/programming crowd, it may be interesting to recognize challenges similar to those faced by the pioneers of classical computing in the 60s and 70s: scheduling, compilation, memory management, etc.
I’ve been working at a quantum computing startup for a few years now, and this was my chance to explain the kind of work we do when building the full stack, from our own chip design and fabrication to a custom “OS”, SDKs, and the algorithm execution runtime.
Happy to answer any questions! (Note: I’m not a physicist, and my responsibilities mostly lie in quantum control software, quantum computation, and system integration.)
Rough outline of the contents of the video:
Nice talk! I worked through Nielsen and Chuang a while back, and was pleasantly surprised to see that Nielsen is one of the authors of quantum.country. (I skimmed it, and it looks like the site is just the first chapter of N+C.)
A couple of questions: You said that current QCs can go up to about 100 qubits. Are those physical qubits or error-corrected qubits (built from many physical qubits)? And are they general purpose (can you apply any gate between any pair of qubits, for example), or are they limited (I’ve seen systems where you can use a gate between qubit 0 and qubit 1 and between qubit 1 and qubit 2, but there is no gate between qubit 0 and qubit 2)?
Finally, since the N+C book came out 20 years ago, my knowledge of QC is probably dated. Since you work in the space, what’s new? Anything really cool beyond Shor and Grover?
Thanks!
The qubit numbers I mention throughout the talk are physical qubits. Currently, we as an industry are at the level of a handful of error-corrected qubits, at most. There isn’t a direct mapping of X physical == Y logical, because it depends on the choice and implementation of the error-correction scheme, and it’s an area of active research. Roughly, we think that ~1000 physical qubits would yield 5-10 logical qubits. The roadmaps of most hardware manufacturers target this amount within the next few years. Some claim huge jumps before 2030.
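For rough intuition only (my back-of-the-envelope numbers, not any official roadmap): if you assume a surface code, a logical qubit costs on the order of 2*d^2 physical qubits at code distance d, which lands right in that 5-10 range:

```python
# Back-of-the-envelope only: assumes a surface code, where one logical qubit
# costs roughly 2 * d**2 physical qubits at code distance d. Real overheads
# depend heavily on the scheme and the physical error rates.
def logical_from_physical(physical_qubits: int, d: int) -> int:
    return physical_qubits // (2 * d**2)

print(logical_from_physical(1000, 9))  # -> 6, consistent with "5-10 logical"
```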
In typical superconducting QCs, qubit connectivity is a big issue, and it’s fundamentally different from, e.g., ion-trap QCs, where almost arbitrary many-to-many connectivity is possible. (The advantage of superconducting over ion traps is speed: the former is about 1000 times faster than the latter.) A connection between two qubits requires a physical component (in our case, a so-called tunable coupler). We can fit at most about 10 (usually 4) tunable couplers per qubit, which is why most chips are arranged as a lattice like this:
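As a toy sketch of that kind of connectivity (illustrative only, not our actual chip layout), here is a square grid where each qubit couples only to its neighbors:

```python
# Toy square-lattice connectivity: each qubit couples only to its grid
# neighbors, so no qubit needs more than 4 couplers.
def square_lattice_edges(rows: int, cols: int) -> list[tuple[int, int]]:
    edges = []
    for r in range(rows):
        for c in range(cols):
            q = r * cols + c
            if c + 1 < cols:
                edges.append((q, q + 1))     # right neighbor
            if r + 1 < rows:
                edges.append((q, q + cols))  # neighbor below
    return edges

print(square_lattice_edges(2, 2))  # [(0, 1), (0, 2), (1, 3), (2, 3)]
```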
So yeah, you can’t just have 2 arbitrary qubits connected. In practice, when you’re writing an algorithm in the form of a quantum circuit, you can treat any 2 qubits as connected, because during pre-processing and compilation our software adds swap operations. In a lattice like the one above, you can apply a 2-qubit gate to qubits Q1 and Q3, and the preprocessor would, e.g., swap the values of Q1 and Q2, apply the operation between Q2 and Q3, then swap Q2 and Q1 back. This is similar to how data gets shuttled around in CPU memory. Of course, this adds overhead, reduces the quality of results, and increases the overall execution time, which is very limited (nanoseconds; though in some circumstances we’ve reached a millisecond).
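A minimal sketch of that routing step, assuming Qiskit is available (not our own stack, just a convenient open-source example): a 2-qubit gate between unconnected qubits gets SWAPs inserted so it can run on neighbors, mirroring the Q1/Q2/Q3 example above.

```python
# Assumes Qiskit is installed; a routed circuit acquires SWAP operations.
from qiskit import QuantumCircuit, transpile
from qiskit.transpiler import CouplingMap

coupling = CouplingMap.from_line(3)  # linear chip: Q0 - Q1 - Q2, no Q0-Q2 link

qc = QuantumCircuit(3)
qc.cx(0, 2)  # the logical circuit asks for a gate between unconnected qubits

routed = transpile(qc, coupling_map=coupling, optimization_level=0)
print(routed)              # the routed circuit now contains SWAPs
print(routed.count_ops())  # e.g. {'cx': 1, 'swap': 1}
```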
If your circuit needs an operation between Q1 and Q9, the preprocessor has to make multiple swaps, iteratively moving the state back and forth along a chain of qubits. In practice, another preprocessing step (routing) determines the best initial mapping between your circuit’s qubits and the actual physical qubits, so that if your circuit requires lots of interactions between Q1 and Q9, your Q1 will more likely be mapped to, say, physical Q8. This routing and optimization problem is NP-hard.
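Here is a toy version of the idea, a greedy heuristic I’m making up purely for illustration (real routers like SABRE are far more sophisticated): place the most frequently interacting logical pairs on adjacent physical qubits first.

```python
from collections import Counter

def greedy_initial_layout(gates, edges):
    """gates: (logical_a, logical_b) pairs in the circuit;
    edges: adjacent (physical_a, physical_b) pairs on the chip."""
    counts = Counter(tuple(sorted(g)) for g in gates)
    mapping, used = {}, set()
    for (a, b), _ in counts.most_common():
        if a in mapping or b in mapping:
            continue  # a real router also handles partial placements
        for pa, pb in edges:
            if pa not in used and pb not in used:
                mapping[a], mapping[b] = pa, pb
                used.update((pa, pb))
                break
    return mapping

# The busiest pair (1, 9) gets placed on adjacent hardware first.
print(greedy_initial_layout([(1, 9), (1, 9), (9, 1), (2, 3)],
                            [(0, 1), (1, 2), (2, 3)]))
# -> {1: 0, 9: 1, 2: 2, 3: 3}
```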
At IQM, where I work, we’re exploring a different architecture where qubits are connected via a so-called computational resonator. [Some info here](https://www.meetiqm.com/newsroom/press-releases/iqm-to-deliver-czech-republic-first-quantum-computer-with-unique-star-topology). You can think of it as a bus which allows all-to-all connectivity. It still requires the QC to move the state, but it’s always at most 2 moves to apply a 2-qubit gate between any two qubits.
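To make the “at most 2 moves” point concrete, here is a loose model (my simplification, not IQM’s actual programming model) of the star as a coupling graph with the resonator as a hub node:

```python
# Loose model: node 0 stands in for the computational resonator, nodes 1..6
# are qubits. Every qubit touches the hub, so any pair is 2 hops apart.
from qiskit.transpiler import CouplingMap

star = CouplingMap([[0, i] for i in range(1, 7)])
print(star.distance(2, 5))  # -> 2: into the resonator and out again
```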
Some cool research is happening in error-correction algorithms; check out qLDPC codes and other approaches. In applications, there are exciting things in various optimization and simulation problems, like battery chemistry simulations. Overall, I think the best and most promising applications always boil down to simulating quantum systems.
Thanks for the long response!
The computational resonator looks really cool. It seems like a good solution to the connectivity problem. I’m having trouble finding more info beyond that one press release, but I imagine the topology is probably 24 qubits arranged in a 24-point star, with 2 qubits in the center that have connectivity to each of the others. Then, when you want an interaction between any two, you swap into the center, do the gate, and swap back. In a computing analogy, the center is like the registers/ALU and the other qubits are the RAM.
Am I close? If so, do you think it will scale to larger numbers of qubits? I imagine there are physical limits to how many qubits can connect to the “center”. I will check those links out, thanks again.
Currently, the only publicly available chip with a computational resonator has 6 qubits (IQM Star 6), and it can be accessed via the cloud. Interestingly, you can disable some restrictions and access the higher-level states of the resonator.
I can’t share much about current developments, but you are close, in the sense that there are various configurations for balancing fabrication, connectivity, and fidelity. One can imagine building small stars and connecting them to other stars with longer-range couplers. There are still some issues because the chip is ultimately a 2D lattice, while for some error-correction techniques the ideal topology is actually a torus; but I don’t think many companies can build such complex 3D chips today.
I think there’s definitely potential. Computational resonators can be made relatively long, and that’s the main factor. Connecting many qubits to one is not, AFAIK, the biggest issue, since it’s basically the same coupling as between any 2 qubits in a normal square-lattice topology.
BTW, I made a mistake in my previous comment. I said “…increases the overall time for execution, which is very limited (nanoseconds…)”, but I meant to say “microseconds”. Nanoseconds is the scale of execution of a single operation (gate). Currently, we can execute hundreds and hundreds of gates reliably.