Counting on (Quantum) Computers

This post was written for the Q# Advent Calendar 2018. Check out the calendar for other posts!

A simple question

As a father of both a boy and a girl, I'm always fascinated by how they learn new things. A funny situation arose when my daughter started to count. Like most children, she started with one, two, three, but after a while she learned to count to a hundred. Then one day, in the middle of a counting session aiming for a thousand, she suddenly stopped and asked me in her usual sweet questioning voice: "Daddy, how far could one count?". While I would have been thrilled to talk with my daughter about the difference between countable and uncountable infinity, since she was five, I tried to keep it simple.

Counting Human: ~3 Billion

If you look up the current world record in the Guinness Book of World Records, you will find that on the 14th of September 2007, Jeremy Harper finished counting to one million. It took him just under three months, counting for 16 hours per day. But if one assumes that one could count to ten in ten seconds and keep that pace for a human lifetime of about a hundred years, one would reach just over three billion. For a lot of basic problems, that amount of counting is sufficient. In the finance industry, where I mostly operate, we still have countries where paper money is counted by bank tellers. But this got me thinking: how far could one count with other, more modern technologies?
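As a quick sanity check on that figure, a century of non-stop counting at one number per second gives:

$$
100 \,\text{years} \times 365.25 \,\tfrac{\text{days}}{\text{year}} \times 24 \,\tfrac{\text{h}}{\text{day}} \times 3600 \,\tfrac{\text{s}}{\text{h}} \approx 3.16 \times 10^{9} \,\text{s}
$$

So a lifetime of counting indeed tops out just above three billion.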

Counting CPU using C: ~2^58

Most modern CPUs currently run at around 3 GHz. Counting on such a CPU can be as simple as writing a small C program that increments a 64-bit integer in a tight loop. Ignoring details like multiple cores, SIMD, vectorisation, and other compiler optimisations, this means that theoretically such a CPU should be able to count to 3 billion in a second. In other words, every second a processor counts as far as the theoretical limit of a human lifetime. But imagine we run this CPU for a lifetime instead. Running it 24/7 for 100 years would be a challenge, but if we could, it would reach a number "just" under 2^58. That number is still smaller than the maximum of the 64-bit integer we used for the counting loop, so the loop would need even more time to run to completion. But it's not a small number: it's close to the number of grains of sand in the world. Because we have less money in the world than grains of sand, we use CPUs mostly for generic workflows like running payment systems and accounting. But suppose we want to do more; how can we keep counting on?
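A minimal sketch of such a counting loop in C; the volatile qualifier is only there to stop the compiler from optimising the whole loop away:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* 64-bit counter; volatile keeps the compiler from collapsing
       the loop into a single assignment. */
    volatile uint64_t counter = 0;

    /* Count to three billion: roughly one second of work on a
       3 GHz core, ignoring pipeline and memory effects. */
    while (counter < 3000000000ULL) {
        counter++;
    }

    printf("Counted to %llu\n", (unsigned long long)counter);
    return 0;
}
```

In practice this takes a few seconds rather than one, since an increment through volatile costs more than a single cycle; the 3 GHz figure is the theoretical ceiling.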

Counting Graphics Processing Unit (GPU) using GLSL: ~2^65

When computer graphics became more common, dedicated GPUs were added to accelerate the floating-point matrix calculations used in 3D rendering. These calculations are executed by many small, dedicated calculation cores; a modern GPU has thousands of them. Optimisations at the lowest level can be performed by writing shaders. These optimisations come at a cost: development and maintenance can take more effort, since a GPU is more tailored than a generic CPU. Optimised correctly, a GPU program can run a factor of ten to a hundred times faster than a program on a traditional CPU. Theoretically that would allow a GPU to count to 2^65, but since a GPU parallelises the work across its cores, do we still call this counting? In the finance industry we have a lot of risk models for calculating risk. We use GPUs for machine learning, data analysis, and executing these risk models, because for such parallel workloads they are more powerful than CPUs. But suppose we need to go even further; how can we keep counting on?
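A GLSL compute shader doesn't make for a self-contained snippet here, but the partitioning idea can be sketched on the CPU side, with POSIX threads standing in for the GPU's cores; this is an analogy, not actual GPU code:

```c
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define WORKERS 8              /* a real GPU has thousands of cores */
#define CHUNK   100000000ULL   /* how far each worker counts */

/* Each worker counts through its own slice of the range, much like
   each shader invocation handles its own slice of the data. */
static void *count_chunk(void *arg) {
    uint64_t id = (uint64_t)(uintptr_t)arg;
    volatile uint64_t i;
    for (i = id * CHUNK; i < (id + 1) * CHUNK; i++) {
        /* "counting" */
    }
    return NULL;
}

int main(void) {
    pthread_t workers[WORKERS];
    for (uint64_t id = 0; id < WORKERS; id++)
        pthread_create(&workers[id], NULL, count_chunk,
                       (void *)(uintptr_t)id);
    for (int id = 0; id < WORKERS; id++)
        pthread_join(workers[id], NULL);
    printf("Together the workers covered %llu numbers\n",
           (unsigned long long)(WORKERS * CHUNK));
    return 0;
}
```

No single worker ever sees the whole range, which is exactly why one can ask whether this still counts as counting.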

Counting Field Programmable Gate Array (FPGA) using VHDL: ~2^72

When using a Field Programmable Gate Array (FPGA), one is practically designing one's own processor, specific to the problem one is trying to solve. One designs a circuit of dedicated logic blocks, which results in a bitstream. This bitstream activates logic components on the FPGA silicon, building your custom circuit. Since there is a lot of freedom in wiring up a circuit, it is possible to make a calculation much more specific than on a standard GPU core. For example, because the speed of light is limited, even distances on a small chip cause delays. The propagation delay of a long counter can be so long on an FPGA that it's very common to abandon the normal binary representation in favour of Gray codes, where successive values differ in only one bit. If the design isn't comprised mostly of floating-point calculations, a highly optimised FPGA bitstream can process a factor of 10 to 100 faster than a GPU. Because an FPGA is practically a custom processor, many engineers use it as a stepping stone to a real, physically baked chip, which can be even faster. These baked chips are called Application-Specific Integrated Circuits, or ASICs. In the finance industry we mostly use these chips for high-frequency trading, where even the smallest delays have an impact on money or value. But suppose you wanted to go faster and further than this; how would you do that?
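The Gray-code trick is easy to illustrate in software, even though an FPGA would of course implement it in hardware; a small sketch using the standard binary-reflected Gray code:

```c
#include <stdint.h>
#include <stdio.h>

/* Binary-reflected Gray code: n XOR (n >> 1). Successive values
   differ in exactly one bit, so a hardware counter never waits
   for a long carry chain to ripple through. */
static uint32_t to_gray(uint32_t n) {
    return n ^ (n >> 1);
}

static void print_bits(uint32_t v, int width) {
    for (int b = width - 1; b >= 0; b--)
        putchar(((v >> b) & 1) ? '1' : '0');
}

int main(void) {
    for (uint32_t n = 0; n < 8; n++) {
        printf("%u: binary ", n);
        print_bits(n, 3);
        printf("  gray ");
        print_bits(to_gray(n), 3);
        putchar('\n');
    }
    return 0;
}
```

Note how going from 3 to 4 flips three bits in plain binary (011 to 100) but only one bit in Gray code (010 to 110).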

Counting Quantum Computer using Q#: 2^N

Once we reach quantum computers, we are using the fundamentals of nature to compute. Quantum computers use qubits, which can be |0>, |1>, or both at the same time, which we call a superposition. By putting N qubits into superposition, a quantum computer can hold a superposition of 2^N values at the same time. This superposition collapses to one of those values when we perform a measurement. By applying gates to these qubits, we can modify the statistical outcome of that measurement. Within quantum computing, counting normally means finding out how many solutions a problem has (https://en.wikipedia.org/wiki/Quantum_counting_algorithm). But for all practical purposes, a simple Q# application that applies a Hadamard gate to every qubit already creates a superposition of all 2^N values. Since this is a very common start of a quantum algorithm, it wouldn't be a stretch to say that a quantum computer can count to 2^N per execution cycle. A quantum computer wouldn't process all these possible values simultaneously though, because the superposition collapses during measurement. But with the expected quantum computers of more than 100 qubits running in the MHz range, the expectation is still that quantum computers will have computational power way beyond what even FPGAs or ASICs can deliver.

As in other industries, within the finance industry we are just trying to comprehend what this compute power means for us in the short term. It's common to split it up into threats and opportunities. The main threat is the breaking of encryption, which undermines a lot of the security measures financial institutions like banks rely on; a lot of investment therefore goes into implementing post-quantum cryptography. The opportunities are quantum networking for secure communication and quantum computing for machine learning and risk models. In the long term, it's clear that this is a game changer that will change the world as we know it.
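For reference, this is the standard identity behind that claim: applying a Hadamard to each of N qubits prepared in |0> yields the uniform superposition over all 2^N basis states,

$$
H^{\otimes N}\,|0\rangle^{\otimes N} \;=\; \frac{1}{\sqrt{2^N}} \sum_{x=0}^{2^N - 1} |x\rangle
$$

Measuring then returns each of the 2^N values with equal probability 1/2^N, which is exactly why a single run only ever reveals one of them.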

Conclusion

That escalated quickly. We went from a human counting toward eternity to a quantum computer doing it all in the blink of an eye. What is clear is that all of these technologies are powerful and have their own potential and purpose. But it doesn't answer the question of how far my daughter thinks she can count. Her latest record, written down at the time of writing, is 2536, so she thinks that's the limit for now. But given her interest in numbers, it wouldn't surprise me if one day that number turns out to be very high.