Coding the Deutsch-Jozsa Algorithm in Qiskit

Avery Parkinson
7 min read · Sep 28, 2020

The Deutsch-Jozsa algorithm was first proposed by David Deutsch and Richard Jozsa in 1992. It was the first algorithm to demonstrate that a quantum computer could outperform any deterministic classical computer, perhaps bringing us one step closer to quantum supremacy: the idea that a quantum computer can perform a task that no classical computer can complete in any feasible amount of time.

The problem which it aims to solve is the following. Let’s say you have a “black box” function (some kind of unknown function) which you know must either be “constant” (returning the same value for every input) or “balanced” (returning 0 for exactly half of all possible inputs and 1 for the other half). You want to figure out which of the two it is.

Now, of course, at first glance this may seem like a pretty impractical problem. And in the grand scheme of things, it probably is.

But, let’s consider how long it would take a classical computer to solve this problem. Our best-case scenario is two queries (i.e. passing two inputs through the function), because if we get two different answers, we know the function must be balanced. But say the classical computer keeps getting the same output. If the function takes n-bit inputs, it has 2^n possible inputs, and the classical computer would have to try just over half of them (2^(n-1) + 1, to be specific) before it can be absolutely certain that the function is constant.

For small values of n, this might not seem like a significant number. But as n increases, 2^(n-1) + 1 grows exponentially.

A quantum computer, on the other hand, can solve this problem using just one query, regardless of how many possible inputs the function has.

So, while the problem the computer is trying to solve is impractical in its own right, it is interesting because it demonstrates the very essence of what we hope quantum computers will one day be able to do.

What is the algorithm?

The algorithm itself is pretty straightforward.

  1. Initialize two quantum registers (sets of qubits). The first consists of n qubits each initialized to |0>. The second just has one qubit initialized to |1>.
  2. Apply Hadamard gates to each qubit in both registers.
  3. Pass the whole state through the quantum oracle (the black box function).
  4. Apply the Hadamard gate to each qubit in the first register.
  5. Measure each qubit in the first register, thus measuring the overall quantum state.
  6. If we measure the state |00…0>, we know it is constant. If we measure anything else, we know it is balanced.

Why does the algorithm work?

First, let’s consider this question intuitively. Let’s say we have a constant function. In this case, applying the function doesn’t change the quantum state’s relative phase; if we were to model the state as a vector, applying the function wouldn’t rotate the vector. The Hadamard gate has the handy quality of being its own inverse, so applying it to a qubit twice in succession returns the qubit to its initial state. So, doing that to all the qubits in the first register maintains the state |000…0>.

On the other hand, when the oracle is balanced, applying it to the state will change its relative phase and hence its state. This means that applying the Hadamard gates a second time will not return it to its initial state of |000…0>.

And so, in the measurement stage, if we get |000…0>, we know our quantum oracle must be constant. If we get anything else, we know it must be balanced.

Mathematically, we can take a look at the state after each step.

First, when we initialize our registers, our state looks like

$$|\psi_0\rangle = |0\rangle^{\otimes n}\,|1\rangle$$

because we have n qubits in state |0> and one in state |1>.

Then, after we apply the first round of Hadamard gates, we have

$$|\psi_1\rangle = \frac{1}{\sqrt{2^{n}}}\sum_{x=0}^{2^{n}-1}|x\rangle \otimes \frac{|0\rangle - |1\rangle}{\sqrt{2}}$$

Third, we pass the state through the quantum oracle, which we represent using f(x). The oracle maps |x>|y> to |x>|y ⊕ f(x)>; because the second register is in the state (|0> - |1>)/√2, the oracle leaves it unchanged and instead attaches a phase of (-1)^f(x) to each |x> (a trick known as phase kickback), giving

$$|\psi_2\rangle = \frac{1}{\sqrt{2^{n}}}\sum_{x=0}^{2^{n}-1}(-1)^{f(x)}|x\rangle \otimes \frac{|0\rangle - |1\rangle}{\sqrt{2}}$$

And finally, after implementing the last set of Hadamard gates (and dropping the second register, which is left unchanged), we get

$$|\psi_3\rangle = \frac{1}{2^{n}}\sum_{y=0}^{2^{n}-1}\left[\sum_{x=0}^{2^{n}-1}(-1)^{f(x)+x\cdot y}\right]|y\rangle$$

where x · y denotes the bitwise dot product of x and y, modulo 2.

Notice that because of the Born rule, the probability of measuring this state as |00…0> is the squared magnitude of the amplitude of |00…0> (the y = 0 term, for which x · y = 0):

$$P(|00\ldots0\rangle) = \left|\frac{1}{2^{n}}\sum_{x=0}^{2^{n}-1}(-1)^{f(x)}\right|^{2}$$

If f(x) is constant, it will either always be 0 or always be 1. If f(x) = 0, then all of the (-1)^f(x) terms evaluate to 1, and the overall result is 1. If f(x) = 1, then all of the (-1)^f(x) terms evaluate to -1, and the overall result is still 1, because we are taking the squared magnitude. So, if f(x) is constant, we have a 100% chance of measuring |00…0>.

On the other hand, if f(x) is balanced, it is 0 for exactly half of the inputs and 1 for the other half, so the (-1)^f(x) terms cancel out exactly and we get 0; that is, a 0% chance of getting |000…0>.

For a more detailed breakdown of the math, check out: https://www.youtube.com/watch?v=qrYVO9-dxLE&list=PLOFEBzvs-VvrXTMy5Y2IqmSaUjfnhvBHR&index=5.

How can we implement the algorithm?

So, we are going to implement this algorithm in Python using IBM’s Qiskit library.

First, we import all the libraries we need, namely NumPy and, of course, Qiskit.

We define the variable n to represent the number of qubits we want in our first register. The function then takes n-bit inputs, so it has 2^n possible inputs.
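Here is a minimal sketch of this setup. The import paths follow the Qiskit 0.x style that was current when this article was written, and the choice of n = 3 is just an example:

```python
import numpy as np
from qiskit import QuantumCircuit, Aer, execute  # Qiskit 0.x-style imports
from qiskit.visualization import plot_histogram

n = 3  # qubits in the first register; the oracle takes n-bit inputs
```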

Then, we define the constant oracle. It is a circuit with n + 1 qubits (the extra qubit accounts for the second register).

Then we generate a random integer that is either 0 or 1. If it is 1 (which happens 50% of the time), we add an X gate to the qubit in the second register, so the oracle flips the output for every input; either way, the function returns the same value no matter the input.
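A sketch of such a constant oracle, continuing from the snippet above (the name const_oracle is my own):

```python
# Constant oracle: either f(x) = 0 for all x (do nothing) or f(x) = 1
# for all x (unconditionally flip the output qubit), chosen at random.
const_oracle = QuantumCircuit(n + 1)

if np.random.randint(2) == 1:
    const_oracle.x(n)  # qubit n is the single qubit of the second register
```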

Then, we define the balanced oracle. A really simple balanced oracle just applies CNOT gates with each of the qubits in the first register as controls and the qubit in the second register as the target (the qubit the answer is written onto). We can then modify this by adding X gates to any qubit in the first register before the CNOT gates, so long as we add an X gate to the same qubit after the CNOT gates.

Our “b_str” is a variable which dictates which qubits we want to wrap in the X gates. Each character in the string corresponds to one of the qubits. If the character is 1, we add an X gate to that qubit, and if it’s a 0, we leave it be.

Then, we add a barrier (just to keep things neat), add the CNOT gates, add another barrier and repeat the exact same process in order to add X gates to the same qubits as before.
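Putting those steps together, a sketch of the balanced oracle might look like this (the pattern "101" in b_str is an arbitrary example for n = 3):

```python
# Balanced oracle: CNOTs from every input qubit onto the output qubit
# compute the XOR of the inputs, which is a balanced function. Wrapping
# chosen input qubits in X gates gives a different but still balanced f.
balanced_oracle = QuantumCircuit(n + 1)
b_str = "101"  # a 1 means "wrap this qubit in X gates"

for i, bit in enumerate(b_str):  # leading X gates
    if bit == "1":
        balanced_oracle.x(i)
balanced_oracle.barrier()

for i in range(n):  # each input qubit controls the output qubit
    balanced_oracle.cx(i, n)
balanced_oracle.barrier()

for i, bit in enumerate(b_str):  # undo the leading X gates
    if bit == "1":
        balanced_oracle.x(i)
```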

Now, we move on to creating the rest of the circuit, which we call dj_circuit. We define it to have n + 1 qubits and n classical bits (for measuring each of the n qubits in the first register).

First off, we add the Hadamard gate to each qubit in the first register.

Then, since Qiskit initializes each qubit to the |0> state by default, we need to change the last qubit to |1> using the X gate. Then, we add a Hadamard gate to it.
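A sketch of the circuit so far, covering these steps (dj_circuit matches the name used in the article):

```python
# Main circuit: n input qubits plus one output qubit, and n classical
# bits to hold the measurement results.
dj_circuit = QuantumCircuit(n + 1, n)

for i in range(n):
    dj_circuit.h(i)  # uniform superposition over all n-bit inputs

dj_circuit.x(n)  # |0> -> |1>
dj_circuit.h(n)  # |1> -> |->, which makes the phase kickback work
```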

Then we add the balanced oracle to the circuit. We could also add in the constant oracle instead, or create a function which randomly adds in either oracle. This is just to demonstrate that we get the answer we expect.

Then, on the other side of the oracle, we add Hadamard gates to each qubit in the first register and measure them all, with the ith qubit being mapped to the ith classical bit.
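A sketch of these final steps (compose is one way to append one circuit to another; the original may have used a different method):

```python
# Append the oracle; swapping in const_oracle here works the same way
dj_circuit = dj_circuit.compose(balanced_oracle)

for i in range(n):
    dj_circuit.h(i)  # second round of Hadamards on the first register

dj_circuit.measure(range(n), range(n))  # qubit i -> classical bit i
```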

We can then use IBM’s simulator to run the circuit 1024 times and plot the answers in a histogram.
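For example, with the Qiskit 0.x-era qasm simulator (newer Qiskit versions replace Aer.get_backend and execute with AerSimulator and run):

```python
backend = Aer.get_backend('qasm_simulator')
counts = execute(dj_circuit, backend, shots=1024).result().get_counts()
plot_histogram(counts)  # for the balanced oracle, '000' never appears
```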

And as we can see, we never measure |000…0>, so the implementation correctly found that the function was balanced! Technically, we could run this just once rather than 1024 times and get the same answer; the repetition is just to demonstrate that the result isn’t down to chance but holds every time.
