
Using Q# to estimate resources needed for quantum advantage in derivative pricing


Today’s post was written by Dr. William Zeng, Head of Quantum Research at Goldman Sachs, and Dr. Rajiv Krishnakumar, Quantum Computing Researcher with the Goldman Sachs R&D team.

At Goldman Sachs, our Research and Development team is always looking to push forward the cutting edge in technology for financial services. While quantum computing remains at an early stage, the promise of the technology means that we are actively researching where and how it can be applied in the future. A key approach here is to “work backward.” We start with a valuable, well-defined mathematical problem in finance that we can pair with a theoretical computational advantage for quantum computers. We then ask: what would the specifications of a real quantum computer need to be to achieve a practical advantage for this problem? Doing this resource estimation work requires us to fill in practical details and plug gaps in theoretical approaches. It also often uncovers important optimizations that can, for example, reduce the time to solution or the required quantum memory.

Resource estimation for quantum advantage in derivative pricing

One example that we have focused on is the pricing of complex derivatives. Derivatives are financial contracts whose value today is based on some statistical model of what will happen in the future. A common example of a financial derivative is a stock option. When the contract or the statistical model is complicated, it can be computationally expensive to compute the price. Derivatives are so common in finance that even a small improvement in pricing them, or in calculating related quantities, could be very valuable.

Derivatives are a good target for resource estimation because the underlying algorithm that is often used is Monte Carlo, and it’s known that there is a theoretical speedup available to quantum computers for fairly generic Monte Carlo algorithms. The algorithm builds on a subroutine called amplitude estimation and offers a quadratic speedup. For instance, to achieve an accuracy ε in the price, a classical Monte Carlo algorithm needs to run for O(1/ε²) steps, whereas the quantum algorithm runs in only O(1/ε) steps. For example, if you are targeting an accuracy of one part per thousand (ε = 10⁻³), then the quantum algorithm could need only 1,000 steps vs. a classical algorithm that would need 1,000,000.
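To make this scaling concrete, here is a minimal Python sketch comparing the step counts implied by the two asymptotic scalings. Note that this deliberately ignores constant factors, which, as discussed below, matter a great deal in practice:

```python
import math

def steps_needed(epsilon):
    """Compare the number of steps needed to reach accuracy epsilon
    under the textbook scalings: classical Monte Carlo is O(1/eps^2),
    quantum amplitude estimation is O(1/eps). Constant factors and
    per-step costs are ignored here; they matter in practice."""
    classical = math.ceil(1.0 / epsilon**2)
    quantum = math.ceil(1.0 / epsilon)
    return classical, quantum

classical, quantum = steps_needed(1e-3)
print(classical, quantum)  # 1000000 1000
```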

Of course, this is just the theoretical scaling and details need to be filled in to see if this is practical. For example, each step on a quantum computer might take much longer than each step on a classical computer because the clock rate is slower. There also may be other overheads that influence the constant factors in the algorithm.

In 2020, we worked with co-authors at IBM to produce the first end-to-end resource estimate for derivative pricing in our paper “A Threshold for Quantum Advantage in Derivative Pricing.” We used two practical examples of derivative contracts in that paper: an autocallable and a Target Accrual Redemption Forward (TARF). These are examples that are complicated enough to price today that we would like a speedup and that are traded in enough volume that improving their pricing matters. In order to make the resource estimate practical, we introduced some modifications to the algorithm called the re-parameterization method. This resulted in the following estimates for the resources needed for the autocallable example. We include the total resources needed as well as the resources used in an important subroutine of amplitude estimation, the Q operator:

                 Total Resources   Q Operator
 T-count         1.2 x 10^10       1.4M
 T-depth         5.4 x 10^7        9.5k
 Logical Qubits  8k                8k

We include three important figures of merit to describe the resources. The T-count gives the number of T-gate operations needed in the algorithm. In many fault-tolerant quantum computing architectures, the T-gate requires significantly more resources than other operations and so dominates the resources needed by the computation. We also include the T-depth: the number of T-gate operations that need to be executed sequentially. In some architectures, this depth determines the overall runtime of the algorithm, as the remaining T-gates can be parallelized. Finally, we include the amount of quantum memory needed for the algorithm, as measured by the number of logical qubits.
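As a back-of-envelope illustration of why T-depth matters: if sequential T-gates dominate, the wall-clock runtime is roughly the T-depth multiplied by the time per logical T-gate layer. The cycle time used below is a hypothetical value chosen for illustration, not a measured figure for any real architecture:

```python
def runtime_seconds(t_depth, logical_cycle_time_s):
    """Back-of-envelope runtime when sequential T-gates dominate:
    total runtime ~= T-depth x time per logical T-gate layer.
    The cycle time passed in is an assumed, illustrative value."""
    return t_depth * logical_cycle_time_s

# T-depth 5.4e7 from the table above, with an assumed 1 microsecond
# logical cycle time, gives a runtime on the order of a minute:
print(runtime_seconds(5.4e7, 1e-6))  # roughly 54 seconds
```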

Resource estimation with Q#

Resource estimation is challenging as all the details matter. For example, our paper uses fully mixed precision in the implementation, where each fixed-point register is optimized to use the right number of qubits. How can we be sure that we didn’t make mistakes when we can’t run a full implementation?
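To illustrate what mixed precision means in practice, here is a minimal Python sketch of how one might size a fixed-point register for a given value range and target absolute error. The integer/fraction split below is an illustrative assumption, not the exact scheme used in the paper:

```python
import math

def fixed_point_width(max_abs_value, abs_error):
    """Qubits for an unsigned fixed-point register: enough integer
    bits to cover the value range, plus enough fractional bits to
    meet the target absolute error. This split is an illustrative
    assumption, not the paper's exact register layout."""
    int_bits = max(1, math.ceil(math.log2(max_abs_value + 1)))
    frac_bits = math.ceil(math.log2(1.0 / abs_error))
    return int_bits + frac_bits

# e.g. values up to 100 with absolute error 2^-10 need 7 + 10 qubits:
print(fixed_point_width(100, 2**-10))  # 17
```

Optimizing each register's width independently, rather than using one uniform precision, is what keeps the qubit counts down in a mixed-precision implementation.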

In order to take our resource estimate to the next level, we chose to use Q# and work with Mathias Soeken and Martin Roetteler on the Microsoft Azure Quantum team to develop a full Q# implementation of our algorithm. Doing resource estimation this way had many benefits:

  1. Handling complexity: We could use Q#’s features to automatically handle the allocation and management of quantum memory. Further, features like automatically generating controlled and adjoint operations made it easier for us to express the algorithm at a higher level and let the compiler figure out the details.
  2. Using libraries: Much of the resource complexity in our derivative pricing algorithm is used by reversible arithmetic on quantum registers. Q# already has many libraries for fixed-point arithmetic operations that we could import and invoke without needing to re-implement them ourselves.
  3. Finding mistakes: Since much of the code in our implementation is dealing with reversible versions of classical arithmetic, we were able to make use of Q#’s Toffoli simulator to efficiently test portions of our implementation for correctness. While the whole algorithm cannot be directly simulated, we were able to develop unit tests for key components that we could efficiently simulate to build up confidence in our resource counts.
  4. Modular design: The overall algorithm is complicated. Having a concrete implementation lets one focus on optimizing specific functions one at a time and then letting the system tell you the overall effect on resource counts.
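As an illustration of point 3: reversible arithmetic maps computational basis states to basis states, so it can be checked classically without full quantum simulation. The following Python sketch is a classical analogue of such a unit test, not the actual Q# code; it models an in-place ripple-carry adder gate by gate and verifies it exhaustively against integer addition:

```python
def ripple_carry_add(a_bits, b_bits):
    """Classical model of a reversible ripple-carry adder acting on
    little-endian bit lists: (a, b) -> (a, a + b mod 2^n). The sum
    bit mirrors CNOT-style gates and the carry a Toffoli-style
    majority; on basis states these are just classical bit ops."""
    n = len(a_bits)
    b = list(b_bits)
    carry = 0
    for i in range(n):
        s = a_bits[i] ^ b[i] ^ carry
        carry = (a_bits[i] & b[i]) | (carry & (a_bits[i] ^ b[i]))
        b[i] = s
    return list(a_bits), b

def to_bits(x, n):
    return [(x >> i) & 1 for i in range(n)]

def from_bits(bits):
    return sum(bit << i for i, bit in enumerate(bits))

# Exhaustive check over all 4-bit inputs, in the spirit of the
# Toffoli-simulator unit tests described above:
n = 4
for a in range(1 << n):
    for b in range(1 << n):
        _, out = ripple_carry_add(to_bits(a, n), to_bits(b, n))
        assert from_bits(out) == (a + b) % (1 << n)
print("ok")
```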

New updates to the algorithm from using Q#

While implementing the algorithm from our previous work in Q# we made some improvements and modifications.

Firstly, we removed the arcsine and square-root arithmetic operations (Step 3 of Algorithm 4.2) and replaced them with the comparator method (Section 2.2 of this work). This reduces the resources needed for that step.

Secondly, we replaced the piecewise polynomial implementation of the exponential function with a lookup table. A lookup table can further reduce resources compared to reversible fixed-point arithmetic, which can be expensive on quantum computers. This lookup table implementation has been open sourced as part of Q#. The lookup table for the exponential function has a free parameter: the number of “swap” qubits used. In the resource estimates below we quote resources for three different choices of swap qubits; because we have an implementation in Q#, it is straightforward to manage and compute resource requirements for differently parameterized implementations.
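To illustrate the lookup-table idea classically, the sketch below precomputes exp on a uniform grid indexed by a fixed-point input. It only illustrates the basic trade-off between table size (2^k entries for k input bits) and approximation error; it does not reproduce the coherent table-lookup circuit in Q#, whose swap-qubit parameter trades circuit depth against qubit count:

```python
import math

def build_exp_table(input_bits, x_min, x_max):
    """Precompute exp(x) on a uniform grid of 2^input_bits points.
    In the quantum version the table is addressed coherently; this
    classical sketch only illustrates the table itself."""
    size = 1 << input_bits
    step = (x_max - x_min) / (size - 1)
    return [math.exp(x_min + i * step) for i in range(size)]

def exp_lookup(table, x, x_min, x_max):
    """Approximate exp(x) by nearest-grid-point lookup."""
    size = len(table)
    step = (x_max - x_min) / (size - 1)
    i = min(size - 1, max(0, round((x - x_min) / step)))
    return table[i]

table = build_exp_table(8, 0.0, 1.0)   # 256 entries on [0, 1]
approx = exp_lookup(table, 0.5, 0.0, 1.0)
print(abs(approx - math.exp(0.5)))     # small grid error
```

Doubling the number of input bits squares the table size but halves the grid spacing, which is the kind of trade-off the swap-qubit parameter lets us tune in the quantum implementation.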

Resource estimation results

With these updates and the more detailed implementation in Q#, we calculated the resources needed for three key subroutines in derivative pricing and compared them to our previous work. The first is for the Q operator, the key operator in amplitude estimation. The second is for the payoff operator that reversibly implements the derivative payoff. The third is for the exponential function itself, which is the largest resource consumer besides the fundamental amplitude estimation itself.

The benchmark chosen is a three-asset autocallable with 20 time steps. These parameters match real instances that one could find in practice.

Comparisons are made among four configurations:

  • Paper: the original hand estimates from our work in Chakrabarti et al.
  • SWAP10: Q# implementation estimates where the exponential lookup table is set to use 10 swap qubits.
  • SWAP5: Q# implementation estimates where the exponential lookup table is set to use 5 swap qubits.
  • SWAP1: Q# implementation estimates where the exponential lookup table is set to use 1 swap qubit.

Overall Q Operator

                 Paper   SWAP10   SWAP5   SWAP1
 Logical Qubits  8k      3.8M     124k    19.2k

Payoff Operator

                 Paper   SWAP10   SWAP5   SWAP1
 Logical Qubits  1.6k    19.2k    19.2k   19.2k

Fixed Point Exponential

                 Paper   SWAP10   SWAP5   SWAP1
 Logical Qubits  5.4k    3.8M     124k    11.5k

Broadly speaking, our SWAP1 implementation results are close to, but not the same as, our by-hand estimates. This means that our by-hand estimates were sometimes pessimistic (as for the T-count) and other times optimistic, but not by too much.


By working with a Q# implementation we were able to improve the accuracy and flexibility of our resource estimates for quantum advantage in derivative pricing. The implementation also gives us a foundation to more rapidly iterate on updated versions and on other algorithms that use similar subroutines. We look forward to continuing optimization of this algorithm and implementation by taking advantage of new ideas and developments in the Q# ecosystem.

 “Working directly with the Goldman Sachs team has provided a fantastic opportunity to collaborate on resource estimation for an important problem in the finance industry, gain insights to enhance the offerings across the Azure Quantum ecosystem, and share resource estimation techniques and algorithm improvements with the community. It’s exciting to see the impact Q# can enable, from algorithm development to resource estimation and reduction, and it’s been a pleasure working with Goldman Sachs to further quantum impact.”—Dr. Krysta Svore, Distinguished Engineer and VP Quantum Software for Azure Quantum