The challenge for Quantum Coalition Hackathon 2021

Overview

Qchack 2021 Google Challenge

This is a challenge for the brave 2021 qchack.io participants.

Instructions

Hello, intrepid qchacker, welcome to the <G|oogl|e> challenge!

Background

In quantum computing, the gate model plays a central role: today most quantum algorithm designers express their algorithms as sequences of quantum gates. Quantum hardware, like the Google Sycamore device, can execute only certain types of gates, and only one- and two-qubit gates. This means that multi-qubit operations need to be compiled down to the supported gates. A further complication is that devices have connectivity restrictions - not all qubits are connected to each other!
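
For a quick feel of what "compiling down" means, here is an illustrative snippet (not part of the challenge itself) showing Cirq rewriting a three-qubit gate into one- and two-qubit gates:

>>> import cirq
>>> a, b, c = cirq.LineQubit.range(3)
>>> ops = cirq.decompose(cirq.CCZ(a, b, c))   # rewrite CCZ into 1- and 2-qubit gates
>>> all(len(op.qubits) <= 2 for op in ops)
True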

Challenge

Your challenge is to compile an n-qubit (1<=n<=8) unitary matrix to a list of operations that can run on the Google Sycamore device - i.e. to implement this method:

from typing import List, Tuple

import numpy as np
import cirq

def matrix_to_sycamore_operations(target_qubits: List[cirq.GridQubit], matrix: np.ndarray) -> Tuple[cirq.OP_TREE, List[cirq.GridQubit]]:
    """ A method to convert a unitary matrix to a list of Sycamore operations. 
    
    This method will return a list of `cirq.Operation`s using the qubits and (optionally) ancilla 
    qubits to implement the unitary matrix `matrix` on the target qubits `qubits`. 
    The operations are also supported by `cirq.google.gate_sets.SYC_GATESET`. 

    Args:
        target_qubits: list of qubits the returned operations will act on. The qubit order defined by the list 
            is assumed to be used by the operations to implement `matrix`.
        matrix: a matrix that is guaranteed to be unitary and of size (2**len(qs), 2**len(qs)).
    Returns: 
        A tuple of operations and ancilla qubits allocated. 
            Operations: In case the matrix is supported, a list of operations `ops` is returned. 
                `ops` acts on `qs` qubits and for which `cirq.unitary(ops)` is equal to `matrix` up 
                 to certain tolerance. In case the matrix is not supported, it might return NotImplemented to 
                 reduce the noise in the judge output.
            Ancilla qubits: In case ancilla qubits are allocated a list of ancilla qubits. Otherwise 
                an empty list.
        .   
    """
    return NotImplemented, []
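
As a starting point - a hedged sketch only, not the intended solution - you could make the skeleton execute and handle the easy special cases, leaving everything else as NotImplemented. The sketch below assumes `cirq.single_qubit_matrix_to_phased_x_z` is available in your Cirq version:

from typing import List, Tuple

import numpy as np
import cirq

def matrix_to_sycamore_operations(
    target_qubits: List[cirq.GridQubit], matrix: np.ndarray
) -> Tuple[cirq.OP_TREE, List[cirq.GridQubit]]:
    # Identity: an empty operation list already implements it, no ancillae needed.
    if np.allclose(matrix, np.eye(matrix.shape[0])):
        return [], []
    # Single-qubit case: Cirq can turn any 2x2 unitary into PhasedX/Z gates,
    # which should fall within the supported single-qubit family
    # (worth double-checking with cirq.google.Sycamore.validate_operation).
    if len(target_qubits) == 1:
        gates = cirq.single_qubit_matrix_to_phased_x_z(matrix)
        return [g.on(target_qubits[0]) for g in gates], []
    # Larger matrices are left unimplemented in this sketch.
    return NotImplemented, []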

Input unitaries

Your method will be tested against different inputs: 1-8 qubit unitaries. Some of the unitaries will have structure that you should leverage to create efficient circuits - this can earn you more points! Some of them will be completely random.

Output

Your method should return a tuple of two things:

  • list of operations - these operations, which must use the Sycamore gateset (see more details in the additional information below), will be used to create a cirq.Circuit, whose unitary matrix we'll compare with the input as explained below.
  • ancilla qubits (advanced) - the list of ancilla qubits you allocated, e.g. [] means you didn't allocate any ancillae, while [cirq.GridQubit(2,3)] means you allocated a single ancilla qubit

Ancilla qubits

Some unitaries will benefit from using ancilla qubits. You can allocate ancilla qubits as long as the total number of qubits (targets plus ancillae) is at most 10.

The expected unitary is the tensor product of the input matrix and the identity on the ancilla qubits:

expected_unitary = cirq.kron(input, np.eye(2 ** len(ancillae)))

Where input is the original input matrix.
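
For instance, a small illustrative check, using a hypothetical single-qubit X as the input and one ancilla:

>>> import numpy as np
>>> import cirq
>>> input_unitary = cirq.unitary(cirq.X)            # hypothetical 2x2 input
>>> ancillae = [cirq.GridQubit(2, 3)]               # one allocated ancilla
>>> expected_unitary = cirq.kron(input_unitary, np.eye(2 ** len(ancillae)))
>>> expected_unitary.shape                          # X tensored with a 2x2 identity
(4, 4)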

Qubit order

Qubit ordering is determined by the passed-in target_qubits list and the order of the returned ancilla qubits. This ordering is passed to the circuit.unitary() method, i.e. given your solution we will evaluate the unitary matrix of your operations the following way:

response, ancillae = solution.matrix_to_sycamore_operations(target_qubits, input_unitary)
response_circuit = cirq.Circuit(response)
response_unitary = response_circuit.unitary(
                        qubit_order=target_qubits + ancillae,
                        qubits_that_should_be_present=target_qubits + ancillae
                   )

Recall why this matters: for example, on two qubits, with the regular big-endian order, the CNOT unitary is:

>>> a,b = cirq.LineQubit.range(2)
>>> cirq.Circuit(cirq.CNOT(a,b)).unitary(qubit_order=[a,b], qubits_that_should_be_present=[a,b])
array([[1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
       [0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j],
       [0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j],
       [0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j]])
>>> cirq.Circuit(cirq.CNOT(a,b)).unitary(qubit_order=[b,a], qubits_that_should_be_present=[a,b])
array([[1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
       [0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j],
       [0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j],
       [0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j]])
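
If you want to double-check your own response the same way before submitting, a hedged self-check sketch (the helper name check_solution is ours, not the judge's) could look like this:

import numpy as np
import cirq

def check_solution(ops, target_qubits, ancillae, matrix, atol=1e-4):
    """Rebuilds the circuit and compares its unitary with the expected one."""
    circuit = cirq.Circuit(ops)
    actual = circuit.unitary(
        qubit_order=target_qubits + ancillae,
        qubits_that_should_be_present=target_qubits + ancillae,
    )
    # Expected unitary: the input matrix tensored with identity on the ancillae.
    expected = cirq.kron(matrix, np.eye(2 ** len(ancillae)))
    # Note: the judge's exact comparison may differ (e.g. global phase handling).
    return cirq.allclose_up_to_global_phase(actual, expected, atol=atol)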

Scoring

We'll score each input separately, and sum them up.

Scoring per input matrix:

  • your code MUST use the qubits that were passed in
  • your code MUST execute, otherwise no points are given
  • you MUST provide only 1- and 2-qubit gates, otherwise no points are given
  • the diamond norm distance between your response and the input unitary MUST be small (at most 1e-4)
  • the fewer two-qubit operations you return, the more points you get (see the counting sketch after this list)
  • you get extra points if the operations can run on Sycamore
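
A rough self-scoring sketch (the judge's real logic lives in judge/judge_lib.py; the helper names below are our own illustrative ones):

import cirq

def two_qubit_op_count(circuit: cirq.Circuit) -> int:
    # Fewer two-qubit operations means more points.
    return sum(1 for op in circuit.all_operations() if len(op.qubits) == 2)

def runs_on_sycamore(circuit: cirq.Circuit) -> bool:
    # Extra points if every operation validates against the Sycamore device.
    try:
        cirq.google.Sycamore.validate_circuit(circuit)
        return True
    except ValueError:
        return False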

Some additional information

Google's Sycamore device is a 54-qubit quantum chip:

>>> print(cirq.google.Sycamore)
                                             (0, 5)───(0, 6)
                                             │        │
                                             │        │
                                    (1, 4)───(1, 5)───(1, 6)───(1, 7)
                                    │        │        │        │
                                    │        │        │        │
                           (2, 3)───(2, 4)───(2, 5)───(2, 6)───(2, 7)───(2, 8)
                           │        │        │        │        │        │
                           │        │        │        │        │        │
                  (3, 2)───(3, 3)───(3, 4)───(3, 5)───(3, 6)───(3, 7)───(3, 8)───(3, 9)
                  │        │        │        │        │        │        │        │
                  │        │        │        │        │        │        │        │
         (4, 1)───(4, 2)───(4, 3)───(4, 4)───(4, 5)───(4, 6)───(4, 7)───(4, 8)───(4, 9)
         │        │        │        │        │        │        │        │
         │        │        │        │        │        │        │        │
(5, 0)───(5, 1)───(5, 2)───(5, 3)───(5, 4)───(5, 5)───(5, 6)───(5, 7)───(5, 8)
         │        │        │        │        │        │        │
         │        │        │        │        │        │        │
         (6, 1)───(6, 2)───(6, 3)───(6, 4)───(6, 5)───(6, 6)───(6, 7)
                  │        │        │        │        │
                  │        │        │        │        │
                  (7, 2)───(7, 3)───(7, 4)───(7, 5)───(7, 6)
                           │        │        │
                           │        │        │
                           (8, 3)───(8, 4)───(8, 5)
                                    │
                                    │
                                    (9, 4)

There are only a limited number of gates that this device supports.

Suppose we pick these two adjacent qubits:

>>> a = cirq.GridQubit(3,5)
>>> b = cirq.GridQubit(3,6)

The supported gates on these qubits will be:

  • 1 qubit gates:
    • [X/Z/Y]PowGate: cirq.google.Sycamore.validate_operation(cirq.X(a))
    • PhasedXZGate: cirq.google.Sycamore.validate_operation(cirq.PhasedXZGate(x_exponent=1, z_exponent=1, axis_phase_exponent=1.2)(a))
  • 2 qubit gates:
    • sqrt of ISWAP: cirq.google.Sycamore.validate_operation(cirq.ISWAP(a,b)**0.5)
    • SYC gate: cirq.google.Sycamore.validate_operation(cirq.SYC(a,b))

A hint: before you jump into figuring out compilation to these gates yourself, you might want to check our Cirq intro workshop or the Cirq reference for easier ways - one hedged sketch of such a route follows.
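
One built-in route worth exploring - hedged, since availability and behavior depend on your Cirq version - is cirq.google.optimized_for_sycamore, which re-targets a circuit at a Sycamore-friendly gate set:

import numpy as np
import cirq

def compile_small_unitary(qubits, matrix):
    # Wrap the unitary in a MatrixGate and let Cirq try to re-target it.
    # This works most directly for one- and two-qubit matrices; larger
    # unitaries typically need to be decomposed into smaller pieces first,
    # and device connectivity is not handled here.
    circuit = cirq.Circuit(cirq.MatrixGate(matrix).on(*qubits))
    return cirq.google.optimized_for_sycamore(circuit, optimizer_type='sqrt_iswap')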

How to participate?

  1. You will need the email address that you registered with for qchack. Only a single submission (the last one takes precedence) is accepted per email address!
  2. To get started, clone this repo:
git clone https://github.com/quantumlib/qchack-2021-challenge
  3. Write your code! You can work with a set of automated tests locally to check your progress. See Testing for details!
  4. When you are ready to submit, publish your code to a public Github/Gitlab/Bitbucket repo.
  5. Fill out this Google form with your repository URL.

GOOD LUCK from the Google Quantum AI qchack.io team!

Legal eligibility for prizes

Persons who are (1) residents of US embargoed countries, (2) ordinarily resident in US embargoed countries, or (3) otherwise prohibited by applicable export controls and sanctions programs may not receive prizes in this contest. We will also not be able to give out prizes for winners from the following countries: Russia, Ukraine, Kazakhstan, Belarus and Brazil.

How to ensure that my submission is valid?

Do NOTs

The solution folder needs to stay the same. Not following these instructions will result in an erroneous submission.

  • do not change the solution folder name
  • do not change the function name matrix_to_sycamore_operations
  • do not change the function signature

Dos

Make sure that the judge test in its current form runs on your code - that ensures the submission process will work!

Dependencies

The judge will have the dependencies listed in judge/requirements.txt installed during submission. If your project requires additional dependencies, add only those additional dependencies to solution/requirements.txt.
If your project requires no additional dependencies, just leave solution/requirements.txt empty.
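
For example, if your solution used scipy on top of the judge's dependencies (a hypothetical example), solution/requirements.txt would contain just that one line:

scipy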

Testing, scoring

You can run the judge yourself! This is provided so that you can track your progress. However, note that your final score might differ, as we will use different randomization strategies and unitaries to test your code. The scoring logic, however, will be similar to score_input in judge/judge_lib.py. We recommend starting with the simpler test cases to make sure you get some points!

Setup

Python 3.8 is required, and the use of a virtual environment is recommended.

pip install -r judge/requirements.txt

Running the judge

Run the following to run all the tests once and get your full score:

pytest judge/judge_test.py -rP

When you run this the first time, you should see something like this:

...
------------------------------------------------------------------------------------------------- Captured stdout call -------------------------------------------------------------------------------------------------
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ [ 8-qubit incrementer ] /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\

executing method (0 pts): ✔ [0 pts]
2+ qubit gates (0 pts): [skipped] 
Close in trace distance (256 pts): [skipped] 
Circuit structure (512 pts): [skipped] 
Valid for Sycamore device (256 pts): [skipped] 
Result: 0.00 / 1024
----------------------------------------------------------------------------------------------- Captured stdout teardown -----------------------------------------------------------------------------------------------

====================================================================================================
Total score: 0.00 / 8616 points!

================================================================================================== 49 passed in 1.71s ==================================================================================================

Run the following if you want to run a particular test (note that the total score is only the score of the executed tests!):

pytest judge/judge_test.py -rP -k single_qubit

Run this to run all the tests in watch mode (anything you change triggers a re-run):

ptw -- judge/judge_test.py -rP

Support

If you run into technical issues, feel free to file a ticket on this Github repo and/or ping us on the Discord channel throughout the Hackathon.
