Exploring Many-Body Localization with Multiqubit Superconducting Circuits
Here I will review our recent work on designing and fabricating multiqubit superconducting circuits featuring different types of connection architectures. In particular, I will introduce a type of superconducting quantum processor featuring multiple individually accessible Xmon qubits that are controllably coupled to a bus resonator, with which we observe an energy-resolved many-body localization transition. By initializing multiqubit states that encode the total energy of the system and measuring the subsequent time-evolved observables, we find that the onset of localization occurs at different disorder strengths, with distinguishable energy scales. With the flexibility in circuit layout and experimental control, our multiqubit superconducting circuits may provide a promising platform for simulating the intriguing physics of quantum many-body systems.
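The disorder dependence of localization can be illustrated numerically. The sketch below uses a minimal disordered Heisenberg chain (an assumption for illustration, not the bus-resonator model of the experiment) and computes the time-averaged Néel-state imbalance, which stays large when disorder is strong enough to localize the system:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.diag([0.5, -0.5]).astype(complex)

def op(single, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    m = np.eye(1)
    for j in range(n):
        m = np.kron(m, single if j == i else np.eye(2))
    return m

def imbalance(W, n=6, seed=1):
    """Time-averaged Neel-state imbalance of a disordered Heisenberg chain
    with random fields drawn from [-W, W]; values near 1 signal memory of
    the initial state (localization), values near 0 signal thermalization."""
    rng = np.random.default_rng(seed)
    h = rng.uniform(-W, W, size=n)
    H = sum(op(s, i, n) @ op(s, i + 1, n)
            for i in range(n - 1) for s in (sx, sy, sz))
    H = H + sum(h[i] * op(sz, i, n) for i in range(n))
    vals, vecs = np.linalg.eigh(H)
    psi0 = np.zeros(2 ** n)
    psi0[int('010101', 2)] = 1.0          # Neel initial state |010101>
    c = vecs.conj().T @ psi0
    signs = np.array([(-1) ** i for i in range(n)])
    Sz = [op(sz, i, n) for i in range(n)]
    acc = []
    for t in np.linspace(10, 20, 11):     # average over a late-time window
        psi = vecs @ (np.exp(-1j * vals * t) * c)
        acc.append(sum(s * np.real(psi.conj() @ (Szi @ psi))
                       for s, Szi in zip(signs, Sz)) * 2 / n)
    return float(np.mean(acc))
```

Comparing `imbalance(8.0)` with `imbalance(0.5)` shows the strong-disorder chain retaining far more memory of its initial state, the qualitative signature behind the energy-resolved transition described above.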
Near-Term Quantum Simulation and its Applications
Realizing a universal quantum computer is challenging with current technology. Before we have a fully-fledged quantum computer, a more realistic question is what we can do with current and near-term quantum hardware. In this talk, we first review the algorithms designed for noisy intermediate-scale quantum (NISQ) devices with the so-called hybrid or variational approach. Then, we consider three major challenges in implementing these algorithms, related to problem encoding, optimization, and error mitigation. We show recent progress on overcoming these challenges and discuss potential future directions. With the rapid development of quantum hardware, error-mitigated variational quantum simulation may finally enable a genuine demonstration of quantum advantage in the NISQ era.
Near-term Quantum Algorithms for Quantum Information
Hybrid quantum-classical systems have the potential to utilize existing quantum computers to their fullest extent. Using this framework, we introduce near-term quantum algorithms for several fundamental tasks in quantum information, including Gibbs state preparation, quantum data decomposition, and quantum fidelity/trace distance estimation. These results explore new avenues for quantum information processing beyond conventional protocols and reveal the capability of matrix decomposition and norm computation on near-term quantum devices. We also expect these results to shed light on quantum machine learning and quantum optimization in the future.
Clifford Sampling for Quantum Circuit Characterisation and Error Mitigation
To perform useful tasks on a noisy intermediate-scale quantum computer, we need to employ powerful error mitigation techniques. The quasi-probability method (probabilistic error cancellation) permits, in principle, perfect error compensation at the cost of additional circuit executions. In this talk, I will present a scalable way to characterise quantum circuits using Clifford sampling. Based on this characterisation, we can find the optimal distribution of random circuits for error mitigation. This new error mitigation protocol is practical for intermediate-scale quantum computing.
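As a minimal illustration of the quasi-probability method (independent of the Clifford-sampling characterisation), the sketch below inverts a single-qubit depolarizing channel exactly by a signed combination of Pauli corrections. In an experiment each term would instead be sampled with probability |q_i|/γ and weighted by sign(q_i)·γ; the channel and noise rate here are assumptions for illustration:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def depolarize(rho, p):
    """Single-qubit depolarizing channel with error probability p."""
    return (1 - p) * rho + (p / 3) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z)

p = 0.1
f = 1 - 4 * p / 3              # Bloch-sphere shrinking factor of the channel
q_pauli = (1 - 1 / f) / 4      # quasi-probability of each Pauli correction (< 0)
q_id = 1 - 3 * q_pauli         # quasi-probability of applying no correction

# Noisy circuit: H gate followed by depolarizing noise, starting from |0><0|.
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)
rho_ideal = H @ rho0 @ H.conj().T
rho_noisy = depolarize(rho_ideal, p)

# Exact quasi-probability sum: the signed combination implements the inverse
# channel and recovers the ideal state.  In practice each term is *sampled*
# with probability |q_i| / gamma and weighted by sign(q_i) * gamma.
rho_mitigated = q_id * rho_noisy + q_pauli * sum(
    P @ rho_noisy @ P for P in (X, Y, Z))

exp_x_ideal = np.real(np.trace(X @ rho_ideal))      # <X> = 1 for H|0>
exp_x_noisy = np.real(np.trace(X @ rho_noisy))      # shrunk by f
exp_x_mitig = np.real(np.trace(X @ rho_mitigated))  # restored to 1
gamma = abs(q_id) + 3 * abs(q_pauli)                # sampling-cost overhead
```

The overhead factor γ > 1 quantifies the extra circuit executions the abstract refers to: the variance of the sampled estimator grows as γ².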
Toward Ultra-high-fidelity Quantum Computing Systems
A practical, large-scale quantum computer must contain hundreds of logical qubits with error correction built in. Such a system requires a large number of physical qubits on which quantum operations can be performed at very high fidelity. Besides achieving qubit gates with error rates below the quantum error correction threshold, it is necessary to reach low errors for all the elementary operations, including qubit initialization, single- and two-qubit gates, and qubit readout, to demonstrate a logical qubit. Moreover, it is important to keep lowering all these errors further to reduce the resource overhead of quantum error correction. In this talk, I will give an update on our technology developments, including device design, fabrication, cryogenic testing, and system benchmarking, toward ultra-high-fidelity quantum operations in a superconducting quantum computing system.
What Problems can be Solved by Exact One-Query Quantum Algorithms?
The query model (or black-box model) has attracted much attention from the communities of both classical and quantum computing. Usually, quantum advantages are revealed by presenting a quantum algorithm that has better query complexity than its classical counterpart. For example, the well-known Deutsch-Jozsa, Simon, and Grover algorithms all show a considerable advantage of quantum computing from the viewpoint of query complexity. Here we consider the following problem: what problems can be solved by an exact one-query quantum algorithm? An exact one-query quantum algorithm may make only one query and must return the correct result with certainty. The Deutsch-Jozsa algorithm is an example of such an algorithm.
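The Deutsch-Jozsa algorithm, the canonical exact one-query algorithm mentioned above, can be sketched with a small statevector simulation (illustrative only; the oracle is given as a classical function and queried once via phase kickback):

```python
import numpy as np

def deutsch_jozsa(f, n):
    """Decide whether f: {0,1}^n -> {0,1} is constant or balanced using a
    single query to the phase oracle x -> (-1)^f(x).  The decision is exact:
    the |0...0> amplitude after the final Hadamards is +/-1 if f is constant
    and exactly 0 if f is balanced."""
    N = 2 ** n
    # Uniform superposition H^{(x)n} |0...0>
    state = np.full(N, 1 / np.sqrt(N))
    # One oracle query: multiply the amplitude of |x> by (-1)^f(x)
    for x in range(N):
        if f(x):
            state[x] *= -1
    # After the final Hadamards, the |0...0> amplitude is the mean phase
    amp0 = state.sum() / np.sqrt(N)
    return "constant" if abs(amp0) > 0.5 else "balanced"

print(deutsch_jozsa(lambda x: 0, 3))                       # constant
print(deutsch_jozsa(lambda x: bin(x).count("1") % 2, 3))   # balanced (parity)
```

Because the measurement outcome is deterministic for both promise cases, the algorithm is exact in the sense defined above: one query, zero error probability.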
The exact classical query complexity of almost all n-bit Boolean functions has been shown to be n. On the quantum side, as early as 1998, Beals et al. (FOCS'98) proved that the exact quantum query complexity of the n-bit AND function is also n. It was therefore generally believed that, for exact query algorithms, quantum computing holds an advantage over classical computing only for certain special Boolean functions. We prove that, apart from the Boolean functions isomorphic to the AND function, every n-bit Boolean function has exact quantum query complexity strictly less than n; in other words, almost all n-bit Boolean functions have exact quantum query complexity less than n. This result broadens our understanding of the scope of the advantage of exact quantum algorithms. In addition, we characterize exact one-query quantum algorithms and generalize the Deutsch-Jozsa algorithm.
Quingo: A Programming Framework for Heterogeneous Quantum-Classical Computing with NISQ Features
Noisy Intermediate-Scale Quantum (NISQ) technology imposes requirements that cannot be fully satisfied by existing Quantum Programming Languages (QPLs) or frameworks. First, noisy qubits require repeatedly-performed quantum experiments, which explicitly manipulate low-level details such as pulses and the timing of operations. This requirement is beyond the scope or capability of most existing QPLs. Second, although multiple existing QPLs or frameworks claim to support promising near-term Heterogeneous Quantum-Classical Computing (HQCC) algorithms, either extra code irrelevant to the computational steps has to be introduced, or the corresponding code can hardly be mapped to HQCC architectures while satisfying the timing constraints of quantum-classical interaction.
In this work, we propose Quingo, a modular programming framework for HQCC with NISQ features. Quingo highlights an external domain-specific language with timer-based timing control and opaque operation definition. By adopting a six-phase quantum program life-cycle model, Quingo enables aggressive optimization over quantum code through just-in-time compilation while preserving quantum-classical interaction with timing constraints satisfied. We propose a runtime system, with a prototype implemented in Python, that can orchestrate both quantum and classical software and hardware according to the six-phase life-cycle model. It allows each component of the framework to focus on its own task, yielding a modular programming framework.
Quantum Chemistry Simulation and Beyond
Moore's law is gradually breaking down and a variety of novel computing architectures are emerging; quantum computing may well become a revolutionary future technology. Quantum computing is a new mode of computation based on quantum-mechanical features such as superposition and entanglement, and its potential algorithms could compress computational tasks that would take today's classical computers thousands of years into hours or minutes. We are currently in the Noisy Intermediate-Scale Quantum (NISQ) era, with qubit counts reaching roughly 50-1000, and it is widely believed that quantum computing may have potential applications in quantum chemistry simulation, combinatorial optimization, and quantum machine learning. This talk will focus on hybrid classical-quantum algorithms for quantum many-body simulation (quantum chemistry, the Schwinger model, the Hubbard model, and the Heisenberg model) and on combinatorial optimization via the quantum approximate optimization algorithm, together with an outlook on the future. I will also introduce HiQ Fermion and HiQ Optimizer, the software Huawei has developed for these application scenarios.
Analog Quantum Chemistry
Using quantum systems to efficiently solve quantum chemistry problems is one of the long-sought applications of near-future quantum technologies. In this talk, I will show how to simulate quantum chemistry in an analog way with ultracold-atom systems. This is a very different path from current digital approaches, which typically project the Hamiltonian onto atomic-orbital basis sets and map it into qubit operators. In particular, we first discuss how to engineer the different parts of the Hamiltonian, numerically benchmarking the working conditions of the simulator. Then, we discuss the errors of the simulation that appear due to discretization and finite-size effects and, importantly, provide a way to mitigate them. Finally, we benchmark the simulator by characterizing the behaviour of two-electron atoms (He) and molecules (HeH+).
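The discretization error discussed above can be illustrated with a toy model (an assumption for illustration, not the actual simulator Hamiltonian): a 1D "hydrogen atom" with a soft-Coulomb potential, whose finite-difference ground-state energy converges as the grid is refined:

```python
import numpy as np

def ground_energy(L=30.0, N=400):
    """Ground-state energy of a 1D soft-Coulomb 'hydrogen atom',
    H = -1/2 d^2/dx^2 - 1/sqrt(1 + x^2), discretized on a uniform grid
    of N points in a box of length L (3-point finite-difference kinetic
    term).  The discretization error scales as O(dx^2)."""
    x = np.linspace(-L / 2, L / 2, N)
    dx = x[1] - x[0]
    kin = (np.diag(np.full(N, 1.0))
           - 0.5 * np.diag(np.ones(N - 1), 1)
           - 0.5 * np.diag(np.ones(N - 1), -1)) / dx ** 2
    pot = np.diag(-1.0 / np.sqrt(1.0 + x ** 2))
    return float(np.linalg.eigvalsh(kin + pot)[0])

# Successively finer grids: the change per refinement shrinks (O(dx^2)),
# which is how discretization error can be estimated and extrapolated away.
e_coarse, e_mid, e_fine = (ground_energy(N=N) for N in (200, 400, 800))
```

Tracking the energy change between successive refinements is the standard way to estimate and extrapolate away this class of error; finite-size (box-length) effects can be probed analogously by varying `L`.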
Verifying Random Quantum Circuits with Arbitrary Geometry Using Tensor Network States Algorithm
The ability to efficiently simulate random quantum circuits on a classical computer is increasingly important for developing Noisy Intermediate-Scale Quantum devices. Here we present a tensor-network-states-based algorithm specifically designed to compute amplitudes of random quantum circuits with arbitrary geometry. Singular-value-decomposition-based compression, together with a two-sided circuit evolution algorithm, is used to further compress the resulting tensor network. To further accelerate the simulation, we also propose a heuristic algorithm to compute an optimal tensor contraction path. We demonstrate that our algorithm is up to two orders of magnitude faster than the Schrödinger-Feynman algorithm for verifying random quantum circuits on the 53-qubit Sycamore processor, with circuit depths below 12. We also simulate larger random quantum circuits of up to 104 qubits, showing that this algorithm is an ideal tool for verifying relatively shallow quantum circuits on near-term quantum computers.
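A minimal version of the ingredients above — gate application, SVD-based compression, and amplitude contraction — can be sketched with a matrix-product-state simulator (illustrative only; the actual algorithm handles arbitrary circuit geometry and optimized contraction paths, not just a 1D chain):

```python
import numpy as np
from itertools import product

def apply_1q(mps, U, i):
    """Apply a single-qubit gate U to site i (tensor indices: left, spin, right)."""
    mps[i] = np.einsum('ps,lsr->lpr', U, mps[i])

def apply_2q(mps, G, i, chi=64):
    """Apply a two-qubit gate G (4x4) to sites (i, i+1), then split the result
    back into two tensors by SVD, keeping at most chi singular values."""
    theta = np.einsum('lsa,atr->lstr', mps[i], mps[i + 1])
    theta = np.einsum('pqst,lstr->lpqr', G.reshape(2, 2, 2, 2), theta)
    Dl, _, _, Dr = theta.shape
    U, S, Vh = np.linalg.svd(theta.reshape(Dl * 2, 2 * Dr), full_matrices=False)
    k = max(1, min(chi, int(np.sum(S > 1e-12))))   # truncate the bond
    mps[i] = U[:, :k].reshape(Dl, 2, k)
    mps[i + 1] = (S[:k, None] * Vh[:k]).reshape(k, 2, Dr)

def amplitude(mps, bits):
    """Contract the MPS against a computational-basis bitstring: <bits|psi>."""
    v = np.ones(1, dtype=complex)
    for T, b in zip(mps, bits):
        v = v @ T[:, b, :]
    return v[0]

def haar_1q(rng):
    """Haar-random single-qubit unitary via QR decomposition."""
    A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    Q, R = np.linalg.qr(A)
    d = np.diag(R)
    return Q * (d / np.abs(d))

# Demo: a shallow 4-qubit random circuit with nearest-neighbour CZ gates.
rng = np.random.default_rng(0)
n = 4
mps = [np.zeros((1, 2, 1), dtype=complex) for _ in range(n)]
for T in mps:
    T[0, 0, 0] = 1.0                      # start in |0000>
CZ = np.diag([1, 1, 1, -1]).astype(complex)
for _ in range(3):                        # three entangling layers
    for i in range(n):
        apply_1q(mps, haar_1q(rng), i)
    for i in range(0, n - 1, 2):
        apply_2q(mps, CZ, i)
    for i in range(1, n - 1, 2):
        apply_2q(mps, CZ, i)

amp = amplitude(mps, (0, 0, 0, 0))        # one output amplitude
total = sum(abs(amplitude(mps, b)) ** 2 for b in product((0, 1), repeat=n))
```

For this tiny circuit no truncation occurs and the amplitudes remain exactly normalized; for deep or wide circuits, the `chi` cutoff in `apply_2q` is where SVD compression trades accuracy for tractability.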
Majorana Qubit Systems and Their Benchmarking Scheme
Majorana zero modes provide a potential platform for the storage and processing of quantum information, with intrinsic error rates that decrease exponentially with inverse temperature and with the length scales of the system. I will review the recent progress in the search for Majoranas and in topological quantum computation from a theoretical point of view. This recent progress paves the way for future tests of non-abelian braiding statistics and topological quantum information processing. However, to rigorously validate Majorana behavior for quantum information applications, it is reasonable to treat the system as a black box. We first design a couple of novel classical hidden-variable theories that capture certain key quantum-mechanical properties of Majorana systems, which helps set the boundaries and limitations of Majorana operations for quantum information processing. Second, we introduce a scheme that uses a sequence of measurements to reveal their behavior for nonlocal information encoding.