
The quantum approximate optimization algorithm (QAOA) is a polynomial-time approximate optimization algorithm for combinatorial optimization problems. However, existing QAOA variants generalize poorly when searching the feasible solution set of a combinatorial problem for an optimal solution. To address this problem, a quantum approximate optimization algorithm with metalearning for the MaxCut problem (MetaQAOA) is proposed. Specifically, a quantum neural network (QNN) is constructed in the form of a parameterized quantum circuit, and a classical long short-term memory (LSTM) network serves as a black-box optimizer that helps the QNN quickly find near-optimal QAOA parameters. Simulation experiments on TensorFlow Quantum (TFQ) show that MetaQAOA requires fewer iterations to reach the loss-function threshold and attains a smaller final loss than the comparison methods. In addition, our algorithm learns parameter update heuristics that generalize to larger system sizes, where it still outperforms other initialization strategies.

With the rapid development of quantum computing, increasing attention has been paid to using quantum computing to solve combinatorial optimization problems. The maximum cut problem (MaxCut) [

To solve these problems, we propose a new local optimization method, the quantum approximate optimization algorithm with metalearning for the MaxCut problem (MetaQAOA). The algorithm uses a quantum-classical metalearning approach, composed of a group of metaoptimization techniques, to learn how to modify the parameters of learning algorithms and thereby customize them for MaxCut QAOA problems. The aims are to ensure that the learning generalizes well, to fit the given data with fewer iterations, and to adapt a pretrained neural network model to a new task. Through simulation experiments on TensorFlow Quantum (TFQ) [

The rest of the paper is organized as follows: Section

In general, LSTM refers to the long short-term memory network, a recurrent neural network designed to address the long-term dependence problem. It contains special units called memory blocks (shown in Figure

A single memory block.

LSTM architecture. The LSTM is unrolled over time.

The LSTM calculates the activation of the network cell iteratively from
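Concretely, the standard LSTM gate equations take the following form (the usual textbook formulation; the notation here follows common convention rather than this paper's elided equations):

```latex
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{(input gate)}\\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{(forget gate)}\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{(output gate)}\\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) && \text{(candidate cell state)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{(cell state update)}\\
h_t &= o_t \odot \tanh(c_t) && \text{(hidden state / activation)}
\end{aligned}
```

Here $\sigma$ is the logistic sigmoid and $\odot$ denotes elementwise multiplication; the forget gate $f_t$ is what lets the cell state carry information across long time horizons.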

Metalearning studies how to design a machine learning model that learns well and quickly from a small training set [

Metalearners have been used for rapid, general-purpose optimization of models when only a few training examples are available [

The MaxCut problem is a typical combinatorial optimization problem and is NP-hard. A promising approach to solving combinatorial optimization problems on near-term quantum machines is QAOA. The steps of the quantum approximate optimization algorithm for the MaxCut problem are shown in Algorithm

An initial state

The mixer Hamiltonian

The unitary

We can get the final state after

random

return
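The steps above (initial superposition, cost unitary, mixer unitary, expectation of the cut) can be sketched with a plain statevector simulation. This is a minimal illustration of the standard QAOA construction, not the paper's TFQ implementation; all function names here are our own:

```python
import numpy as np

def maxcut_cost_diagonal(n, edges):
    """Diagonal of the MaxCut cost Hamiltonian: the number of cut edges per bitstring."""
    diag = np.zeros(2 ** n)
    for state in range(2 ** n):
        bits = [(state >> k) & 1 for k in range(n)]
        diag[state] = sum(1 for i, j in edges if bits[i] != bits[j])
    return diag

def apply_single_qubit(psi, u, q, n):
    """Apply a 2x2 unitary u to qubit q (little-endian) of an n-qubit statevector."""
    psi = psi.reshape([2] * n)
    axis = n - 1 - q  # qubit q is the q-th least significant bit
    psi = np.tensordot(u, psi, axes=([1], [axis]))
    psi = np.moveaxis(psi, 0, axis)
    return psi.reshape(-1)

def qaoa_state(n, edges, gammas, betas):
    """Apply p alternating layers e^{-i*gamma*C}, then e^{-i*beta*X} per qubit, to |+>^n."""
    psi = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)  # uniform superposition
    cost = maxcut_cost_diagonal(n, edges)
    for gamma, beta in zip(gammas, betas):
        psi = np.exp(-1j * gamma * cost) * psi  # cost unitary is diagonal
        rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                       [-1j * np.sin(beta), np.cos(beta)]])  # e^{-i*beta*X}
        for q in range(n):
            psi = apply_single_qubit(psi, rx, q, n)
    return psi

def expected_cut(n, edges, gammas, betas):
    """Expectation value of the cut size in the final QAOA state."""
    psi = qaoa_state(n, edges, gammas, betas)
    return float(np.real(np.vdot(psi, maxcut_cost_diagonal(n, edges) * psi)))
```

For instance, with zero angles the state stays in the uniform superposition, where each edge is cut with probability 1/2.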

A variety of approaches have been used to optimize PQCs, including local optimization methods (Nelder–Mead [

The main purpose of the quantum optimization algorithm with metalearning for the MaxCut problem proposed in this paper is to train a classical long short-term memory (LSTM) network as an outer-loop optimizer that directly optimizes the parameters of the QNN. In this section, we describe the procedure and training strategy of MetaQAOA.

The MaxCut problem is to find a split

The simplest method is enumeration, which is the only method that finds the global optimal solution (with no approximation). However, its cost grows rapidly with the number of vertices. Although MaxCut only divides the vertices into two groups, there are a total of
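As a concrete illustration of this enumeration baseline (our own sketch; `maxcut_brute_force` is a hypothetical helper, not from the paper), one checks all 2^n bipartitions:

```python
def maxcut_brute_force(n, edges):
    """Exhaustively check every bipartition of n vertices; exact but O(2^n)."""
    best_value, best_split = -1, None
    for assignment in range(2 ** n):
        bits = [(assignment >> k) & 1 for k in range(n)]  # side of each vertex
        cut = sum(1 for i, j in edges if bits[i] != bits[j])
        if cut > best_value:
            best_value, best_split = cut, bits
    return best_value, best_split
```

Even at 30 vertices this already requires about a billion evaluations, which is why approximate methods such as QAOA are of interest.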

The proposed quantum approximate optimization algorithm with metalearning for the MaxCut problem (MetaQAOA) trains an optimizer network, i.e., a metalearner, to learn parameter update heuristics for a QNN constructed in the form of a parameterized quantum circuit. MetaQAOA is a local optimizer: it maintains a notion of location in the solution space and searches for candidate solutions from that location. Like other local optimizers, it is usually fast but susceptible to local minima. The idea of metalearning for optimization is to train an outer-loop optimizer on many instances of the MaxCut problem so as to solve unseen instances more efficiently. This helps the learning generalize well and lets the model fit the given data with fewer iterations. The proposed algorithm mainly improves the parameter update rule for parameterized quantum circuits; its steps are shown in Algorithm

random

return
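To make the data flow of this inner loop concrete, here is a minimal numpy sketch in which an LSTM proposes the next QNN parameters from the previous parameters and their cost. The weights are random and untrained, and `toy_cost` merely stands in for the QNN expectation value, so no convergence should be expected; in MetaQAOA itself the LSTM weights would be trained across many problem instances:

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_cell(x, h, c, W, U, b):
    """One standard LSTM step with the four gates (i, f, o, g) stacked in W, U, b."""
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sig(i), sig(f), sig(o)
    c = f * c + i * np.tanh(g)   # cell state update
    h = o * np.tanh(c)           # hidden state / activation
    return h, c

n_params, hidden = 2, 8
W = rng.normal(0, 0.1, (4 * hidden, n_params + 1))  # input = [theta, cost]
U = rng.normal(0, 0.1, (4 * hidden, hidden))
b = np.zeros(4 * hidden)
W_out = rng.normal(0, 0.1, (n_params, hidden))      # read out the next parameters

def toy_cost(theta):
    """Stand-in for the QNN expectation value returned by the quantum circuit."""
    return float(np.sum((theta - 1.0) ** 2))

theta = np.zeros(n_params)
h, c = np.zeros(hidden), np.zeros(hidden)
history = []
for t in range(5):  # short horizon, matching the paper's few-iteration setting
    y = toy_cost(theta)
    history.append(y)
    x = np.concatenate([theta, [y]])        # LSTM sees previous params and cost
    h, c = lstm_cell(x, h, c, W, U, b)
    theta = theta + W_out @ h               # LSTM proposes the next parameters
```

The key structural point is that the quantum circuit appears to the LSTM only through the scalar cost `y`, which is what makes the metalearner a black-box optimizer.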

The main steps of our algorithm are as follows. We first prepare a uniformly distributed quantum superposition state as an initial state

Then, we select the mixer Hamiltonian

The reason we choose this mixer Hamiltonian is that each of its terms does not commute with the cost Hamiltonian, i.e.,
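For reference, in the standard MaxCut QAOA construction (assumed here, since the paper's exact equations are elided), the mixer and cost Hamiltonians are

```latex
B = \sum_{i=1}^{n} X_i, \qquad
C = \sum_{(i,j)\in E} \tfrac{1}{2}\left(1 - Z_i Z_j\right), \qquad
[B, C] \neq 0,
```

where $X_i$ and $Z_i$ are Pauli operators on qubit $i$ and $E$ is the edge set. The noncommutation $[B, C] \neq 0$ is what allows the alternating unitaries to move amplitude between computational basis states rather than merely rephasing them.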

Thirdly, the unitary

This unitary transformation with parameters expressed in the form of a general parameterized quantum circuit is shown in Figure

Parameterized quantum circuit, with unitaries

A QNN is constructed in the form of a parameterized quantum circuit. A single time step of MetaQAOA is shown in Figure

A single time step of MetaQAOA, where the classical LSTM outputs parameters

This process is repeated

The temporal hybrid computational graph for the metalearning optimization of the LSTM and a QNN is unrolled.

Once a new set of QNN parameters is proposed, the LSTM will send it to the QNN for evaluation, and then the loop will continue.

Finally, the final state is

In this section, we provide more detail on how MetaQAOA trains an LSTM to optimize the parameters of the QNN. The goal of quantum-classical metalearning is to train the LSTM to learn effective parameter update heuristics for a set of cost functions of interest, i.e., to find an optimizer that, on average, efficiently optimizes the given distribution of optimization landscapes. A better optimizer finds a good approximate local minimum of the cost function in as few function queries as possible. What counts as optimal depends on the problem class and the application area of interest.

To learn such a general optimization strategy, the LSTM could be trained on random instances drawn from a fairly general distribution of functions, for example functions sampled from Gaussian processes. Since we focus on QNN optimization landscapes, which differ from classical Gaussian-process landscapes, we instead train neural optimizers for MaxCut QAOA problems and QNN ansatzes. To evaluate the effectiveness of our method, we train the LSTM on random QNN instances in the target problem category, i.e., MaxCut QAOA, and test the trained network on (larger) previously unseen instances of the same category. The number of variational parameters in these ansatzes does not depend on the system size, only on the number of alternating operators. This means we can train the LSTM on smaller QNN instances of MaxCut QAOA problems and then test its generalization on larger instances.

In order to train and test the LSTM optimizer on MaxCut QAOA problems, our experiments are conducted on the CPU and GPU. The TensorFlow Quantum (TFQ) [

The experimental environment.

| Environment | Parameter |
|---|---|
| CPU | Intel(R) Xeon(R) Gold 5218 |
| Hard disk | Samsung PM881 512 GB |
| GPU | NVIDIA GeForce RTX 2080 Ti |
| Simulation platform | TensorFlow Quantum 2.3.0 |
| Operating system | Windows 10 |

A random 3-regular graph with 10 nodes.

In our experiment, 500 problem instances sampled from the training dataset and 250 problem instances forming the validation dataset are used to train the LSTM. In addition, we select 100 problem instances sampled from the testing set as testing data. The batch size is 64, and training runs for a total of 1500 epochs. The Adam optimizer provided by the open-source library is used for the network model, with the learning rate set to 0.001. We choose a short time horizon to minimize the complexity of training. For reference, 5 iterations is significantly fewer quantum-classical optimization iterations than is typical in previous work on QNN optimization.

We only show the circuits of MaxCut QAOA unitary with

The unitary circuits are shown in Figure

The circuits of MetaQAOA unitary (a) with

In order to evaluate the training performance of the algorithm proposed in this paper, some local optimizers, i.e., Nelder–Mead [

In this paper, we use the observed improvement at each time step as the loss function to measure the quality of the neural network model. This loss function is as follows:
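In the learning-to-learn literature this "observed improvement" loss commonly takes the following form (a hedged reconstruction, since the paper's equation is elided here), penalizing only steps that improve on the best cost seen so far:

```latex
\mathcal{L}(\varphi) =
\mathbb{E}\!\left[\sum_{t=1}^{T}
\min\!\left\{ f(\boldsymbol{\theta}_t) - \min_{j<t} f(\boldsymbol{\theta}_j),\; 0 \right\}\right],
```

where $\varphi$ denotes the LSTM weights, $f$ the QNN cost function, and $T$ the optimization horizon. Nonimproving steps contribute zero, so the optimizer is not punished for exploratory queries that temporarily raise the cost.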

Our experiment aims at the QAOA for MaxCut problem of random 3-regular graphs generated by

The comparison of the three methods in terms of the relationship between the training epoch and the value of the loss function.

Different from the other two methods, our proposed algorithm is generalizable. To illustrate the generalization performance, four kinds of experiments are simulated via TFQ. In this paper, we choose

By inputting 1000 randomly selected parameter combinations

The path chosen by the LSTM optimizer on a

To address the weak generalization of current methods for optimizing MaxCut QAOA circuits, we propose a quantum approximate optimization algorithm with metalearning for the MaxCut problem (MetaQAOA), implemented on the TFQ platform. Our algorithm trains a classical LSTM network as an outer-loop optimizer that directly optimizes the parameters of a QNN constructed from a parameterized quantum circuit. Simulation results on TFQ show that MetaQAOA requires fewer iterations to reach the loss-function threshold and attains a smaller final loss than the other local optimization methods, i.e., Nelder–Mead and L-BFGS-B. In addition, our method learns parameter update heuristics that generalize to larger system sizes, where it still outperforms other initialization strategies. However, the metalearning optimizer networks in MetaQAOA cannot yet scale to arbitrary problems and numbers of parameters; addressing this limitation is left for future work.

The data used to support this study are included within this article.

The authors declare that there are no conflicts of interest regarding the publication of this paper.

The support of all the members of the quantum research group of NUIST is especially acknowledged, and their professional discussions and suggestions have provided us with a lot of help. This work was supported by the Natural Science Foundation of China under Grant nos. 62071240 and 61802002 and in part by the Natural Science Foundation of Jiangsu Higher Education Institutions of China under Grant no. 19KJB520028, and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).