QUACK: Quantum Aligned Centroid Kernel
| Published in: | The Institute of Electrical and Electronics Engineers, Inc. (IEEE) Conference Proceedings vol. 01 (2024) |
|---|---|
| Main author: | |
| Other authors: | |
| Published by: | The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| Subjects: | |
| Online access: | Citation/Abstract |
| Abstract: | Conference: 2024 IEEE International Conference on Quantum Computing and Engineering (QCE), Sept. 15–20, 2024, Montreal, QC, Canada. Quantum computing (QC) seems to show potential for application in machine learning (ML). In particular, quantum kernel methods (QKMs) exhibit promising properties for use in supervised ML tasks. However, a major disadvantage of kernel methods is their unfavorable quadratic scaling with the number of training samples. Together with the limits imposed by currently available quantum hardware (NISQ devices), with their low qubit coherence times, small numbers of qubits, and high error rates, this makes the use of QC in ML at an industrially relevant scale currently impossible. As a small step toward improving the potential applications of QKMs, we introduce QUACK, a quantum kernel algorithm whose time complexity scales linearly with the number of samples during training and is independent of the number of training samples at inference. During training, only the kernel entries between the samples and the centers of the classes are calculated, i.e., the kernel for n samples and c classes has shape at most (n, c); the parameters of the quantum kernel and the positions of the centroids are optimized iteratively. At inference, the circuit is evaluated only once per centroid for every new sample, i.e., c times. We show that the QUACK algorithm nevertheless provides satisfactory results and can perform at a level similar to classical kernel methods with quadratic scaling during training. In addition, our (simulated) algorithm is able to handle high-dimensional datasets such as MNIST with 784 features without any dimensionality reduction. |
|---|---|
| DOI: | 10.1109/QCE60285.2024.00169 |
| Source: | Science Database |
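
To make the scaling claim in the abstract concrete, below is a minimal, classically simulated sketch of a centroid kernel in Python. It is an illustration under stated assumptions, not the paper's implementation: the product-state feature map, the similarity-weighted centroid update, and every function name here are invented for this sketch, and the paper additionally trains the quantum kernel's own parameters, which this toy version omits. What the sketch does reproduce is the complexity structure the abstract describes: training only ever fills an (n, c) sample-versus-centroid kernel block, and inference costs c kernel evaluations per new sample.

```python
# Minimal, classically simulated sketch of the centroid-kernel idea from the
# abstract. Feature map, update rule, and names are illustrative assumptions,
# NOT the QUACK paper's actual circuit or optimizer.
import numpy as np

def feature_map(x):
    """Toy product-state feature map: each feature x_j becomes a one-qubit
    state [cos(x_j/2), sin(x_j/2)]; the full state is their Kronecker
    product. A real QKM prepares this on a circuit rather than building the
    exponentially large vector, so keep the feature count small here."""
    state = np.array([1.0 + 0j])
    for xj in x:
        qubit = np.array([np.cos(xj / 2), np.sin(xj / 2)], dtype=complex)
        state = np.kron(state, qubit)
    return state

def kernel_entry(x, mu):
    """Fidelity-style kernel |<phi(x)|phi(mu)>|^2 between one sample and one
    centroid -- the only kind of kernel entry this scheme ever evaluates."""
    return np.abs(np.vdot(feature_map(x), feature_map(mu))) ** 2

def train(X, y, n_classes, n_iter=20):
    """Each iteration fills only the (n, c) sample-vs-centroid kernel block,
    so the cost per iteration grows linearly in the number of samples n."""
    centroids = np.array([X[y == k].mean(axis=0) for k in range(n_classes)])
    for _ in range(n_iter):
        # (n, c) kernel: one row per sample, one column per class centroid.
        K = np.array([[kernel_entry(x, mu) for mu in centroids] for x in X])
        # Toy update: pull each centroid toward its class's samples, weighted
        # by kernel similarity (a stand-in for the paper's joint optimization
        # of kernel parameters and centroid positions).
        for k in range(n_classes):
            w = K[y == k, k]
            centroids[k] = (w[:, None] * X[y == k]).sum(axis=0) / w.sum()
    return centroids

def predict(x, centroids):
    """Inference evaluates the kernel once per centroid: c evaluations per
    new sample, independent of the training-set size."""
    return int(np.argmax([kernel_entry(x, mu) for mu in centroids]))
```

A quick usage check on synthetic two-class data:

```python
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, size=(50, 4)),
               rng.normal(1.5, 0.3, size=(50, 4))])
y = np.array([0] * 50 + [1] * 50)
centroids = train(X, y, n_classes=2)
print(predict(X[0], centroids))  # 0: highest-fidelity centroid wins
```

Note the contrast with a standard kernel machine, which would need the full (n, n) Gram matrix during training and n kernel evaluations per prediction; here both stages depend on the class count c instead.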