
Commit ed5b850

quantum autoencoder implementation for v0.1.12
1 parent 071aa5a commit ed5b850

File tree

12 files changed: +917 -0 lines changed

CHANGELOG.md

Lines changed: 23 additions & 0 deletions

@@ -1,5 +1,28 @@
# CHANGELOG.md

## [0.1.12] - 10-04-2026

### Added

- Implemented a first-class quantum autoencoder workflow in `qml.autoencoder`
- Added `autoencoder` CLI support via `python -m qml autoencoder`
- Added smoke, artifact, CLI, and import coverage for the quantum autoencoder
- Added autoencoder documentation and example notebook support

### Summary

Core QML capabilities now include:

- variational quantum classification (VQC)
- variational quantum regression (VQR)
- quantum convolutional neural networks (QCNN)
- quantum autoencoders
- quantum kernel methods
- trainable quantum kernels
- quantum metric learning

---

## [0.1.11] - 10-04-2026

### Added

README.md

Lines changed: 28 additions & 0 deletions

@@ -11,6 +11,7 @@ Modular **PennyLane-based quantum machine learning library** implementing reusab
• Variational quantum classification (VQC)
• Variational quantum regression (VQR)
• Quantum convolutional neural networks (QCNN)
• Quantum autoencoders
• Quantum kernel methods
• Trainable quantum kernels (kernel-target alignment)
• Quantum metric learning (trainable embedding geometry)

@@ -103,6 +104,27 @@ Learns a small hierarchical quantum classifier using:

---

## Quantum autoencoder

```python
from qml.autoencoder import run_quantum_autoencoder

result = run_quantum_autoencoder(
    n_samples=200,
    family="correlated",
    steps=50,
    plot=True,
)
```

Learns a compression map for structured four-qubit state families using:

• a trainable encoder/decoder ansatz
• a latent subspace retained across selected qubits
• compression and reconstruction fidelity metrics

---

## Quantum kernel classifier

@@ -267,6 +289,7 @@ Run workflows directly:

```bash
python -m qml vqc --steps 50 --plot
python -m qml qcnn --steps 50 --plot
python -m qml autoencoder --steps 50 --plot
python -m qml regression --steps 50 --plot
python -m qml kernel --plot
python -m qml trainable-kernel --steps 50 --plot
```

@@ -308,6 +331,7 @@ Algorithm notes:

• docs/qml/variational_quantum_classifier.md
• docs/qml/variational_regression.md
• docs/qml/qcnn.md
• docs/qml/autoencoder.md
• docs/qml/quantum_kernels.md
• docs/qml/metric_learning.md

@@ -316,6 +340,7 @@ Example notebooks:

• quantum_variational_classifier.ipynb
• quantum_regressor.ipynb
• quantum_convolutional_neural_network.ipynb
• quantum_autoencoder.ipynb
• quantum_kernel_classifier.ipynb
• quantum_metric_learning.ipynb
• classical_vs_quantum_classifier.ipynb

@@ -342,6 +367,9 @@ qml/

qcnn.py
    quantum convolutional classifier workflows
autoencoder.py
    quantum autoencoder workflows
kernel_methods.py
    quantum kernel workflows

THEORY.md

Lines changed: 68 additions & 0 deletions

@@ -810,6 +810,74 @@ Finite-shot execution approximates behaviour of real quantum hardware.

---

# Quantum autoencoders

Quantum autoencoders learn a unitary compression map that moves irrelevant
information into a designated trash subsystem while preserving the informative
degrees of freedom in a smaller latent subsystem.

Let the input state be

$$
|\psi(x)\rangle \in \mathcal{H}_A \otimes \mathcal{H}_B
$$

where:

• $\mathcal{H}_A$ is the retained latent subsystem
• $\mathcal{H}_B$ is the trash subsystem

The encoder aims to transform the state so that the trash subsystem is close to
a fixed reference state, typically $|0\rangle^{\otimes k}$.

---

## Compression objective

Given encoder unitary $U(\theta)$, the compressed state is

$$
|\phi(x,\theta)\rangle
=
U(\theta)|\psi(x)\rangle.
$$

Compression succeeds when the trash subsystem factors as

$$
|\phi(x,\theta)\rangle
\approx
|\tilde{\psi}(x)\rangle_A \otimes |0\rangle_B.
$$

This repository optimizes the probability of measuring the trash subsystem in
the all-zero state.
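The all-zero trash probability can be read directly off a simulated statevector. A minimal sketch in plain NumPy (illustrative only; the repository itself uses PennyLane, and `trash_zero_probability` is a hypothetical helper), assuming the trash qubits occupy the least significant bits of the basis-state index:

```python
import numpy as np

def trash_zero_probability(state, trash_qubits):
    """P(all trash qubits measure 0) for a statevector `state`.

    Assumes the trash qubits are the least significant bits of the
    basis-state index, so trash = |0...0> exactly when index % 2**k == 0.
    """
    probs = np.abs(state) ** 2
    block = 2 ** trash_qubits
    return float(sum(p for i, p in enumerate(probs) if i % block == 0))

# |+>_latent ⊗ |0>_trash: the trash qubit is exactly |0>
plus_zero = np.kron(np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, 0.0]))
print(trash_zero_probability(plus_zero, trash_qubits=1))  # ≈ 1.0
```

For a Bell state $(|00\rangle + |11\rangle)/\sqrt{2}$ the same function returns 0.5: a trash qubit entangled with the latent subsystem cannot sit in $|0\rangle$ with certainty, which is exactly what the loss penalizes.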
---

## Reconstruction

After compression, a decoder can be defined by the adjoint unitary

$$
U(\theta)^\dagger.
$$

Applying the decoder gives a reconstructed state

$$
|\psi_{\mathrm{rec}}(x,\theta)\rangle
=
U(\theta)^\dagger U(\theta)|\psi(x)\rangle.
$$

Note that for reconstruction fidelity to be informative, the trash register
must be reset to the reference state between encoding and decoding; otherwise
$U(\theta)^\dagger U(\theta) = I$ and reconstruction is trivially perfect.

The implementation reports both:

• compression fidelity on the trash subsystem
• reconstruction fidelity on the full state
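Both metrics can be sketched on a two-qubit statevector in plain NumPy (a toy sketch under stated assumptions, not the repository's implementation: one latent and one trash qubit, trash on the least significant bit, and a projective reset standing in for the physical trash reset):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random 2-qubit encoder unitary via QR decomposition
U, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

psi = np.kron(np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, 0.0]))  # |+>|0>
phi = U @ psi  # encoded state

# Compression fidelity: probability the trash qubit (least significant bit) is 0
compression = float(np.sum(np.abs(phi[::2]) ** 2))

# Reset the trash qubit to |0> (projective stand-in: zero the odd amplitudes
# and renormalize), then decode with the adjoint unitary
kept = phi.copy()
kept[1::2] = 0.0
kept /= np.linalg.norm(kept)
psi_rec = U.conj().T @ kept

# Reconstruction fidelity on the full state
reconstruction = float(np.abs(np.vdot(psi, psi_rec)) ** 2)
print(compression, reconstruction)
```

With `U = np.eye(4)` both fidelities are exactly 1; a random encoder generally gives values strictly below 1, which is what training pushes back up.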
---

# References

Schuld, M., Sinayskiy, I., & Petruccione, F. (2015)

USAGE.md

Lines changed: 77 additions & 0 deletions

@@ -261,6 +261,81 @@ Typical fields:

---

# Quantum autoencoder

Train a quantum autoencoder on a structured family of four-qubit states:

```python
from qml.autoencoder import run_quantum_autoencoder

result = run_quantum_autoencoder(
    n_samples=200,
    noise=0.05,
    test_size=0.25,
    seed=123,
    n_layers=2,
    latent_qubits=2,
    steps=50,
    step_size=0.1,
    family="correlated",
    plot=True,
    save=False,
)
```

---

## Parameters

| parameter     | description               | default      |
| ------------- | ------------------------- | ------------ |
| n_samples     | dataset size              | 200          |
| noise         | family perturbation level | 0.05         |
| test_size     | test fraction             | 0.25         |
| seed          | random seed               | 123          |
| n_layers      | autoencoder ansatz depth  | 2            |
| latent_qubits | retained latent qubits    | 2            |
| steps         | optimisation steps        | 50           |
| step_size     | Adam learning rate        | 0.1          |
| family        | state family              | "correlated" |
| plot          | show plots                | False        |
| save          | save JSON + plots         | False        |

---

## Returned dictionary

Typical fields:

```python
{
    "model",
    "family",

    "seed",

    "n_qubits",
    "latent_qubits",
    "trash_qubits",

    "n_layers",
    "steps",
    "step_size",

    "loss_history",

    "train_compression_fidelity",
    "test_compression_fidelity",

    "train_reconstruction_fidelity",
    "test_reconstruction_fidelity",

    "params",
}
```

---

# Quantum kernel classifier

Compute a quantum kernel matrix and train an SVM:

@@ -611,6 +686,8 @@ python -m qml vqc --steps 50 --plot

python -m qml qcnn --steps 50 --plot

python -m qml autoencoder --steps 50 --plot

python -m qml regression --steps 50 --plot

python -m qml kernel --plot
docs/qml/autoencoder.md

Lines changed: 134 additions & 0 deletions

@@ -0,0 +1,134 @@
# Quantum Autoencoder

This note describes the quantum autoencoder workflow implemented in `qml.autoencoder`.

The current implementation is intentionally compact and package-oriented:

• structured four-qubit input state families
• a trainable encoder/decoder ansatz
• latent and trash subsystem separation
• compression and reconstruction fidelity reporting

---

# Overview

A quantum autoencoder learns a unitary compression map that preserves the
informative degrees of freedom of a quantum state in a smaller latent subspace.

Rather than predicting labels directly, it learns a transformation that moves
discardable information into a trash subsystem.

---

# Model structure

Let the input state be

$$
|\psi(x)\rangle.
$$

The encoder applies a trainable unitary

$$
|\phi(x,\theta)\rangle
=
U(\theta)|\psi(x)\rangle.
$$

If compression succeeds, the state factorizes approximately as

$$
|\phi(x,\theta)\rangle
\approx
|\tilde{\psi}(x)\rangle_{\mathrm{latent}}
\otimes
|0\rangle_{\mathrm{trash}}.
$$

The implementation retains a configurable number of latent qubits and measures
how often the trash subsystem lands in the all-zero basis state.

---

# Training objective

The training signal is the probability of measuring the trash subsystem in
$|0\rangle^{\otimes k}$.

If

$$
p_{\mathrm{trash}}(0 \cdots 0 \mid x,\theta)
$$

denotes that probability, the loss is

$$
\mathcal{L}(\theta)
=
1 - \mathbb{E}_x \left[p_{\mathrm{trash}}(0 \cdots 0 \mid x,\theta)\right].
$$

Minimizing this loss encourages the encoder to compress the structured state
family into the latent subsystem.
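A toy version of this loss can be trained end to end in a few lines (plain NumPy with finite-difference gradients, not the repository's PennyLane training loop; all names here are illustrative). Every input's trash qubit sits at $R_Y(\alpha)|0\rangle$, and a single trainable $R_Y(-\varphi)$ on the trash qubit drives $\mathcal{L}$ to zero as $\varphi \to \alpha$:

```python
import numpy as np

ALPHA = 0.8  # every input's trash qubit is RY(ALPHA)|0>

def trash_state(angle):
    # RY(angle)|0> = [cos(angle/2), sin(angle/2)]
    return np.array([np.cos(angle / 2), np.sin(angle / 2)])

def loss(phi):
    # Encoder applies RY(-phi) to the trash qubit, giving RY(ALPHA - phi)|0>;
    # loss = 1 - P(trash measures 0)
    encoded = trash_state(ALPHA - phi)
    return 1.0 - encoded[0] ** 2

phi, lr, eps = 0.0, 0.5, 1e-6
for _ in range(200):
    grad = (loss(phi + eps) - loss(phi - eps)) / (2 * eps)  # central difference
    phi -= lr * grad

print(phi, loss(phi))  # phi converges toward ALPHA, loss toward 0
```

The full workflow follows the same pattern at four-qubit scale, with the layered ansatz and Adam step size configured via `n_layers` and `step_size`.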
76+
77+
---
78+
79+
# Reconstruction fidelity
80+
81+
To assess whether useful information is preserved, the workflow also computes a
82+
reconstruction fidelity by applying the decoder
83+
84+
$$
85+
U(\theta)^\dagger
86+
$$
87+
88+
after the encoder and comparing the resulting state to the original state.
89+
90+
This yields two complementary metrics:
91+
92+
• compression fidelity on the trash subsystem
93+
• reconstruction fidelity on the full state
94+
95+
---
96+
97+
# Example usage
98+
99+
```python
100+
from qml.autoencoder import run_quantum_autoencoder
101+
102+
result = run_quantum_autoencoder(
103+
family="correlated",
104+
n_samples=200,
105+
n_layers=2,
106+
latent_qubits=2,
107+
steps=50,
108+
)
109+
```
110+
111+
Outputs include:
112+
113+
• train/test compression fidelity
114+
• train/test reconstruction fidelity
115+
• learned ansatz parameters
116+
• loss history
117+
118+
When `save=True`, the workflow writes JSON results and generated figures to:
119+
120+
`results/autoencoder/`
121+
`images/autoencoder/`
122+
123+
---
124+
125+
# State families
126+
127+
The current implementation provides several synthetic state families:
128+
129+
`correlated`
130+
`entangled`
131+
`hybrid`
132+
133+
These are designed to provide structured low-dimensional families that are
134+
meaningful compression targets for a small autoencoder.
