Top arXiv papers

  • PDF
    Fault-tolerant logical entangling gates are essential for scalable quantum computing, but are limited by the error rates and overheads of physical two-qubit gates and measurements. To address this limitation, we introduce phantom codes, quantum error-correcting codes that realize entangling gates between all logical qubits in a code block purely through relabelling of physical qubits during compilation, yielding perfect fidelity with no spatial or temporal overhead. We present a systematic study of such codes. First, we identify phantom codes using complementary numerical and analytical approaches. We exhaustively enumerate all $2.71 \times 10^{10}$ inequivalent CSS codes up to $n=14$ and identify additional instances up to $n=21$ via SAT-based methods. We then construct higher-distance phantom-code families using quantum Reed-Muller codes and the binarization of qudit codes. Across all identified codes, we characterize other supported fault-tolerant logical Clifford and non-Clifford operations. Second, through end-to-end noisy simulations with state preparation, full QEC cycles, and realistic physical error rates, we demonstrate scalable advantages of phantom codes over the surface code across multiple tasks. We observe a one-to-two order-of-magnitude reduction in logical infidelity at comparable qubit overhead for GHZ-state preparation and Trotterized many-body simulation tasks, given a modest preselection acceptance rate. Our work establishes phantom codes as a viable architectural route to fault-tolerant quantum computation with scalable benefits for workloads with dense local entangling structure, and introduces general tools for systematically exploring the broader landscape of quantum error-correcting codes.
  • PDF
    Benchmarking physical devices and verifying logical algorithms are important tasks for scalable fault-tolerant quantum computing. Numerous protocols exist for benchmarking devices before running actual algorithms. In this work, we show that both physical and logical errors of fault-tolerant circuits can even be characterized in situ using syndrome data. To achieve this, we map general fault-tolerant Clifford circuits to subsystem codes using the spacetime code formalism and develop a scheme for estimating Pauli noise in Clifford circuits using syndrome data. We give necessary and sufficient conditions for the learnability of physical and logical noise from given syndrome data, and show that we can accurately predict logical fidelities from the same data. Importantly, our approach requires only a polynomial sample size, even when the logical error rate is exponentially suppressed by the code distance, and thus gives an exponential advantage over methods that use only logical data, such as direct fidelity estimation. We demonstrate the practical applicability of our methods in various scenarios using synthetic data as well as the experimental data from a recent demonstration of fault-tolerant circuits by Bluvstein et al. [Nature 626, 7997 (2024)]. Our methods provide an efficient, in-situ way of characterizing a fault-tolerant quantum computer to help gate calibration, improve decoding accuracy, and verify logical circuits.
  • PDF
    Decoders are a critical component of fault-tolerant quantum computing. They must identify errors based on syndrome measurements to correct quantum states. While finding the optimal correction is NP-hard and thus extremely difficult, approximate decoders with faster runtime often rely on uncontrolled heuristics. In this work, we propose a family of hierarchical quantum decoders with a tunable trade-off between speed and accuracy while retaining guarantees of optimality. We use the Lasserre Sum-of-Squares (SOS) hierarchy from optimization theory to relax the decoding problem. This approach creates a sequence of Semidefinite Programs (SDPs). Lower levels of the hierarchy are faster but approximate, while higher levels are slower but more accurate. We demonstrate that even low levels of this hierarchy significantly outperform standard Linear Programming relaxations. Our results on rotated surface codes and honeycomb color codes show that the SOS decoder approaches the performance of exact decoding. We find that Levels 2 and 3 of our hierarchy perform nearly as well as the exact solver. We analyze the convergence using rank-loop criteria and compare the method against other relaxation schemes. This work bridges the gap between fast heuristics and rigorous optimal decoding.
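    The exact decoding that the SOS hierarchy is benchmarked against can be illustrated with a brute-force minimum-weight decoder; a minimal sketch (the 3-bit repetition-code example and names are illustrative, and the exponential scan over error weights is precisely what makes relaxations necessary at scale):

```python
import itertools
import numpy as np

# Brute-force minimum-weight decoding: the exact baseline that
# relaxation-based decoders approximate. Exponential in n, so it is
# feasible only for tiny codes.

def exact_decode(H, syndrome):
    """Return a minimum-weight binary error e with H e = syndrome (mod 2)."""
    n = H.shape[1]
    for w in range(n + 1):                       # try weights 0, 1, 2, ...
        for support in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(support)] = 1
            if np.array_equal(H @ e % 2, syndrome):
                return e
    return None                                  # no consistent error exists

H = np.array([[1, 1, 0],                         # checks z1 z2 and z2 z3
              [0, 1, 1]])                        # (3-bit repetition code)
print(exact_decode(H, np.array([1, 0])))  # -> [1 0 0]
```

Relaxations such as LP or the SOS hierarchy replace this exhaustive search with a tractable convex program over fractional error variables.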
  • PDF
    Topological quantum computation encodes quantum information in the internal fusion space of non-Abelian anyonic quasiparticles, whose braiding implements logical gates. This goes beyond Abelian topological order (TO) such as the toric code, as its anyons lack internal structure. However, the simplest non-Abelian generalizations of the toric code do not support universality via braiding alone. Here we demonstrate that such minimally non-Abelian TOs can be made universal by treating anyon fusion as a computational primitive. We prepare a 54-qubit TO wavefunction associated with the smallest non-Abelian group, $S_3$, on Quantinuum's H2 quantum processor. This phase of matter exhibits cyclic anyon fusion rules, known to underpin universality, which we evidence by trapping a single non-Abelian anyon on the torus. We encode logical qutrits in the nonlocal fusion space of non-Abelian fluxes and, by combining an entangling braiding operation with anyon charge measurements, realize a universal topological gate set and read-out, which we further demonstrate by topologically preparing a magic state. This work establishes $S_3$ TO as simple enough to be prepared efficiently, yet rich enough to enable universal topological quantum computation.
  • PDF
    The Bravyi-König (BK) theorem is an important no-go theorem for the dynamics of topological stabiliser quantum error correcting codes. It states that any logical operation on a $D$-dimensional topological stabiliser code that can be implemented by a short-depth circuit acts on the codespace as an element of the $D$-th level of the Clifford hierarchy. In recent years, a new type of quantum error correcting codes based on Pauli stabilisers, dubbed Floquet codes, has been introduced. In Floquet codes, syndrome measurements are arranged such that they dynamically generate a codespace at each time step. Here, we show that the BK theorem holds for a definition of Floquet codes based on locally conjugate stabiliser groups. Moreover, we introduce and define a class of generalised unitaries in Floquet codes that need not preserve the codespace at each time step, but that combined with the measurements constitute a valid logical operation. We derive a canonical form of these generalised unitaries and show that the BK theorem holds for them too.
  • PDF
    In this paper, we focus on the problem of computing the set of diagonal transversal gates fixing a CSS code. We determine the logical actions of the gates as well as the groups of transversal gates that induce non-trivial logical gates and logical identities. We explicitly provide the set of equations defining these groups, a key advantage of our approach. We compute the complete set of transversal stabilizers and transversal gates for any CSS code arising from monomial codes, a family that includes decreasing monomial codes and polar codes. As a consequence, we recover and extend some results in the literature on CSS-T codes, triorthogonal codes, and divisible codes.
  • PDF
    Strassen's asymptotic spectrum offers a framework for analyzing the complexity of tensors. It has found applications in diverse areas, from computer science to additive combinatorics and quantum information. A long-standing open problem, dating back to 1991, asks whether Strassen's support functionals are universal spectral points, that is, points in the asymptotic spectrum of tensors. In this paper, we answer this question in the affirmative by proving that the support functionals coincide with the quantum functionals - universal spectral points that are defined via entropy optimization on entanglement polytopes. We obtain this result as a special case of a general minimax formula for convex optimization on entanglement polytopes (and other moment polytopes) that has further applications to other tensor parameters, including the asymptotic slice rank. Our proof is based on a recent Fenchel-type duality theorem on Hadamard manifolds due to Hirai.
  • PDF
    We investigate a hybrid quantum-classical algorithm for solving the Maximum Independent Set (MIS) problem on regular graphs, combining the Quantum Approximate Optimization Algorithm (QAOA) with a minimal degree classical greedy algorithm. The method leverages pre-computed QAOA angles, derived from depth-$p$ QAOA circuits on regular trees, to compute local expectation values and inform sequential greedy decisions that progressively build an independent set. This hybrid approach maintains shallow quantum circuits and avoids instance-specific parameter training, making it well-suited for implementation on current quantum hardware: we have implemented the algorithm on a 20-qubit IQM superconducting device to find independent sets in graphs with thousands of nodes. We perform tensor network simulations to evaluate the performance of the algorithm beyond the reach of current quantum hardware and compare it to established classical heuristics. Our results show that even at low depth ($p=4$), the quantum-enhanced greedy method significantly outperforms purely classical greedy baselines as well as more sophisticated approximation algorithms. The modular structure of the algorithm and relatively low quantum resource requirements make it a compelling candidate for scalable, hybrid optimization in the NISQ era and beyond.
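    The classical side of the sequential greedy loop can be sketched in a few lines; here the QAOA-derived local expectation values are replaced by a stub minimal-degree score (all names are illustrative; a real implementation would score each vertex from depth-$p$ QAOA circuits on its local tree neighbourhood):

```python
# Greedy independent-set construction: repeatedly pick the best-scoring
# remaining vertex, then delete it and its neighbours from the graph.

def greedy_mis(adj, score=None):
    """Build an independent set from an adjacency dict {vertex: set of nbrs}."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # local mutable copy
    if score is None:
        score = lambda v: -len(adj[v])               # minimal degree first
    chosen, remaining = [], set(adj)
    while remaining:
        v = max(remaining, key=score)
        chosen.append(v)
        for u in adj[v] | {v}:                       # remove v and neighbours
            remaining.discard(u)
            for w in adj[u]:
                adj[w].discard(u)
    return chosen

# 5-cycle: the maximum independent set has size 2
cycle = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(len(greedy_mis(cycle)))  # -> 2
```

Swapping the `score` stub for quantum-circuit expectation values is what turns this purely classical baseline into the hybrid method described above.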
  • PDF
    Quantum machine learning (QML) is often listed as a promising candidate for useful applications of quantum computers, in part due to numerous proofs of possible quantum advantages. A central question is how small a role quantum computers can play while still enabling provable learning advantages over classical methods. We study an especially restricted setting in which a quantum computer is used only as a feature extractor: it acts independently on individual data points, without access to labels or global dataset information, is available only to augment the training set, and is not available at deployment. Training and deployment are therefore carried out by fully classical learners on a dataset augmented with quantum-generated features. We formalize this model by adapting the classical framework of Learning Under Privileged Information (LUPI) to the quantum case, which we call Learning Under Quantum Privileged Information (LUQPI). Within this framework, we show that even such minimally involved quantum feature extraction, available only during training, can yield exponential quantum-classical separations for suitable concept classes and data distributions under reasonable computational assumptions. We further situate LUQPI within a taxonomy of related quantum and classical learning settings and show how standard classical machinery, most notably the SVM+ algorithm, can exploit quantum-augmented data. Finally, we present numerical experiments in a physically motivated many-body setting, where privileged quantum features are expectation values of observables on ground states, and observe consistent performance gains for LUQPI-style models over strong classical baselines.
  • PDF
    We establish efficient algorithms for weakly-interacting quantum spin systems at arbitrary temperature. In particular, we obtain a fully polynomial-time approximation scheme for the partition function and an efficient approximate sampling scheme for the thermal distribution over a classical spin space. Our approach is based on the cluster expansion method and a standard reduction from approximate sampling to approximate counting.
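    As a correctness baseline for the quantity such approximation schemes target, the partition function of a tiny classical spin chain can be computed by brute force (this is not the cluster-expansion algorithm itself, which avoids the exponential enumeration; the Ising-chain energy function below is an illustrative choice):

```python
import itertools
import math

# Brute-force partition function of a classical Ising chain:
# Z = sum_s exp(-beta*E(s)),  E(s) = -sum_i J_i s_i s_{i+1} - sum_i h_i s_i.
# Exponential in the number of spins, so usable only for tiny systems.

def partition_function(J, h, beta):
    n = len(h)
    Z = 0.0
    for s in itertools.product([-1, 1], repeat=n):
        E = -sum(J[i] * s[i] * s[i + 1] for i in range(n - 1))
        E -= sum(h[i] * s[i] for i in range(n))
        Z += math.exp(-beta * E)
    return Z

# sanity check: free spins (J = h = 0) give Z = 2^n
print(partition_function([0.0, 0.0], [0.0, 0.0, 0.0], beta=1.0))  # -> 8.0
```

A fully polynomial-time approximation scheme must reproduce such values to relative error $\varepsilon$ in time polynomial in $n$ and $1/\varepsilon$, which brute force clearly does not.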
  • PDF
    We propose a framework for preparing quantum states with a holographic entanglement structure, in the sense that the entanglement entropies are governed by minimal surfaces in a chosen bulk geometry. We refer to such entropies as holographic because they obey a relation between entropies and bulk minimal surfaces, known as the Ryu-Takayanagi formula, that is a key feature of holographic models of quantum gravity. Typically in such models, the bulk geometry is determined by solving Einstein's equations. Here, we simply choose a bulk geometry, then discretize the geometry into a coupling graph comprising bulk and boundary nodes. Evolving under this graph of interactions and measuring the bulk nodes leaves behind the desired pure state on the boundary. We numerically demonstrate that the resulting entanglement properties approximately reproduce the predictions of the Ryu-Takayanagi formula in the chosen bulk geometry. We consider graphs associated with hyperbolic disk and wormhole geometries, but the approach is general. The minimal ingredients in our proposal involve only Gaussian operations and measurements and are readily implementable in photonic and cold-atom platforms.
  • PDF
    Quantum state tomography (QST) is essential for validating quantum devices but suffers from exponential scaling in system size. Neural-network quantum states, such as Restricted Boltzmann Machines (RBMs), can efficiently parameterize individual many-body quantum states and have been successfully used for QST. However, existing approaches are point-wise and require retraining at every parameter value in a phase diagram. We introduce a parametric QST framework based on a hypernetwork that conditions an RBM on Hamiltonian control parameters, enabling a single model to represent an entire family of quantum ground states. Applied to the transverse-field Ising model, our HyperRBM achieves high-fidelity reconstructions from local Pauli measurements on 1D and 2D lattices across both phases and through the critical region. Crucially, the model accurately reproduces the fidelity susceptibility and identifies the quantum phase transition without prior knowledge of the critical point. These results demonstrate that hypernetwork-modulated neural quantum states provide an efficient and scalable route to tomographic reconstruction across full phase diagrams.
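    The hypernetwork idea can be sketched in a few lines: a small network maps the control parameter $g$ to the RBM parameters, so a single weight set covers a whole family of states (the linear hypernetwork, dimensions, and names below are illustrative assumptions, not the paper's architecture or training procedure):

```python
import numpy as np

# A toy hypernetwork: RBM parameters as an affine function of the
# Hamiltonian control parameter g, so one theta spans the phase diagram.

def hyper_params(g, theta, n_vis=3, n_hid=2):
    """Map control parameter g to RBM parameters (a, b, W)."""
    flat = theta[0] + g * theta[1]                  # theta: (2, n_params)
    a = flat[:n_vis]                                # visible biases
    b = flat[n_vis:n_vis + n_hid]                   # hidden biases
    W = flat[n_vis + n_hid:].reshape(n_hid, n_vis)  # couplings
    return a, b, W

def rbm_amplitude(s, a, b, W):
    """Unnormalised RBM amplitude for a spin configuration s in {-1,+1}^n."""
    return np.exp(a @ s) * np.prod(2 * np.cosh(b + W @ s))

rng = np.random.default_rng(0)
theta = 0.1 * rng.standard_normal((2, 3 + 2 + 6))   # hypernetwork weights
s = np.array([1, -1, 1])
# one theta yields a state ansatz at every point of the g-range
amps = [rbm_amplitude(s, *hyper_params(g, theta)) for g in (0.0, 0.5, 1.0)]
print(all(a > 0 for a in amps))  # -> True
```

Training would fit `theta` to measurement data collected at many values of $g$ at once, which is what removes the need for point-wise retraining.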
  • PDF
    The toric code, when deformed in a way that preserves the self-duality $\mathbb{Z}_2$ symmetry exchanging the electric and magnetic excitations, admits a transition to a topologically trivial state that spontaneously breaks the $\mathbb{Z}_2$ symmetry. Numerically, this transition was found to be continuous, which makes it particularly enigmatic given the longstanding absence of a continuum field-theoretic description. In this work we propose such a continuum field theory for the transition dubbed the $SO(4)_{2,-2}$ Chern-Simons-Higgs (CSH) theory. We show that our field theory provides a natural "mean-field" understanding of the phase diagram. Moreover, it can be generalized to an entire series of theories, namely the $SO(4)_{k,-k}$ CSH theories, labeled by an integer $k$. For each $k>2$, the theory describes an analogous transition involving different non-Abelian topological orders, such as the double Fibonacci order ($k=3$) and the $S_3$ quantum double ($k=4$). For $k=1$, we conjecture that the corresponding CSH transition is in fact infrared-dual to the $3d$ Ising transition, in close analogy with the particle-vortex duality of a complex scalar.
  • PDF
    Majorana stars, the $2S$ spin coherent states that are orthogonal to a spin-$S$ state, offer an elegant method to visualize quantum states. This representation provides deep insights into the structure, symmetries, and entanglement properties of quantum states, bridging abstract algebraic formulations with geometric intuition. In this paper, we briefly survey the development and applications of the Majorana constellation, exploring its relevance in modern areas of quantum information.
  • PDF
    Symmetry provides powerful non-perturbative constraints in quantum many-body systems. A prominent example is the Lieb-Schultz-Mattis (LSM) anomaly -- a mixed 't Hooft anomaly between internal and translational symmetries that forbids a trivial symmetric gapped phase. In this work, we investigate lattice translation operators in systems with an LSM anomaly. We construct explicit lattice models in two and three spatial dimensions and show that, after gauging the full internal symmetry, translation becomes non-invertible and fuses into defects of the internal symmetry. This result is supported by anomaly-inflow arguments from topological field theory. Our work extends earlier one-dimensional observations to a unified higher-dimensional framework and clarifies their origin in mixed anomalies and higher-group structures, highlighting a coherent interplay between internal and crystalline symmetries.
  • PDF
    Quantum metrology involves the application of quantum resources to enhance measurements. Several communities have developed quantum-metrology strategies that leverage effective time reversals. These strategies, we posit, form four classes. First, echo metrology begins with a preparatory unitary and ends with that unitary's time-reverse. The protocol amplifies the visibility of a small parameter to be sensed. Similarly, weak-value amplification enhances a weak coupling's detectability. The technique exhibits counterintuitive properties captured by a retrocausal model. Using the third strategy, one simulates closed timelike curves, worldlines that loop back on themselves in time. The fourth strategy involves indefinite causal order, which characterises channels applied in a superposition of orderings. We review these four strategies, which we unify under the heading of time-reverse metrology. We also outline opportunities for this toolkit in quantum metrology; quantum information science; quantum foundations; atomic, molecular, and optical physics; and solid-state physics.
  • PDF
    We compare several definitions of entropy production rate introduced in the literature, arising from a variety of situations and motivations, and then analyze their relations with memory effects. Considering a relevant experimental example of a qubit interacting with a single bosonic mode playing the role of a finite bath, we show that all definitions of entropy production coincide at weak coupling. In the strong coupling regime, significant discrepancies emerge between the different entropy production rates, although some similarities in the overall behaviour remain. However, surprisingly, two of these definitions -- one based on local quantities of the system and the other on non-local quantities -- coincide exactly, even in the case of strong coupling. Finally, a high degree of correspondence is observed when memory effects characterized by P-divisibility are compared with the sign of all entropy production rates in the case of weak coupling. Such correspondence degrades at strong coupling, leading us to extend the concept of entropy production to the dynamical map. We show a perfect equivalence between the sign of this enlarged concept of entropy production and P-divisibility, both numerically and analytically, in the case of phase-covariant master equations.
  • PDF
    Quantifying how much a quantum state breaks a symmetry is essential for characterizing phases, nonequilibrium dynamics, and open-system behavior. Quantum resource theory provides a rigorous operational framework to define and characterize such quantifiers of symmetry-breaking. As a starting point, we illustrate the usefulness of resource theory by noting that the second-Rényi entanglement asymmetry can increase under symmetric operations, and hence is not a resource monotone and should not solely be used to capture the quantum Mpemba effect. More importantly, motivated by mixed-state physics where weak and strong symmetries are inequivalent, we formulate a new resource theory tailored to strong symmetry, identifying free states and strong-covariant operations. This framework systematically identifies quantifiers of strong symmetry breaking for a broad class of symmetry groups, including a strong entanglement asymmetry. A particularly transparent structure emerges for U(1) symmetry, where the resource theory for strong symmetry breaking has a completely parallel structure to entanglement theory: the variance of the conserved quantity fully characterizes the asymptotic manipulation of strong symmetry breaking. By connecting this result to the geometry of quantum state space, we obtain a quantitative framework to track how weak symmetry breaking is irreversibly converted into strong symmetry breaking in open quantum systems. We further propose extensions to generalized symmetries and illustrate the qualitative impact of strong symmetry breaking in analytically tractable QFT examples and applications.
  • PDF
    Open quantum systems interact with their environment, leading to nonunitary dynamics. We investigate the thermodynamics of linear Open Quantum Walks (OQWs), a class of quantum walks whose dynamics is entirely driven by the environment. We define an equilibrium temperature, identify a population inversion near a finite critical value of a control parameter, analyze the thermalization process, and develop the statistical mechanics needed to describe the thermodynamical properties of linear OQWs. We also study nonequilibrium thermodynamics by analyzing the time evolution of entropy, energy, and temperature, while providing analytical tools to understand the system's evolution as it converges to the thermalized state. We examine the validity of the second and third laws of thermodynamics in this setting. Finally, we employ these developments to shed light on dissipative quantum computation within the OQW framework.
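    A single step of a linear OQW can be sketched directly (the translation-invariant walk on the integer line, two-dimensional internal space, and specific Kraus operators below are illustrative assumptions; trace preservation follows from the completeness relation Bm^dag Bm + Bp^dag Bp = I):

```python
import numpy as np

# Toy linear open quantum walk: the walker carries a 2x2 internal density
# matrix at each site; each step splits it left/right via Kraus operators.

B_minus = np.sqrt(2 / 3) * np.eye(2)   # hop left
B_plus = np.sqrt(1 / 3) * np.eye(2)    # hop right
# completeness: Bm^dag Bm + Bp^dag Bp = (2/3 + 1/3) I = I

def oqw_step(state):
    """state: dict mapping site -> 2x2 internal density matrix."""
    new = {}
    for x, rho in state.items():
        for shift, B in ((-1, B_minus), (+1, B_plus)):
            new.setdefault(x + shift, np.zeros((2, 2), dtype=complex))
            new[x + shift] = new[x + shift] + B @ rho @ B.conj().T
    return new

state = {0: np.diag([0.5, 0.5]).astype(complex)}
for _ in range(3):
    state = oqw_step(state)
# total probability (trace summed over sites) is conserved
print(round(sum(np.trace(r).real for r in state.values()), 10))  # -> 1.0
```

The site-resolved traces give the walker's position distribution, the starting point for defining temperatures and entropies of the walk.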
  • PDF
    Protecting quantum information through quantum error correction (QEC) is a cornerstone of future fault-tolerant quantum computation. However, current QEC-protected logical qubits have only achieved coherence times about twice those of their best physical constituents. Here, we show that the primary barrier to higher QEC gains is ancilla-induced operational errors rather than intrinsic cavity coherence. To overcome this bottleneck, we introduce error-detectable universal control of bosonic modes, wherein ancilla relaxation events are detected and the corresponding trajectories discarded, thereby suppressing operational errors on logical qubits. For binomial codes, we demonstrate universal gates with fidelities exceeding $99.6\%$ and QEC gains of $8.33\times$ beyond break-even. Our results show that gains beyond $10\times$ are achievable with state-of-the-art devices, establishing a path toward fault-tolerant bosonic quantum computing.
  • PDF
    Determining when the multiparameter quantum Cramér--Rao bound (QCRB) is saturable with experimentally relevant single-copy measurements is a central open problem in quantum metrology. Here we establish an equivalence between QCRB saturation and the simultaneous hollowization of a set of traceless operators associated with the estimation model, i.e., the existence of complete (generally nonorthogonal) bases in which all corresponding diagonal matrix elements vanish. This formulation yields a geometric characterization: optimal rank-one measurement vectors are confined to a subspace orthogonal to a state-determined Hermitian span. This provides a direct criterion to construct optimal Positive Operator-Valued Measures (POVMs). We then identify conditions under which the partial commutativity condition proposed in [Phys. Rev. A 100, 032104 (2019)] becomes necessary and sufficient for the saturation of the QCRB, demonstrate that this condition is not always sufficient, and prove the counter-intuitive uselessness of informationally-complete POVMs.
  • PDF
    Many quantum software development kits provide a suite of circuit optimisation passes. These passes have been highly optimised and tested in isolation. However, the order in which they are applied is left to the user, or else defined in general-purpose default pass sequences. While general-purpose sequences miss opportunities for optimisation which are particular to individual circuits, designing pass sequences bespoke to particular circuits requires exceptional knowledge about quantum circuit design and optimisation. Here we propose and demonstrate training a reinforcement learning agent to compose optimisation-pass sequences. In particular the agent's action space consists of passes for two-qubit gate count reduction used in default PyTKET pass sequences. For the circuits in our diverse test set, the (mean, median) fraction of two-qubit gates removed by the agent is $(57.7\%, \ 56.7 \%)$, compared to $(41.8 \%, \ 50.0 \%)$ for the next best default pass sequence.
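    The pass-sequencing problem can be illustrated on a toy circuit where pass order matters; here an exhaustive search over stub passes stands in for the trained agent (all names are illustrative stand-ins, not PyTKET APIs, and the "circuit" is just a gate list):

```python
import itertools

# Two toy optimisation passes on a circuit represented as a gate list.
# Their order matters: dropping identities can expose new CX cancellations.

def cancel_adjacent_cx(circ):
    """Cancel pairs of identical adjacent CX gates."""
    out = []
    for g in circ:
        if out and out[-1] == g == "CX":
            out.pop()
        else:
            out.append(g)
    return out

def drop_identity(circ):
    """Remove identity gates."""
    return [g for g in circ if g != "I"]

PASSES = [cancel_adjacent_cx, drop_identity]

def best_sequence(circ, length=3):
    """Try every pass ordering of the given length; keep the fewest-CX result."""
    best = circ
    for seq in itertools.product(PASSES, repeat=length):
        c = circ
        for p in seq:
            c = p(c)
        if c.count("CX") < best.count("CX"):
            best = c
    return best

circ = ["CX", "I", "CX", "H", "CX", "CX"]
print(best_sequence(circ))  # -> ['H']
```

On this circuit, cancelling first leaves two CX gates stranded behind an identity, while dropping identities first allows full cancellation; an RL agent replaces the exhaustive search with a learned policy over such orderings.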
  • PDF
    Superradiance is a quantum phenomenon in which coherence between emitters results in enhanced and directional radiative emission. Many quantum optical phenomena can be characterized by the two-time quantum correlation function $g^{(2)}(t,\tau)$, which describes the photon statistics of emitted radiation. However, the critical task of determining $g^{(2)}(t,\tau)$ becomes intractable for large emitter ensembles due to the exponential scaling of the Hilbert space dimension with the number of emitters. Here, we analyse and benchmark two approximate numerical sampling methods applicable to emitter arrays embedded within electromagnetic environments, which generally provide upper and lower bounds for $g^{(2)}(t,0)$. We also introduce corrections to these methods (termed offset corrections) that significantly improve the quality of the predictions. The optimal choice of method depends on the total number of emitters, such that taken together, the two approaches provide accurate descriptions across a broad range of important regimes. This work therefore provides new theoretical tools for studying the well-known yet complex phenomenon of superradiance in large ensembles of quantum emitters.
  • PDF
    We study the lattice version of higher-form symmetries on tensor-product Hilbert spaces. Interestingly, at low energies, these symmetries may not flow to the topological higher-form symmetries familiar from relativistic quantum field theories, but instead to non-topological higher-form symmetries. We present concrete lattice models exhibiting this phenomenon. One particular model is an $\mathbb{R}$ generalization of the Kitaev honeycomb model featuring an $\mathbb{R}$ lattice 1-form symmetry. We show that its low-energy effective field theory is a gapless, non-relativistic theory with a non-topological $\mathbb{R}$ 1-form symmetry. In both the lattice model and the effective field theory, we demonstrate that the non-topological $\mathbb{R}$ 1-form symmetry is not robust against local perturbations. In contrast, we also study various modifications of the toric code and their low-energy effective field theories to demonstrate that the compact $\mathbb{Z}_2$ lattice 1-form symmetry does become topological at low energies unless the Hamiltonian is fine-tuned. Along the way, we clarify the rules for constructing low-energy effective field theories in the presence of multiple superselection sectors. Finally, we argue on general grounds that non-compact higher-form symmetries (such as $\mathbb{R}$ and $\mathbb{Z}$ 1-form symmetries) in lattice systems generically remain non-topological at low energies, whereas compact higher-form symmetries (such as $\mathbb{Z}_{n}$ and $U(1)$ 1-form symmetries) generically become topological.
  • PDF
    We show that computing even very coarse approximations of critical points is intractable for simple classes of nonconvex functions. More concretely, we prove that if there exists a polynomial-time algorithm that takes as input a polynomial in $n$ variables of constant degree (as low as three) and outputs a point whose gradient has Euclidean norm at most $2^n$ whenever the polynomial has a critical point, then P=NP. The algorithm is permitted to return an arbitrary point when no critical point exists. We also prove hardness results for approximate computation of critical points under additional structural assumptions, including settings in which existence and uniqueness of a critical point are guaranteed, the function is lower bounded, and approximation is measured in terms of distance to a critical point. Overall, our results stand in contrast to the commonly-held belief that, in nonconvex optimization, approximate computation of critical points is a tractable task.
  • PDF
    We study how energy and quantum entanglement are transferred when two identical CFTs are entangled locally. This is probed by considering a local operator insertion in one of the CFTs. When the CFTs have holographic duals via the AdS/CFT correspondence, the transfer happens through an AdS wormhole that allows signal propagation even beyond the horizon from one AdS boundary to the other; we demonstrate this in explicit CFT calculations. We argue that this transmission is possible because the insertion of a local operator is not a unitary process but a regularized version of projection measurement, and that this is interpreted as quantum teleportation. We also find that this leads to a phenomenon opposite to scrambling, where mutual information, instead of being suppressed, gets enhanced by the insertion of a local operator excitation.
  • PDF
    We study the problem of estimating the number of edges in an unknown graph. We consider a hybrid model in which an algorithm may issue independent set, degree, and neighbor queries. We show that this model admits strictly more efficient edge estimation than either access type alone. Specifically, we give a randomized algorithm that outputs a $(1\pm\varepsilon)$-approximation of the number of edges using $O\left(\min\left(\sqrt{m}, \sqrt{\frac{n}{\sqrt{m}}}\right)\cdot\frac{\log n}{\varepsilon^{5/2}}\right)$ queries, and prove a nearly matching lower bound. In contrast, prior work shows that in the local query model (Goldreich and Ron, Random Structures & Algorithms 2008) and in the independent set query model (Beame et al., ITCS 2018; Chen et al., SODA 2020), edge estimation requires $\widetilde{\Theta}(n/\sqrt{m})$ queries in the same parameter regimes. Our results therefore yield a quadratic improvement in the hybrid model, and no asymptotically better improvement is possible.
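    The basic degree-query primitive behind such estimators is simple: a uniformly random vertex has expected degree $2m/n$, so sampled degrees give an unbiased estimate of $m$ (a naive sketch of that building block, not the paper's hybrid algorithm; names are illustrative):

```python
import random

# Naive edge estimator from degree queries alone: sample vertices
# uniformly, average their degrees, and rescale by n/2.

def estimate_edges(degree, n, samples, seed=0):
    """degree: callable answering a degree query for a vertex id in [0, n)."""
    rng = random.Random(seed)
    mean_deg = sum(degree(rng.randrange(n)) for _ in range(samples)) / samples
    return n * mean_deg / 2

# n-cycle: every vertex has degree 2, so m = n and the estimate is exact
print(estimate_edges(lambda v: 2, 100, samples=10))  # -> 100.0
```

The hard regime is graphs with highly skewed degrees, where the sample mean concentrates slowly; that is where combining degree, neighbor, and independent set queries pays off.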
  • PDF
    Belief propagation with quantum messages (BPQM) provides a low-complexity alternative to collective measurements for communication over classical--quantum channels. Prior BPQM constructions and density-evolution (DE) analyses have focused on binary alphabets. Here, we generalize BPQM to symmetric q-ary pure-state channels (PSCs) whose output Gram matrix is circulant. For this class, we show that bit-node and check-node combining can be tracked efficiently via closed-form recursions on the Gram-matrix eigenvalues, independent of the particular physical realization of the output states. These recursions yield explicit BPQM unitaries and analytic bounds on the fidelities of the combined channels in terms of the input-channel fidelities. This provides a DE framework for symmetric q-ary PSCs that allows one to estimate BPQM decoding thresholds for LDPC codes and to construct polar codes on these channels.
  • PDF
    Identifying the top-$k$ items is fundamental but often prohibitive when exact valuations are expensive. We study a two-oracle setting with a fast, noisy weak oracle and a scarce, high-fidelity strong oracle (e.g., human expert verification or expensive simulation). We first analyze a simple screen-then-certify baseline (STC) and prove it makes at most $m(4\varepsilon_{\max})$ strong calls given jointly valid weak confidence intervals with maximum radius $\varepsilon_{\max}$, where $m(\cdot)$ denotes the near-tie mass around the top-$k$ threshold. We establish a conditional lower bound of $\Omega(m(\varepsilon_{\max}))$ for any algorithm given the same weak uncertainty. Our main contribution is ACE, an adaptive certification algorithm that focuses strong queries on critical boundary items, achieving the same $O(m(4\varepsilon_{\max}))$ bound while reducing strong calls in practice. We then introduce ACE-W, a fully adaptive two-phase method that allocates weak budget adaptively before running ACE, further reducing strong costs.
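    A minimal sketch of the screen-then-certify idea, assuming the weak oracle returns per-item confidence intervals and the strong oracle returns exact values (the specific decision rules and names below are illustrative, not the paper's exact algorithm):

```python
# Screen with weak confidence intervals; spend strong-oracle calls only
# on near-tie items whose intervals straddle the top-k boundary.

def screen_then_certify(weak_ci, strong, items, k):
    """Return (top-k selection, number of strong-oracle calls)."""
    ci = {i: weak_ci(i) for i in items}
    sure_in, boundary = [], []
    for i in items:
        lo, hi = ci[i]
        beaters = sum(1 for j in items if j != i and ci[j][1] > lo)
        losers = sum(1 for j in items if j != i and ci[j][0] > hi)
        if beaters < k:
            sure_in.append(i)        # fewer than k items could beat i
        elif losers < k:
            boundary.append(i)       # near-tie: certify with strong oracle
        # else: at least k items certainly beat i, so i is surely out
    certified = sorted(boundary, key=strong, reverse=True)
    return sure_in + certified[: k - len(sure_in)], len(boundary)

topk, strong_calls = screen_then_certify(
    lambda i: (i - 0.6, i + 0.6),    # weak oracle: +-0.6 interval
    lambda i: i,                     # strong oracle: exact value
    list(range(10)), 3)
print(sorted(topk), strong_calls)  # -> [7, 8, 9] 2
```

With well-separated values only the items near the top-$k$ threshold, the "near-tie mass" $m(\cdot)$ above, ever reach the strong oracle; ACE refines which boundary items get queried first.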
  • PDF
    We demonstrate that a boundary-localized periodic (Floquet) drive can induce nontrivial long-range correlations in a non-interacting fermionic chain which is additionally subject to boundary dissipation. Surprisingly, we find that this phenomenon occurs even when the corresponding isolated bulk is in a trivial gapped phase with exponentially decaying correlations. We argue that this boundary-drive induced non-equilibrium transition (as witnessed through the correlation matrix) is driven by a resonance mechanism whereby the drive frequency bridges bulk energy gaps, allowing boundary-injected particles and holes to propagate and mediate long-range correlations into the bulk. We also numerically establish that when the drive bridges a particle-hole gap, the induced long-range order scales as a power law with the bulk pairing potential ($\chi \sim \gamma^2$). Our results highlight the potential of localized coherent driving for generating macroscopic order in open quantum systems.
  • PDF
    We investigate the classical limit of quantum master equations featuring double-bracket dissipators. Specifically, we consider dissipators defined by double commutators, which describe dephasing dynamics, as well as dissipators involving double anticommutators, associated with fluctuating anti-Hermitian Hamiltonians. The classical limit is obtained by formulating the open quantum dynamics in phase space using the Wigner function and Moyal products, followed by a systematic $\hbar$-expansion. We begin with the well-known model of energy dephasing, associated with energy diffusion. We then turn to master equations containing a double anticommutator with the system Hamiltonian, recently derived in the context of noisy non-Hermitian systems. For both classes of double-bracket equations, we provide a gradient-flow representation of the dynamics. We analyze the classical limit of the resulting evolutions for harmonic and driven anharmonic quantum oscillators, considering both classical and nonclassical initial states. The dynamics is characterized through the evolution of several observables as well as the Wigner logarithmic negativity. We conclude by extending our analysis to generalized master equations involving higher-order nested brackets, which provide a time-continuous description of spectral filtering techniques commonly used in the numerical analysis of quantum systems.
  • PDF
    Hybrid Transformer architectures, which combine softmax attention blocks and recurrent neural networks (RNNs), have shown a desirable performance-throughput tradeoff for long-context modeling, but their adoption and study are hindered by the prohibitive cost of large-scale pre-training from scratch. Some recent studies have shown that pre-trained softmax attention blocks can be converted into RNN blocks through parameter transfer and knowledge distillation. However, these transfer methods require substantial amounts of training data (more than 10B tokens), and the resulting hybrid models also exhibit poor long-context performance, which is the scenario where hybrid models enjoy significant inference speedups over Transformer-based models. In this paper, we present HALO (Hybrid Attention via Layer Optimization), a pipeline for distilling Transformer models into RNN-attention hybrid models. We then present HypeNet, a hybrid architecture with superior length generalization enabled by a novel position encoding scheme (named HyPE) and various architectural modifications. We convert the Qwen3 series into HypeNet using HALO, achieving performance comparable to the original Transformer models while enjoying superior long-context performance and efficiency. The conversion requires just 2.3B tokens, less than 0.01% of their pre-training data.
  • PDF
    Interacting spin systems in solids underpin a wide range of quantum technologies, from quantum sensors and single-photon sources to spin-defect-based quantum registers and processors. We develop a quantum-computer-aided framework for simulating such devices using a general electron spin resonance Hamiltonian incorporating zero-field splitting, the Zeeman effect, hyperfine interactions, dipole-dipole spin-spin terms, and electron-phonon decoherence. Within this model, we combine Gray-encoded qudit-to-qubit mappings, qubit-wise commuting aggregation, and a multi-reference selected quantum Krylov fast-forwarding (sQKFF) hybrid algorithm to access long-time dynamics while remaining compatible with NISQ and early fault-tolerant hardware constraints. Numerical simulations demonstrate the computation of autocorrelation functions up to $\sim100$ ns, together with microwave absorption spectra and the $\ell_1$-norm of coherence, achieving 18-30$\%$ reductions in gate counts and circuit depth for Trotterized time-evolution circuits compared to unoptimized implementations. Using the nitrogen vacancy center in diamond as a testbed, we benchmark the framework against classical simulations and identify the reference-state selection in sQKFF as the primary factor governing accuracy at fixed hardware cost. This methodology provides a flexible blueprint for using quantum computers to design, compare, and optimize solid-state spin-qubit technologies under experimentally realistic conditions.
  • PDF
    Various statistical tasks, including sampling or computing Wasserstein barycenters, can be reformulated as fixed-point problems for operators on probability distributions. Accelerating standard fixed-point iteration schemes provides a promising novel approach to the design of efficient numerical methods for these problems. The Wasserstein geometry on the space of probability measures, although not precisely Riemannian, allows us to define various useful Riemannian notions, such as tangent spaces, exponential maps and parallel transport, motivating the adaptation of Riemannian numerical methods. We demonstrate this by developing and implementing the Riemannian Anderson Mixing (RAM) method for Gaussian distributions. The method reuses the history of the residuals and improves the iteration complexity, and we argue that the additional costs, compared to the Picard method, are negligible. We show that certain open balls in the Bures-Wasserstein manifold satisfy the requirements for convergence of RAM. The numerical experiments show a significant acceleration compared to a Picard iteration, and performance on par with Riemannian Gradient Descent and Conjugate Gradient methods.
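    In flat Euclidean space, the Anderson mixing scheme that RAM adapts looks as follows. This is a generic AA(m) sketch, not the paper's Riemannian implementation; on the Bures-Wasserstein manifold the differences below would be replaced by logarithm maps and parallel transports:

```python
import numpy as np

def anderson(g, x0, m=3, iters=20):
    """Euclidean Anderson acceleration AA(m) for a fixed point of g.

    Keeps a short history of iterates and residuals, and forms the next
    iterate as a least-squares mixture of previous fixed-point steps.
    """
    X, G = [np.asarray(x0, dtype=float)], [np.asarray(g(x0), dtype=float)]
    for _ in range(iters):
        F = [Gi - Xi for Gi, Xi in zip(G, X)]             # residuals g(x)-x
        if len(F) == 1:
            x = G[-1]                                      # plain Picard step
        else:
            dF = np.stack([F[i+1] - F[i] for i in range(len(F)-1)], axis=1)
            dG = np.stack([G[i+1] - G[i] for i in range(len(G)-1)], axis=1)
            gamma, *_ = np.linalg.lstsq(dF, F[-1], rcond=None)
            x = G[-1] - dG @ gamma                         # mixed iterate
        X.append(x); G.append(np.asarray(g(x), dtype=float))
        X, G = X[-m:], G[-m:]                              # truncate history
    return X[-1]

# Affine contraction g(x) = Ax + b with fixed point (I - A)^{-1} b:
A = np.array([[0.5, 0.2], [0.1, 0.6]])
b = np.array([1.0, 2.0])
x_star = anderson(lambda v: A @ v + b, np.zeros(2))
```

For an affine map like this, the mixed iterate becomes exact as soon as the residual differences span the space, which is the acceleration over the plain Picard iteration that the paper quantifies on the Bures-Wasserstein manifold.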
  • PDF
    A growing variety of optically accessible spin qubits have emerged in recent years as key components for quantum sensors, qubits, and quantum memories. However, the scalability of conventional spin-based quantum architectures remains limited by direct microwave delivery, which introduces thermal noise, electromagnetic cross-talk, and design constraints for cryogenic, high-field, and distributed systems. In this work, we present a unified framework for RF-over-fiber (RFoF) control of optically accessible spins through RFoF optically detected magnetic resonance (ODMR) spectroscopy of nitrogen-vacancy (NV) centers in diamond. The RFoF platform relies on an electro-optically modulated telecom-band laser that transmits microwave signals over fiber and a high-speed photodiode that recovers the RF waveform to drive NV center spin transitions. We obtain an RFoF efficiency of 1.81\% at 2.90~GHz, corresponding to $P_{\mathrm{RF,out}}=-0.7$~dBm. The RFoF architecture provides a path toward low-noise, thermally isolated, and cryo-compatible ODMR systems bridging conventional spin-based quantum sensing protocols with emerging distributed quantum technologies.
  • PDF
    The fundamental trade-off between privacy and utility remains an active area of research. Our contribution is motivated by two observations. First, privacy mechanisms developed for one-time data release cannot straightforwardly be extended to sequential releases. Second, practical databases are likely to be useful to multiple distinct parties. Furthermore, we cannot rule out the possibility of data sharing between parties. With utility in mind, we formulate a privacy-utility trade-off problem to adaptively tackle sequential data requests made by different, potentially colluding entities. We consider both expected distortion and mutual information as measures to quantify utility, and use mutual information to measure privacy. We assume an attack model whereby illicit data sharing, which we call collusion, can occur between data receivers. We develop an adaptive algorithm for data releases that makes use of a modified Blahut-Arimoto algorithm. We show that the resulting data releases are optimal when expected distortion quantifies utility, and locally optimal when mutual information quantifies utility. Finally, we discuss how our findings may extend to applications in machine learning.
  • PDF
    Polaron formation in pump-probe experiments is an inherently non-equilibrium phenomenon, driven by the ultrafast coupled dynamics of electrons and phonons, and culminating in the emergence of a localized quasiparticle state. In this work, we present a first-principles quantum-kinetic theory of polaron formation that captures the real-time evolution of electronic and lattice degrees of freedom in the presence of electron-phonon coupling. We implement this framework to investigate ultrafast polaron formation in the prototypical polar insulator MgO. This approach allows us to determine the characteristic timescales of polaron localization and to identify its distinctive dynamical fingerprint. Our results establish clear and experimentally accessible criteria for identifying polaron formation in pump-probe experiments.
  • PDF
    Erasure qubits are beneficial for quantum error correction due to their relaxed threshold requirements. While dual-rail erasure qubits have been demonstrated with a strong error hierarchy in circuit quantum electrodynamics, biased-erasure qubits -- where erasures originate predominantly from one logical basis state -- offer further advantages. Here, we realize a hardware-efficient biased-erasure qubit encoded in the vacuum and two-photon Fock states of a single microwave cavity. The qubit exhibits an erasure bias ratio of over 265. By using a transmon ancilla for logical measurements and mid-circuit erasure detections, we achieve logical state assignment errors below 1% and convert over 99.3% leakage errors into detected erasures. After postselection against erasures, we achieve effective logical relaxation and dephasing rates of $(6.2~\mathrm{ms})^{-1}$ and $(3.1~\mathrm{ms})^{-1}$, respectively, which exceed the erasure error rate by factors of 31 and 15, establishing a strong error hierarchy within the logical subspace. These postselected error rates indicate a coherence gain of about 6.0 beyond the break-even point set by the best physical qubit encoded in the two lowest Fock states in the cavity. Moreover, randomized benchmarking with interleaved erasure detections reveals a residual logical gate error of 0.29%. This work establishes a compact and hardware-efficient platform for biased-erasure qubits, promising concatenations into outer-level stabilizer codes toward fault-tolerant quantum computation.
  • PDF
    It is a matter of ongoing discussion whether quantum states can become entangled while interacting only via a classical mediator. This lively debate is deeply interwoven with the question of whether entanglement studies can prove the quantum nature of gravity. However, the answer to this fundamental question depends crucially on which hybrid quantum-classical theory is used. In this letter, we demonstrate that entanglement by a classical mediator is possible within the framework of hybrid van Hove theory, showing that existing no-go theorems on the matter do not universally apply to hybrid theories. After briefly recapitulating the key features of hybrid van Hove theory, we show this using the example of two quantum spins coupled by a classical harmonic oscillator. By deriving the spin density matrix for this scenario and comparing it to its equivalent for a pure quantum system, we show that entanglement between the two spins is generated in both cases. Finally, we illustrate this by presenting the purity and concurrence of the spin-spin system as a decisive measure of entanglement. Our results further imply that quantum entanglement studies cannot rule out consistent quantum theories featuring classical gravity.
  • PDF
    High-coherence, fault-tolerant and scalable quantum computing architectures with long coherence times, fast gates, low losses and low bit-flip errors may be one of the only viable routes to true quantum advantage. In this context, high-frequency high-coherence (HCQC) qubits with new high-performance topologies could be a significant step towards efficient, high-fidelity quantum computing, facilitating compact size, higher scalability and higher-than-conventional operating temperatures. Although transmon-type qubits are routinely designed and manufactured in the few-gigahertz range, normally from 4 to 6 GHz (and at times up to around 10 GHz), achieving higher-frequency operation is challenging and entails special design and manufacturing considerations. This report presents the proposal and preliminary design of an 8-qubit transmon architecture (with a possible upgrade to up to 72 qubits on a chip) operating beyond 10 GHz, together with a new connection topology. The current design spans roughly 11 to 13.5 GHz (with a possible full range of 9-12 GHz at present), centred on an optimal operating frequency of 12.0 GHz, with the aim of achieving stable, compact, low-charge-noise operation, as low as existing fabrication techniques permit. The target is an average relaxation time of up to 1.9 ms and average quality factors of up to $2.75 \times 10^7$, exploiting recent advances in superconducting junction manufacturing using tantalum and niobium/aluminum/aluminum-oxide tri-layer structures on high-resistivity silicon substrates (carried out elsewhere by other groups and referenced in this report).
  • PDF
    Fluxonium superconducting circuits were originally proposed to realize highly coherent qubits. In this work, we explore how these circuits can be used to implement and harness qutrits, by tuning their energy levels and matrix elements via an external flux bias. In particular, we investigate the distinctive features of arrays of fluxonium qutrits, and their potential for the quantum simulation of exotic quantum matter. We identify four different operational regimes, classified according to the plasmon-like versus fluxon-like nature of the qutrit excitations. Highly tunable on-site interactions are complemented by correlated single-particle hopping, pair hopping and non-local interactions, which naturally emerge and have different weights in the four regimes. Dispersive corrections and decoherence are also analyzed. We investigate the rich ground-state phase diagram of qutrit arrays and propose practical dynamical experiments to probe the different regimes. Altogether, fluxonium qutrit arrays emerge as a versatile and experimentally accessible platform to explore strongly correlated bosonic matter beyond the Bose-Hubbard paradigm, and with a potential toward simulating lattice gauge theories and non-Abelian topological states.
  • PDF
    We analyze the finite-size corrections to the crosscap overlap in the two-dimensional classical Ising model along its self-dual critical line. Using a fermionic formulation, we express the lattice crosscap overlap in terms of Bogoliubov angles and develop a contour-integral approach by analytically continuing the lattice momentum to the complex plane. This leads to a remarkably simple expression for the crosscap overlap, which demonstrates that the finite-size corrections decay exponentially with system size. We further derive an exact analytical formula for the corresponding decay constant and show that it is determined by the complex singularity structure of the Bogoliubov angle.
  • PDF
    Block-coordinate descent (BCD) is the method of choice for numerous large-scale optimization problems; however, its theoretical study in the non-convex setting has received less attention. In this paper, we present a new BCD framework to tackle non-convex composite optimization problems, ensuring decrease of the objective function and convergence to a solution. This framework is general enough to include variable metric proximal gradient updates, proximal Newton updates, and alternated minimization updates. This generality allows it to encompass three versions of the most widely used solvers for the sparse precision matrix estimation problem, known as the Graphical Lasso: graphical ISTA, Primal GLasso, and QUIC. We demonstrate the value of this new framework on non-convex sparse precision matrix estimation problems, providing convergence guarantees and up to a $100$-fold reduction in the number of iterations required to reach state-of-the-art estimation quality.
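    The "proximal gradient update per block" pattern can be seen in its simplest convex instance, coordinate-wise proximal gradient for the lasso (a minimal illustration under simple assumptions, far narrower than the paper's framework, which also covers proximal Newton and alternated minimization updates):

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def bcd_lasso(A, b, lam, iters=200):
    """Block-coordinate proximal-gradient sketch for the lasso
    0.5 * ||Ax - b||^2 + lam * ||x||_1, with single-coordinate blocks.

    Each block update is a gradient step in that coordinate followed by
    the prox of the separable penalty. Assumes A has no zero columns.
    """
    x = np.zeros(A.shape[1])
    L = (A ** 2).sum(axis=0)            # per-coordinate Lipschitz constants
    for _ in range(iters):
        for j in range(A.shape[1]):
            r = A @ x - b               # current residual
            g = A[:, j] @ r             # partial gradient w.r.t. x_j
            x[j] = soft_threshold(x[j] - g / L[j], lam / L[j])
    return x

# With A = I the solver reduces to entrywise soft-thresholding of b.
x_hat = bcd_lasso(np.eye(2), np.array([3.0, 0.5]), 1.0)
```

Each inner step is exactly one "variable metric proximal gradient update" with a diagonal metric; the non-convex framework of the paper replaces the quadratic data term and the metric while keeping this block structure.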
  • PDF
    Robust principal component analysis seeks to recover a low-rank matrix from fully observed data with sparse corruptions. A scalable approach fits a low-rank factorization by minimizing the sum of entrywise absolute residuals, leading to a nonsmooth and nonconvex objective. Under standard incoherence conditions and a random model for the corruption support, we study factorizations of the ground-truth rank-$r$ matrix with both factors of rank $r$. With high probability, every such factorization is a Clarke critical point. We also characterize the local geometry: when the factorization rank equals $r$, these solutions are sharp local minima; when it exceeds $r$, they are strict saddle points.
  • PDF
    Photons are among the most important carriers of quantum information owing to their rich degrees of freedom (DoFs), including various spatiotemporal structures. The ability to characterize these DoFs, as well as the hidden correlations among them, directly determines whether they can be exploited for quantum tasks. While various methods have been developed for measuring the spatiotemporal structure of classical light fields, the technical challenges posed by weak photon flux mean that there have so far been no reports of observing such structures in their quantum counterparts, apart from a few studies limited to correlations within individual DoFs. Here, we propose and experimentally demonstrate a self-referenced, high-efficiency, all-optical method, termed 3D imaging of photonic wave packets, for comprehensive characterization of the spatiotemporal structure of a quantum light field, i.e., the biphoton spatiotemporal wave packet. With this method, we observe the spatial-spatial, spectral-spectral, and spatiotemporal correlations of biphotons generated via spontaneous parametric down-conversion, revealing rich local and nonlocal spatiotemporal structure in quantum light fields. This method will further advance the understanding of the dynamics of nonlinear quantum optics and expand the potential of photons for applications in quantum communication and quantum computing.
  • PDF
    Belenchia et al. [Phys. Rev. D 98, 126009 (2018)] have analyzed a gedankenexperiment in which two observers, Alice and Bob, attempt to communicate via superluminal signals using a superposition of massive particles dressed by Newtonian fields and a test particle as field detector. Quantum fluctuations in the particle motion and in the field prevent signaling or violations of quantum mechanics in this setup. We reformulate this thought experiment by considering gravitational waves emitted by an extended quadrupolar object as a detector for Newtonian tidal fields. We find that quantum fluctuations in the gravitational waves prevent signaling. In the Newtonian limit, rotating black holes behave as extended quadrupolar objects, as a consequence of the strong equivalence principle. It follows that consistency of the Newtonian limit of general relativity with quantum mechanics requires the quantization of gravitational radiation, even when the waves originate in strong-gravity sources.
  • PDF
    In the theory of error-correcting codes, the minimum weight and the weight enumerator play a crucial role in evaluating the error-correcting capacity. In this paper, by viewing the weight enumerator as a quasi-polynomial, we reduce the calculation of the minimum weight to that of a code over a smaller integer residue ring. We also give a transformation formula between the Tutte quasi-polynomial and the weight enumerator. Furthermore, we compute the number of maximum weight codewords for the codes related to the matroids $N_k$ and $Z_k$. This is equivalent to computing the characteristic quasi-polynomial of the hyperplane arrangements related to $N_k$ and $Z_k$.
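    As a concrete anchor for the objects above, the weight enumerator of a small code over an integer residue ring $\mathbb{Z}_q$ can be tabulated by brute force (an illustrative sketch, not the paper's quasi-polynomial method, which precisely avoids enumerating over large rings):

```python
from itertools import product
from collections import Counter

def weight_enumerator(G, q):
    """Brute-force weight enumerator of the code over Z_q generated by
    the rows of G: returns {w: A_w}, the number of codewords of
    Hamming weight w.

    Assumes the rows of G are independent over Z_q, so that messages
    map bijectively to codewords.
    """
    k = len(G)
    counts = Counter()
    for msg in product(range(q), repeat=k):
        cw = [sum(m * g for m, g in zip(msg, col)) % q
              for col in zip(*G)]
        counts[sum(c != 0 for c in cw)] += 1
    return dict(counts)

# Repetition code of length 3 over Z_2: codewords 000 and 111.
print(weight_enumerator([[1, 1, 1]], 2))  # {0: 1, 3: 1}
```

The paper's observation is that, viewed as a function of $q$, these coefficients form a quasi-polynomial, so quantities such as the minimum weight can be read off from a computation over a smaller residue ring.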
  • PDF
    In red-detuned magneto-optical traps (MOTs) of molecules, sub-Doppler heating competes with Doppler cooling, resulting in high temperature and low density. A solution is offered by the blue-detuned MOT where sub-Doppler cooling dominates and the cloud is compressed. Several blue-detuned molecular MOTs have been implemented. A recent implementation relies on a pair of orthogonally polarized components whose frequency separation is smaller than the transition linewidth. We identify the trapping force in these MOTs. At a certain magnetic field, there is a state that is dark to the laser propagating in one direction, but not to the counter-propagating one. This Zeeman-induced dark state (ZIDS) sets up an imbalance in the photon scattering rate, leading to a restoring force. We also study the role of the moving lattices generated by the closely-spaced frequency components of the light. We show that there is a velocity-dependent force that drives the molecules towards the speeds of these moving lattices, and that over a relevant range of magnetic fields this combines with the ZIDS force to transport molecules towards the centre of the MOT. Here, gray molasses cooling, assisted by non-adiabatic transitions driven by the time-varying polarization of the light field, cools the molecules towards zero velocity. We study these mechanisms for model systems with simple level structures, then extend them to molecules with ground state hyperfine structure.
  • PDF
    We investigate the emergence of an amplitude (Higgs-like) mode in the gapless phase of the $(1+1)$D XXZ spin chain. Unlike conventional settings where amplitude modes arise from spontaneous symmetry breaking, here, we identify a symmetry-preserving underdamped excitation on top of a Luttinger-liquid ground state. Using nonequilibrium quench spectroscopy, we demonstrate that this mode manifests as oscillations of U(1)-symmetric observables following a sudden quench. By combining numerical simulations with Bethe-ansatz analyses, we trace its microscopic origin to specific families of string excitations. We further discuss experimental pathways to detect this mode in easy-plane quantum magnets and programmable quantum simulators. Our results showcase the utility of quantum quenches as a powerful tool to probe collective excitations, beyond the scope of linear response.
  • PDF
    Cybersecurity operations demand assistant LLMs that support diverse workflows without exposing sensitive data. Existing solutions either rely on proprietary APIs with privacy risks or on open models lacking domain adaptation. To bridge this gap, we curate 11.8B tokens of cybersecurity-focused continual pretraining data via large-scale web filtering and manual collection of high-quality resources, spanning 28.6K documents across frameworks, offensive techniques, and security tools. Building on this, we design an agentic augmentation pipeline that simulates expert workflows to generate 266K multi-turn cybersecurity samples for supervised fine-tuning. Combined with general open-source LLM data, these resources enable the training of RedSage, an open-source, locally deployable cybersecurity assistant with domain-aware pretraining and post-training. To rigorously evaluate the models, we introduce RedSage-Bench, a benchmark with 30K multiple-choice and 240 open-ended Q&A items covering cybersecurity knowledge, skills, and tool expertise. RedSage is further evaluated on established cybersecurity benchmarks (e.g., CTI-Bench, CyberMetric, SECURE) and general LLM benchmarks to assess broader generalization. At the 8B scale, RedSage achieves consistently better results, surpassing the baseline models by up to +5.59 points on cybersecurity benchmarks and +5.05 points on Open LLM Leaderboard tasks. These findings demonstrate that domain-aware agentic augmentation and pre/post-training can not only enhance cybersecurity-specific expertise but also help to improve general reasoning and instruction-following. All models, datasets, and code are publicly available.

Recent comments

Blake Stacey Jan 29 2026 07:14 UTC

Part 2:

The first tenet of QBism is that quantum states are doxastic quantities. This is an interpretation that one can also apply to density operators on a Fock space, to states in a Type II von Neumann algebra, etc. The arguments for giving these entities a doxastic reading work just as well (or

...(continued)
Blake Stacey Jan 29 2026 03:40 UTC

The paper states that QBism "requires a [probabilistic] representation that is not overcomplete". This is inaccurate. See, for example, [arXiv:2312.12790](https://arxiv.org/abs/2312.12790) and [arXiv:2412.13505](https://arxiv.org/abs/2412.13505) by Matt Weiss, or going further back, section 4.1 of F

...(continued)
Seok-Hyung Lee Jan 27 2026 12:30 UTC

Additional note: We've very recently updated [our paper (v4)][1]. Specifically, I think you might be interested in our various attempts for approximating logical gap from a subset of logical classes, which was not very successful. New data for 'BP+LSD logical gap' and 'logical gap proxy' are added t

...(continued)
Seok-Hyung Lee Jan 27 2026 12:11 UTC

Really interesting work! I'm glad to see an efficient, effective, and general post-selection strategy for qLDPC codes developed so soon after our work [18]. I think it is also worth trying to integrate cluster-stat-based and argument-reweighting-based strategies in some way (given that a clustering-

...(continued)
Miguel González Jan 25 2026 21:28 UTC

Is there an introduction to the introduction? Newcomer here with just knowledge about Clifford circuits (without tensor language) and free fermions Hamiltonians. Seems very interesting to me from the point of view of the quantification of "something" that is hard to simulate efficiently (like non st

...(continued)
Wojciech Kryszak Jan 25 2026 20:08 UTC

Dear Blake Stacey,

Thank you! Well, for the general audience, that I belong to, it is of great help to be presented with a clear map where QBism is depicted foremost relatively to the most iconic interpretations only. Stressing relations to those milestones alone can be space consuming enough...

...(continued)
Byungmin Kang Jan 24 2026 14:13 UTC

Hi Oliver, (and also thank Andreas for this nice work)

Let me shamelessly advertise my own work here ([https://scirate.com/arxiv/2505.06336][1]). We address your question by proving a decomposition theorem: any tensor network (with rank-2 tensor closed and open legs) can be (classically) efficien

...(continued)
Alex Nietner Jan 23 2026 21:52 UTC

To add yet another angle to the discussion:
when colloquially talking about a matchgate tensor, or a stabilizer tensor or, a gaussian tensor, one often conflates two levels at which one can talk about these tensors [names are made up]:

1. the array-level
2. the data-level

The array level is im

...(continued)
Andreas Bauer Jan 23 2026 16:34 UTC

Yes, you got it! Small addition: Matchgate tensors are usually defined as qubit tensors, whereas here I'm considering "true" free-fermionic tensors. The difference is that a tensor network of fermionic tensors includes an additional reordering sign, which makes free-fermionic tensor networks efficie

...(continued)
Oliver Reardon-Smith Jan 23 2026 16:22 UTC

Thank you very much for your response Andreas! I was sort-of expecting that was what was happening but managed to confuse myself.

Does the following interpretation make sense?

Even if you have two quadratic tensors which naively look like they should be able to be contracted together (a matc

...(continued)