The field of quantum computing is currently navigating a pivotal and pragmatic stage of development, a period characterized by both remarkable progress and profound limitations. This contemporary phase is best understood through the lens of the "Noisy Intermediate-Scale Quantum" (NISQ) era, a term that has become central to the lexicon of quantum science and technology. This section provides a comprehensive definition of the NISQ era, tracing its conceptual origins, deconstructing its constituent technical characteristics, and establishing the foundational context necessary for a deeper analysis of its capabilities and challenges.
1.1. The Genesis of the Term: A New Lexicon for a New Era
The designation "Noisy Intermediate-Scale Quantum" was formally introduced into the scientific discourse by the esteemed theoretical physicist John Preskill in a seminal 2018 paper.1 Preskill's articulation was not merely a passive description of the existing hardware; it was a strategic framing of a distinct technological epoch and a call to the research community to focus on achievable, near-term objectives. The paper, titled "Quantum Computing in the NISQ era and beyond," posited that quantum processors comprising 50 to 100 qubits might soon perform tasks that surpass the capabilities of the most powerful classical supercomputers.4 However, Preskill critically tempered this optimism by emphasizing that the utility of these devices would be severely constrained by the pervasive effects of noise.6
This carefully balanced perspective was instrumental in shaping the trajectory of quantum research. It provided a realistic framework that managed the often-exuberant expectations surrounding quantum computing, steering the global research effort away from the distant dream of perfectly error-corrected machines and toward the tangible reality of the hardware at hand. The NISQ designation acknowledged that the devices being built were no longer the "toy" few-qubit systems of earlier experiments, yet they fell far short of the large-scale, fault-tolerant quantum computers required for famous algorithms like Shor's prime factorization.4 In doing so, it defined a specific and vital research program: to discover what, if any, useful computations could be performed on this new class of imperfect, medium-sized quantum processors.
1.2. The Three Pillars of NISQ: Deconstructing the Acronym
The descriptive power of the NISQ acronym lies in its three constituent terms, each of which encapsulates a fundamental characteristic of this technological era. A thorough understanding of "Noisy," "Intermediate-Scale," and "Quantum" is essential to grasp the operational realities of current-generation quantum computers.
"Noisy": The Pervasive Challenge of Errors
The "N" in NISQ is arguably its most defining and challenging attribute. It signifies that the quantum bits, or qubits, within these processors are exquisitely sensitive to their external environment and are thus highly susceptible to errors.2 This sensitivity gives rise to a fundamental process known as
quantum decoherence, where the fragile quantum properties of superposition and entanglement are corrupted and lost through interaction with the surroundings.1 This loss of quantum information is the primary source of computational errors.
The physical operations, or "gates," that manipulate qubits are also imperfect. While gate fidelities—a measure of how closely an operation matches its ideal theoretical counterpart—have reached impressive levels, often around 99% to 99.5% for single-qubit gates and 95% to 99% for more complex two-qubit gates, they are critically insufficient for executing long, complex algorithms.1 The core issue is that errors are not static; they accumulate with each successive operation. In the worst-case scenario, this accumulation is exponential, meaning that the computational "signal" is rapidly overwhelmed by "noise." As a practical rule of thumb, current NISQ devices can execute a sequence of approximately 1,000 gates before the accumulated errors render the final result indistinguishable from random noise.1 This constraint on the number of sequential operations, known as "circuit depth," is a hard physical limit that profoundly shapes the landscape of NISQ-era algorithms.
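To see why roughly 1,000 gates is a reasonable budget, consider a back-of-the-envelope model in which every gate independently succeeds with a fixed probability. The Python sketch below uses an assumed 0.1% per-gate error rate purely for illustration; real devices have gate-dependent and correlated errors that this simple model ignores.

```python
# Minimal sketch: circuit fidelity under a uniform per-gate error rate.
# Assumes independent, uncorrelated errors -- a simplification of real devices.

def circuit_fidelity(gate_error: float, n_gates: int) -> float:
    """Probability that no gate in the sequence introduced an error."""
    return (1.0 - gate_error) ** n_gates

for n_gates in (10, 100, 1_000, 10_000):
    f = circuit_fidelity(gate_error=0.001, n_gates=n_gates)  # 99.9% fidelity gates
    print(f"{n_gates:>6} gates -> fidelity ~ {f:.5f}")
# With 0.1% error per gate, fidelity drops to ~0.37 at 1,000 gates and ~0.00005 at
# 10,000, which is why circuit depth is the binding constraint on NISQ algorithms.
```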
"Intermediate-Scale": Beyond Toys, Before Giants
The "I" in NISQ refers to the number of qubits in the processor, a scale that is typically understood to range from approximately 50 to a few hundred, with some definitions extending this boundary to around 1,000 qubits.1 This scale is "intermediate" in a very specific sense. On one hand, it is large enough to create a quantum state space of staggering complexity. The number of parameters required to describe the state of
qubits is . For a 50-qubit system, this corresponds to (over a quadrillion) complex numbers, a quantity that exceeds the memory capacity of even the largest classical supercomputers. This makes the direct classical simulation of such systems intractable, opening the door to potential quantum advantage.17
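A quick numeric check of this claim, assuming 16 bytes of classical memory per complex amplitude (the common double-precision representation):

```python
# Sketch: classical cost of storing an n-qubit state vector, assuming 16 bytes
# (one complex128 amplitude) per basis state.

def statevector_cost(n_qubits: int):
    amplitudes = 2 ** n_qubits
    memory_bytes = amplitudes * 16
    return amplitudes, memory_bytes

for n in (30, 40, 50):
    amps, mem = statevector_cost(n)
    print(f"{n} qubits: {amps:.3e} amplitudes, ~{mem:.2e} bytes")
# 50 qubits -> ~1.13e15 amplitudes (over a quadrillion) and ~1.8e16 bytes
# (roughly 18 petabytes), beyond the memory of any classical supercomputer.
```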
On the other hand, this scale is simultaneously too small to implement the sophisticated error-correction protocols necessary for fault-tolerant computation.19 The number of physical qubits is therefore a crude metric of power. A more nuanced measure, known as
quantum volume, has been introduced to provide a more holistic benchmark. Quantum volume integrates not only the number of qubits but also their quality, including gate fidelity and the richness of their connectivity, to assess the true computational capability of a device.1
"Quantum": Harnessing Non-Classical Phenomena
Despite their inherent imperfections, NISQ devices are unequivocally quantum machines. Their potential to outperform classical computers is derived entirely from their ability to harness and manipulate the principles of quantum mechanics, namely superposition and entanglement.19 Superposition allows a qubit to exist in a probabilistic combination of its two basis states,
|0⟩ and |1⟩, simultaneously. Entanglement creates profoundly strong correlations between qubits, linking their fates in a way that has no classical parallel. A measurement on one entangled qubit can instantaneously influence the outcome of a measurement on another, regardless of the distance separating them. It is the preparation, manipulation, and measurement of these complex, highly entangled multi-qubit states—states that are computationally prohibitive for classical computers to represent and evolve—that form the basis of quantum computation.6
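The sketch below makes these two ingredients concrete with plain linear algebra: it prepares the canonical two-qubit Bell state with a Hadamard followed by a CNOT and then inspects the measurement statistics. It is a NumPy simulation, not a hardware workflow, and the gate matrices are the standard textbook forms.

```python
import numpy as np

# Computational basis state and standard gates.
zero = np.array([1, 0], dtype=complex)                        # |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Start in |00>, put the first qubit in superposition, then entangle with CNOT.
state = np.kron(zero, zero)
state = np.kron(H, np.eye(2)) @ state
bell = CNOT @ state
print(np.round(bell, 3))   # (|00> + |11>)/sqrt(2): amplitudes [0.707, 0, 0, 0.707]

# Measurement statistics: only the correlated outcomes 00 and 11 ever occur.
probs = np.abs(bell) ** 2
print(dict(zip(["00", "01", "10", "11"], np.round(probs, 3))))
```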
1.3. The Defining Absence: Lack of Fault Tolerance
Perhaps the most critical and defining characteristic of the NISQ era is not a presence but an absence: the inability to perform robust, continuous Quantum Error Correction (QEC).1 In classical computing, errors are managed by encoding information with redundancy (e.g., storing a bit three times and using a majority vote to correct a flip). Quantum error correction is conceptually similar but vastly more complex and resource-intensive.
The foundational principle of QEC involves encoding the information of a single, ideal "logical qubit" across a large number of noisy "physical qubits".19 This redundancy allows the system to detect and correct errors without disturbing the encoded quantum information. However, the resource overhead required is immense. Current estimates suggest that protecting a single logical qubit from errors may require anywhere from tens to hundreds, or even on the order of 1,000, physical qubits.10 By definition, NISQ machines lack both the scale (number of qubits) and the quality (low enough physical error rates) to meet these demanding requirements.19
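The flavor of this redundancy argument can be captured with a purely classical analogy: the three-bit repetition code with majority voting mentioned above. The Monte Carlo sketch below is not a quantum code—it ignores phase errors and the constraints that make QEC far harder—but it shows why encoding only pays off once the underlying error rate is already low, which is exactly why NISQ-era error rates and qubit counts fall short of what QEC demands.

```python
import random

def logical_error_rate(p_physical: float, trials: int = 100_000) -> float:
    """Monte Carlo estimate of the logical error rate of a 3-bit repetition code
    with majority-vote decoding (a classical analogy to QEC redundancy)."""
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p_physical for _ in range(3))
        if flips >= 2:          # majority vote decodes incorrectly
            failures += 1
    return failures / trials

for p in (0.30, 0.10, 0.01):
    print(f"physical error {p:.2f} -> logical error ~ {logical_error_rate(p):.4f}")
# For small p the logical rate (~3p^2) falls well below p (0.01 -> ~0.0003), but as
# p grows the benefit shrinks: physical error rates must already be low before
# redundant encoding starts to pay off.
```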
This inability to actively correct errors during a computation is the direct cause of the strict limitation on circuit depth. Every gate applied is another opportunity for an uncorrected error to occur and propagate through the system. Consequently, algorithms for NISQ devices must be "shallow"—that is, they must achieve their computational task within a very limited number of sequential steps before the accumulated noise corrupts the outcome.1
The constraints imposed by the hardware of the NISQ era have done more than simply limit what is possible; they have actively shaped the direction of algorithmic research, giving rise to a distinct computational paradigm. The impossibility of running deep circuits, such as those required for Shor's algorithm, rendered a large class of theoretical quantum algorithms impractical for near-term hardware. In response, the research community developed a new class of algorithms specifically tailored to the strengths and weaknesses of NISQ processors: hybrid quantum-classical algorithms.8 Prominent examples include the Variational Quantum Eigensolver (VQE) and the Quantum Approximate Optimization Algorithm (QAOA).26
This hybrid model reimagines the role of the quantum computer. Instead of being a standalone computational engine, the NISQ device acts as a specialized co-processor or accelerator within a larger classical computational loop. In a typical VQE or QAOA workflow, a short-depth, parameterized quantum circuit is executed on the quantum processor to prepare and measure a quantum state. The classical measurement outcomes are then fed to a classical optimization algorithm, which analyzes the results and suggests updated parameters for the quantum circuit. This iterative loop continues, with the quantum device exploring complex regions of the computational space and the classical computer providing the optimization and control logic. In this way, the very "disadvantage" of limited circuit depth directly spurred the development of a novel and powerful computational approach. The NISQ era is therefore defined as much by this characteristic hybrid algorithmic strategy as it is by the physical specifications of its hardware.
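A schematic of this loop is sketched below. The one-parameter "ansatz" and the Hamiltonian (a single Pauli-Z operator) are toy assumptions chosen so the example runs entirely in NumPy; in a real VQE or QAOA workflow the energy_expectation function would instead submit a parameterized circuit to quantum hardware and estimate the expectation value from measurement shots.

```python
import numpy as np
from scipy.optimize import minimize

# Toy "quantum" subroutine: a one-parameter ansatz R_y(theta)|0> measured against
# the Hamiltonian H = Z. On real hardware this function would submit a circuit,
# collect shots, and return the estimated expectation value.
def energy_expectation(theta: np.ndarray) -> float:
    state = np.array([np.cos(theta[0] / 2), np.sin(theta[0] / 2)])
    Z = np.diag([1.0, -1.0])
    return float(state @ Z @ state)

# Classical outer loop: the optimizer proposes new parameters, the "device"
# evaluates them, and the cycle repeats until convergence.
result = minimize(energy_expectation, x0=np.array([0.1]), method="COBYLA")
print(f"optimal theta ~ {result.x[0]:.3f}, estimated ground-state energy ~ {result.fun:.3f}")
# Converges toward theta ~ pi with energy ~ -1, the ground state of Z.
```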
Section 2: The Anatomy of Quantum Noise
To comprehend the operational landscape of the NISQ era, one must first develop a granular understanding of its defining feature: noise. The term "quantum noise" is not a monolith; it is an umbrella term for a diverse set of physical phenomena that conspire to corrupt quantum information. These phenomena range from fundamental quantum mechanical processes to practical engineering imperfections in hardware control and architecture. This section provides a detailed examination of the sources and manifestations of quantum noise, offering a taxonomy of the errors that plague NISQ devices and outlining the methods used to characterize them.
2.1. Quantum Decoherence: The Fundamental Adversary
At the heart of all quantum noise lies the process of quantum decoherence. This is the fundamental mechanism through which a quantum system loses its uniquely quantum characteristics—superposition and entanglement—and begins to behave in a more classical manner.11 Decoherence is not merely the result of random external disturbances; it is a more profound process stemming from the inevitable and unintentional entanglement of the quantum system (the qubits) with its surrounding environment.12
Every quantum system is coupled, however weakly, to the vast number of uncontrolled degrees of freedom in its environment, such as thermal fluctuations, stray electromagnetic fields, or microscopic defects in the hardware material. This interaction causes the quantum information encoded in the delicate phase relationships and correlations of the qubits to "leak" into the environment, where it becomes effectively lost and inaccessible.11 The result is a decay of the pure quantum state of the qubits into a mixed, classical-like probabilistic state, a process that directly undermines the basis for quantum computational advantage.12
The rate of this decay is quantified by the system's characteristic coherence times. The two most important are the energy relaxation time (T1), which governs the decay of an excited qubit state (|1⟩) to its ground state (|0⟩), and the dephasing time (T2), which governs the loss of phase coherence in a superposition. These coherence times impose a strict upper bound on the total time available for a quantum computation before the stored information is irretrievably lost.12
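A rough sense of what these coherence times imply for circuit duration can be obtained from simple exponential decay models. The T1, T2, and gate-time values below are assumed, order-of-magnitude figures used purely for illustration, not measurements of any particular device.

```python
import numpy as np

# Illustrative coherence times and gate duration (assumed, order-of-magnitude values).
T1 = 100e-6        # energy relaxation time, seconds
T2 = 80e-6         # dephasing time, seconds
gate_time = 50e-9  # duration of one gate, seconds

for n_gates in (100, 1_000, 10_000):
    t = n_gates * gate_time
    p_excited = np.exp(-t / T1)   # surviving |1> population under T1 decay
    coherence = np.exp(-t / T2)   # surviving phase coherence under T2 decay
    print(f"{n_gates:>6} gates ({t*1e6:.0f} us): "
          f"P(|1> survives) ~ {p_excited:.3f}, phase coherence ~ {coherence:.3f}")
# After ~10,000 gates (500 us) both factors have decayed substantially, so total
# circuit duration, not just gate count, bounds what a NISQ device can run.
```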
2.2. A Taxonomy of Errors in NISQ Systems
The fundamental process of decoherence manifests as a variety of specific, classifiable errors at the operational level of a quantum computer. These errors can be broadly categorized into those affecting gate operations, those corrupting the final measurement, and those arising from the system's physical architecture.
Gate Errors: Imperfections in Action
Quantum gates, the building blocks of quantum algorithms, are not the perfect, discrete logical operations of a classical computer. They are analog control processes, typically implemented by precisely timed microwave or laser pulses, and are subject to a range of imperfections.1 These imperfections lead to several distinct types of errors:
Bit-Flip Error (Pauli-X Error): This is the direct quantum analog of a classical bit flip. A qubit that should be in state |0⟩ is erroneously flipped to |1⟩, or vice versa. This can be mathematically represented by the application of the Pauli-X matrix, X = [[0, 1], [1, 0]].28
Phase-Flip Error (Pauli-Z Error): This is a uniquely quantum error with no classical equivalent. It does not change the probability of measuring |0⟩ or |1⟩, but it flips the relative phase between them. For example, the superposition state (|0⟩ + |1⟩)/√2 might be transformed into (|0⟩ − |1⟩)/√2. This error, represented by the Pauli-Z matrix Z = [[1, 0], [0, −1]], is particularly insidious as it corrupts the quantum interference patterns that are the source of power for many quantum algorithms.28
Amplitude Damping: This error channel models the physical process of energy dissipation from the qubit to its environment. It describes the irreversible decay of a qubit from the higher-energy excited state (|1⟩) to the lower-energy ground state (|0⟩), corresponding to the T1 relaxation process.28
Depolarizing Error: This is a more generalized and severe error model that represents a complete randomization of the qubit's state. The qubit state decays towards a maximally mixed state (an equal probability of being |0⟩ or |1⟩ with no phase relationship), which is equivalent to a complete loss of its stored quantum information.28
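A minimal numerical sketch of these channels, using plain NumPy and standard textbook operators, is shown below; the 50% depolarizing probability is an arbitrary illustrative choice.

```python
import numpy as np

# Pauli operators and a sample superposition state (|0> + |1>)/sqrt(2).
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)       # bit flip
Z = np.array([[1, 0], [0, -1]], dtype=complex)      # phase flip
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

print("bit flip on |0>  :", X @ np.array([1, 0]))   # -> |1>
print("phase flip on |+>:", Z @ plus)               # -> (|0> - |1>)/sqrt(2)

# Depolarizing channel on a density matrix rho: with probability p the state is
# replaced by the maximally mixed state I/2 (p = 0.5 is an arbitrary example).
def depolarize(rho: np.ndarray, p: float) -> np.ndarray:
    return (1 - p) * rho + p * I / 2

rho_plus = np.outer(plus, plus.conj())
print("after 50% depolarizing noise:\n", np.round(depolarize(rho_plus, 0.5), 3))
# Off-diagonal ("coherence") terms shrink from 0.5 to 0.25 -- the interference
# information that powers quantum algorithms is being erased.
```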
Measurement Errors: The Final Hurdle
Even if a quantum computation could be performed perfectly, the process of reading out the final result is itself a major source of error. The measurement of a qubit's state is a physically complex process that is often one of the most error-prone operations in a NISQ device, with reported error rates that can be as high as 8% to 30% in some cases.30
These errors typically manifest as classical bit-flips at the point of readout; for instance, the quantum system may have collapsed to the state |1⟩, but the classical measurement apparatus incorrectly registers the outcome as '0'.30 A particularly subtle and critical issue is the presence of state-dependent measurement bias. The probability of a readout error is often not uniform but depends on the actual state being measured. On many superconducting quantum processors, for example, the state |1⟩ is more likely to be misidentified as a '0' than the state |0⟩ is to be misidentified as a '1'. This asymmetry is thought to arise because |1⟩ is a higher-energy state, making it more susceptible to decay into the ground state during the finite duration of the measurement process.30 This bias can systematically skew the results of a computation, especially for algorithms that are expected to produce outputs with a high proportion of '1's.
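This kind of asymmetric readout error is commonly modeled with a calibration ("confusion") matrix and partially undone in classical post-processing by inverting it. The sketch below uses assumed, illustrative error probabilities and a single qubit; production readout-mitigation schemes scale this idea to many qubits and guard against unphysical (negative) probabilities.

```python
import numpy as np

# Assumed asymmetric readout errors: |1> decays and reads as '0' more often than
# |0> reads as '1' (illustrative numbers, not from a specific device).
p0_given_1 = 0.08   # P(read '0' | prepared |1>)
p1_given_0 = 0.02   # P(read '1' | prepared |0>)

# Confusion matrix M: columns are prepared states, rows are observed outcomes.
M = np.array([[1 - p1_given_0, p0_given_1],
              [p1_given_0,     1 - p0_given_1]])

true_probs = np.array([0.3, 0.7])          # ideal distribution over {0, 1}
observed = M @ true_probs                  # what the noisy readout reports
mitigated = np.linalg.solve(M, observed)   # classical post-processing estimate

print("observed :", np.round(observed, 3))    # skewed toward '0'
print("mitigated:", np.round(mitigated, 3))   # recovers ~[0.3, 0.7]
```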
Architectural Noise: System-Level Flaws
Beyond the errors affecting individual qubits and gates, the physical layout and architecture of the quantum processor introduce additional, system-level sources of noise that can significantly degrade performance.
Limited Qubit Connectivity: In many leading quantum computing architectures, particularly those based on superconducting circuits, qubits are arranged in a fixed lattice and can only directly interact with their immediate physical neighbors. All-to-all connectivity is rare.16
SWAP Gate Overhead: This physical constraint has profound algorithmic consequences. If a quantum algorithm requires a two-qubit gate (like a CNOT gate) between two qubits that are not physically adjacent, the quantum compiler must insert a series of SWAP gates to move the quantum states of the qubits across the chip until they are neighbors. Each SWAP gate is itself composed of multiple noisy two-qubit gates (typically three CNOTs). This "transpilation" process can dramatically increase the total gate count and overall circuit depth, leading to a much higher accumulation of errors than would be expected from the ideal algorithm alone.32
Crosstalk: The dense packing of qubits and control lines on a quantum chip can lead to crosstalk. Control signals, such as microwave pulses intended to operate on a specific target qubit, can unintentionally "leak" and affect the state of neighboring qubits. This introduces unwanted operations and creates correlated errors across multiple qubits, which are particularly challenging to model and mitigate.29
The total error in a quantum computation is therefore not a simple linear sum of the errors of its constituent parts. It is a compounded effect, where the intrinsic noise of individual operations is significantly amplified by the architectural constraints of the hardware. An algorithm executed on a device with high-fidelity gates but poor connectivity might perform worse than on a device with lower-fidelity gates but all-to-all connectivity. This interplay between component-level noise and architecture-level constraints is a defining challenge of the NISQ era.
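To make the compounding effect of routing concrete, the arithmetic sketch below estimates the cost of one logical CNOT between distant qubits on a linear chain, assuming each SWAP decomposes into three CNOTs and a uniform 1% two-qubit error rate; both numbers are simplifying assumptions.

```python
# Sketch: cost of a single CNOT between qubits that are d sites apart on a
# linear chain, assuming SWAP = 3 CNOTs and a uniform two-qubit error rate.

def routed_cnot_cost(distance: int, cnot_error: float = 0.01):
    swaps = distance - 1                 # bring the two qubits next to each other
    total_cnots = 3 * swaps + 1          # routing SWAPs plus the intended CNOT
    success = (1 - cnot_error) ** total_cnots
    return total_cnots, success

for d in (1, 3, 6, 10):
    cnots, success = routed_cnot_cost(d)
    print(f"distance {d:>2}: {cnots:>2} CNOTs, success probability ~ {success:.2f}")
# A logically single CNOT between qubits 10 sites apart costs 28 physical CNOTs,
# and its success probability drops from 0.99 to ~0.75 -- connectivity matters
# as much as raw gate fidelity.
```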
2.3. Characterizing and Modeling Noise
To effectively combat noise, it must first be precisely characterized. A suite of advanced diagnostic techniques has been developed for this purpose. The most comprehensive of these is Gate Set Tomography (GST). Unlike simpler methods that measure an average error rate, GST performs a detailed, self-consistent tomographic reconstruction of an entire set of operations, including state preparation, all single- and two-qubit gates, and the final measurement.36
GST yields a complete mathematical description of each noisy operation, known as a "process matrix," which captures not only the ideal operation but also all the coherent (systematic) and incoherent (random) error components. This detailed noise model is invaluable for hardware engineers seeking to improve device performance and is a critical prerequisite for implementing advanced error mitigation techniques that rely on an accurate understanding of the underlying noise channels.36
The following table provides a structured summary of the primary error types encountered in NISQ devices, their physical origins, and their impact on computation.

| Error type | Physical origin | Impact on computation |
| --- | --- | --- |
| Bit-flip (Pauli-X) | Imperfect gate pulses and environmental disturbances flip the qubit between its basis states | Corrupts the logical value stored in a qubit |
| Phase-flip (Pauli-Z) | Dephasing through interaction with the environment | Destroys the interference patterns that power quantum algorithms |
| Amplitude damping | Energy relaxation (T1 decay) from the excited state to the ground state | Biases results toward the ground state over time |
| Depolarizing | Accumulated, effectively random gate imperfections | Drives the qubit toward a maximally mixed state with no usable information |
| Measurement (readout) error | Imperfect, state-dependent readout, including decay during the measurement itself | Misreported outcomes and systematically skewed statistics |
| Routing (SWAP) overhead | Limited qubit connectivity forces inserted SWAP gates | Inflates gate count and circuit depth, amplifying error accumulation |
| Crosstalk | Control signals leaking onto neighboring qubits and lines | Correlated, hard-to-model errors across multiple qubits |
Section 3: The Paradox of Noise: Can a Noisy Processor Outperform a Perfect Simulator?
One of the most intriguing and frequently misunderstood topics to emerge from the NISQ era is the counter-intuitive claim that a noisy quantum computer can sometimes produce "better" results than a perfect, noiseless classical simulator. This apparent paradox challenges our conventional understanding of computation, where noise is universally regarded as a detriment to accuracy. Resolving this paradox requires a careful deconstruction of what "better" means in different computational contexts and an appreciation for the subtle ways in which noise can interact with specific classes of algorithms.
3.1. The Role and Limits of Classical Simulation
Classical simulators are indispensable tools in the development of quantum computing. They allow researchers to design, test, and debug quantum algorithms before running them on scarce and expensive quantum hardware.42 These simulators typically operate in one of two modes:
State Vector Simulation: This method tracks the complete quantum state by storing the 2^n complex amplitudes that define the state vector of an n-qubit system. Every gate operation corresponds to multiplying this enormous vector by a 2^n × 2^n unitary matrix. While perfectly accurate for ideal circuits, this approach is extremely memory-intensive, with memory requirements scaling exponentially as 2^n.18
Density Matrix Simulation: To model the effects of noise and decoherence, which transform pure quantum states into mixed states, simulators must use the density matrix formalism. This involves tracking a 2^n × 2^n matrix, leading to memory and computational requirements that scale even more unfavorably, as 4^n (that is, 2^(2n)).43
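A small sketch of what this scaling means in practice, assuming 16-byte complex entries and an arbitrary 64 GiB of classical RAM:

```python
import math

# How many qubits fit in a given amount of RAM, assuming 16-byte complex entries?
ram_bytes = 64 * 2**30            # 64 GiB of memory (illustrative assumption)
entry_bytes = 16

# State vector: 2^n entries.  Density matrix: 2^n x 2^n = 4^n entries.
max_statevector = int(math.log2(ram_bytes / entry_bytes))
max_density = int(math.log2(ram_bytes / entry_bytes) / 2)

print(f"state-vector simulation fits ~{max_statevector} qubits in 64 GiB")
print(f"density-matrix simulation fits ~{max_density} qubits in 64 GiB")
# Roughly 32 vs 16 qubits: modeling noise via density matrices halves the
# reachable system size, which is why noisy simulation is so much harder.
```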
The objective of a "noiseless" or "ideal" classical simulator is to achieve perfect fidelity—that is, to exactly replicate the mathematical evolution of the quantum state as prescribed by the theoretical model of the circuit.43 Simulating
noisy quantum processes is a further step in complexity, often requiring stochastic methods that average over many possible error trajectories, which introduces a substantial additional computational overhead.43
3.2. Deconstructing the "Better Result" Claim
The assertion that noise can be beneficial almost exclusively arises within the domain of Quantum Machine Learning (QML) and certain optimization tasks, not in applications that demand a high-fidelity simulation of a physical system, such as quantum chemistry.44 The resolution to the paradox lies in the distinction between the goal of a computation: is it maximum fidelity or maximum utility?
Noise as a Regularizer in Machine Learning
A notable study reported that for a specific financial modeling task, a machine learning model that used data features generated by a noisy quantum processor achieved significantly higher out-of-sample test scores than a model that used features from a noiseless classical simulation of the same quantum circuit.44 The noiseless simulation, in fact, performed worse than a purely classical approach.
This phenomenon is best understood through the lens of a classical machine learning concept called regularization. A common failure mode in machine learning is overfitting, where a model learns the statistical noise and idiosyncrasies of its training data so perfectly that it fails to generalize to new, unseen data. To combat this, practitioners often introduce a controlled amount of randomness into the training process—techniques like dropout or adding Gaussian noise—to prevent the model from becoming too specialized to the training set. This process is called regularization.
The prevailing hypothesis is that the inherent, stochastic noise within the NISQ hardware acts as a form of natural, hardware-level regularization. The noiseless simulator, by flawlessly executing the quantum feature-mapping circuit, also flawlessly overfits to the training data, resulting in poor generalization and low test scores. The real, noisy quantum device, by introducing random perturbations into the computation, inadvertently smooths the feature landscape and prevents the model from overfitting, thereby improving its performance on the test data.44
In this context, the "better" result from the noisy hardware does not signify a more accurate execution of the intended quantum circuit. On the contrary, it represents a less accurate execution whose very imperfection provided a tangible benefit for this specific machine learning application. The goal was not fidelity to the algorithm, but utility of the final model.
Noise as a Feature for Simulating Noisy Systems
An alternative, and equally compelling, perspective is to intentionally harness the device's noise as a feature rather than treating it as a flaw.46 Researchers have proposed using current NISQ devices as specialized simulators for other complex, noisy quantum systems, such as quantum communication networks or quantum sensors.
The rationale is that the noise processes in these target systems are often complex and highly correlated, making them extremely difficult and computationally expensive to model accurately on a classical computer. A NISQ device, however, is itself a complex noisy quantum system. The idea is to develop techniques to control and shape the inherent noise of the quantum computer to mimic the noise environment of the target system. In this scenario, the noise is not an obstacle to be overcome but a resource to be utilized. This approach could allow for the large-scale simulation of realistic quantum networks under conditions that are far beyond the reach of classical simulators, which would struggle to handle the exponentially growing state space combined with general, non-idealized noise models.46
3.3. A Critical Perspective and Necessary Caveats
While these findings are intellectually stimulating and point to clever ways of extracting value from imperfect hardware, they must be interpreted with significant caution.
Selection Bias and Unprincipled Comparisons: As prominent critics like Scott Aaronson have pointed out, claims of quantum advantage derived from such experiments are highly susceptible to selection bias.44 It is often possible to find a specific, and perhaps suboptimal, classical algorithm or model that is outperformed by the quantum approach. The validity of the claim hinges entirely on the strength and relevance of the classical benchmark being used for comparison.
"Something Else Cool Happened": The observation that the ideal, noiseless version of the quantum algorithm failed while the noisy version succeeded strongly suggests that the original algorithm itself was poorly designed for the task at hand. The noise did not "improve" the intended algorithm; rather, it transformed the computation into a different, noisy process that, by happenstance, proved more effective.44 This is a discovery about a new, noise-driven process, not evidence that noise is a desirable feature for computation in general.
The Overarching Goal Remains Noise Reduction: It is crucial to recognize that these are opportunistic explorations of the NISQ era's limitations. The grand, long-term vision of quantum computing is not built on harnessing noise. The path to solving truly hard, impactful problems—from breaking cryptography with Shor's algorithm to designing new catalysts with high-precision quantum chemistry simulations—indisputably relies on dramatically reducing noise, achieving extremely high-fidelity operations, and ultimately, implementing full fault tolerance.2 Using noise as a feature is a creative adaptation to the current technological reality, not the final destination.
The "Noise Paradox" is thus resolved by a careful distinction in computational objectives. A classical simulator's goal is to achieve the highest possible fidelity to a theoretical model. A machine learning algorithm's goal is to achieve the highest possible utility on a practical task. The noisy quantum computer was a "worse simulator" in terms of fidelity, but its physical imperfections inadvertently acted as a useful regularizer, leading to a model with higher utility. The paradox vanishes once this fundamental difference between fidelity and utility is understood.
Section 4: A Dichotomy of Capabilities: Advantages and Disadvantages of the NISQ Era
The Noisy Intermediate-Scale Quantum era represents a period of inherent tension in the development of quantum computing. It is an age of systems that are simultaneously powerful enough to venture beyond the limits of classical simulation yet too flawed to realize the full promise of theoretical quantum algorithms. A balanced assessment of this era requires a clear-eyed view of both its genuine opportunities and its profound, fundamental limitations. This section provides a systematic analysis of this dichotomy, outlining the advantages that make the NISQ era a period of vibrant research and the disadvantages that constrain its practical impact.
4.1. The Promise and Potential (Advantages)
Despite its limitations, the NISQ era offers significant advantages that are crucial for the advancement of quantum information science and related fields. These benefits can be categorized as direct scientific and technological applications, and as catalysts for the growth of the broader quantum ecosystem.
A Unique Scientific Instrument
First and foremost, NISQ devices are unprecedented scientific instruments. They provide experimental access to a regime of highly entangled, many-body quantum physics that is computationally inaccessible to any classical machine.4 For the first time, physicists can prepare, control, and measure quantum states of a complexity that defies classical simulation. This capability opens up entirely new avenues for fundamental scientific discovery, allowing researchers to probe the nature of quantum entanglement, simulate exotic states of matter, and potentially explore phenomena related to quantum gravity and black hole physics in a controlled laboratory setting.
Promising Application Areas
Beyond fundamental science, NISQ computers are being actively explored for a range of specific computational tasks where they may offer a near-term advantage.
Quantum Chemistry and Materials Science: This is widely regarded as one of the most natural and promising applications for NISQ devices. The problem of calculating the electronic structure and properties of molecules and materials is fundamentally a quantum mechanical one. Using a quantum computer to simulate a quantum system avoids the exponential overhead incurred by classical computers trying to represent quantum states.9 The
Variational Quantum Eigensolver (VQE) is a flagship NISQ algorithm designed for this purpose. It uses a hybrid quantum-classical approach to find the ground-state energy of a molecule, a critical calculation for understanding chemical reaction rates and designing new drugs, catalysts, and materials.9
Combinatorial Optimization: Many challenging problems in industries like finance, logistics, manufacturing, and network design can be formulated as combinatorial optimization problems—finding the best solution from a vast number of possibilities. The Quantum Approximate Optimization Algorithm (QAOA) is a hybrid algorithm designed to find approximate solutions to such problems.9
Quantum Machine Learning (QML): While still in an exploratory phase, there is significant research into whether quantum computers can enhance machine learning models. This includes using quantum circuits as feature maps to project classical data into a high-dimensional quantum state space or developing quantum kernels for support vector machines, with the hope of achieving better model performance.9
Catalyst for Innovation
Perhaps the most enduring legacy of the NISQ era will be its role as a powerful catalyst for innovation across the entire quantum computing field.
Algorithmic Development: The severe constraints of NISQ hardware have been a wellspring of creativity. They have forced algorithm designers to move beyond idealized models and invent entirely new classes of noise-resilient, hybrid quantum-classical algorithms like VQE and QAOA, which are intellectually significant contributions in their own right.8
Hardware and Ecosystem Growth: The practical challenge of building, operating, calibrating, and benchmarking NISQ devices provides invaluable engineering experience. It drives rapid innovation in supporting technologies such as cryogenic systems, high-precision control electronics, and quantum-compatible software stacks. This hands-on engineering is an essential prerequisite for constructing the more advanced, fault-tolerant machines of the future.2
Community and Accessibility: A transformative aspect of the NISQ era has been the widespread availability of quantum processors via the cloud. This has democratized access to real quantum hardware, enabling a global community of researchers, students, and developers to experiment, learn, and contribute to the field. This broad engagement accelerates the pace of discovery and helps to build the skilled quantum workforce needed for future growth.2
4.2. The Fundamental Constraints (Disadvantages)
The potential of the NISQ era is held in check by a set of formidable and fundamental constraints that currently limit its practical utility.
Pervasive Noise and Limited Circuit Depth: As detailed extensively in Section 2, the accumulation of uncorrected errors is the primary bottleneck. Decoherence, faulty gates, and measurement errors combine to severely limit the number of sequential operations (circuit depth) that can be reliably performed, restricting the complexity of solvable problems.1
Scalability and Quality Control: The challenge of quantum computing is not just to increase the number of qubits, but to do so while maintaining or improving their quality. As systems grow larger, issues like qubit-to-qubit uniformity, crosstalk, and the complexity of calibration and control become exponentially more difficult to manage. This makes scaling a monumental engineering hurdle.9
The Moving Target of Classical Algorithms: For many of the target applications of NISQ computing, particularly in optimization and machine learning, there exist highly sophisticated classical algorithms and heuristics that have been refined over decades of development. NISQ approaches often struggle to demonstrate a clear performance advantage over these state-of-the-art classical methods. Furthermore, the announcement of a potential quantum speedup often incentivizes classical computer scientists to develop better classical algorithms, making "quantum advantage" a constantly moving target.9
Variability and Reliability: The performance of NISQ hardware is not static. Qubit coherence times and gate fidelities can drift over hours or even minutes, requiring frequent and time-consuming recalibration. This variability makes it challenging to obtain consistent and reproducible scientific results.9
Barren Plateaus in Variational Algorithms: A particularly daunting theoretical challenge for the flagship hybrid algorithms (VQE and QAOA) is the phenomenon of "barren plateaus." For many problem instances, as the number of qubits increases, the optimization landscape becomes exponentially flat. This means the gradient of the cost function, which the classical optimizer uses to find the solution, vanishes, making it impossible to effectively train the algorithm and find a good solution.55
The collective weight of these disadvantages makes the achievement of practical, commercially relevant quantum advantage in the NISQ era an exceptionally difficult goal. While these devices are invaluable for research, their immediate application to solving real-world business problems is limited. This reality leads to a crucial re-framing of the era's purpose. Instead of judging the NISQ era by its ability to deliver immediate, revolutionary applications, its success is more appropriately measured by its contribution to the long-term project of building a fault-tolerant quantum computer. It is a necessary, and at times frustrating, but ultimately indispensable stepping stone. The knowledge gained from grappling with the challenges of noise, the engineering solutions developed to improve qubit control, and the novel algorithms designed to accommodate hardware limitations are all foundational elements that will pave the way for the more powerful quantum technologies of the future.
Section 5: Charting a Path Through the Noise: Error Mitigation and the Pursuit of Quantum Advantage
The defining characteristic of the NISQ era is the absence of fault tolerance. Without the ability to actively correct errors as they occur, the raw output of any non-trivial quantum computation is invariably corrupted by noise. To salvage a meaningful signal from this noisy background, the field has developed a sophisticated suite of techniques known as Quantum Error Mitigation (QEM). These methods represent a pragmatic middle ground between accepting flawed results and waiting for the advent of full error correction. This section explores the crucial strategies of QEM, distinguishes them from true error correction, and connects these practical tools to the broader, often-misunderstood quests for quantum supremacy and practical quantum advantage.
5.1. Mitigation vs. Correction: A Critical Distinction
It is essential to begin with a clear distinction between Quantum Error Mitigation (QEM) and Quantum Error Correction (QEC), as they represent fundamentally different approaches to handling noise.
Quantum Error Correction (QEC): This is an active and in-situ process. It involves encoding the information of one ideal "logical qubit" into a redundant state of many physical qubits. During the computation, special "syndrome measurements" are performed periodically to detect if and where errors have occurred. Based on the syndrome outcome, a real-time corrective operation is applied to the physical qubits, restoring the integrity of the encoded logical state. QEC is the foundational technology for building a truly fault-tolerant quantum computer, but its high resource overhead places it beyond the capabilities of current NISQ devices.10
Quantum Error Mitigation (QEM): This is a passive and post-processing approach. QEM techniques do not attempt to detect or fix errors during the quantum computation itself. Instead, they involve a clever protocol of executing a family of related quantum circuits on the noisy hardware and then using classical statistical analysis of the collective measurement outcomes to infer an estimate of what the ideal, noise-free result would have been.1 The core principle of QEM is to reduce the
bias in the final expectation value at the cost of a significant increase in the number of measurements (or "shots") required, which translates to longer run times.64
5.2. A Toolkit for Noise Reduction: Key QEM Strategies
A variety of QEM techniques have been developed, each with its own principles, advantages, and overhead costs. Three of the most prominent methods are Zero-Noise Extrapolation, Probabilistic Error Cancellation, and Dynamical Decoupling.
Zero-Noise Extrapolation (ZNE)
Principle: The core idea of ZNE is to measure the effect of noise at different strengths and then extrapolate the results back to the theoretical zero-noise limit. The protocol involves intentionally and controllably increasing the amount of noise in a quantum circuit, running it at several amplified noise levels, and plotting the measured expectation value against the noise amplification factor. A curve is then fitted to these data points and extrapolated back to a noise factor of zero to estimate the ideal result.1
Implementation: A common "digital" method for amplifying noise is known as unitary folding. To amplify the noise by a factor of, for example, three, each gate G in the original circuit is replaced by the sequence G G† G. Since G† G is the identity operation, this sequence is logically equivalent to the original gate G. However, on a noisy processor, it involves executing three times as many physical operations, thus tripling the accumulated gate error in a controlled manner. By varying the number of folds, one can generate the data points at different noise levels needed for the extrapolation.67
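The sketch below walks through the ZNE workflow end to end, but replaces the quantum hardware with an assumed toy noise model in which the measured expectation value decays exponentially with the noise scale factor; the decay rate, scale factors, and quadratic extrapolation are all illustrative choices rather than a prescription.

```python
import numpy as np

IDEAL_VALUE = 1.0

def noisy_expectation(scale_factor: float) -> float:
    # Assumed toy hardware model: the measured expectation value decays
    # exponentially as the effective noise in the circuit is scaled up.
    return IDEAL_VALUE * np.exp(-0.1 * scale_factor)

# Unitary folding (G -> G G_dagger G) triples the physical gate count, so odd
# integers 1, 3, 5 are natural noise scale factors for "digital" ZNE.
scale_factors = np.array([1.0, 3.0, 5.0])
measured = np.array([noisy_expectation(s) for s in scale_factors])

# Fit a low-order polynomial in the scale factor and extrapolate to zero noise.
coeffs = np.polyfit(scale_factors, measured, deg=2)
zne_estimate = np.polyval(coeffs, 0.0)

print("raw value at scale 1  :", round(float(measured[0]), 3))   # ~0.905, biased by noise
print("ZNE estimate (scale 0):", round(float(zne_estimate), 3), "vs ideal", IDEAL_VALUE)
# The extrapolated value (~0.998) is much closer to the ideal 1.0 than the raw
# measurement, at the cost of running the deeper, folded circuits.
```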
Probabilistic Error Cancellation (PEC)
Principle: PEC is a more powerful but also more resource-intensive technique. It begins with a detailed characterization of the noise affecting each gate on the device (e.g., using Gate Set Tomography). This noise model is then used to mathematically express each ideal gate as a linear combination of the actual noisy operations that the hardware can physically implement. Crucially, some of the coefficients in this linear combination can be negative, forming what is known as a "quasi-probability distribution".71
Implementation: To execute the ideal circuit, one does not run a single circuit. Instead, one statistically samples many different noisy circuits from the quasi-probability distribution. The measurement outcomes from these circuits are then averaged, but with each outcome weighted by the sign of its corresponding coefficient (some results are effectively "subtracted"). This Monte Carlo sampling process statistically cancels out the average effect of the noise, yielding an unbiased estimate of the ideal expectation value. The major drawback is that the number of samples required to achieve a given precision grows exponentially with the circuit size and error rate, making it very costly.72
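The toy sketch below illustrates the quasi-probability idea for the simplest possible case: a single-qubit depolarizing channel standing in for the "noisy device," with the inverse channel decomposed into signed weights over Pauli corrections. The depolarizing probability and the closed-form decomposition are assumptions specific to this textbook channel; real PEC derives the decomposition from a full tomographic noise model.

```python
import numpy as np
rng = np.random.default_rng(0)

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I, X, Y, Z]

p = 0.1   # assumed depolarizing probability of the noisy "identity" operation

def depolarize(rho):
    return (1 - p) * rho + (p / 3) * sum(P @ rho @ P for P in PAULIS[1:])

# Quasi-probability decomposition of the inverse channel: the ideal identity equals
# q_I * (noisy id) + q_P * (noisy Pauli-P twirls), with some q's negative.
f = 1 - 4 * p / 3                                  # depolarizing shrink factor
q = np.array([(3 / f + 1) / 4] + [(1 - 1 / f) / 4] * 3)
gamma = np.abs(q).sum()                            # sampling overhead of PEC

# Task: estimate the ideal expectation <Z> of |0> after an (ideal) identity gate.
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)
noisy = np.real(np.trace(Z @ depolarize(rho0)))    # biased, un-mitigated value

samples = []
for _ in range(20_000):
    k = rng.choice(4, p=np.abs(q) / gamma)           # sample a Pauli correction
    rho = depolarize(PAULIS[k] @ rho0 @ PAULIS[k])   # run it through the noisy "device"
    samples.append(np.sign(q[k]) * gamma * np.real(np.trace(Z @ rho)))

print(f"ideal 1.000 | noisy {noisy:.3f} | PEC estimate {np.mean(samples):.3f} "
      f"(sampling overhead gamma = {gamma:.2f})")
# The signed, reweighted average recovers ~1.0, but its variance -- and hence the
# number of samples needed -- grows with gamma, which grows with the error rate.
```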
Dynamical Decoupling (DD)
Principle: Unlike ZNE and PEC, which are post-processing techniques, DD is a noise suppression technique applied during the computation. It is specifically designed to combat errors that accumulate while a qubit is idle (i.e., not actively participating in a gate). The method involves applying a carefully timed sequence of rapid pulses (e.g., single-qubit gates like X and Y) to the idle qubit. These pulses are chosen such that their net effect is the identity, but they effectively "refocus" the qubit's evolution, averaging out its unwanted interaction with slowly fluctuating noise fields in the environment.76
Implementation: DD sequences are typically inserted into the circuit by the compiler during any periods of forced inactivity. This protects the qubit's fragile quantum state from decohering while it waits for other operations to complete.63 A key challenge is that the DD pulses themselves are imperfect and can introduce their own errors, creating a trade-off where too much DD can be counterproductive.76
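The sketch below shows the refocusing effect in its simplest form: a qubit idling through four segments of an assumed quasi-static phase drift, with and without an interleaved X–Y–X–Y (XY4-style) pulse sequence. The drift model and its magnitude are toy assumptions, and real DD design must also contend with finite, imperfect pulses.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def Rz(angle):
    """Unwanted phase drift accumulated during one idle segment (toy noise model)."""
    return np.diag([np.exp(-1j * angle / 2), np.exp(1j * angle / 2)])

eps = 0.3                       # assumed quasi-static dephasing per idle segment
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Idle qubit without protection: four segments of drift simply pile up.
bare = Rz(eps) @ Rz(eps) @ Rz(eps) @ Rz(eps)

# XY4 dynamical decoupling: interleave X and Y pulses between the idle segments.
xy4 = Y @ Rz(eps) @ X @ Rz(eps) @ Y @ Rz(eps) @ X @ Rz(eps)

def fidelity(U):
    """Overlap of the evolved |+> state with the intended (unchanged) |+> state."""
    return abs(np.vdot(plus, U @ plus)) ** 2

print("fidelity without DD:", round(fidelity(bare), 3))   # < 1: the phase has drifted
print("fidelity with XY4  :", round(fidelity(xy4), 3))    # = 1 (up to a global phase)
```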
The following table offers a comparative summary of these key QEM techniques, highlighting their distinct principles and resource trade-offs.

| Technique | Core principle | When applied | Primary resource cost |
| --- | --- | --- | --- |
| Zero-Noise Extrapolation (ZNE) | Run the circuit at several amplified noise levels and extrapolate the measured expectation value back to the zero-noise limit | Classical post-processing | Deeper (folded) circuits and additional circuit executions |
| Probabilistic Error Cancellation (PEC) | Express ideal gates as quasi-probability combinations of noisy operations and statistically cancel the noise by signed averaging | Classical post-processing, after detailed noise characterization | Sampling overhead that grows exponentially with circuit size and error rate |
| Dynamical Decoupling (DD) | Apply timed pulse sequences to idle qubits to refocus their evolution and average out slow environmental noise | During the computation | Extra pulses that can themselves introduce errors |
5.3. From Demonstrations of Power to Practical Utility
The development of QEM is inextricably linked to the broader goal of demonstrating that quantum computers are genuinely powerful. This goal has been articulated through two related but distinct concepts: quantum supremacy and practical quantum advantage.
Quantum Supremacy (or Computational Advantage): This is the scientific milestone of demonstrating that a programmable quantum computer can solve a problem—any problem, however contrived or useless—that is intractable for even the most powerful classical supercomputers.80
Key Experiments: To date, claims of quantum supremacy have centered on computationally hard sampling problems. In Random Circuit Sampling, performed by Google, the task is to sample the output bitstrings from a randomly generated, shallow quantum circuit. In Boson Sampling, performed by teams at USTC and Xanadu, the task is to sample the output configuration of photons passing through a complex optical interferometer. In both cases, the underlying probability distributions are believed to be classically hard to sample from.80
A Contested Frontier: These supremacy claims represent a "moving target." The initial claims of classical intractability often spur the development of more sophisticated classical simulation algorithms that exploit the specific noise and imperfections of the quantum experiment to perform the simulation much more efficiently than first predicted. This has led to an ongoing and healthy debate about the true boundary between classical and quantum computational power.58
Practical Quantum Advantage (or Quantum Utility): This is the far more pragmatic and commercially significant goal. It is defined as the point where a quantum computer can solve a useful, real-world problem more effectively—meaning faster, more accurately, or more efficiently (e.g., using less energy)—than the best known classical alternative.81
This remains the ultimate, and as yet unrealized, goal of the field. While NISQ devices, enhanced by QEM, show promise for applications in quantum chemistry and optimization, a definitive, unambiguous demonstration of practical quantum advantage over state-of-the-art classical methods for a problem of commercial value has not been achieved.1 The combined obstacles of noise, limited scale, and the sheer power of modern classical heuristics make this an exceptionally high bar to clear.
The techniques of Quantum Error Mitigation do not offer a "free lunch." They introduce a complex, multi-dimensional resource trade-off that is central to the NISQ-era computational challenge. To obtain a more accurate, error-mitigated result, one must "pay" with other computational resources. ZNE demands deeper circuits, which increases a qubit's exposure time to decoherence. PEC demands an exponentially larger number of measurements, which increases the total runtime on the machine. DD adds more gates, increasing the chance of gate errors. This means that for any given problem on a specific NISQ device, there exists a computational "sweet spot"—a level of complexity where the quantum approach might outperform classical methods. If the problem is too simple, a classical computer is superior. If the problem is too complex, the resource overhead required for QEM becomes so prohibitively large that it negates any potential quantum speedup, or even degrades the result. The central engineering and algorithmic challenge of the entire NISQ era is to find, expand, and ultimately exploit this narrow window of opportunity.
Section 6: Conclusion: The NISQ Era as a Foundational Stepping Stone
The Noisy Intermediate-Scale Quantum era represents a critical, transitional phase in the history of computation. It is an epoch defined by a fundamental tension: the nascent, extraordinary power of quantum mechanics harnessed for information processing, set against the severe, practical constraints imposed by environmental noise and limited scale. As this report has detailed, the current generation of quantum processors are powerful enough to explore physical regimes beyond the reach of classical simulation, yet they remain too imperfect to execute the transformative, large-scale algorithms that define the ultimate promise of the field. The journey through the NISQ era is thus one of pragmatic adaptation, clever innovation, and the steady accumulation of foundational knowledge.
6.1. Recapitulation of the NISQ Landscape
The defining characteristics of a NISQ computer are its intermediate number of qubits (typically 50 to a few hundred), the pervasive noise that corrupts its operations, and, most critically, its lack of fault tolerance through quantum error correction. This absence of active error correction limits the achievable "circuit depth," meaning that computations must be completed in a small number of steps before the accumulated errors overwhelm the result. This hardware reality has given rise to a distinct computational paradigm: the hybrid quantum-classical algorithm. In this model, the noisy quantum processor acts as a specialized co-processor, exploring complex quantum states under the guidance of a classical optimization loop.
While these hybrid algorithms, such as VQE for quantum chemistry and QAOA for optimization, have opened promising avenues of research, they face their own significant hurdles, including the challenge of "barren plateaus" and the stiff competition from highly optimized classical heuristics. The counter-intuitive notion that noise can sometimes be "helpful" is a subtle phenomenon confined largely to quantum machine learning, where hardware noise can act as a form of regularization to prevent model overfitting. This is a clever exploitation of a system flaw, not an indication that noise is a desirable feature for computation in general. The primary strategy for dealing with noise in the NISQ era is not to embrace it, but to combat it through a suite of Quantum Error Mitigation (QEM) techniques. These post-processing methods allow for the extraction of more accurate results from noisy hardware, but they do so at the cost of a significant overhead in computational resources, creating a complex trade-off that defines the practical limits of what is achievable today.
6.2. The NISQ Legacy: A Bridge to the Future
When viewed from a long-term perspective, the most significant contribution of the NISQ era may not be the immediate solution of commercially relevant problems. Rather, its legacy will be the indispensable foundation it lays for future generations of quantum technology.8 The challenges of this era are forcing the scientific and engineering communities to solve the fundamental problems that must be overcome to build a scalable quantum computer.
The lessons learned in controlling and calibrating noisy qubits, mitigating the effects of decoherence, and designing noise-resilient algorithms are invaluable. The process of building and operating NISQ devices is driving innovation across a wide range of supporting fields, from cryogenics and microwave engineering to control software and compiler design. This period is fostering a crucial co-design cycle, where algorithms are increasingly tailored to the specific characteristics of the hardware, and hardware designs are, in turn, informed by the performance of key algorithms.2 Furthermore, the accessibility of NISQ machines via the cloud has been instrumental in building a global community and training the next generation of quantum scientists and engineers.
6.3. Beyond NISQ: The Dawn of Fault Tolerance
The NISQ era is, by its very nature, a temporary one. Its conclusion will be marked by the advent of quantum processors that can successfully implement robust Quantum Error Correction, enabling the creation of high-fidelity "logical qubits" that are protected from physical noise.1 This transition is unlikely to be abrupt. Some researchers anticipate an intermediate "ISQ" (Intermediate-Scale Quantum) era, where systems with a small number of error-corrected logical qubits become available. These ISQ devices would allow for significantly deeper circuits than are possible on NISQ machines, potentially unlocking new algorithmic capabilities, even while falling short of the requirements for full fault tolerance.17
Ultimately, the goal remains the construction of a universal, fault-tolerant quantum computer. Such a machine, composed of thousands to millions of logical qubits, will finally unlock the full power of landmark algorithms like Shor's and Grover's, and will be capable of performing high-precision simulations of complex quantum systems, ushering in a new epoch of computation with the potential to revolutionize science, medicine, and technology.9 The Noisy Intermediate-Scale Quantum era, with all its challenges, frustrations, and incremental successes, is the critical and unavoidable first stage of this ambitious journey. It is the period in which the abstract theories of quantum computation are being forged into a tangible, albeit imperfect, physical reality.
Works cited
Noisy intermediate-scale quantum computing - Wikipedia, accessed October 2, 2025, https://en.wikipedia.org/wiki/Noisy_intermediate-scale_quantum_computing
What Is NISQ Quantum Computing?, accessed October 2, 2025, https://thequantuminsider.com/2023/03/13/what-is-nisq-quantum-computing/
Is NISQ Over? Have We Reached The Demise Of The Noisy Intermediate-Scale Quantum Era?, accessed October 2, 2025, https://quantumzeitgeist.com/nisq-dead-john-preskill/
[1801.00862] Quantum Computing in the NISQ era and beyond - arXiv, accessed October 2, 2025, https://arxiv.org/abs/1801.00862
Quantum Computing in the NISQ era and beyond, accessed October 2, 2025, https://quantum-journal.org/papers/q-2018-08-06-79/
Quantum Computing in the NISQ era and beyond - arXiv, accessed October 2, 2025, https://arxiv.org/pdf/1801.00862
Acronyms Beyond NISQ – The Quantum Pontiff - Dave Bacon, accessed October 2, 2025, https://dabacon.org/pontiff/2024/01/03/acronyms-beyond-nisq/
What is NISQ - QuEra Computing, accessed October 2, 2025, https://www.quera.com/glossary/nisq
NISQ - Quantum Computing Explained, accessed October 2, 2025, https://www.quandela.com/resources/quantum-computing-glossary/nisq-noisy-intermediate-scale-quantum-computing/
NISQ Computers – Can We Escape the Noise?, accessed October 2, 2025, https://quantumcomputinginc.com/news/blogs/nisq-computers-can-we-escape-the-noise
www.quandela.com, accessed October 2, 2025, https://www.quandela.com/resources/quantum-computing-glossary/quantum-decoherence/#:~:text=It's%20the%20process%20by%20which,quantum%20superposition%20and%20entanglement%20properties.
What is Quantum Decoherence - QuEra Computing, accessed October 2, 2025, https://www.quera.com/glossary/quantum-decoherence
Quantum decoherence - Wikipedia, accessed October 2, 2025, https://en.wikipedia.org/wiki/Quantum_decoherence
Quantum Decoherence - Quantum Computing Explained - Quandela, accessed October 2, 2025, https://www.quandela.com/resources/quantum-computing-glossary/quantum-decoherence/
Decoherence in Quantum Computing: Causes, Effects, Fixes - SpinQ, accessed October 2, 2025, https://www.spinquanta.com/news-detail/decoherence-in-quantum-computing-everything-you-need-to-know
NISQ Is Dead, a Dying Dead End, With No Prospects for a Brighter Future or Practical Quantum Computing - Jack Krupansky, accessed October 2, 2025, https://jackkrupansky.medium.com/nisq-is-dead-a-dying-dead-end-with-no-prospects-for-a-brighter-future-or-practical-quantum-5933d37fa1b6
From NISQ to ISQ | PennyLane Blog, accessed October 2, 2025, https://pennylane.ai/blog/2023/06/from-nisq-to-isq
Quantum computing - Wikipedia, accessed October 2, 2025, https://en.wikipedia.org/wiki/Quantum_computing
Not All Qubits Are Created Equal: A Case for Variability-Aware Policies for NISQ-Era Quantum Computers - Georgia Institute of Technology, accessed October 2, 2025, https://memlab.ece.gatech.edu/papers/ASPLOS_2019_1.pdf
Pushing the boundaries of Noisy Intermediate Scale Quantum (NISQ) computing by Focusing on Quantum Materials, accessed October 2, 2025, https://qmi.ubc.ca/research/noisy-intermediate-scale-quantum/
Quantum Computing Scientists: Give Them Lemons, They'll Make Lemonade, accessed October 2, 2025, https://www.aps.org/archives/publications/apsnews/201905/quantum.cfm
Quantum Error Suppression - QuEra Computing, accessed October 2, 2025, https://www.quera.com/blog-posts/quantum-error-suppression
The NISQ Era of Quantum Computing: Challenges ... - QuantumGrad, accessed October 2, 2025, https://www.quantumgrad.com/article/733
Noisy intermediate-scale quantum algorithms | Rev. Mod. Phys., accessed October 2, 2025, https://link.aps.org/doi/10.1103/RevModPhys.94.015004
[2101.08448] Noisy intermediate-scale quantum (NISQ) algorithms - arXiv, accessed October 2, 2025, https://arxiv.org/abs/2101.08448
Quantum Approximate Optimization Algorithm (QAOA) - Classiq, accessed October 2, 2025, https://www.classiq.io/insights/quantum-approximate-optimization-algorithm-qaoa
What is NISQ computing? - Q-CTRL, accessed October 2, 2025, https://q-ctrl.com/topics/what-is-nisq-computing
Quantum Error: The Critical Challenge in Quantum Computing | SpinQ, accessed October 2, 2025, https://www.spinquanta.com/news-detail/quantum-error-the-critical-challenge-in-quantum-computing
Noise in Quantum Computing | AWS Quantum Technologies Blog, accessed October 2, 2025, https://aws.amazon.com/blogs/quantum-computing/noise-in-quantum-computing/
Mitigating Measurement Errors in Quantum Computers by Exploiting ..., accessed October 2, 2025, https://memlab.ece.gatech.edu/papers/MICRO_2019_1.pdf
Measurement Error Mitigation in Quantum Computers Through Classical Bit-Flip Correction - arXiv, accessed October 2, 2025, https://arxiv.org/pdf/2007.03663
Improving and benchmarking NISQ qubit routers - arXiv, accessed October 2, 2025, https://arxiv.org/html/2502.03908v1
Not All Qubits are Utilized Equally - arXiv, accessed October 2, 2025, https://arxiv.org/html/2509.19241v1
Quantum Error Correction for Dummies - YouTube, accessed October 2, 2025, https://www.youtube.com/watch?v=oHPwRPeX5ZI
Is Noise the Biggest Challenge for Quantum Computing? - GovTech, accessed October 2, 2025, https://www.govtech.com/products/is-noise-the-biggest-challenge-for-quantum-computing
Efficient Characterization of Qudit Logical Gates with Gate Set Tomography Using an Error-Free Virtual Gate Model | Phys. Rev. Lett., accessed October 2, 2025, https://link.aps.org/doi/10.1103/PhysRevLett.133.120802
Implementation of Gate Set Tomography on Quantum Hardware - Técnico Lisboa, accessed October 2, 2025, https://fenix.tecnico.ulisboa.pt/downloadFile/1126295043836852/Extended_Abstract_HSilverio.pdf
Compressive Gate Set Tomography | PRX Quantum - Physical Review Link Manager, accessed October 2, 2025, https://link.aps.org/doi/10.1103/PRXQuantum.4.010325
Gate Set Tomography (Journal Article) | OSTI.GOV, accessed October 2, 2025, https://www.osti.gov/pages/biblio/1828793
[2112.05176] Compressive gate set tomography - arXiv, accessed October 2, 2025, https://arxiv.org/abs/2112.05176
Compressive gate set tomography - arXiv, accessed October 2, 2025, https://arxiv.org/pdf/2112.05176
Scalable Parallel Simulation of Quantum Circuits on CPU and GPU Systems - arXiv, accessed October 2, 2025, https://arxiv.org/html/2509.04955v1
Noisy Quantum Simulation Using Tracking, Uncomputation and Sampling - arXiv, accessed October 2, 2025, https://arxiv.org/html/2508.04880v1
HSBC unleashes yet another “qombie”: a zombie claim of quantum advantage that isn't - Shtetl-Optimized, accessed October 2, 2025, https://scottaaronson.blog/?p=9170
A comprehensive review of Quantum Machine Learning: from NISQ to Fault Tolerance, accessed October 2, 2025, https://arxiv.org/html/2401.11351v2
Quantum Simulation of Noisy Quantum Networks - arXiv, accessed October 2, 2025, https://arxiv.org/html/2506.09144v1
Quantum annealing eigensolver as a NISQ era tool for probing strong correlation effects in quantum chemistry - arXiv, accessed October 2, 2025, https://arxiv.org/html/2412.20464v5
[2301.06260] Quantum simulation of molecular response properties - arXiv, accessed October 2, 2025, https://arxiv.org/abs/2301.06260
[2503.12084] Quantum Simulations of Chemical Reactions: Achieving Accuracy with NISQ Devices - arXiv, accessed October 2, 2025, https://arxiv.org/abs/2503.12084
Accurate Chemical Reaction Modeling on Noisy Intermediate-Scale Quantum Computers Using a Noise-Resilient Wavefunction Ansatz - arXiv, accessed October 2, 2025, https://arxiv.org/html/2404.14038v1
Multiscale quantum approximate optimization algorithm | Phys. Rev. A, accessed October 2, 2025, https://link.aps.org/doi/10.1103/PhysRevA.111.012427
Using the Quantum Approximate Optimization Algorithm (QAOA) to Solve Binary-Variable Optimization Problems - Software Engineering Institute, accessed October 2, 2025, https://www.sei.cmu.edu/documents/5282/2022_016_100_887160.pdf
Intro to QAOA | PennyLane Demos, accessed October 2, 2025, https://pennylane.ai/qml/demos/tutorial_qaoa_intro/
[2404.07171] Unlocking Quantum Optimization: A Use Case Study on NISQ Systems - arXiv, accessed October 2, 2025, https://arxiv.org/abs/2404.07171
Bridging Classical and Quantum Computing for Next-Generation Language Models - arXiv, accessed October 2, 2025, https://arxiv.org/html/2508.07026v1
Quantum Resource Management in the NISQ Era: Challenges, Vision, and a Runtime Framework - arXiv, accessed October 2, 2025, https://arxiv.org/html/2508.19276v1
Challenges and Opportunities of Scaling Up Quantum Computation and Circuits - SIAM, accessed October 2, 2025, https://www.siam.org/publications/siam-news/articles/challenges-and-opportunities-of-scaling-up-quantum-computation-and-circuits/
The tug of war around quantum supremacy | by Henry Liu - Medium, accessed October 2, 2025, https://medium.com/@sss441803/the-tug-of-war-around-quantum-supremacy-2b9dc5b2c8e2
NISQ-Computer: Quantum entanglement can be a double-edged sword - MPQ, accessed October 2, 2025, https://www.mpq.mpg.de/6798783/12-entanglement-in-nisq-computers
Quantum Myth Busters: Experts Debunk Common NISQ-Era Myths - The Quantum Insider, accessed October 2, 2025, https://thequantuminsider.com/2025/01/14/quantum-myth-busters-experts-debunk-common-nisq-era-myths/
Quantum Error Correction: the grand challenge - Riverlane, accessed October 2, 2025, https://www.riverlane.com/quantum-error-correction
Quantum error correction - Wikipedia, accessed October 2, 2025, https://en.wikipedia.org/wiki/Quantum_error_correction
Differences in error suppression, mitigation, and correction | IBM Quantum Computing Blog, accessed October 2, 2025, https://www.ibm.com/quantum/blog/quantum-error-suppression-mitigation-correction
Quantum Error Mitigation and Its Progress | NTT R&D Website, accessed October 2, 2025, https://www.rd.ntt/e/research/JN202309_23092.html
Quantum error mitigation | Rev. Mod. Phys., accessed October 2, 2025, https://link.aps.org/doi/10.1103/RevModPhys.95.045005
arxiv.org, accessed October 2, 2025, https://arxiv.org/html/2502.20673v2
Digital zero noise extrapolation for quantum error mitigation - arXiv, accessed October 2, 2025, https://arxiv.org/pdf/2005.10921
Direct Analysis of Zero-Noise Extrapolation: Polynomial Methods, Error Bounds, and Simultaneous Physical–Algorithmic Error Mitigation - arXiv, accessed October 2, 2025, https://arxiv.org/html/2502.20673v2
Digital zero noise extrapolation for quantum error mitigation | by Monit Sharma - Medium, accessed October 2, 2025, https://medium.com/@_monitsharma/digital-zero-noise-extrapolation-for-quantum-error-mitigation-220f4284054b
What is the theory behind ZNE? — Mitiq 0.47.0 documentation, accessed October 2, 2025, https://mitiq.readthedocs.io/en/stable/guide/zne-5-theory.html
Probabilistic Error Cancellation — Mitiq 0.47.0 documentation, accessed October 2, 2025, https://mitiq.readthedocs.io/en/stable/guide/pec.html
Probabilistic error cancellation with sparse Pauli-Lindblad noise models, accessed October 2, 2025, https://communities.springernature.com/posts/probabilistic-error-cancellation-with-sparse-pauli-lindblad-noise-models
Limitations of probabilistic error cancellation for open dynamics beyond sampling overhead | Phys. Rev. A, accessed October 2, 2025, https://link.aps.org/doi/10.1103/PhysRevA.109.012431
Probabilistic error cancellation for dynamic quantum circuits | Phys. Rev. A, accessed October 2, 2025, https://link.aps.org/doi/10.1103/PhysRevA.109.062617
Reduced Sampling Overhead for Probabilistic Error Cancellation by Pauli Error Propagation, accessed October 2, 2025, https://quantum-journal.org/papers/q-2025-08-29-1840/
Efficacy of noisy dynamical decoupling | Phys. Rev. A, accessed October 2, 2025, https://link.aps.org/doi/10.1103/PhysRevA.107.032615
Digital Dynamical Decoupling — Mitiq 0.47.0 documentation, accessed October 2, 2025, https://mitiq.readthedocs.io/en/stable/guide/ddd.html
Learning How to Dynamically Decouple - arXiv, accessed October 2, 2025, https://arxiv.org/html/2405.08689v1
Suppressing errors with dynamical decoupling using pulse control on Amazon Braket - AWS, accessed October 2, 2025, https://aws.amazon.com/blogs/quantum-computing/suppressing-errors-with-dynamical-decoupling-using-pulse-control-on-amazon-braket/
Quantum supremacy - Wikipedia, accessed October 2, 2025, https://en.wikipedia.org/wiki/Quantum_supremacy
Quantum Utility, Advantage and Supremacy - EITC, accessed October 2, 2025, http://www.eitc.org/research-opportunities/high-performance-and-quantum-computing/quantum-computing-technology-and-networking/quantum-computing-technology/quantum-utility-advantage-and-supremacy
How to compare a noisy quantum processor to a classical computer, accessed October 2, 2025, https://research.google/blog/how-to-compare-a-noisy-quantum-processor-to-a-classical-computer/
Quantum's next leap: Ten septillion years beyond-classical - YouTube, accessed October 2, 2025, https://www.youtube.com/watch?v=l_KrC1mzd0g
Beyond Boson Sampling: Higher Spin Sampling as a Practical Path to Quantum Supremacy, accessed October 2, 2025, https://arxiv.org/html/2505.07312v1
Chinese Research Team Demonstrates Quantum Advantage With Gaussian Boson Sampling, accessed October 2, 2025, https://thequantuminsider.com/2020/12/03/china-joins-the-quantum-supremacy-club-chinese-research-team-claims-to-demonstrate-quantum-supremacy-with-gaussian-boson-sampling/
New Classical Algorithm Enhances Understanding of Quantum Computing's Future, accessed October 2, 2025, https://cs.uchicago.edu/news/new-classical-algorithm-enhances-understanding-of-quantum-computings-future/
Quantum supremacy vs advantage : r/QuantumComputing - Reddit, accessed October 2, 2025, https://www.reddit.com/r/QuantumComputing/comments/17rsenc/quantum_supremacy_vs_advantage/
Quantum Computing: What Is Quantum Advantage ... - FirstPrinciples, accessed October 2, 2025, https://www.firstprinciples.org/article/quantum-supremacy-vs-quantum-advantage-which-is-the-best-target