QT/ Breakthrough in ultraviolet spectroscopy

Paradigm
Published in Paradigm · 29 min read · Mar 22, 2024

Quantum news biweekly vol. 70, 7th March — 22nd March

TL;DR

  • Physicists make significant strides in precision and accuracy even in extremely low light conditions.
  • Researchers explore analog enhancements in neuromorphic computing, focusing on hardware advancements alongside software.
  • Novel method utilizing disordered superconducting loops emerges for storing and transmitting information effectively.
  • Ongoing advancements in quantum computing raise concerns about the vulnerability of current encryption methods, prompting the development of physics-based encryption techniques.
  • The QUICK space mission aims to secure long-distance communications by deploying satellites.
  • Discovery of a new class of plasma oscillations enhances particle accelerators and commercial fusion energy prospects.
  • Quantum materials research fuels innovation across various industries through techniques like TR-ARPES, revealing their equilibrium and dynamic properties.
  • Challenges arise in identifying suitable materials for constructing components crucial for quantum information transmission and storage.
  • Photon-based methods show promise in establishing stable information exchange within quantum computers.
  • Breakthroughs in particle signal amplification and spin qubit evolution pave the way for faster and more reliable quantum computing applications.
  • And more!

Quantum Computing Market

According to the recent market research report ‘Quantum Computing Market with COVID-19 impact by Offering (Systems and Services), Deployment (On Premises and Cloud Based), Application, Technology, End-use Industry and Region — Global Forecast to 2026’, published by MarketsandMarkets, the Quantum Computing market is expected to grow from USD 472 million in 2021 to USD 1,765 million by 2026, at a CAGR of 30.2%. The early adoption of quantum computing in the banking and finance sector is expected to fuel the growth of the market globally. Other key factors contributing to the growth of the quantum computing market include rising investments by governments of different countries to carry out research and development activities related to quantum computing technology. Several companies are focusing on the adoption of QCaaS post-COVID-19. This, in turn, is expected to contribute to the growth of the quantum computing market. However, stability and error correction issues are expected to restrain the growth of the market.

According to the ‘Quantum Computing Market Research Report: By Offering, Deployment Type, Application, Technology, Industry — Industry Share, Growth, Drivers, Trends and Demand Forecast to 2030’ report, the quantum computing market is projected to reach $64,988 million by 2030. Among all application categories, machine learning (ML) is expected to progress at the highest CAGR during the forecast period, owing to the integration of quantum computing into ML to improve its use cases.

Latest Research

Near-ultraviolet photon-counting dual-comb spectroscopy

by Bingxin Xu, Zaijun Chen, Theodor W. Hänsch, Nathalie Picqué in Nature

Researchers at the Max Planck Institute of Quantum Optics (MPQ) have successfully developed a new technique for deciphering the properties of light and matter that can simultaneously detect and precisely quantify many substances with high chemical selectivity. Their technique interrogates the atoms and molecules in the ultraviolet spectral region at very feeble light levels. Exciting prospects for conducting experiments in low-light conditions pave the way for novel applications of photon-level diagnostics, such as precision spectroscopy of single atoms or molecules for fundamental tests of physics and ultraviolet photochemistry in the Earth’s atmosphere or from space telescopes.

Ultraviolet spectroscopy plays a critical role in the study of electronic transitions in atoms and rovibronic transitions in molecules. These studies are essential for tests of fundamental physics, quantum-electrodynamics theory, determination of fundamental constants, precision measurements, optical clocks, high-resolution spectroscopy in support of atmospheric chemistry and astrophysics, and strong-field physics. Scientists in the group of Nathalie Picqué at the Max-Planck Institute of Quantum Optics have now made a significant leap in the field of ultraviolet spectroscopy by successfully implementing high-resolution linear-absorption dual-comb spectroscopy in the ultraviolet spectral range. This groundbreaking achievement opens up new possibilities for performing experiments under low-light conditions, paving the way for novel applications in various scientific and technological fields.

Dual-comb spectroscopy, a powerful technique for precise spectroscopy over broad spectral bandwidths, has been mainly used for infrared linear absorption of small molecules in the gas phase. It relies on measuring the time-dependent interference between two frequency combs with slightly different repetition frequencies. A frequency comb is a spectrum of evenly spaced, phase-coherent laser lines, that acts like a ruler to measure the frequency of light with extreme precision. The dual-comb technique does not suffer from the geometric limitations associated with traditional spectrometers, and offers great potential for high precision and accuracy.
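The frequency mapping at the heart of the technique can be sketched in a few lines of Python. In this simplified picture, the beat between the n-th lines of the two combs lands at an RF frequency of n times the repetition-rate difference, compressing the optical spectrum into the radio-frequency domain. The repetition rates and offset below are illustrative round numbers, not the values used in the MPQ experiment:

```python
# Sketch of the dual-comb down-conversion: two combs with repetition rates
# f_rep and f_rep + delta interfere on a detector; each pair of corresponding
# optical lines produces an RF beat at n * delta, so the optical spectrum is
# mapped down by the factor f_rep / delta. All values here are assumptions.

f_rep = 100e6        # repetition rate of comb 1 (100 MHz, illustrative)
delta = 1e3          # repetition-rate difference (1 kHz, illustrative)
n_lines = 5          # a few comb lines are enough for the sketch
f_offset = 300e12    # optical offset frequency, ~300 THz (illustrative)

optical_lines_1 = [f_offset + n * f_rep for n in range(n_lines)]
optical_lines_2 = [f_offset + n * (f_rep + delta) for n in range(n_lines)]

# Beat notes between corresponding lines fall in the RF domain:
rf_beats = [abs(a - b) for a, b in zip(optical_lines_1, optical_lines_2)]
print(rf_beats)       # [0.0, 1000.0, 2000.0, 3000.0, 4000.0]
print(f_rep / delta)  # down-conversion factor: 100000.0
```

With these numbers, a terahertz-wide optical spectrum is read out with ordinary radio-frequency electronics — the property that makes dual-comb spectroscopy so precise without moving parts.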

Principle of ultraviolet dual-comb spectroscopy with photon counting.

However, dual-comb spectroscopy typically requires intense laser beams, making it less suitable for scenarios where low light levels are critical. The MPQ team have now experimentally demonstrated that dual-comb spectroscopy can be effectively employed in light-starved conditions, at power levels more than a million times weaker than those typically used. This breakthrough was achieved using two distinct experimental setups with different types of frequency-comb generators. The team developed a photon-level interferometer that accurately records photon-counting statistics, showcasing a signal-to-noise ratio at the fundamental limit. This achievement demonstrates the optimal use of available light, and opens up the prospect of dual-comb spectroscopy in challenging scenarios where low light levels are essential.

The MPQ researchers addressed the challenges associated with generating ultraviolet frequency combs and building dual-comb interferometers with long coherence times, paving the way toward this coveted goal. They exquisitely controlled the mutual coherence of two comb lasers with one femtowatt per comb line, demonstrating an optimal build-up of the counting statistics of their interference signal over times exceeding one hour.

“Our innovative approach to low-light interferometry overcomes the challenges posed by the low efficiency of nonlinear frequency conversion, and lays a solid foundation for extending dual-comb spectroscopy to even shorter wavelengths,” comments Bingxin Xu, the post-doctoral scientist who led the experiments.

Near-ultraviolet photon-level dual-comb experimental spectra with resolved comb lines.

Indeed, an exciting future application is the development of dual-comb spectroscopy at short wavelengths, to enable precise vacuum- and extreme-ultraviolet molecular spectroscopy over broad spectral spans. Currently, broadband extreme-UV spectroscopy is limited in resolution and accuracy, and relies on unique instrumentation at specialized facilities.

“Ultraviolet dual-comb spectroscopy, while a challenging goal, has now become a realistic one as a result of our research. Importantly, our results extend the full capabilities of dual-comb spectroscopy to low-light conditions, unlocking novel applications in precision spectroscopy, biomedical sensing, and environmental atmospheric sounding,” Nathalie Picqué concludes.

Collective neural network behavior in a dynamically driven disordered system of superconducting loops

by Uday S. Goteti, Shane A. Cybart, Robert C. Dynes in Proceedings of the National Academy of Sciences

Computers work in digits — 0s and 1s, to be exact. Their calculations are digital; their processes are digital; even their memories are digital. All of this requires extraordinary power resources. As we look to the next evolution of computing and develop neuromorphic, or “brain-like,” computing, those power requirements become unfeasible.

To advance neuromorphic computing, some researchers are looking at analog improvements. In other words, not just advancing software, but advancing hardware too. Research from the University of California San Diego and UC Riverside shows a promising new way to store and transmit information using disordered superconducting loops. The team’s research demonstrates the ability of superconducting loops to exhibit associative memory, which, in humans, allows the brain to remember the relationship between two unrelated items.

“I hope what we’re designing, simulating and building will be able to do that kind of associative processing really fast,” stated UC San Diego Professor of Physics Robert C. Dynes, who is one of the paper’s co-authors.

Distinct circulating current paths in a 4-loop network show the possible switching activity that allow flux to travel between loops. (cr: Q-MEEN-C / UC San Diego)

Picture it: you’re at a party and run into someone you haven’t seen in a while. You know their name but can’t quite recall it. Your brain starts to root around for the information: where did I meet this person? How were we introduced? If you’re lucky, your brain finds the pathway to retrieve what was missing. Sometimes, of course, you’re unlucky.

Dynes believes that short-term memory moves into long-term memory with repetition. In the case of a name, the more you see the person and use the name, the more deeply it is written into memory. This is why we still remember a song from when we were ten years old but can’t remember what we had for lunch yesterday.

“Our brains have this remarkable gift of associative memory, which we don’t really understand,” stated Dynes, who is also president emeritus of the University of California and former UC San Diego chancellor. “It can work through the probability of answers because it’s so highly interconnected. This computer brain we built and modeled is also highly interactive. If you input a signal, the whole computer brain knows you did it.”

How do disordered superconducting loops work? You need a superconducting material — in this case, the team used yttrium barium copper oxide (YBCO). Known as a high-temperature superconductor, YBCO becomes superconducting around 90 Kelvin (about −297 °F), which, in the world of physics, is not that cold. This made it relatively easy to modify. The YBCO thin films (about 10 microns wide) were manipulated with a combination of magnetic fields and currents to create a single flux quantum in a loop. When the current was removed, the flux quantum stayed in the loop. Think of this as a piece of information or memory.

This is one loop, but associative memory and processing require at least two pieces of information. For this, Dynes used disordered loops, meaning the loops are different sizes and follow different patterns — essentially random.

A Josephson junction, or “weak link,” as it is sometimes known, in each loop acted as a gate through which the flux quanta could pass. This is how information is transferred and the associations are built. Although traditional computing architecture has continuous high-energy requirements, not just for processing but also for memory storage, these superconducting loops show significant power savings — on the scale of a million times less. This is because the loops only require power when performing logic tasks. Memories are stored in the physical superconducting material and can remain there permanently, as long as the loop remains superconducting.

The number of memory locations available increases exponentially with more loops: one loop has three locations, but three loops have 27. For this research, the team built four loops with 81 locations. Next, Dynes would like to expand the number of loops and the number of memory locations.
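The scaling quoted above — three flux states per loop, so 3ⁿ locations for n loops — is easy to check. The helper below is an illustrative sketch, not the research team's code:

```python
# Sketch of the state-count scaling described in the article: each disordered
# superconducting loop contributes three flux states, so n loops give 3**n
# memory locations. (Illustrative helper, not the authors' actual software.)

def memory_locations(n_loops: int) -> int:
    """Number of memory locations available to n superconducting loops."""
    return 3 ** n_loops

for n in (1, 3, 4):
    print(n, "loops ->", memory_locations(n), "locations")  # 3, 27, 81
```

The exponential growth is exactly why Dynes wants more loops: each added loop triples the available state space, at the cost of unknown stability.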

“We know these loops can store memories. We know the associative memory works. We just don’t know how stable it is with a higher number of loops,” he said.

This work is not only noteworthy to physicists and computer engineers; it may also be important to neuroscientists. Dynes talked to another University of California president emeritus, Richard Atkinson, a world-renowned cognitive scientist who helped create a seminal model of human memory called the Atkinson-Shiffrin model.

Atkinson, who is also former UC San Diego chancellor and professor emeritus in the School of Social Sciences, was excited about the possibilities he saw: “Bob and I have had some great discussions trying to determine if his physics-based neural network could be used to model the Atkinson-Shiffrin theory of memory. His system is quite different from other proposed physics-based neural networks, and is rich enough that it could be used to explain the workings of the brain’s memory system in terms of the underlying physical process. It’s a very exciting prospect.”

QUICK3 — Design of a Satellite-Based Quantum Light Source for Quantum Communication and Extended Physical Theory Tests in Space

by Najme Ahmadi, Sven Schwertfeger, Philipp Werner, et al in Advanced Quantum Technologies

Through steady advances in the development of quantum computers and their ever-improving performance, it will be possible in the future to crack our current encryption processes. To address this challenge, researchers at the Technical University of Munich (TUM) are participating in an international research consortium to develop encryption methods that will apply physical laws to prevent the interception of messages. To safeguard communications over long distances, the QUICK³ space mission will deploy satellites.

How can it be ensured that data transmitted through the internet can be read only by the intended recipient? At present our data are encrypted with mathematical methods that rely on the idea that the factorization of large numbers is a difficult task. With the increasing power of quantum computers, however, these mathematical codes will probably no longer be secure in the future. Tobias Vogl, a professor of Quantum Communication Systems Engineering, is working on an encryption process that relies on principles of physics.

“Security will be based on the information being encoded into individual light particles and then transmitted. The laws of physics do not permit this information to be extracted or copied. When the information is intercepted, the light particles change their characteristics. Because we can measure these state changes, any attempt to intercept the transmitted data will be recognized immediately, regardless of future advances in technology,” says Tobias Vogl.

Concept of the excitation laser system. The butterfly-packaged ECDL emits at 698 nm into the optical isolator that prevents optical feedback. With two mirrors, the beam is steered into a fiber collimation package, such that the fiber can guide the laser to the quantum photonics module.

The big challenge in so-called quantum cryptography lies in the transmission of data over long distances. In classical communications, information is encoded in many light particles and transmitted through optical fibers. In quantum communication, however, the information is carried by a single particle and cannot be copied. As a result, the light signal cannot be repeatedly amplified, as with current optical fiber transmissions. This limits the transmission distance for the information to a few hundred kilometers.
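The distance limit follows directly from exponential fiber loss on an unamplifiable signal. A back-of-the-envelope sketch, assuming the typical 0.2 dB/km attenuation of standard telecom fiber (a textbook figure, not one quoted in the article):

```python
# Illustrative estimate: a single photon in optical fiber is attenuated at
# roughly 0.2 dB/km and, unlike a classical signal, cannot be re-amplified,
# so its survival probability falls exponentially with distance.

ALPHA_DB_PER_KM = 0.2  # typical loss of standard single-mode fiber (assumed)

def survival_probability(distance_km: float) -> float:
    """Probability that a photon traverses the fiber without being lost."""
    loss_db = ALPHA_DB_PER_KM * distance_km
    return 10 ** (-loss_db / 10)

for d in (100, 500, 1000):
    print(f"{d} km: {survival_probability(d):.1e}")
```

At 100 km one photon in a hundred survives; at 1,000 km the probability is around 10⁻²⁰, which is why satellite links above the atmosphere become attractive for continental distances.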

To send information to other cities or continents, the structure of the atmosphere can be exploited. At altitudes above roughly 10 kilometers, the atmosphere is so thin that light is neither scattered nor absorbed. This will make it possible to use satellites to extend quantum communications over longer distances.

As part of the QUICK³ mission, Tobias Vogl and his team are developing an entire system, including all of the components needed to build a satellite for quantum communications. In a first step, the team tested each of the satellite components. The next step will be to try out the entire system in space. The researchers will investigate whether the technology can withstand outer space conditions and how the individual system components interact. The satellite launch is scheduled for 2025. To create an overarching network for quantum communications, however, hundreds or perhaps thousands of satellites will be needed.

The concept does not necessarily require all information to be transmitted using this method, which is highly complex and costly. It is conceivable that a hybrid network could be implemented in which data can be encrypted either physically or mathematically. Antonia Wachter-Zeh, a professor of Coding and Cryptography, is working to develop algorithms sufficiently complex that not even quantum computers can solve them. In the future it will still be enough to encrypt most information using mathematical algorithms. Quantum cryptography will be an option only for documents requiring special protection, for example in communications between banks.

Space-Time Structured Plasma Waves

by J. P. Palastro, K. G. Miller, R. K. Follett, D. Ramsey, K. Weichman, A. V. Arefiev, D. H. Froula in Physical Review Letters

Most people know about solids, liquids, and gases as the main three states of matter, but a fourth state of matter exists as well. Plasma — also known as ionized gas — is the most abundant, observable form of matter in our universe, found in the sun and other celestial bodies.

Creating the hot mix of freely moving electrons and ions that compose a plasma often requires extreme pressures or temperatures. In these extreme conditions, researchers continue to uncover the unexpected ways that plasma can move and evolve. By better understanding the motion of plasma, scientists gain valuable insights into solar physics, astrophysics, and fusion.

In a paper, researchers from the University of Rochester, along with colleagues at the University of California, San Diego, discovered a new class of plasma oscillations — the back-and-forth, wave-like movement of electrons and ions. The findings have implications for improving the performance of miniature particle accelerators and the reactors used to create fusion energy.

“This new class of plasma oscillations can exhibit extraordinary features that open the door to innovative advancements in particle acceleration and fusion,” says John Palastro, a senior scientist at the Laboratory for Laser Energetics, an assistant professor in the Department of Mechanical Engineering, and an associate professor at the Institute of Optics.

Evolution of the cycle-averaged energy density ε₀⟨k₀²φ²⟩ for a conventional and a space-time structured plasma wave.

One of the properties that characterizes a plasma is its ability to support collective motion, where electrons and ions oscillate — or wave — in unison. These oscillations are like a rhythmic dance. Just as dancers respond to each other’s movements, the charged particles in a plasma interact and oscillate together, creating a coordinated motion. The properties of these oscillations have traditionally been linked to the properties — such as the temperature, density, or velocity — of the plasma as a whole. However, Palastro and his colleagues determined a theoretical framework for plasma oscillations where the properties of the oscillations are completely independent of the plasma in which they exist.
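For context on how oscillation properties are conventionally tied to the bulk plasma: the textbook electron plasma frequency depends only on the electron density. The computation below uses standard CODATA constants; the density value is illustrative, not taken from the paper:

```python
import math

# Textbook electron plasma frequency: omega_pe = sqrt(n_e * e^2 / (eps0 * m_e)).
# This is the conventional density-dependence that the new class of
# oscillations is said to break free of. Density below is illustrative.

E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
M_E = 9.1093837015e-31       # electron mass, kg

def plasma_frequency(n_e: float) -> float:
    """Electron plasma frequency omega_pe (rad/s) for density n_e (m^-3)."""
    return math.sqrt(n_e * E_CHARGE**2 / (EPS0 * M_E))

print(f"{plasma_frequency(1e18):.2e} rad/s")  # ~5.6e10 for n_e = 1e18 m^-3
```

Decoupling the wave's behavior from parameters like this density is what makes the space-time structured oscillations unusual.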

“Imagine a quick pluck of a guitar string where the impulse propagates along the string at a speed determined by the string’s tension and diameter,” Palastro says. “We’ve found a way to ‘pluck’ a plasma, so that the waves move independently of the analogous tension and diameter.”

Within their theoretical framework, the amplitude of the oscillations could be made to travel faster than the speed of light in a vacuum or come to a complete stop, while the plasma itself travels in an entirely different direction. The research has a variety of promising applications, most notably in helping to achieve clean-burning, commercial fusion energy.

Time-resolved ARPES studies of quantum materials

by Fabio Boschini, Marta Zonno, Andrea Damascelli in Reviews of Modern Physics

Research in quantum materials is paving the way for groundbreaking discoveries and is poised to drive technological advancements that will redefine the landscapes of industries like mining, energy, transportation, and medtech.

A technique called time- and angle-resolved photoemission spectroscopy (TR-ARPES) has emerged as a powerful tool, allowing researchers to explore the equilibrium and dynamical properties of quantum materials via light-matter interaction. A recent review paper by Professor Fabio Boschini from the Institut national de la recherche scientifique (INRS), along with colleagues Marta Zonno from Canadian Light Source (CLS) and Andrea Damascelli from UBC’s Stewart Blusson Quantum Matter Institute (Blusson QMI), illustrates that TR-ARPES has rapidly matured into a powerful technique over the last two decades.

“TR-ARPES is an effective technique not only for fundamental studies, but also for characterizing out-of-equilibrium properties of quantum materials for future applications,” says Professor Boschini who specializes in ultrafast spectroscopies of condensed matter, at the Énergie Matériaux Télécommunications Research Centre.

Schematic configuration of an ARPES experiment. Light in the UV to XUV spectral range prompts the photoemission of electrons from the sample.

The new paper provides a comprehensive review of research using TR-ARPES and its evolving significance in exploring light-induced electron dynamics and phase transitions in a wide range of quantum materials.

“The scientific community is currently investigating new ‘tuning knobs’ to control the electronic, transport, and magnetic properties of quantum materials on demand. One of these ‘tuning knobs’ is the light-matter interaction, which promises to provide fine control over the properties of quantum materials on ultrafast timescales,” says Professor Boschini, who is also a QMI affiliate investigator. “TR-ARPES is the ideal technique for this purpose, since it provides direct insight into how light excitation modifies electronic states with time, energy, and momentum resolution.”

“TR-ARPES has ushered in a new era of quantum materials research, allowing us to ‘knock on the system’ and observe how it responds, and pushing the materials out of equilibrium to uncover their hidden properties,” adds Blusson QMI Scientific Director Andrea Damascelli.

TR-ARPES combines condensed matter spectroscopy (ARPES) with ultrafast lasers (photonics), bringing together research groups from both fields. The technique owes much of its success to significant advancements in developing new laser sources capable of producing light with precise characteristics.

Boschini is working closely on the subject with Professor François Légaré, a full professor at INRS and an expert in ultrafast laser science and technology. Together, Boschini’s and Légaré’s groups built and are operating a state-of-the-art TR-ARPES endstation with unique intense long-wavelength excitation capabilities at the Advanced Laser Light Source (ALLS) laboratory.

Design Rules, Accurate Enthalpy Prediction, and Synthesis of Stoichiometric Eu3+ Quantum Memory Candidates

by Zachary W. Riedel, Daniel P. Shoemaker in Journal of the American Chemical Society

In the quest to develop quantum computers and networks, there are many components that are fundamentally different than those used today. Like a modern computer, each of these components has different constraints. However, it is currently unclear what materials can be used to construct those components for the transmission and storage of quantum information.

In new research, University of Illinois Urbana-Champaign materials science & engineering professor Daniel Shoemaker and graduate student Zachary Riedel used density functional theory (DFT) calculations to identify possible europium (Eu) compounds to serve as a new quantum memory platform. They also synthesized one of the predicted compounds, a brand-new, air-stable material that is a strong candidate for use in quantum memory, a system for storing quantum states of photons or other entangled particles without destroying the information held by those particles.

“The problem that we are trying to tackle here is finding a material that can store that quantum information for a long time. One way to do this is to use ions of rare earth metals,” says Shoemaker.

Found at the very bottom of the periodic table, rare earth elements, such as europium, have shown promise for use in quantum information devices due to their unique atomic structures. Specifically, rare earth ions have many electrons densely clustered close to the nucleus of the atom. The excitation of these electrons, from the resting state, can “live” for a long time — seconds or possibly even hours, an eternity in the world of computing. Such long-lived states are crucial to avoid the loss of quantum information and position rare earth ions as strong candidates for qubits, the fundamental units of quantum information.

“Normally in materials engineering, you can go to a database and find what known material should work for a particular application,” Shoemaker explains. “For example, people have worked for over 200 years to find proper lightweight, high strength materials for different vehicles. But in quantum information, we have only been working at this for a decade or two, so the population of materials is actually very small, and you quickly find yourself in unknown chemical territory.”

Shoemaker and Riedel imposed a few rules in their search for possible new materials. First, they wanted to use the ionic configuration Eu3+ (as opposed to the other possible configuration, Eu2+) because it operates at the right optical wavelength; to be “written” optically, the materials should be transparent. Second, they wanted a material whose other elements each have only one stable isotope: elements with more than one isotope yield a mixture of different nuclear masses that vibrate at slightly different frequencies, scrambling the information being stored. Third, they wanted a large separation between individual europium ions to limit unintended interactions. Without that separation, the large clouds of europium electrons would overlap like a canopy of leaves in a forest, where the rustling of one tree disturbs its neighbors, rather than standing apart like well-spaced trees in a suburban neighborhood.

With those rules in place, Riedel developed a DFT computational screening to predict which materials could form. From this screening, Riedel identified new Eu compound candidates and, further, was able to synthesize the top suggestion from the list, the double perovskite halide Cs2NaEuF6. This new compound is air stable, which means it can be integrated with other components, a critical property for scalable quantum computing. DFT calculations also predicted several other possible compounds that have yet to be synthesized.
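The single-stable-isotope rule lends itself to a simple filter. The sketch below is purely illustrative — the isotope counts are standard nuclear data, but the candidate list and helper are hypothetical, not the authors' actual screening code:

```python
# Hypothetical sketch of the "single stable isotope" design rule described
# above. Isotope counts are standard nuclear data; the candidates and the
# helper function are illustrative, not the authors' screening pipeline.

STABLE_ISOTOPE_COUNT = {   # number of stable isotopes per element
    "F": 1, "Na": 1, "Cs": 1,   # mononuclidic: pass the rule
    "Cl": 2, "Br": 2, "K": 2,   # multiple stable isotopes: fail the rule
}

def passes_isotope_rule(elements) -> bool:
    """Keep a candidate only if every non-Eu element has one stable isotope."""
    return all(STABLE_ISOTOPE_COUNT.get(el, 0) == 1 for el in elements)

candidates = {
    "Cs2NaEuF6": ["Cs", "Na", "F"],   # the compound synthesized in the study
    "K2NaEuCl6": ["K", "Na", "Cl"],   # hypothetical counter-example
}
for name, elems in candidates.items():
    print(name, "passes:", passes_isotope_rule(elems))
```

Cs2NaEuF6 passes because cesium, sodium, and fluorine are all mononuclidic, which is one reason it rose to the top of the screened list.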

“We have shown that there are a lot of unknown materials left to be made that are good candidates for quantum information storage,” Shoemaker says. “And we have shown that we can make them efficiently and predict which ones are going to be stable.”

Efficient high-fidelity flying qubit shaping

by Benedikt Tissot, Guido Burkard in Physical Review Research

Two physicists at the University of Konstanz are developing a method that could enable the stable exchange of information in quantum computers. In the leading role: photons that make quantum bits “fly.”

Quantum computers are considered the next big evolutionary step in information technology. They are expected to solve computing problems that today’s computers simply cannot solve — or would take ages to do so. Research groups around the world are working on making the quantum computer a reality. This is anything but easy, because the basic components of such a computer, the quantum bits or qubits, are extremely fragile. One type of qubits consists of the intrinsic angular momentum (spin) of a single electron, i.e. they are at the scale of an atom. It is hard enough to keep such a fragile system intact. It is even more difficult to interconnect two or more of these qubits. So how can a stable exchange of information between qubits be achieved?

(a) Illustration of the physical system and (b) energy level diagram of a stimulated Raman emitter.

The two Konstanz physicists Benedikt Tissot and Guido Burkard have now developed a theoretical model of how the information exchange between qubits could succeed by using photons as a “means of transport” for quantum information. The general idea: the information content (electron spin state) of the material qubit is converted into a “flying qubit,” namely a photon. Photons are “light quanta,” the basic building blocks of the electromagnetic radiation field. The special feature of the new model: stimulated Raman emission is used for converting the qubit into a photon. This procedure allows more control over the photons.

“We are proposing a paradigm shift from optimizing the control during the generation of the photon to directly optimizing the temporal shape of the light pulse in the flying qubit,” explains Guido Burkard.

Benedikt Tissot compares the basic procedure with the Internet: “In a classic computer, we have our bits, which are encoded on a chip in the form of electrons. If we want to send information over long distances, the information content of the bits is converted into a light signal that is transmitted through optical fibers.” The principle of information exchange between qubits in a quantum computer is very similar: “Here, too, we have to convert the information into states that can be easily transmitted — and photons are ideal for this,” explains Tissot.

“We need to consider several aspects,” says Tissot: “We want to control the direction in which the information flows — as well as when, how quickly and where it flows to. That’s why we need a system that allows for a high level of control.”

The researchers’ method makes this control possible by means of resonator-enhanced, stimulated Raman emissions. Behind this term is a three-level system, which leads to a multi-stage procedure. These stages offer the physicists control over the photon that is created. “We have ‘more buttons’ here that we can operate to control the photon,” Tissot illustrates.
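In pulse-shaping schemes of this kind, one typically specifies a target temporal amplitude for the emitted photon, normalized so the photon is emitted with unit probability. The Gaussian target below is an assumption for illustration only, not the shape optimized by Tissot and Burkard:

```python
import math

# Illustrative sketch: a target temporal amplitude f(t) for a flying-qubit
# photon, L2-normalized so that the integral of |f(t)|^2 dt equals 1 (unit
# emission probability). The Gaussian form is an assumption, not the
# authors' optimized pulse shape.

def gaussian_amplitude(t: float, t0: float, sigma: float) -> float:
    """L2-normalized Gaussian amplitude centered at t0 with width sigma."""
    norm = (1.0 / (2.0 * math.pi * sigma**2)) ** 0.25
    return norm * math.exp(-((t - t0) ** 2) / (4.0 * sigma**2))

# Check the normalization numerically with a crude Riemann sum:
dt = 0.01
total = sum(gaussian_amplitude(i * dt, 5.0, 0.5) ** 2 * dt
            for i in range(1000))
print(round(total, 3))  # ~1.0
```

Shaping f(t) directly, rather than the control fields that generate it, is the paradigm shift Burkard describes: the desired photon waveform becomes the starting point of the optimization.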

Stimulated Raman emission is an established method in physics. However, using it to send qubit states directly is unusual. The new method might make it possible to balance the consequences of environmental perturbations and unwanted side effects of rapid changes in the temporal shape of the light pulse, so that information transport can be implemented more accurately.

Design and performance of parallel-channel nanocryotrons in magnetic fields

by Timothy Draher, Tomas Polakovic, Yi Li, John Pearson, Alan Dibos, Zein-Eddine Meziani, Zhili Xiao, Valentine Novosad in Applied Physics Letters

Device could help facilitate the operation of new particle colliders, such as the Electron-Ion Collider.

In particle colliders that reveal the hidden secrets of the tiniest constituents of our universe, minute particles leave behind extremely faint electrical traces when they are generated in enormous collisions. Some detectors in these facilities use superconductivity — a phenomenon in which electricity is carried with zero resistance at low temperatures — to function.

For scientists to more accurately observe the behavior of these particles, these weak electrical signals, or currents, need to be multiplied by an instrument capable of turning a faint electrical flicker into a real jolt.

False-color SEM images: 1:8 parallel-channel nTron (a) and 1:8 conventional nTron (b). Blue highlights the ground plane, gray shows the trench and nanowire gaps, green represents the effective NbN channel, and red signifies the NbN gate to choke constriction. Scale bars correspond to 2 μm.

Scientists at the U.S. Department of Energy’s (DOE) Argonne National Laboratory have developed a new device that acts as a “current multiplier.” This device, called a nanocryotron, is a prototype for a mechanism that could amplify a particle’s electrical signal to a level where it temporarily switches off the superconductivity of the material, essentially creating a kind of on-off switch.

“We’re taking a small signal and using it to trigger an electric cascade,” said Tomas Polakovic, one of Argonne’s Maria Goeppert Mayer Fellows and an author of the study. “We’re going to funnel the very small current of these detectors into the switching device, which can be then used to switch a much bigger current.”
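The switching behavior can be sketched as a simple threshold model — an assumption-laden caricature of the device, not Argonne's actual device physics: a small gate current locally suppresses the channel's critical current, and once the bias current exceeds that suppressed value the channel goes normal and the bias is steered into the load.

```python
# Toy model of a nanocryotron as a current-controlled switch.
# All parameter values are hypothetical, chosen for illustration.

def ntron_output(i_gate_uA, i_bias_uA, i_c_uA=100.0, suppression_uA=40.0,
                 gate_threshold_uA=1.0):
    """Return the current (uA) diverted to the load.

    i_c_uA:          channel critical current with no gate signal
    suppression_uA:  assumed drop in critical current when the gate fires
    """
    i_c_eff = i_c_uA - (suppression_uA if i_gate_uA >= gate_threshold_uA
                        else 0.0)
    if i_bias_uA <= i_c_eff:
        return 0.0        # channel superconducting: shorts out the load
    return i_bias_uA      # channel normal: bias current reaches the load

# A ~2 uA detector pulse gates a 90 uA bias: small signal, big output.
print(ntron_output(i_gate_uA=0.0, i_bias_uA=90.0))  # prints 0.0
print(ntron_output(i_gate_uA=2.0, i_bias_uA=90.0))  # prints 90.0
```

The amplification comes from the asymmetry between the tiny gate signal needed to trip the switch and the much larger bias current it releases.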

Preparing the nanocryotron for a collider experiment will take more work because of the high magnetic fields involved. While today’s particle detectors can withstand magnetic fields of several tesla in strength, this switch’s performance degrades in high magnetic fields.

“Finding ways to make the device work in higher magnetic fields is key to incorporating it into a real experiment,” said Argonne graduate research assistant Timothy Draher, another author of the study.

To make this possible, the researchers plan to change the geometry of the material and introduce defects, or tiny holes. These defects will help researchers stabilize small superconducting vortices in the material, the movement of which can lead to an unanticipated disruption of superconductivity.

The nanocryotron was created by using electron beam lithography, a kind of stenciling technique that uses a beam of electrons to remove a polymer film to expose a particular region of interest. That region of interest is then etched using plasma ion etching.

“We basically just strip away the parts that are exposed, leaving behind the device that we want to use,” Draher said.

According to Argonne physicist Valentine Novosad, another author of the study, the new device also could serve as the basis for a kind of electronic logic circuitry.

“This work is especially important for collider experiments, such as those that will be performed at the Electron-Ion Collider at Brookhaven National Laboratory. There, superconducting nanowire detectors, positioned close to the beams, would require microelectronics immune to magnetic fields,” said Argonne Distinguished Fellow and group leader Zein-Eddine Meziani.

Accelerated Adiabatic Passage of a Single Electron Spin Qubit in Quantum Dots

by Xiao-Fei Liu, Yuta Matsumoto, Takafumi Fujita, Arne Ludwig, Andreas D. Wieck, Akira Oiwa in Physical Review Letters

Researchers at Osaka University’s Institute of Scientific and Industrial Research (SANKEN) used the shortcuts to adiabaticity (STA) method to greatly speed up the adiabatic evolution of spin qubits. The spin flip fidelity after pulse optimization can be as high as 97.8% in GaAs quantum dots. This work may be applicable to other adiabatic passages and will be useful for fast, high-fidelity quantum control.

A quantum computer uses the superposition of “0” and “1” states to perform information processing, which is completely different from classical computing and allows certain problems to be solved at a much faster rate. High-fidelity quantum state operations in a large enough programmable qubit space are required to achieve the “quantum advantage.” The conventional method for changing quantum states uses pulse control, which is sensitive to noise and control errors. In contrast, adiabatic evolution keeps the quantum system in its eigenstate at all times; it is robust to noise but requires a certain length of time.

Recently, a team from SANKEN used the STA method to greatly accelerate the adiabatic evolution of spin qubits in gate-defined quantum dots for the first time. The theory they used was proposed by physicist Xi Chen.

“We used the transitionless quantum driving style of STA, thus allowing the system to always remain in its ideal eigenstate even under rapid evolution,” explains co-author Takafumi Fujita.

The device and its basic properties.

According to the target evolution of the spin qubits, the group’s experiment adds an additional driving term to suppress diabatic errors, which guarantees a fast and nearly ideal adiabatic evolution. The dynamic properties were also investigated and proved the effectiveness of this method. Additionally, the modified pulse after optimization was able to further suppress noise and improve the efficiency of quantum state control. Finally, the group achieved a spin flip fidelity of up to 97.8%. According to their estimation, the acceleration of adiabatic passage would be even better in Si or Ge quantum dots, which have less nuclear spin noise.
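In standard STA theory, the extra driving term has a compact form — this is the textbook statement of Berry's transitionless driving, not the paper's specific pulse shapes. The bare Hamiltonian H_0(t) is supplemented by a counter-diabatic term built from its instantaneous eigenstates |n(t)⟩:

```latex
H(t) = H_0(t) + H_{\mathrm{CD}}(t), \qquad
H_{\mathrm{CD}}(t) = i\hbar \sum_n \Bigl( |\partial_t n(t)\rangle\langle n(t)|
  - \langle n(t)|\partial_t n(t)\rangle\, |n(t)\rangle\langle n(t)| \Bigr).
```

For a single spin driven as $H_0 = \tfrac{\hbar}{2}\bigl(\Delta(t)\sigma_z + \Omega(t)\sigma_x\bigr)$, this reduces to $H_{\mathrm{CD}} = \tfrac{\hbar}{2}\,\frac{\dot{\Omega}\Delta - \Omega\dot{\Delta}}{\Delta^2 + \Omega^2}\,\sigma_y$: an extra control field along $\sigma_y$ whose amplitude grows with how fast the bare pulse changes, which is why rapid (normally diabatic) sweeps can still end in the target eigenstate.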

“This provides a fast and high-fidelity quantum control method. Our results may also be useful for accelerating other adiabatic passages in quantum dots,” says corresponding author Akira Oiwa.

As a promising candidate for quantum computing, gate-defined quantum dots have long coherence times and good compatibility with the modern semiconductor industry. The team is exploring further applications of gate-defined quantum dot systems, such as scaling to more spin qubits. They hope this method will lead to a simpler and more feasible route to fault-tolerant quantum information processing.

Ultrahigh-precision Compton polarimetry at 2 GeV

by A. Zec, S. Premathilake, J. C. Cornejo, M. M. Dalton, C. Gal, D. Gaskell, M. Gericke, I. Halilovic, H. Liu, J. Mammei, R. Michaels, C. Palatchi, J. Pan, K. D. Paschke, B. Quinn, J. Zhang in Physical Review C

Scientists are getting a more detailed look than ever before at the electrons they use in precision experiments.

Nuclear physicists with the U.S. Department of Energy’s Thomas Jefferson National Accelerator Facility have shattered a nearly 30-year-old record for the measurement of parallel spin within an electron beam — or electron beam polarimetry, for short. The achievement sets the stage for high-profile experiments at Jefferson Lab that could open the door to new physics discoveries.

In a paper, a collaboration of Jefferson Lab researchers and scientific users reported a measurement more precise than a benchmark achieved during the 1994–95 run of the SLAC Large Detector (SLD) experiment at the SLAC National Accelerator Laboratory in Menlo Park, California.

“No one has measured the polarization of an electron beam to this precision at any lab, anywhere in the world,” said Dave Gaskell, an experimental nuclear physicist at Jefferson Lab and a co-author on the paper. “That’s the headline here. This isn’t just a benchmark for Compton polarimetry, but for any electron polarization measurement technique.”

Compton polarimetry involves detecting photons — particles of light — scattered by charged particles, such as electrons. That scattering, aka the Compton effect, can be achieved by sending laser light and an electron beam on a collision course. Electrons — and photons — carry a property called spin (which physicists measure as angular momentum). Like mass or electric charge, spin is an intrinsic property of the electron. The degree to which the spins in a beam point in the same direction at a given time is known as polarization. And for physicists probing the heart of matter on the tiniest scales, knowledge of that polarization is crucial.

“Think of the electron beam as a tool that you’re using to measure something, like a ruler,” said Mark Macrae Dalton, another Jefferson Lab physicist and co-author on the paper. “Is it in inches or is it in millimeters? You have to understand the ruler in order to understand any measurement. Otherwise, you can’t measure anything.”

The ultra-high precision was achieved during the Calcium Radius Experiment (CREX), conducted in tandem with the Lead Radius Experiment (PREX-II) to probe the nuclei of medium-weight and heavy atoms for insight on the structure of their “neutron skin.”

“Neutron skin” refers to the distribution of protons and neutrons within the nuclei of denser atoms. Lighter elements — generally those with an atomic number of 20 or lower on the periodic table — often have an equal number of protons and neutrons. Medium-weight and heavy atoms typically need more neutrons than protons to remain stable.

PREX-II and CREX focused respectively on lead-208, which has 82 protons and 126 neutrons, and calcium-48, which has 20 protons and 28 neutrons. In these atoms, relatively equal numbers of protons and neutrons cluster around the core of the nucleus while the extra neutrons get pushed to the fringe — forming a sort of “skin.” The experiments determined that lead-208 has a somewhat thick neutron skin, leading to implications for the properties of neutron stars. Calcium-48’s skin, on the other hand, is comparatively thin and confirms some theoretical calculations. These measurements were made to a precision of hundreds of millionths of a nanometer. PREX-II and CREX ran from 2019 to 2020 in Hall A of Jefferson Lab’s Continuous Electron Beam Accelerator Facility, a unique DOE Office of Science user facility that supports the research of more than 1,800 scientists worldwide.

“The CREX and PREX-II collaboration cared about knowing the polarization well enough that we dedicated the beam time to make a high-quality measurement,” Gaskell said. “And we made full use of that time.”

During CREX, the electron beam’s polarization was continuously measured via Compton polarimetry to a precision of 0.36%. That blew past the 0.5% reported during SLAC’s SLD experiment. In these terms, the smaller number is better because the percentages represent the combined total of all systematic uncertainties — those created by an experiment’s setup. They can include absolute beam energy, position differences, and knowledge of the laser polarization. Other sources of uncertainty are statistical, meaning they can be reduced as more data are collected.
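The distinction between the two kinds of uncertainty can be made concrete with a short calculation — the numbers below are illustrative assumptions, not the CREX error budget. Independent systematic contributions are conventionally combined in quadrature, and the statistical part shrinks with data volume while the systematics do not:

```python
import math

# Hypothetical systematic contributions, in percent:
systematics_pct = [0.15, 0.20, 0.18, 0.10]
total_syst = math.sqrt(sum(s**2 for s in systematics_pct))

# Statistical uncertainty averages down as 1/sqrt(N) with more data:
stat_single_run_pct = 0.30
n_runs = 100
total_stat = stat_single_run_pct / math.sqrt(n_runs)

total = math.sqrt(total_syst**2 + total_stat**2)
print(f"systematic: {total_syst:.2f}%  statistical: {total_stat:.2f}%  "
      f"total: {total:.2f}%")
```

This is why the systematic floor is the headline number: with enough beam time the statistical term becomes negligible, and the quadrature sum of the setup-related terms is what ultimately limits the measurement.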

“Uncertainty is so fundamental, it’s hard to even describe because there’s nothing that we know with infinite precision,” Dalton said. “Whenever we make a measurement, we need to put an uncertainty on it. Otherwise, no one will know how to interpret it.”

In many experiments involving CEBAF, the dominant source of systematic uncertainty is knowledge of the electron beam’s polarization. The CREX team used the Compton polarimeter to bring that unknown to the lowest level ever reported.

“The higher the precision, the more strict a test one has for theoretical interpretation. You must be strict enough to compete with other methods for accessing the physics of PREX-II and CREX,” said Robert Michaels, Jefferson Lab’s deputy leader for Halls A/C. “An imprecise test would have no scientific impact.”

Think of the Compton polarimeter as a pit road for electrons coming off the racetrack-shaped CEBAF. Magnets divert the electrons along this detour, where the beam overlaps with a green laser between reflecting surfaces inside a resonant optical cavity. When the laser is locked, the electron beam scatters with the light and creates high-energy photons.

The photons are captured by a detector, which in this case is essentially a cylindrical crystal with a photomultiplier tube that passes the light signal to the data acquisition system. The difference between the number of hits when the electrons are flipped from a forward longitudinal state to a backward one is proportional to the beam’s polarization. This assumes the polarization of the laser is constant.
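The counting logic can be sketched in a few lines — a deliberately simplified picture with hypothetical numbers; a real analysis also handles backgrounds, dead time, and laser-off subtraction. The measured count asymmetry between the two helicity states is proportional to the product of the beam polarization, the laser polarization, and the calculable Compton analyzing power:

```python
# Simplified extraction of beam polarization from helicity-sorted counts.

def beam_polarization(n_plus, n_minus, p_laser, analyzing_power):
    """Beam polarization from photon counts in the two helicity states."""
    a_measured = (n_plus - n_minus) / (n_plus + n_minus)
    return a_measured / (p_laser * analyzing_power)

# Hypothetical values for illustration only:
n_plus, n_minus = 1_020_000, 980_000   # photon hits per helicity state
p = beam_polarization(n_plus, n_minus, p_laser=0.99, analyzing_power=0.023)
print(f"extracted beam polarization: {p:.3f}")
```

The division by the laser polarization is why the text notes that the laser polarization must be constant (and well known): any error there propagates directly into the extracted beam polarization.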

“There’s a maximum energy when you work out the basic kinematics of two things smacking into each other at near light speed,” said co-author Allison Zec, who worked on University of Virginia Physics Professor Kent Paschke’s team and is now a postdoctoral researcher at the University of New Hampshire. Her doctoral dissertation focused partly on the Compton polarimeter in the PREX-II and CREX experiments, for which she won the prestigious 2022 Jefferson Science Associates Thesis Prize.

“The most energy you can get is when the electron comes in and the photon is coming straight at it, and the photon gets scattered at 180 degrees,” Zec said. “That’s what we call the Compton edge. Everything is measured to that Compton edge and lower.”
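The Compton edge follows from textbook two-body kinematics. As a sketch, the values below assume a 2 GeV beam and a 532 nm green laser in the spirit of the experiment, not the paper's exact parameters:

```python
# Kinematic maximum of the backscattered photon energy (the "Compton
# edge") for a head-on laser-electron collision, standard formula.

M_E = 0.511e6            # electron mass, eV
E_BEAM = 2.0e9           # electron beam energy, eV (assumed)
E_LASER = 1239.84 / 532  # photon energy of 532 nm light, eV (hc/lambda)

gamma = E_BEAM / M_E
x = 4 * gamma * E_LASER / M_E              # dimensionless Compton parameter
e_edge = 4 * gamma**2 * E_LASER / (1 + x)  # max backscattered photon energy

print(f"Compton edge: {e_edge / 1e6:.0f} MeV")
```

The 4γ² factor is the point: a few-eV photon bounced off a multi-GeV electron comes back boosted into the hundred-MeV range, which is what the cylindrical crystal detector actually sees.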

Throw in a suite of calculations and experimental controls, and the 0.36% relative precision was achieved.

“It was basically the stars aligning in a way that we needed,” Zec said, “but not without the hard work to prove that we were able to get there. It took a little bit of luck, a little bit of elbow grease, a lot of paying attention, careful thought and a little bit of creativity.”

For the first time, the precision reached a level required for future flagship experiments at Jefferson Lab, such as MOLLER (Measurement of a Lepton-Lepton Electroweak Reaction). MOLLER, which is in the design and construction phase, will measure the weak charge of the electron as a test of the Standard Model of particle physics. It will require electron beam polarimetry with a relative precision of 0.4%. The Standard Model is the theory that describes subatomic particles, such as quarks and muons, along with three of the four fundamental forces: strong, weak and electromagnetic (gravity is not part of the model).

“The things you can calculate with the Standard Model are phenomenal,” Dalton said. But the Standard Model isn’t complete. “It doesn’t explain what dark matter is. It doesn’t explain where CP (charge conjugation parity) violation comes from, or why there’s mostly matter in the universe and not antimatter,” Dalton continued.

Each fundamental force carries a so-called “charge,” which dictates its strength or how strongly a particle feels the force. Theorists can use the Standard Model to calculate the weak force’s charge on the electron, while MOLLER would physically measure it and look for deviation from theory.

“The catch phrase is always ‘physics beyond the Standard Model,’” Gaskell said. “We are looking for particles or interactions that may open a window to things that are missing in our description of the universe.”

Subscribe to Paradigm!

Medium, Twitter, Telegram, Telegram Chat, LinkedIn, and Reddit.

Main sources

Research articles

Advanced Quantum Technologies

PRX Quantum

Science Daily

SciTechDaily

Quantum News

Nature
