IBM breaks the 100-Qubit QPU barrier and marks a milestone on its ambitious roadmap


The highlights of IBM's steady progress in quantum computing were on full display at its 2021 Quantum Summit, held last month while most of the HPC community was wrapped up in SC21. Buoyed by six milestones reached this year, IBM declared that 2023 will be the year its systems deliver quantum advantage and quantum computing becomes a serious tool in HPC.

The progress being made across the quantum computing community is impressive and may exceed many observers' expectations. Within that community, IBM has long been the 500-pound gorilla, working on virtually every aspect of the technology, its use cases, and customer and developer engagement. IBM, of course, is focused on semiconductor-based superconducting qubit technology, and which of the many qubit technologies will prevail remains an open question. It is possible there will not be just one.

Last year, IBM laid out a detailed quantum roadmap with milestones around hardware, software, and system infrastructure. At this year's IBM Quantum Summit, Jay Gambetta, IBM Fellow and vice president of quantum computing, and several colleagues delivered a report card and briefly previewed IBM's future plans. He highlighted six milestones, most notably the recently launched 127-qubit Eagle quantum processor and plans for IBM System Two, a new full infrastructure that will replace System One.

Check out the IBM roadmap shown below. In many ways, it encapsulates the challenges and aspirations faced by everyone in the quantum community.

Although fault-tolerant quantum computing is still out of reach, practical applications on noisy intermediate-scale quantum (NISQ) computers appear closer than many expected. We are beginning to see early quantum-based applications emerge, mainly around random number generation (see HPCwire's coverage of Quantinuum and Zapata, both of which are working to put quantum-generated random numbers to use).

Before diving into the technical discussion, it is worth noting how IBM expects the business landscape to develop (figure below). Working with the Boston Consulting Group, IBM has sketched a rough roadmap for business applications. "IBM's roadmap is not just specific. It is also ambitious," said Matt Langione, who leads BCG's deep-tech practice in North America, at the IBM Summit. "We believe the technical capabilities [IBM] outlined today will help create $3 billion in value for end users over that period."

He used portfolio optimization in financial services as an example. Efforts to scale classically-based optimizers, Langione said, "run up against issues such as non-continuous, non-convex functions, interest-rate yield curves, transaction logs, buy-in thresholds, and transaction costs." Quantum optimizers can overcome these challenges, and "by 2024, through runtimes that integrate classical resources and built-in error mitigation, trading strategies could be improved by as much as 25 basis points while maintaining four-nines fidelity. We believe that is the kind of capability that could show up in traders' workflows around 2025," he said.

He also singled out computational fluid dynamics mesh optimizers for aerospace and automotive design, which face similar constraints. "In the next three years, quantum computers may begin to push past the limits on surface size and accuracy," he predicted. The BCG/IBM market forecast is shown below.

There is no shortage of big plans for quantum computing. IBM is betting that by setting a clear vision and hitting its milestones, it will win broader support from the wait-and-see crowd as well as the quantum community. Below is a brief summary of the six topics reviewed by Gambetta and colleagues. IBM also released a video of the talk, which offers a good, concise review of the company's progress and plans in just over 30 minutes.

IBM began formally numbering its current quantum processor portfolio with the launch of the Falcon processor in 2019, which introduced IBM's heavy-hexagonal qubit layout with 27 qubits. IBM has been refining that design ever since. Hummingbird debuted in 2020 with 65 qubits. Eagle, with 127 qubits, has just launched at the 2021 summit. The qubit count has roughly doubled with each new processor. Next up is Osprey, due in 2022 with 433 qubits.
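For readers who want a feel for the heavy-hexagonal layout, the short sketch below builds a generic heavy-hex coupling map using Qiskit's `CouplingMap.from_heavy_hex` helper. It is only an illustration of the lattice family; the `distance=3` value is an arbitrary choice, not a specification of any IBM device.

```python
# A minimal sketch of a heavy-hex qubit connectivity graph, assuming
# qiskit-terra's CouplingMap.from_heavy_hex helper is available.
from qiskit.transpiler import CouplingMap

# distance=3 yields a small heavy-hex lattice; IBM's Falcon/Eagle layouts
# are specific members of this family, not reproduced exactly here.
cmap = CouplingMap.from_heavy_hex(distance=3, bidirectional=True)

print("qubits:", cmap.size())               # vertices in the lattice
print("couplings:", len(cmap.get_edges()))  # directed qubit-qubit connections
```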

Jerry Chow, IBM's director of quantum hardware system development, explained the lineage this way: "With Falcon, the challenge we faced was reliable production. We met that challenge by combining a novel Josephson-junction tuning process, which reduces frequency collisions, with our heavy-hexagonal lattice. With Hummingbird, we achieved large-scale multiplexed readout, allowing us to cut the cryogenic infrastructure required for qubit-state readout by a factor of eight and reduce the number of raw components required."

"Eagle [birth] is to expand the necessity of the way we encapsulate our devices so that we can transmit signals in and out of our superconducting qubits in a more efficient way. We have a lot of work to achieve this goal To a degree depends on IBM's experience in CMOS technology. In fact, it is two chips."

For Eagle, "The Josephson junction (qubit) is located on a chip, which is connected to a single interposer chip by bump bonding. The interposer chip uses packaging technology that is common throughout the CMOS world. Provides connections to qubits. These include things like substrate vias and buried wiring layers, which are new to the technology. The presence of buried layers provides flexibility for signal routing and device layout," Chow said .

IBM says Eagle is the most advanced quantum computing chip it has ever built and the world's first quantum processor with more than 100 qubits. "Let me emphasize that this is not just a processor we have fabricated, but a complete working system running quantum circuits today," Chow said. Eagle, he said, would be broadly available by the end of this year, which presumably means now.

IBM is not shy about Eagle's impact: "The increased qubit count will allow users to explore problems at a new level of complexity when running experiments and applications, such as optimizing machine learning or modeling new molecules and materials for use in areas spanning the energy industry to the drug discovery process. Eagle is the first IBM quantum processor whose scale makes it impossible for a classical computer to reliably simulate. In fact, the number of classical bits needed to represent a state on the 127-qubit processor exceeds the total number of atoms in the more than 7.5 billion people alive today."
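A rough back-of-the-envelope check of that comparison is easy to run. The snippet below assumes roughly 7 x 10^27 atoms per person, a commonly cited estimate that is not taken from the IBM announcement itself.

```python
# Back-of-the-envelope check of the "more than the atoms in humanity" claim.
# Assumes ~7e27 atoms per human body, an assumed order-of-magnitude figure.
amplitudes = 2 ** 127                  # basis states of a 127-qubit register
atoms_per_person = 7e27                # assumed estimate
population = 7.5e9

atoms_in_humanity = atoms_per_person * population
print(f"2^127             ~ {amplitudes:.2e}")
print(f"atoms in humanity ~ {atoms_in_humanity:.2e}")
print("claim holds:", amplitudes > atoms_in_humanity)
```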

As noted earlier, Osprey is due next year with 433 qubits. Chow said it will introduce "the next generation of scalable input/output for getting signals from room temperature down to cryogenic temperatures."

Measuring the quality of quantum computing can be tricky. Key properties such as coherence time and gate fidelity are degraded by many factors, usually lumped together as system and environmental noise. Taming these effects is why most quantum processors sit inside large dilution refrigerators. IBM developed a benchmark, Quantum Volume (QV), that rolls up several performance attributes, and QV has been widely adopted in the quantum community. IBM has achieved a QV of 128 on some of its systems. Honeywell (now Quantinuum) has also reported QV 128 on its trapped-ion devices.
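For readers curious how QV is probed, Qiskit's circuit library includes a QuantumVolume model circuit (square circuits of random two-qubit SU(4) layers). The sketch below only builds one such circuit; a full QV measurement also requires running many random circuits and a heavy-output-probability analysis, which is omitted here.

```python
# A minimal sketch of building a Quantum Volume model circuit with Qiskit.
# This is only the circuit-construction step, not the full QV protocol.
from qiskit.circuit.library import QuantumVolume

n = 5                                           # width = depth = 5; passing would mean QV 2**5 = 32
qv_circuit = QuantumVolume(num_qubits=n, depth=n, seed=42)

print(qv_circuit.decompose())                   # show the random SU(4) layers
```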

At the IBM Quantum Summit, IBM researcher and chief quantum architect Matthias Steffen reviewed progress in extending coherence times and improving gate fidelities.

"We have made a breakthrough on the new Falcon r8 processor. We have successfully increased the T1 time (spin lattice relaxation) from approximately 0.1 milliseconds to 0.3 milliseconds. This breakthrough is not limited to monolithic (good Good rate). It has been repeated several times now. In fact, some of our customers may have noticed [on] the device map recently displayed for IBM Peekskill," Steffen said. "This is just the beginning. We have tested several research test equipment, and now we are measuring 0.6 milliseconds, which is close to reliably crossing the 1 millisecond barrier."

"This year we also made a breakthrough and improved threshold fidelity. You can see these improvements (below) color-coded by device series. Our Falcon r4 devices typically achieve gate errors close to 0.5 x 10-3.) Our Falcon r5 device also includes faster readings and performance is about 1/3 better. In fact, many of our recent demonstrations are from this r5 device series. Finally, in the gold, you will see some of our latest test equipment , Including Falcon r8 with improved coherence time."

"You can also see fidelity measurements for other devices, including our recently [developed] Falcon r10. [On it] we measured a two-qubit gate that broke through the 0.001 error-per-gate level," Steffen said.

IBM touts the 0.001 gate error achievement, equivalent to more than 1,000 gates per error, or "three nines" (99.9 percent) fidelity, as an important milestone.
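As a quick illustration of what a 0.001 error rate buys, consider a simplified model with independent, uniform gate errors: a circuit containing n such gates succeeds with probability roughly (1 - p)^n. Real devices have correlated and non-uniform errors, so this is only an intuition aid.

```python
# Simplified model: with independent error probability p per gate, a circuit
# containing n such gates succeeds with probability ~ (1 - p)^n.
p = 0.001                                # "three nines" gate fidelity
for n in (100, 500, 1000):
    success = (1 - p) ** n
    print(f"{n:5d} gates -> expected circuit success ~ {success:.3f}")
```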

Today, the Falcon architecture is IBM's workhorse. As IBM explains, its accessible QPU portfolio includes both core and exploratory chips: "Our users can access exploratory devices, but these devices are not always online. Advanced users can access both core and exploratory systems."

IBM says there are three metrics for measuring system performance (quality, speed, and scale) and recently released a white paper (a short excerpt appears at the end of this article) defining what each means. Speed is a core element and is defined in terms of "circuit layer operations per second," which IBM calls CLOPS (catchy), roughly analogous to FLOPS in classical computing.

"This is unavoidable," said Katie Pizzolato, IBM's director of quantum theory and applications. "Useful quantum computing requires running enormous numbers of circuits. Most applications need at least a billion circuit runs. If my system takes more than 5 milliseconds to run one circuit, it takes 58 days to run a billion circuits; that is not useful quantum computing."
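The arithmetic behind that 58-day figure is straightforward:

```python
# The arithmetic behind the 58-day figure: a billion circuits at 5 ms each.
circuits = 1_000_000_000
seconds_per_circuit = 0.005              # 5 milliseconds per circuit execution

total_seconds = circuits * seconds_per_circuit
print(total_seconds / 86_400, "days")    # ~57.9 days
```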

At the lowest level, QPU speed is driven by the underlying architecture. "This is one of the reasons we chose superconducting qubits. In these systems we can easily couple the qubits to readout resonators on the processor. That gives us fast gates, fast reset, and fast readout, the basic ingredients of speed," Pizzolato said.

"Take the Falcon r5 processor as an example, [this is] a huge upgrade to the Falcon r4. In r5, we integrate new components into the processor, and its measurement rate is eight times faster than r4, and there is no consistency Any impact. This allows a measurement rate of a few hundred nanoseconds compared to a few microseconds. Add this to our other improvements to gate time and you can take a big step forward with Falcon r5," she Say.

IBM has now officially designated Falcon r5 a core system, a step up from exploratory status. "We have made sure Falcon r5 is up, running, and highly reliable. We are confident the faster-readout r5 can sustain high availability, so we are now marking it as a core system," she said.

Pizzolato did not give specific CLOPS figures for Falcon r5, but in a separate presentation to an HPC industry group in early December, IBM's Scott Crowder (vice president and CTO for quantum) showed a slide listing a CLOPS figure of 1.4K for IBM (though no specific QPU was identified) versus 45 CLOPS for a trapped-ion system.

In May of this year, IBM launched a beta of Qiskit Runtime, described as "a new architecture offered by IBM Quantum that streamlines computations requiring many iterations." The idea is to use classical systems co-located with the QPU to accelerate access to it, not unlike the way a CPU manages access to a GPU in classical computing. All IBM QPUs now support Qiskit Runtime.

Pizzolato said: "We created Qiskit Runtime as a container platform for executing classic code in an environment where quantum hardware can be accessed very quickly." "[It] completely changed the usage model of quantum hardware. It allows users to send IBM quantum data The center submits circuit programs, not just circuits. This method provides us with a 120-fold improvement. Programs like VQE (Variable Quantum Characteristic Solver) used to require our users to run for 45 days, but now they can run for 9 hours Finished within."

IBM argues that these advances, combined with the 127-qubit Eagle processor, mean "no one really needs to use a simulator anymore."

The IBM website describes Qiskit Runtime this way: "Qiskit Runtime allows authorized users to upload their Qiskit quantum programs for themselves or others to use. A Qiskit quantum program, also called a Qiskit runtime program, is a piece of Python code that takes certain inputs, performs quantum and possibly classical computation, interactively provides intermediate results if desired, and returns the processing results. The same or other authorized users can then invoke these quantum programs by simply passing in the required input parameters."
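The uploaded Python code followed a simple convention at the time. Below is a hedged skeleton of what such a program looked like, assuming the documented 2021-era entry point that receives a backend and a user_messenger for intermediate results; the circuit and messages are illustrative placeholders.

```python
# A hedged skeleton of a custom Qiskit Runtime program, following the
# 2021-era convention of a main() entry point that receives the backend
# and a user_messenger for streaming intermediate results.
from qiskit import QuantumCircuit, transpile


def main(backend, user_messenger, shots=1024, **kwargs):
    """Entry point executed server-side, close to the quantum hardware."""
    qc = QuantumCircuit(2)
    qc.h(0)
    qc.cx(0, 1)
    qc.measure_all()

    compiled = transpile(qc, backend=backend)
    job = backend.run(compiled, shots=shots)

    user_messenger.publish({"status": "circuit submitted"})   # interim result
    return job.result().get_counts()                          # final result
```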

According to IBM, Qiskit Runtime is part of a broader effort to bring classical and quantum resources closer together through the cloud and to deliver serverless quantum computing. That would be a big step toward removing many of the obstacles developers face today.

"Qiskit Runtime involves combining our QPU with classic resources to eliminate latency and improve efficiency, thereby gaining more performance from our QPU at the circuit level. We call it classic with a little c," IBM Research Staff Sarah Sheldon (Sarah Sheldon) said. "We also found that we can use classical resources to accelerate the process of achieving quantum advantage and get us there sooner."

"In order to do this, we used what we call classic features that use the capital letter C. These features will be at the kernel and algorithm level. We treat them as a set of tools that allow users to measure sub-resources and classic resources to Optimize the overall performance of the application at the kernel level. This will be achieved using circuit libraries for sampling time, evolution, etc. But at the algorithm level, we see a future where we will provide pre-built Qiskit runtime and classic integration libraries Combined. We call it loop weaving," Sheldon said.

Broadly speaking, circuit knitting is a technique that decomposes a large quantum circuit, with more qubits and greater gate depth, into multiple smaller quantum circuits with fewer qubits and shallower depth; the results are then combined in classical post-processing. "This lets us simulate systems much larger than was possible before. We can also knit circuits together along edges where there are high levels of noise or crosstalk, which lets us simulate quantum systems with higher accuracy," Sheldon said.

IBM reported that it demonstrated circuit knitting by simulating the ground state of a water molecule with only five qubits, using a technique called "entanglement forging" that splits the circuit into two weakly entangled halves. With circuit knitting, IBM said, users can trade speed for a larger problem size or higher-quality results.
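To make the classical-recombination idea concrete in the simplest possible case, the toy sketch below (which is not entanglement forging itself) evaluates a two-qubit observable on a product state by running two independent one-qubit circuits and multiplying the results classically. Real circuit knitting handles the much harder case where entanglement crosses the cut.

```python
# Toy illustration of classical recombination: when no entanglement crosses
# the cut, <Z0 Z1> of a product state factorizes into <Z0> * <Z1>, so two
# one-qubit "subcircuits" plus classical post-processing reproduce the
# two-qubit answer. Entanglement forging addresses the entangled case.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, Pauli

def expectation(circuit, pauli_label):
    return Statevector(circuit).expectation_value(Pauli(pauli_label)).real

# Two independent single-qubit circuits (the "cut" pieces).
left = QuantumCircuit(1)
left.ry(0.7, 0)
right = QuantumCircuit(1)
right.ry(1.3, 0)

knit = expectation(left, "Z") * expectation(right, "Z")

# Reference: the full two-qubit circuit evaluated directly.
full = QuantumCircuit(2)
full.ry(0.7, 0)
full.ry(1.3, 1)
reference = expectation(full, "ZZ")

print(f"knitted: {knit:.6f}   full: {reference:.6f}")
```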

The new capabilities are bundled into IBM Code Engine on IBM Cloud; combined with the lower-level tools, IBM says, Code Engine will deliver serverless quantum computing. Pizzolato walked through an example: "Step one is to define the problem; in this case we use VQE. Second, we use Lithops, a Python multi-cloud distributed computing framework, to execute the code. Inside that function we open a communication channel to Qiskit Runtime and run the Estimator program."

"For example, for classical calculations, we use the simultaneous disturbance random approximation algorithm. This is just an example; you can put anything here. So now users can sit down and enjoy the results. As developers increasingly adopt Quantum, Quantum Serverless enables developers to focus on their code without being dragged into configuring classical resources," she said.

IBM's final announcement was that it is "closing the chapter" on IBM Quantum System One, the fully enclosed quantum computing infrastructure it debuted in 2019. System One will be able to handle Eagle, Chow said, but IBM is working with Finnish company Bluefors on System Two, its next-generation cryogenic infrastructure.

"We are actively developing a new set of technologies, from new high-density, low-temperature microwave flexible cables to a new generation of FPGA-based high-bandwidth, integrated control electronics," Chow said.

Bluefors launched its latest cryogenic platform, Kide, which will become the foundation of IBM System Two.

"We call it Kide because in Finnish, kide means snowflake or crystal. It reflects the hexagonal crystal geometry of the platform, which enables unprecedented scalability and access," said Bluefors' Russell Lake. "Even as we build larger platforms, we can keep the same user accessibility as a smaller system. As advanced quantum hardware continues to scale, that is crucial. We have separated the cooling of the quantum processor from the operational heat loads to optimize cooling capacity. In addition, the platform's six-fold symmetry means systems can be connected and clustered for greatly expanded quantum hardware configurations."

"The modular nature of IBM Quantum System 2 will become the cornerstone of future quantum data centers," Gambetta said. It is speculated that the 433-qubit Osprey processor will be installed in a version of the new system 2 infrastructure.

There was a lot to absorb in IBM's presentations, and IBM naturally tried to put its best foot forward. Many companies are working on individual aspects of quantum computing that IBM covered, but few are tackling all of them. For that reason, IBM's report also serves as an interesting snapshot of the overall progress of the quantum community.

Reaching quantum advantage in 2023 would be a big deal, even if only for a handful of applications.

Video link: https://www.youtube.com/watch?v=-qBrLqvESNM

Link to IBM paper (Quality, Speed, and Scale: three key attributes to measure the performance of near-term quantum computers): https://arxiv.org/abs/2110.14108

"Quantum computing performance is defined by the amount of useful work performed by a quantum computer per unit of time. In a quantum computer, information processing is implemented by quantum circuits that contain instructions to manipulate quantum data. Unlike classical computer systems where instructions are directly executed by the CPU, A quantum processing unit (QPU) is a combination of control electronics and quantum memory, supported by a classic runtime system, used to convert the circuit into a consumable form QPU, and then retrieve the results for further processing. The performance of the actual application depends on The performance of the entire system, so any performance index must fully consider all components.

"In this white paper, we propose that the performance of a quantum computer is controlled by three key factors: scale, quality, and speed. The scale or the number of qubits determines the size of the problem that can be encoded and solved. The quality determines the faithful execution. The size of the quantum circuit. And the speed is related to the number of original circuits that a quantum computing system can execute per unit time. We have introduced a benchmark for measuring speed in Part C."
