LOS ANGELES: Google has unveiled its newest 72-qubit quantum processor, known as Bristlecone, at the annual American Physical Society meeting here this week.
“We are cautiously optimistic that quantum supremacy can be achieved with Bristlecone, and feel that learning to build and operate devices at this level of performance is an exciting challenge,” Julian Kelly, a research scientist at Google’s Quantum AI Lab, wrote on Monday in a blog post.
The chip is an expanded version of Google's previous 9-qubit linear quantum processor. The device uses the same scheme for coupling, control, and readout, according to Kelly. But instead of using a linear array design, it is scaled to a square array of 72 qubits.
The guiding design principle for this device is to preserve the error rates Google was able to achieve on the 9-qubit hardware: 1 per cent for readout, 0.1 per cent for single-qubit gates, and 0.6 per cent for two-qubit gates.
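Those per-operation error rates compound over a whole circuit. The following sketch (not Google's methodology; the circuit shape is hypothetical) shows how quickly a naive estimate of overall success probability falls as gates accumulate, assuming each operation fails independently:

```python
# Per-operation error rates quoted for the 9-qubit device
READOUT_ERR = 0.01   # 1% readout error per qubit
SINGLE_ERR = 0.001   # 0.1% error per single-qubit gate
TWO_ERR = 0.006      # 0.6% error per two-qubit gate

def circuit_fidelity(n_qubits, n_single, n_two):
    """Crude estimate: every operation must succeed independently,
    so fidelities multiply across all gates and final readouts."""
    return ((1 - SINGLE_ERR) ** n_single
            * (1 - TWO_ERR) ** n_two
            * (1 - READOUT_ERR) ** n_qubits)

# Hypothetical 72-qubit circuit with 500 single- and 300 two-qubit gates:
f = circuit_fidelity(72, 500, 300)
print(f"estimated success probability: {f:.3f}")
```

Even with per-gate errors well under 1 per cent, the estimated probability of an error-free run on a deep 72-qubit circuit drops below 5 per cent, which is why keeping error rates down matters as much as adding qubits.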
“We believe Bristlecone would then be a compelling proof-of-principle for building larger scale quantum computers,” wrote Kelly.
Google believes that Bristlecone will be the chip that helps the internet giant to become the first company to demonstrate quantum supremacy, which is the potential ability of quantum computing devices to solve problems that classical computers practically cannot.
The general assumption in the industry is that it will take 49 or 50 quantum bits, or qubits, to achieve quantum supremacy, so a 72-qubit processor should be more than enough to reach that milestone.
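A back-of-the-envelope calculation suggests why roughly 50 qubits is the commonly cited threshold: simulating an n-qubit system classically by brute force means storing 2^n complex amplitudes, and at around 50 qubits that state vector outgrows the memory of any single supercomputer. The sketch below (illustrative only) works out the numbers:

```python
def state_vector_bytes(n_qubits):
    """Memory to hold a full n-qubit state vector: 2**n complex
    amplitudes at 16 bytes each (double-precision real + imaginary)."""
    return (2 ** n_qubits) * 16

# 50 qubits: 2**50 amplitudes * 16 bytes = 2**54 bytes = 16 PiB
pib = state_vector_bytes(50) / 2 ** 50
print(f"50-qubit state vector: {pib:.0f} PiB")
```

Sixteen pebibytes is far beyond the RAM of today's largest machines, so a device of that size cannot be exhaustively simulated by this direct method.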
However, a quantum computer requires more than a large number of qubits. Crucially, the error rates on readout and logical operations of such a system must be low enough for it to be of practical use.
But demonstrating low error rates is not just a matter of running a few tests on the new chip.
“Operating a device such as Bristlecone at low system error requires harmony between a full stack of technology ranging from software and control electronics to the processor itself,” wrote Kelly. “Getting this right requires careful systems engineering over several iterations.”