Computing Efficiency Barrier Needs Elastic Solutions

The possibilities enabled by artificial intelligence might be endless, but computational power still has limits. As the world buzzes with excitement about the coming AI revolution, those making the hardware to support it are working in overdrive to make sure it can adapt to demand.

Representatives from semiconductor heavyweight ARM, soaring newcomer Ampere, and end-product mainstay Lenovo shared their thoughts on the compute barrier at the COMPUTEX Forum 2023 “Chips and Semiconductor Summit—The Next Big Leap in Computing” on May 31 in Taipei.

Even in the four years since COMPUTEX was last held in full, the pressure on computing infrastructure has skyrocketed. Data generation has grown four-fold since 2019, while a slowing Moore’s Law means we can no longer rely on ever-smaller nodes to provide more computing power at reasonable cost.

“And then you’ve got sustainability,” said Mohamed Awad, SVP/GM of Infrastructure Business at ARM. Data centers already consume 1% to 4% of all power generated worldwide, a share forecast to reach up to 11% by 2030. “In some places, they’re actually limiting housing development because of the power being used by the local data center,” with West London putting off development until 2035 and Virginia data centers running backup generators just to keep up with daily operations.

The response has been a shift toward specialized processing. “CPUs of 2019, they were Swiss army knives. The state-of-the-art was a 28-core device. That was it. You used it for everything,” Awad said. “In the last four years, the ecosystem has flipped that on its head.” Now there are cloud-native CPUs with nearly 200 cores made by Ampere, edge devices optimized for power, and many more, specialized for the growing diversity of use cases from edge to cloud.

Infrastructure is finally adapting to a change in computation that has already been a decade in the making. While scaling up through silicon innovation has worked over the past few decades, the cloud demands a different strategy, Ampere Chief Product Officer Jeff Wittich said. In the cloud where applications are already spread across many nodes, servers, and cores, “we have a great opportunity to use a scale-out approach in order to deliver increasing amounts of compute performance without increasing power consumption.” This is the idea behind Ampere’s cloud-native CPUs, built to increase performance without guzzling more power.

Yet even the most cutting-edge CPU tech cannot be the only solution. Companies like Lenovo, which work with clients to build data centers, are also innovating to solve more immediate challenges. Cooling, for instance, is a hot topic at this year’s COMPUTEX, as more cores and memory are packed into the same-size package, said Andrew Huang, Senior High-end Systems Professional on the Board of Directors at Lenovo Global Technology (Taiwan). Everyone might be talking about water cooling, but it is not necessarily the most power-efficient solution for every rig. Just as not every car buyer needs to go from zero to 60 in a few seconds, not every client needs the most cutting-edge cooling system, Huang said.

With the age of AI upon us, it will take more than a different CPU, water cooling, or software to solve power and cost constraints. “I think you have to really reimagine what the infrastructure is and look for efficiencies everywhere,” Awad said. “In the end, what will win is the right compute for the job.”
