Boosting performance with inline acceleration

Inline accelerators improve the performance of virtualized radio networks.

Growing demand for applications that require peak acceleration per unit of data is providing a foothold for inline accelerator cards, which could mean new opportunities for some providers and a potential threat for others.

For years, CPUs, or CPUs paired with FPGA accelerators, have satisfied most of the market’s needs. But the rapid accumulation of data everywhere, coupled with the need to process it in place: at the endpoint, at the edge, and in the cloud, has made moving that data faster a priority. The big questions now are how best to meet that challenge, and whether new technologies will augment or displace existing solutions.

One approach is hardware acceleration, the use of additional silicon to speed up data processing and ease some of the CPU load. Proponents of this approach point out that Moore’s Law is not keeping pace with the demands of the cloud computing age, which means processors can’t scale at the rate that hyperscalers and network operators want. If true, that represents a challenge for the company Gordon Moore co-founded. Today, Intel supplies many of the processors that run the servers managing cloud workloads, and it dominates emerging virtualized wireless networks.

Intel says its processors and FPGAs are perfectly capable of accelerating data as needed, and the chip giant has yet to introduce an inline accelerator card that processes data before it reaches the processor. Meanwhile, many of Intel’s competitors are entering this market, arguing that standard servers need help with demanding workloads.

Inline accelerators are among the assets AMD acquired in its recent $49 billion acquisition of Xilinx. Jamon Bowen, director of CETA DCCG product planning at AMD, said the company aims to offload network virtualization in radio networks, as well as other complex use cases including high-performance computing, secure offload, financial trading, and video encoding.

While inline acceleration is new, Bowen noted that traditional inline processing has historically driven demand for FPGAs. “What’s new for us is having standard products that work with more defined software frameworks,” he said.

Stefan Pongratz, vice president and industry analyst at Dell’Oro Group, said RAN virtualization has proven effective for some narrowband deployments. He predicts that RAN virtualization in 5G will be “more similar to classic RAN, which uses optimized dedicated silicon.”

Marvell, Qualcomm, and AMD are marketing inline accelerator cards with telecom and cloud customers in mind. AMD positions its devices as data center workhorses, with FPGAs as part of the design. Meanwhile, Marvell and Qualcomm are targeting wireless network operators with ASICs.

These developments are consistent with efforts in other markets, where the focus on performance and low power consumption is driving chipmakers to develop more custom-designed solutions. So instead of a simple CPU or GPU, many of these chips are designed with a variety of processing elements, some of which are application-specific.

For baseband processing, the calculations are too complex for a general-purpose processor, said Joel Brand, Marvell’s senior director of product marketing. The company’s line of baseband processors includes a suite of 5G Layer 1 inline hardware accelerators, as well as Arm Neoverse cores.

Fig. 1: Complexity of 5G in the physical layer of the stack. Source: Marvell

“Think of it as a 5G NIC,” said Peter Carson, Marvell’s senior director of solutions marketing. “It’s similar to what has been used for years in smart NICs in the cloud.”

Qualcomm has also introduced a PCIe inline accelerator card, an ASIC designed to boost the performance of virtualized and cloud-native 5G network deployments by offloading compute-intensive 5G baseband processing from server processors.

Gerardo Giaretta, Qualcomm’s senior director of product management, described the card at a press conference ahead of Mobile World Congress as an SoC that combines DSPs, Arm processors, and integrated fronthaul NIC functionality (the fronthaul is the connection between the baseband and the radio).

“To hit a good cost and power point for the solution, you need these accelerators,” Giaretta said. “There are cases where you move to massive, higher-order MIMO, where the difference is significant.” The inline processing solution is also useful for network operators when they decide to pool baseband units in a centralized location.

“Qualcomm is a new entrant in the sub-6 GHz macro silicon space,” said Dell’Oro’s Pongratz. “Marvell’s baseband silicon is proven in the field.”

In fact, Marvell claims five design wins with network operators and cloud infrastructure providers. The only customer the chipmaker has been allowed to name is Dell, which has partnered with Marvell to integrate its accelerator into server hardware.

Meanwhile, Qualcomm has partnered with HPE. Geetha Ram, global head of Open RAN at HPE, described her company’s work with Qualcomm as strategically important. “Between the performance and the power consumption, it’s a bit like a marriage made in heaven,” she said in a recent presentation.

Ram said HPE has already deployed several thousand customer RAN sites with a lookaside architecture, where Layer 1 processing is split between server processors and PCIe cards, and network connectivity is handled by a separate card. Her team compared the performance, cost, and power consumption of those deployments to inline acceleration, running distributed and centralized RAN traffic scenarios with the same server CPU types across all scenarios. The studies indicated that overall cost savings could reach up to 60 percent with inline acceleration, she said.
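As a back-of-the-envelope illustration of where a figure like that could come from, the sketch below models server counts under the two architectures. The per-server site densities are hypothetical assumptions chosen for illustration, not HPE’s measurements; only the 60 percent result echoes the study.

```python
import math

def servers_needed(sites: int, sites_per_server: float) -> int:
    """Servers required to host a given number of RAN sites."""
    return math.ceil(sites / sites_per_server)

# Hypothetical densities: with inline offload one server hosts more sites,
# because Layer 1 no longer competes for CPU cycles.
lookaside = servers_needed(1000, sites_per_server=4)   # 250 servers
inline = servers_needed(1000, sites_per_server=10)     # 100 servers

savings = 1 - inline / lookaside
print(f"Server savings: {savings:.0%}")  # Server savings: 60%
```

Fewer servers also means proportionally less rack space and power, which is why the comparison covers all three metrics.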

Marvell and Dell are also courting network operators with their combined solution. Their PCIe card is designed to convert off-the-shelf servers into virtualized distributed units (vDUs) in carrier networks. It includes a fronthaul NIC and handles timing, synchronization, beamforming, and all other vDU Layer 1 functions.

Andrew Vaz, vice president of product management in Dell’s telecom systems business unit, said that today Layer 1 processing consumes up to two-thirds of the CPU capacity in a typical vDU. Offloading that work frees operators to use the servers for other functions, he said.
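The arithmetic behind that claim is simple: if Layer 1 eats two-thirds of a vDU’s CPU, fully offloading it roughly triples the capacity left for everything else. A minimal sketch (the two-thirds share is the only figure taken from the article):

```python
def remaining_capacity(l1_share: float, offloaded: bool) -> float:
    """Fraction of server CPU left for non-Layer-1 work."""
    return 1.0 if offloaded else 1.0 - l1_share

before = remaining_capacity(2 / 3, offloaded=False)  # one third remains
after = remaining_capacity(2 / 3, offloaded=True)    # everything remains
print(f"Capacity freed by offload: {after / before:.1f}x")  # 3.0x
```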

“We have essentially freed up a lot of compute capacity on those servers with these offload cards, enabling very attractive architectures and significant cost savings,” Vaz told reporters.

Some of those savings would come from deploying fewer servers. Intel, which has dominated the virtual RAN space with its x86-based FlexRAN solution, says this is the wrong approach. The company calls fixed-function inline acceleration “inflexible” and argues that the virtualized radio access network should accelerate workloads only as needed.

“Our approach is to consider the complete system to determine what improvements to build into processor instruction sets and where to place acceleration, so customers get the best combination of power, performance, and flexibility,” said Sachin Katti, CTO of Intel’s Network and Edge Group and co-chair of the O-RAN Alliance Technical Steering Committee. “For our chips with integrated acceleration, we integrate on-chip FEC acceleration with Xeon cores featuring 5G-specific instructions to deliver the benefits of acceleration with the flexibility of general-purpose processors. These features give our customers flexibility in how they map the right acceleration hardware to the right part of the vRAN workload, and thus meet operators’ needs better than a solution that forces the entire Layer 1 into a rigid hardware accelerator.”

Intel, which bought FPGA maker Altera in 2015, advocates FPGA-based acceleration and says programmability increases agility for network operators. The company validates and qualifies FPGAs submitted by OEMs. One of the great benefits of programmable logic is that it can evolve as acceleration requirements change or as new protocols and technologies are introduced or updated. The company has also created an open FPGA stack, delivered through Git repositories.

So far, Intel has signed more than a dozen partnerships for FPGA-based acceleration solutions. Affirmed Networks (owned by Microsoft), Altiostar (soon to be owned by Rakuten), Algo Logic, Arrive Technologies, Benu Networks, Bigstream, Cast, CTAccel, F5, Juniper Networks, Megh Computing, Myrtle.ai, Napatech, and rENIAC all accelerate workloads with Intel FPGAs.

Intel also enjoys broad support from major server manufacturers, including those that have partnered with inline accelerator card makers. Both Dell and HPE build servers that use Intel’s FPGA-based acceleration cards, as do Fujitsu, Quanta Cloud Technology, Supermicro, Inspur, and Kontron.

Intel says its FPGAs can be used in radio networks to perform lookaside acceleration for forward error correction. The company noted that the same FPGA that performs forward error correction can also be used for inline fronthaul compression in the RAN, as well as virtual switching and routing in the network core. Other uses of the card include inline acceleration of encryption functions such as IPsec, and DDoS mitigation.
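5G actually uses far heavier LDPC and polar codes, but a toy Hamming(7,4) encoder/decoder gives a feel for what forward error correction does, and why this bit-level work is a natural candidate for offload. This is purely illustrative, not Intel’s implementation:

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword (layout: p1 p2 d1 p3 d2 d3 d4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct any single-bit error, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # parity over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # parity over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # parity over positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3      # syndrome gives the 1-based error position
    if pos:
        c[pos - 1] ^= 1             # flip the corrupted bit
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                        # corrupt one bit in transit
assert hamming74_decode(word) == [1, 0, 1, 1]
```

Even this toy code touches every bit several times; production LDPC decoders iterate many times per block at gigabit rates, which is why FEC dominates Layer 1 compute budgets.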

However, operators may not get maximum performance from virtualized networks using existing solutions. HPE’s Geetha Ram said questions from network operators about how the hardware performed led to her company’s decision to partner with Qualcomm on an inline accelerator card. Similarly, Dell pointed out that increased performance is the main benefit operators can expect from its Marvell inline accelerator integration.

“This is a step in the right direction that will fill in some of the gaps,” Dell’Oro’s Pongratz said of the new dedicated acceleration hardware. He added that if operators want to deploy virtualized massive MIMO, dedicated hardware will be needed.
