Thursday, May 14, 2026

As Edge AI grows, AMD bullish on embedded microprocessor sector

While mushrooming hyperscale data centers dominate the news, AI is quietly driving growth in another infrastructure sector: the edge.

Edge AI deployments are rising on the promise of real-time responses, reduced network dependence, and, as physical AI capable of interacting with the real world is piloted, new opportunities for automation.

In fact, according to IDC, global investment in edge computing is expected to rise roughly 50%, from $232 billion in 2024 to nearly $350 billion by 2027.

It’s no surprise, then, that demand for AI-ready embedded microprocessors is also soaring. Unlike chips safely housed in data centers that can be upgraded as needed, embedded processors are specialized to operate in harsh, often constrained environments for several years while delivering maximum performance.

Global semiconductor giant Advanced Micro Devices (AMD) has been in the embedded space for decades, and in recent years, the company has expanded its embedded lineup to meet growing demand for Edge AI.

In an exclusive interview on Monday, May 4, Steven Fong, AMD corporate vice president for the Asia Pacific and Japan embedded business, discussed how trends in embedded microprocessors reveal which industries are adopting these technologies the fastest and how AMD has positioned its embedded x86 portfolio to meet the demands of today’s edge infrastructure.

Fong began the interview by highlighting how specific industries are driving the rapid adoption of edge applications because of the need for real-time response.

“At its core, edge computing is being driven by the need for real-time decision-making. Applications such as industrial automation, robotics, and intelligent infrastructure are adopting edge systems more rapidly because they cannot tolerate latency or dependence on centralized systems,” Fong said.

Fong identified the automotive, industrial automation, and healthcare sectors as industries to watch. In the automotive space, he explained that the shift toward software-defined vehicles, with digital cockpits and advanced driver assistance systems (ADAS), is increasing compute density.

As a result, most newly manufactured vehicles now rely on chips capable of handling AI inference, graphics, and real-time control on a single platform.

Meanwhile, smart factories and robots are increasing demand for low-latency processing of machine vision and control workloads in industrial applications. In healthcare, AI is fueling real-time imaging and diagnostics.

“[Across different industries,] the common thread is the need to combine real-time performance with reliability at scale, as these deployments become more complex and widely distributed,” Fong said.

“What is emerging is a distributed AI model, where the edge handles real-time inference, while centralized systems focus on training and large-scale processing. This shift is a key reason why Edge AI is seeing such rapid growth — it is enabling AI to move from experimentation into real-world deployment.”

Flexibility, scalability, and security

Compared to other AI deployments, Edge AI comes with its own considerations. Due to constrained environments and the long operational lifespan expected at the edge, AI-ready embedded processors must balance performance with flexibility, scalability, and reliability.

Fong explained that flexibility is essential because each deployment has distinct requirements.

“Different AI workloads place very different demands on the system. Traditional analytics workloads are often CPU-driven, while generative AI and more advanced models require parallel processing and acceleration. Emerging agentic AI and, more importantly, Physical AI introduce continuous decision loops that must process real-world inputs from sensors, interpret context, and act in real time,” Fong said.

AMD aims to meet these varied compute requirements through its broad x86 embedded portfolio, with Edge AI use cases served primarily by the EPYC Embedded and Ryzen Embedded processor lines.

“These embedded processors allow customers to deploy solutions across a wide range of use cases ranging from low-power edge devices to high-performance edge systems,” Fong said.

“The key difference lies in their target deployment environments… EPYC Embedded processors are primarily designed for infrastructure-level systems such as networking, storage, and industrial edge servers, where higher core density, memory capacity, and I/O scalability are required.”

In contrast, Ryzen Embedded models combine multiple components onto a single chip for space efficiency. The latest series released this year, the P100 and X100, combine AMD’s Zen 5 CPU architecture, an RDNA 3.5 GPU for real-time visualization and graphics, and an XDNA 2 NPU for low-latency, low-power AI acceleration.

“Ryzen Embedded processors provide a balance of compute and graphics performance within constrained environments while maintaining the same long-term availability,” Fong said.

Having a flexible lineup, however, is only part of the equation. As edge deployments grow into large-scale systems expected to endure for up to a decade, semiconductor companies must also ensure their portfolios can scale to keep pace with future workloads.

To address this, AMD promotes a holistic approach to scalability that includes both hardware and software.

Fong highlighted AMD’s “Zen” architecture, saying: “We maintain the same efficient Zen core architecture across our embedded, client, and data center portfolios, which allows customers to scale workloads from edge devices to enterprise and cloud environments.”

He also said AMD provides high memory bandwidth and optimized cache hierarchies to minimize performance bottlenecks and support system expansion for growing workloads.

AMD also offers long-term roadmap and software support, allowing customers to scale their systems while maintaining compatibility across product generations.

Another key part of AMD’s Edge AI strategy is ROCm, the company’s open software stack designed to avoid vendor lock-in and provide developers with tools for building embedded projects.

Among its features, ROCm supports machine learning and deep learning frameworks such as TensorFlow, PyTorch, and JAX. It also includes embedded AI-ready models and access to open-source compilers, runtimes, and libraries.

“Ultimately, [through ROCm] our approach is to provide an open and scalable software foundation that reduces complexity, accelerates development cycles, and enables embedded developers to adopt modern programming paradigms while maintaining the reliability and longevity their applications require,” Fong said.

As Edge AI projects expand the attack surface and introduce new vulnerabilities, companies are also prioritizing security and reliability.

“AMD embedded processors incorporate a range of hardware-based security features designed to help protect data, workloads, and system integrity in connected environments,” Fong said.

He cited hardware-validated boot mechanisms, memory encryption capabilities, data preservation features, and isolation mechanisms for virtualized environments among AMD’s built-in security technologies.

“With an extensive IP portfolio, advanced packaging, and cutting-edge process technologies, we continue to offer leading performance, scalability, and reliability. AMD Embedded is well-positioned for sustained leadership in a world increasingly defined by intelligent systems,” Fong said.
