May 27, 2025
Article
As Generative AI (GenAI) continues to revolutionize industries, the demand for efficient, high-performance computing platforms has increased. The Versal AI Edge VE2302 System on Module (SoM) emerges as a powerful solution to accelerate Large Language Models (LLMs) at the edge and in data centres. Leveraging the adaptability of AMD Versal Adaptive SoCs, this SoM provides an optimal balance of compute efficiency, power optimization, and AI acceleration, making it an ideal choice for next-generation AI applications.
iWave is thrilled to collaborate with RaiderChip to drive innovation in AI acceleration, delivering high-performance GenAI LLM acceleration that seamlessly integrates hardware and software to bring AI intelligence to edge devices. iWave specializes in the design and manufacture of Versal FPGA System on Modules.
Running RaiderChip’s Meta Llama 3.2 LLM on the iW-RainboW-G57M Versal VE2302 System on Module, the model achieves an impressive 12 tokens per second, delivering fast and efficient AI performance. What makes this solution truly versatile is how easily it integrates into any edge device, making it ideal for applications such as chatbots, predictive analytics, and real-time AI assistants.
Experience iWave’s demo of the GenAI NPU Edge LLM on the Versal AI Edge VE2302 SoM in action. This efficient chatbot delivers responses at 12 tokens per second, ensuring fast, real-time interactions. Designed for ease of use, it seamlessly handles AI-driven conversations with minimal latency. Whether for customer support, automation, or AI research, this solution showcases the power of on-device LLM acceleration.
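To put the 12 tokens-per-second figure in interactive terms, the short sketch below estimates how long a chatbot reply would take to generate. The helper function and the 0.75 words-per-token ratio are illustrative assumptions (the actual ratio varies by tokenizer and language); only the 12 tok/s decode rate comes from the demo.

```python
def response_time_s(reply_words: int, tokens_per_s: float = 12.0,
                    words_per_token: float = 0.75) -> float:
    """Estimate generation time for a chatbot reply.

    tokens_per_s: measured decode rate (12 tok/s in the VE2302 demo).
    words_per_token: rough English average; an illustrative assumption.
    """
    tokens = reply_words / words_per_token
    return tokens / tokens_per_s

# A two-sentence, ~36-word reply is ~48 tokens, i.e. about 4 seconds.
print(f"{response_time_s(36):.1f} s")
```

By this estimate, short conversational turns complete in a few seconds, which is consistent with the real-time interactions the demo targets.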
GenAI NPU Edge LLM demo in iWave’s Versal AI Edge VE2302 System on module
Key Technical Features of iW-RainboW-G57M Versal System on Module:
The Versal AI Edge based System on Module is compatible with an extensive series of devices: VE2302/VE2202/VE2102/VE2002. The System on Module integrates 8 GB of LPDDR4 RAM, 128 GB of eMMC storage, and 256 MB of QSPI flash. Two high-speed expansion connectors and 122 user-configurable IOs expose a wide range of interfaces to the user.
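To relate the SoM’s 8 GB LPDDR4 budget to on-device LLMs, the hedged sketch below estimates the weight-only memory footprint for the publicly documented Llama 3.2 1B and 3B models at common quantization levels. The helper is illustrative, not an iWave or RaiderChip tool, and the demo’s actual quantization scheme is not stated here; KV cache and runtime overhead are excluded.

```python
def weight_footprint_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate memory needed for model weights alone (no KV cache)."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

for params in (1.0, 3.0):        # Llama 3.2 1B and 3B parameter counts
    for bits in (16, 8, 4):      # FP16, INT8, INT4 quantization
        gb = weight_footprint_gb(params, bits)
        print(f"{params:.0f}B @ {bits}-bit: {gb:.2f} GB")
```

Even the 3B model at 16-bit weights (~6 GB) fits within the 8 GB LPDDR4, and quantized variants leave ample headroom for activations and application code.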
The SoM supports a breadth of connectivity options: 28.21 Gbps high-speed transceiver blocks covering the protocols typical of edge applications, 40G multi-rate Ethernet, PCIe, and native MIPI support for the vision sensors that advanced AI applications depend on.
The iW-RainboW-G57M Versal AI Edge System on Module, integrated with the GenAI LLM accelerator, delivers the AI performance, low latency, and power-efficient processing required for next-generation NLP, conversational AI, and real-time GenAI workloads at the edge and in data centres. iWave provides a robust suite of tools, libraries, and software resources that empower developers to harness the full potential of the Versal AI Edge System on Modules.
iWave is a global leader in the design and manufacture of FPGA System on Modules and ODM design services. With over 25 years of diverse experience in the FPGA domain and strong design-to-deployment competence, iWave transforms your ideas into market-ready products that balance reliability, cost, and performance.
Looking for more insights? Contact us at mktg@iwave-global.com to explore how the Versal AI Edge SoM can revolutionize your AI solutions!
For more information visit www.iwave-global.com.