Optimizing LPDDR and DDR Memory for Cutting-Edge Apps
Infineon, Rambus solutions target software-defined automotive and AI inference architectures.
April 26, 2023
2 Min Read
Infineon Technologies’ SEMPER™ X1 low-power, double data rate (LPDDR) flash memory enables next-generation automotive E/E architectures. (Image: Infineon)
The need for memory that can handle the avalanche of data expected in future leading-edge applications such as automotive and artificial intelligence has spurred product innovations from several companies, most recently Infineon and Rambus.
Recently, Infineon Technologies launched a low-power, double data rate (LPDDR) flash memory to enable next-generation automotive E/E architectures. The Infineon SEMPER™ X1 LPDDR Flash offers safe, reliable and real-time code execution critical for automotive domain and zone controllers. According to Infineon, the SEMPER X1 delivers 8x the performance of current NOR Flash memories and achieves 20x faster random read transactions for real-time applications. This enables software-defined vehicles to deliver advanced features with enhanced safety and architectural flexibility.
Next-generation vehicles increasingly rely on state-of-the-art multi-core processors built on advanced manufacturing processes. Higher-density embedded non-volatile memories are no longer cost-viable at these advanced nodes, so system architects must consider external NOR flash for code storage.
Infineon’s SEMPER X1 targets next-generation automotive domain and zone controllers running safety-critical, real-time applications. The device delivers up to 3.2 GB/s of throughput over its LPDDR interface, enabling fast random read transactions for real-time code execution. Its multi-bank architecture supports over-the-air firmware updates with zero downtime, and the device is ISO 26262 ASIL-B compliant, offering advanced error correction and other safety features.
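The quoted 3.2 GB/s figure follows from the usual DDR bandwidth arithmetic: transfer rate times data-bus width, divided by 8 bits per byte. The sketch below assumes, purely for illustration, a 1600 MT/s transfer rate on a 16-bit bus; Infineon does not state the SEMPER X1's exact bus width or clock in this article, so those parameters are hypothetical.

```python
def ddr_bandwidth_gbs(transfer_rate_mtps: float, bus_width_bits: int) -> float:
    """Peak DDR-style interface throughput in GB/s.

    transfer_rate_mtps: transfers per second, in millions (MT/s)
    bus_width_bits: width of the data bus in bits
    """
    # MT/s * bits / 8 gives MB/s; divide by 1000 for GB/s
    return transfer_rate_mtps * bus_width_bits / 8 / 1000

# Hypothetical configuration (not confirmed by Infineon): 1600 MT/s, x16 bus
print(ddr_bandwidth_gbs(1600, 16))  # -> 3.2
```

Other combinations (e.g. a narrower bus at a higher transfer rate) yield the same headline number, which is why the bus width here is explicitly an assumption.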
The Infineon SEMPER X1 is sampling now, with commercial availability expected in 2024.
Optimizing DDR for AI
In a separate announcement, memory interface supplier Rambus introduced an interface subsystem solution for GDDR6 memory that supports the high-bandwidth, low-latency requirements of AI and machine learning (ML) inference. The subsystem delivers market-leading performance of up to 24 Gb/s per pin, providing 96 GB/s of bandwidth per GDDR6 memory device.
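The per-device figure can be checked from the per-pin rate: a JEDEC GDDR6 device exposes a 32-bit data interface (two 16-bit channels), so 24 Gb/s on each of 32 pins, divided by 8 bits per byte, gives 96 GB/s. A minimal sketch of that arithmetic:

```python
def gddr6_device_bandwidth_gbs(gbps_per_pin: float, data_pins: int = 32) -> float:
    """Per-device GDDR6 bandwidth in GB/s.

    A JEDEC GDDR6 device has a 32-bit data interface
    (two independent 16-bit channels).
    """
    # Gb/s per pin * pins / 8 bits per byte = GB/s
    return gbps_per_pin * data_pins / 8

print(gddr6_device_bandwidth_gbs(24))  # -> 96.0
```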
The Rambus GDDR6 interface is fully compliant with the JEDEC GDDR6 JESD250C standard and consists of a co-verified PHY and digital controller. The PHY is available in advanced FinFET nodes for leading-edge SoC integration. The digital controller maximizes memory bandwidth and minimizes latency via Look-Ahead command processing. The controller is DFI compatible (with extensions for GDDR6) and supports AXI or native interface to user logic.
GDDR6 was originally intended for the gaming and graphics market, but its combination of bandwidth, capacity, latency, and power suits it to AI applications as well. In addition, its use of standard DRAM manufacturing processes makes it a cost-effective choice.
Rambus works directly with customers to provide full-system signal and power integrity (SI/PI) analysis, creating an optimized chip layout. Customers receive a hard macro solution with a full suite of test software for quick turn-on, characterization, and debug.
Spencer Chin is a Senior Editor for Design News covering the electronics beat. He has many years of experience covering developments in components, semiconductors, subsystems, power, and other facets of electronics from both a business/supply-chain and technology perspective. He can be reached at [email protected].