
Global Best Companies of the Year 2025

CIO Bulletin

MemryX – Redefining Edge AI with High-Performance Accelerators Engineered for Transformative Industry Impact

The world demands AI that’s not only intelligent but also agile, accessible, and ready to empower every industry. MemryX rises to this challenge, combining visionary innovation with purpose-built solutions to redefine Edge AI for real-world deployment. Guided by a team of pioneers, MemryX is crafting a future where technology amplifies human potential, seamlessly integrating into the fabric of everyday life.

Co-founded in 2019 by Dr. Wei Lu, an IEEE Fellow and renowned Professor of Electrical Engineering at the University of Michigan, alongside a dedicated team, MemryX was born to transform Edge AI. Dr. Lu’s expertise in memory devices and memory-centric computing, paired with the strategic leadership of President and CEO Keith Kressin, drives the company’s mission. With over 25 years in the semiconductor industry, including leadership roles at Qualcomm, Intel, and Texas Instruments, Kressin brings unparalleled insight into product management and silicon innovation.

Together, they envisioned a flexible, scalable core architecture for Edge AI, prioritizing efficient data movement and user-friendly software. Rigorously tested through prototypes (MX1 and MX2) from 2020 to 2022, MemryX’s technology ensures seamless deployment across diverse markets. Poised for production with its MX3 chip, MemryX is set to transform markets with high-volume solutions.

At CIO Bulletin, we had the privilege of interviewing Keith Kressin, President and CEO of MemryX, who shared captivating insights into how MemryX’s human-centered approach is revolutionizing Edge AI. With a passion for empowering industries through efficient, transformative, and accessible technology, Kressin illuminated the company’s bold journey to shape a smarter, more connected future.

Interview Highlights

MemryX emerged from a vision at the University of Michigan. Can you take us back to those early days—what problem were you aiming to solve, and what inspired the founding of MemryX?

We started the company in early 2019 using technology from Michigan, together with the co-founders, students, and postdocs. The biggest need we identified was that data movement often dictates how efficiently artificial intelligence workloads can be processed. We recognized that conventional graphics-based or general-purpose architectures, while capable for training, are not optimal for efficient AI computing at the Edge.

Today, workloads are rapidly shifting toward AI-based applications. So, we founded the company with a concept focused on solving this challenge. While the bottleneck is largely data movement, many efforts still focus on improving the compute itself.

Our goal is to supply AI accelerators that deliver strong AI metrics—power, performance, latency, and accuracy—right out of the box. That means our hardware doesn’t require any special training or tuning of the AI model to run efficiently. This allows us to offer customers the tools to compile, run, and integrate AI models, without needing to provide pre-trained models (e.g. a “model zoo”) or rely on custom engineering support.

To learn more about our vision and approach, watch the 2024 roundtable discussion featuring four MemryX executives: https://www.youtube.com/watch?v=X-LlalHbeZk

Your MX3 chip and M.2 module power everything from smart vision to industrial applications. How does MemryX ensure these solutions are approachable for diverse users, from startups to global enterprises, and what excites you most about their potential?

Our MX3 platform was designed from the ground up to deliver exceptional performance, power efficiency, and ease of use, out of the box. Let me explain several of the steps we took to reduce software complexity for AI. First, our focus is on inference, the application of AI models at the Edge, rather than training those models. Our solution is a dedicated AI accelerator that runs the entire model without burdening the host processor. The host simply sends the data to be processed to our accelerator, which returns the results. So there is no dependency on or burden placed on the host, no competing with other functions for memory bandwidth, and none of the bus contention or cache-coherency complexities that integrated AI accelerators require.

Next, we don’t use complex sets of RISC-V processors, DSPs, or other off-the-shelf IP blocks and then spend time deciding which portions of each AI model to place in each processing block. We use an in-house design with our own ISA, dedicated specifically to AI operation at scale. We don’t have any caches or prefetchers. We use a dataflow architecture with “at-memory computing”, that is, memories co-located with the processing elements, which minimizes data movement (helping power) and reduces software complexity. I also believe we are the only hardware vendor that doesn’t even use a NoC (network on chip), further simplifying on-chip data movement and management.

Finally, our architecture is designed to put a greater burden on the software compiler, which informs the dedicated hardware how to run each model efficiently. All of this enables us to run trained AI models efficiently without first requiring the model to be modified to fit our hardware. The result is that we can one-click compile thousands of models that can be immediately implemented using our APIs. The user starts with a trained model and uses our automated tools to build an executable file that is used at runtime and interfaces to the application through APIs. Out of the box, users get excellent power, performance, and accuracy. And of course, to squeeze every bit of performance and power from each AI model, automated tools allow the customer to make tradeoffs in performance, power, and accuracy if they desire.
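The compile-once, then run-through-APIs workflow described here can be sketched in a few lines of Python. This is purely a conceptual illustration: the names (`compile_model`, `Accelerator`, `infer`) and the toy dot-product “model” are hypothetical placeholders, not the actual MemryX SDK.

```python
# Conceptual sketch of a "compile trained model -> executable -> runtime API"
# workflow. All names are hypothetical, not the real MemryX toolchain.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class CompiledModel:
    """Stands in for the executable file produced at compile time."""
    name: str
    run: Callable[[List[float]], List[float]]

def compile_model(name: str, weights: List[float]) -> CompiledModel:
    """Offline step: map a trained model onto the accelerator.
    Here the 'model' is just a fixed-weight dot product."""
    def run(inputs: List[float]) -> List[float]:
        return [sum(w * x for w, x in zip(weights, inputs))]
    return CompiledModel(name=name, run=run)

class Accelerator:
    """Runtime API: the host sends inputs and receives results; the
    accelerator owns the whole model, so no host-side model juggling."""
    def __init__(self, model: CompiledModel):
        self.model = model
    def infer(self, inputs: List[float]) -> List[float]:
        return self.model.run(inputs)

# Offline: one compile step produces the executable artifact.
exe = compile_model("classifier", weights=[0.5, -1.0, 2.0])
# Runtime: the application only talks to the API.
accel = Accelerator(exe)
print(accel.infer([1.0, 2.0, 3.0]))  # -> [4.5]
```

The point of the sketch is the separation of concerns: compilation happens once, offline, and the application thereafter interacts only with a small inference API.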

Your Developer Hub offers tools and tutorials for students, hobbyists, and educators. What’s your vision for growing this community, and how do you hope to inspire the next wave of AI innovators?

It’s our architecture: what we chose to put in hardware versus software. No NoC, at-memory computing, and a pipelined dataflow architecture greatly simplify the process. A single executable file is produced at compile time, and APIs let the application send source data and receive the AI model’s metadata output. The entire model’s weights are loaded on-chip for dataflow execution, rather than loaded layer by layer from external memory. Everyone claims to be “easy to use,” but then deployment takes three to six months; we’re different, and that’s best demonstrated by sampling and trying our solution. Because MemryX technology is designed for fast, easy deployment, our Developer Hub empowers students, educators, and innovators with tools that make Edge AI accessible. The entire AI core and software stack are done in-house, all internal IP, built from the ground up for AI. This results in very high utilization, offline compilation of a dataflow program, and robust board API support.

With teams spanning Ann Arbor, Bangalore, Taipei, and Hsinchu, how does MemryX foster a tight-knit, creative culture across borders? What’s a unique way your global perspectives fuel breakthrough ideas?

Dedication. Team members understand both their individual roles and MemryX’s greater role in this field. It’s an exciting time for the Edge AI industry. Both new and experienced team members tackle our goals and challenges with vigor. We’re proud of each and every person at MemryX.

Being recognized as a Global Best Company in 2025 is a significant milestone. How does MemryX plan to leverage this recognition to expand its global footprint, particularly in emerging markets where Edge AI could address unique challenges?

We are focused on solving real-world problems with scalable Edge AI platforms, and we appreciate this recognition as it affirms our commitment to delivering innovation that empowers industries. Our goal is to continue solving problems.

MemryX | Leadership

Keith Kressin is the President and CEO of MemryX. A visionary leader with more than 25 years of semiconductor industry expertise, he drives MemryX’s strategic direction and growth. As an SVP/GM at Qualcomm for 13 years, he spearheaded transformative businesses in AR/VR, PC ecosystems, and AI accelerators for cloud computing. His tenure at Intel (8 years) and Texas Instruments (4 years) further solidified his mastery of semiconductor technologies, product innovation, and global market expansion. Keith’s proven track record in scaling businesses, defining silicon roadmaps, and executing bold strategies positions MemryX at the forefront of the AI revolution.

Dr. Wei Lu is the Co-Founder and CTO of MemryX. A pioneer in next-generation computing, Dr. Wei Lu combines academic brilliance with entrepreneurial success. A renowned IEEE Fellow and University of Michigan EECS professor since 2005, his research laid the groundwork for MemryX’s breakthrough architecture. Before co-founding MemryX in 2019, he co-founded Crossbar Inc. (2010), a leader in resistive RAM (RRAM) technology. Dr. Lu’s unparalleled expertise in neuromorphic computing, memory devices, and in-memory computing systems cements MemryX’s technical leadership in the AI hardware landscape.

“Our goal is to deliver AI accelerators that provide exceptional power, performance, latency, and accuracy—right out of the box. With no need for special training or tuning, our hardware lets customers seamlessly compile, run, and integrate AI models without relying on pre-trained model libraries or custom engineering support.”
