PRESS RELEASE

from WEKA

WEKA Accelerates AI Factory Deployment Times From Months to Minutes with Turnkey NVIDIA AI Data Platform Solution

EQS-News: WEKA / Key word(s): Product Launch

16.03.2026 / 21:35 CET/CEST
The issuer is solely responsible for the content of this announcement.


New NeuralMesh AI Data Platform Closes the Gap Between AI Proof-of-Concept and Profitable Production, Delivering Scalable Business Intelligence and Faster AI Outcomes with NVIDIA

SAN JOSE, Calif. and CAMPBELL, Calif., March 16, 2026 /PRNewswire/ -- From GTC 2026: WEKA, the AI storage and memory systems company, today announced general availability of its enterprise-ready NeuralMesh™ AI Data Platform (AIDP), which delivers composable, high-performance infrastructure optimized for AI Factory deployments. Based on the NVIDIA AI Data Platform reference design, the solution is an end-to-end system that accelerates the delivery of AI-ready data to AI factories. The result: AI project timelines speed up from months to minutes, empowering organizations to deliver production-scale agentic AI applications using best-in-class technologies across their ecosystem.

WEKA and NVIDIA accelerate enterprise-ready AI factories

Leveraging NeuralMesh's uniquely adaptive architecture, the solution addresses the most persistent obstacle in enterprise AI: organizations can demonstrate that AI concepts work in proof-of-concept (POC) deployments but consistently struggle to reach production scale.

Built on more than 170 patents and over a decade of AI-native storage innovation (a foundation no competing storage platform can replicate), NeuralMesh is the only solution that gets faster and more resilient as AI environments scale to exabytes and beyond. As AI Factory data infrastructure becomes a critical layer in enterprise AI architecture, NeuralMesh is helping customers close the gap between POC and production deployments today. Customers running NeuralMesh with Augmented Memory Grid™ can achieve 6.5x more tokens per GPU for inference workloads, reflecting the compounding advantage of a purpose-built architecture over retrofitted infrastructure.

"Enterprises are now deploying AI Factories internally, driving a major shift to inference throughout the ecosystem. These companies require rapid AI outcomes and need turnkey solutions that come with the enterprise table-stakes of reliability, security, and optimal price-performance," said Liran Zvibel, cofounder & CEO at WEKA. "WEKA's NeuralMesh AIDP gives organizations everything they need to run always-on AI factories: extreme storage performance and the flexible architecture required to operationalize AI at production scale. Whether an organization is just beginning its AI journey or running full-stack NVIDIA deployments, NeuralMesh AIDP scales seamlessly as they grow."

"The deployment of agentic AI in production demands a new focus on managing the continuous, coherent flow of data and inference context," said Jason Hardy, vice president, storage technologies at NVIDIA. "By leveraging the NVIDIA AI Data Platform, solutions like WEKA's NeuralMesh AIDP deliver the persistent context tier necessary for stable and high-scale agentic inference."

One System, Every AI Workload: Delivering End-to-End AI Factories

AI factories provide enterprises with purpose-built production systems designed to operate AI at scale, but they demand storage capabilities that extend beyond where data sits to actively support context and continuous data movement. NeuralMesh, WEKA's intelligent, adaptive storage system, delivers the continuous data-loop performance that AI factory workloads demand.

Out-of-the-Box AI Applications Designed to Accelerate Business Outcomes

NeuralMesh AIDP enables enterprises and AI cloud providers to unify AI operations from retrieval to inference on a single, ready-to-deploy platform. With pre-integrated hardware and software options from NVIDIA (including NVIDIA RTX 6000 PRO Server Edition GPUs and the newly announced NVIDIA RTX 4500 PRO Server Edition GPUs) alongside Red Hat, Spectro Cloud and Supermicro, organizations can eliminate months of AI integration work.

The platform provides a simplified solution that allows teams to focus on intelligence output rather than managing underlying infrastructure. It delivers ready-to-use pipelines for a spectrum of business use cases across verticals, including Semantic Search, Video Search & Summarization (VSS), AlphaFold for drug discovery, AIQ/Agentic RAG, and more.

These AI applications are already being used by enterprise and research customers to drive outcomes across high-priority sectors:

  • Health & Life Sciences: Identify patient subgroups across multiple studies and accelerate discovery in data-intensive workflows such as cryo-EM.
  • Financial Services: Get early market signal detection as data lands and institutionalize knowledge access into a shared, secure resource.
  • Public Sector: Detect potential threats based on context and meaning, not keywords, and automate evidence synthesis across sources to improve decision-making cycles.
  • Physical AI & Robotics: Shorten the loop from real-world data capture to retrained model deployment, improving fleet performance, reliability, and time to market.

"The missing piece in production AI isn't reasoning models or compute power. It's having an efficient platform that unifies the AI Factory pipeline and makes it truly scalable," said Shimon Ben-David, CTO at WEKA. "The NeuralMesh AIDP was designed to close AI's production and profitability gap, taking enterprise experiments to full-scale operations and making AI economically viable for everything from next-generation agents to healthcare applications."

Supporting Partner & Customer Quotes

"Getting AI to production requires more than technology; it requires consistency and control. By using the NeuralMesh AI Data Platform with Red Hat AI Enterprise, based on Red Hat OpenShift, organizations can run data-intensive AI pipelines across on-premises and cloud environments at the scale that enterprise production demands, without sacrificing governance or security," said Ryan King, vice president, AI and Infrastructure Partners at Red Hat.

"The real challenge in AI is no longer training models. It is running them reliably in production, at scale, with predictable performance and cost. That's where most AI initiatives stall. The NeuralMesh AI Data Platform integrates with our AI Acceleration Cloud, Neysa Velocis, to solve that problem directly. It gives teams a way to run AI workloads as dependable systems, without carrying the operational burden of stitching together complex infrastructure," said Anindya Das, cofounder and CTO at Neysa.

Availability
The NeuralMesh AI Data Platform solution is available now, delivered as an appliance-style system. Organizations can learn more at weka.io/nvidia or visit WEKA at GTC 2026, booth #1034 for a demo.

About WEKA

WEKA is transforming how organizations build, run, and scale AI workflows with NeuralMesh™ by WEKA®, its intelligent, adaptive mesh storage system. Unlike traditional data infrastructure, which becomes slower and more fragile as workloads expand, NeuralMesh becomes faster, stronger, and more efficient as it scales, dynamically adapting to AI environments to provide a flexible foundation for enterprise AI and agentic AI innovation. Trusted by 30% of the Fortune 50, NeuralMesh helps leading enterprises, AI cloud providers, and AI builders optimize GPUs, scale AI faster, and reduce innovation costs. Learn more at www.weka.io or connect with us on LinkedIn and X.

WEKA and the W logo are registered trademarks of WekaIO, Inc. Other trade names herein may be trademarks of their respective owners.

WEKA: The Foundation for Enterprise AI

Photo - https://mma.prnewswire.com/media/2934370/WEKA_and_NVIDIA.jpg
Logo - https://mma.prnewswire.com/media/1796062/WEKA_v1_Logo_new.jpg

View original content: https://www.prnewswire.co.uk/news-releases/weka-accelerates-ai-factory-deployment-times-from-months-to-minutes-with-turnkey-nvidia-ai-data-platform-solution-302714389.html


16.03.2026 CET/CEST Dissemination of a Corporate News, transmitted by EQS News - a service of EQS Group.



