From IO500 #3 to AI — What Our Core42 Collaboration Proved
In November 2025, at Supercomputing (SC25) in St. Louis, something remarkable happened in the IO500 10-Node Production list. The top three entries — #1, #2, and #3 — all ran on DAOS. Entry #3, submitted by Core42 on their Maximus-01 system, achieved those results using standard TCP networking. No RDMA. No exotic interconnects. Just Ethernet and NVMe.
We worked closely with Core42 to validate the Enakta Storage Platform at scale on their infrastructure — and the results exceeded what either team expected.
The headline numbers:
- Top three 10-Node Production entries: all powered by DAOS
- #3 on Ethernet, no RDMA
- 4× the combined score of the next 30 systems
What the benchmark actually showed
The IO500 measures both bandwidth (IOR) and metadata (mdtest) across a balanced mix of workloads. The overall score is a geometric mean of both — you can't get to the top on metadata alone. Placing #3 in the Production list on just 10 nodes, over TCP, puts this configuration ahead of systems running on significantly more hardware with significantly more expensive networking.
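The scoring rule above can be sketched in a few lines. The overall IO500 score is the geometric mean of the bandwidth score (GiB/s, from the IOR phases) and the metadata score (kIOPS, from the mdtest phases), which is why a lopsided system can't win on one axis alone. The numbers below are purely illustrative, not Core42's actual submission:

```python
import math

def io500_score(bw_gib_s: float, md_kiops: float) -> float:
    """Overall IO500 score: the geometric mean of the bandwidth
    score (GiB/s) and the metadata score (kIOPS)."""
    return math.sqrt(bw_gib_s * md_kiops)

# A balanced system beats a lopsided one with the same "total":
balanced = io500_score(100.0, 400.0)    # sqrt(100 * 400)  -> 200.0
lopsided = io500_score(1.0, 10_000.0)   # sqrt(1 * 10000)  -> 100.0
print(balanced, lopsided)
```

The geometric mean rewards balance: doubling metadata performance while halving bandwidth leaves the score unchanged, so climbing the list requires both sides of the workload mix.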
The IO500 Production list is specifically for configurations representative of real deployments, not one-off lab setups. As StorageNewsletter summarised, DAOS appears to be the winner across all the IO500 lists. The Register noted that the top two DAOS entries alone have four times the combined benchmark score of the next 30 storage systems.
Why Core42 matters
Core42, the AI infrastructure arm of Abu Dhabi's G42 group, operates some of the largest GPU clusters in the Middle East. They chose DAOS for Maximus not because it's niche — but because at the scale they're building, every percentage point of storage efficiency translates to millions of dollars in GPU utilisation. Their public endorsement of the results speaks for itself:
See Core42's announcement: Core42 on X · Raghu Cherukupalli on LinkedIn
The journey to get here
When we founded Enakta Labs in 2023, we bet on DAOS because the architecture is fundamentally different — user-space I/O, no kernel overhead, true distributed metadata, native NVMe. But an engine alone isn't a product.
Version 1.3 brought SMB and S3 integration, making the same high-performance engine accessible to Windows and macOS workstations — not just HPC clusters. We published a validated reference architecture with Kioxia over a year ago, built sub-10-minute failed-node recovery, and created tooling that lets you deploy a cluster in under an hour. The Core42 results proved it all at a scale that matters.
From storage to AI
The natural next step has been AI. We developed the native PyTorch integration for DAOS on behalf of Google and open-sourced it — the same integration that now underpins their Parallelstore service for AI/ML workloads. That work gave PyTorch applications direct access to DAOS storage, bypassing POSIX overhead entirely for training data loading and checkpoint I/O.
FlashActivate builds on that foundation. Rather than serving models through a conventional filesystem, FlashActivate leverages the storage platform's native throughput to activate model weights in under 200ms. The same architecture that earned IO500 #3 is now serving AI model weights at speeds that make traditional NFS-based serving look like a different era.
What's next
On the storage side, we're actively working to get the Enakta Storage Platform into the hands of teams that need this level of performance — whether that's media studios drowning in 8K footage, HPC labs pushing simulation boundaries, neoclouds building differentiated infrastructure, or enterprises replacing ageing NAS with something that actually scales. Version 1.3 is GA with SMB, S3, and PyTorch native access. If your current storage is the bottleneck, we'd love to show you what DAOS can do.
On the AI side, we're building the Enakta Labs AI Platform — in active R&D with close partners — to bring that same storage performance to bare-metal GPU operators running inference at scale. FlashActivate, our model activation layer, is where all of that I/O throughput becomes sub-200ms model cold starts.
Two products, one engine. We're proud of what the team has accomplished in a short time. And we're just getting started.
Interested?
Whether you need high-performance storage today or want early access to the AI Platform, we'd love to hear from you.