
We’re excited to welcome Victor Fei, Founder & CEO of Ormi Labs.
Victor has an engineering background in high-performance infrastructure. Before starting Ormi, he worked on core components of Microsoft Edge and Chromium and led client-side AI projects for the browser. He studied electrical and computer engineering at Cornell and conducted research under Professor Emin Gün Sirer, Founder and CEO of Avalanche and Ava Labs. Victor now focuses on building real-time blockchain data infrastructure that can support mainstream-scale usage. He holds two US patents.
Chris: Ormi positions itself as “the next-generation data layer for Web3” — what specific bottlenecks or failures in existing indexers (The Graph, Goldsky, blockchain RPCs) inspired you to build Ormi from scratch?
Victor: Existing indexing stacks were never designed for the throughput we see today. Most depend entirely on cloud infrastructure that was built for convenience instead of performance. Under load, they slow down, throttle requests, or fail outright. When block cadence and chain activity increase, these weaknesses become real production risks.
We built Ormi from the ground up with a vertically integrated architecture that combines bare-metal performance with elastic cloud scaling. This lets us control every part of the pipeline, from ingestion all the way to query serving.
We have seen major protocols migrate to Ormi after their previous providers failed during high traffic. During one large token launch, we deployed a custom environment in five hours to stabilize traffic that had overwhelmed their indexer. There have also been cases where a single cloud region outage took down the largest prediction market because the data provider had no redundancy. These situations make it clear that the data layer needs a stronger foundation.
Chris: You launched during a period of explosive chain growth and increasingly high-throughput networks. Was this timing intentional, and how do you see the indexing/data landscape evolving over the next 3–5 years?
Victor: We started by building DeFi primitives for reputation-based lending and quickly realized the real bottleneck went far beyond smart contracts. Most of the focus in the industry was on contract security, but it became clear that data indexing would become a serious challenge that could impact the security, reliability, and integrity of any on-chain application. As networks grew, data pipelines became fragile, RPC responses became inconsistent, and indexing often fell behind the chain.
Subgraphs are still the strongest standard for structured, real-time blockchain data, but running them reliably at scale is difficult. Data streams help in certain cases, but they shift reorg handling and validation back to developers.
With providers like Alchemy ending subgraph support, the market is converging towards a small group of performance-focused indexers. Over the next few years, indexing will move closer to the chain with lower latency, stronger redundancy, and infrastructure designed for high-frequency applications. This is the direction we are committed to.
We have already helped dozens of teams migrate from Alchemy to Ormi. If you’re looking to explore alternative subgraph providers, check out our migration guide or contact us.
Chris: Ormi uses a vertically integrated stack merging bare-metal speed with cloud elasticity. What challenges did you face building a hybrid architecture that can survive cloud outages and still stay synced with fast chains?
Victor: We designed Ormi to maintain redundancy and consistency across every layer: RPC access, compute, indexing, storage, and query.
Our control plane monitors latency and system health across regions. It routes traffic between bare-metal and cloud clusters based on real-time performance. If a bare-metal region starts to degrade, workloads shift to cloud capacity to keep applications running smoothly.
We also built a proxy layer that continuously checks RPC health across both in-house and commercial nodes. It verifies block height, finality, and data consistency. If a node lags, misses blocks, or provides inconsistent data, the system reroutes traffic immediately. Redundancy is fundamental to how the platform works.
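The routing idea Victor describes can be sketched in a few lines: compare each node’s reported chain head against the best-known height and serve traffic only from nodes within a lag tolerance. This is a hypothetical simplification for illustration; the names (`NodeStatus`, `select_healthy_nodes`) and the two-block tolerance are assumptions, not Ormi’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class NodeStatus:
    name: str
    block_height: int     # head block the node reports
    healthy: bool = True  # e.g. responded within the latency budget

MAX_LAG_BLOCKS = 2  # assumed tolerance before a node is routed around

def select_healthy_nodes(statuses: list[NodeStatus]) -> list[str]:
    """Return names of nodes close enough to the chain tip to serve traffic."""
    live = [s for s in statuses if s.healthy]
    if not live:
        return []
    tip = max(s.block_height for s in live)
    return [s.name for s in live if tip - s.block_height <= MAX_LAG_BLOCKS]

statuses = [
    NodeStatus("in-house-1", 1_000_002),
    NodeStatus("in-house-2", 1_000_001),
    NodeStatus("vendor-a", 999_990),                    # lagging: excluded
    NodeStatus("vendor-b", 1_000_002, healthy=False),   # timed out: excluded
]
print(select_healthy_nodes(statuses))  # ['in-house-1', 'in-house-2']
```

A production proxy would of course poll real RPC endpoints and also verify finality and response consistency, but the routing decision reduces to a filter like this one.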
Chris: Building a real-time indexing platform at this scale is difficult. What in your team’s background makes Ormi best suited to solve this problem?
Victor: Our team has deep experience in high-performance systems, large-scale data pipelines, and AI applications where accuracy and latency directly affect billions of dollars in value. We have built browser infrastructure used by billions of people, financial systems for bulge-bracket Wall Street banks, and real-time digital asset platforms.
Our engineering background spans low-level systems design to large-scale, high-performance services, including core work on Microsoft Edge and Chromium. This gives us the edge to design data infrastructure from the ground up with low-latency paths, fault tolerance, and predictable performance at global scale.
We also bring more than 50 years of combined experience building high-performance financial systems, including arbitrage trading and real-time detection solutions that must remain accurate under heavy load. These are the same demands faced by high-throughput chains, trading platforms, and institutional applications today.
We understand what it takes to run mission-critical systems where data accuracy and freshness are non-negotiable. That is why we are uniquely equipped to deliver real-time indexing that stays fast, accurate, and reliable at any scale.
Chris: Ormi claims to stay synced before blocks even finalize — how is this achieved without sacrificing accuracy or consistency?
Victor: We run multiple synchronized RPC endpoints per chain. Our proxy compares responses from all of them in real time. This lets us detect reorgs or inconsistencies as they happen. When discrepancies appear, our system corrects the data stream within milliseconds, staying aligned with the latest valid block without sacrificing accuracy.
Chris: Supporting 70+ chains for subgraphs and 100+ for APIs is massive. How do you architect a system that scales horizontally across chains with very different finality models, RPC quality, and block structures?
Victor: Scaling to this level requires more than infrastructure. It requires a deep understanding of each chain at the protocol level and how dApps operate at the smart contract level. Our team studies each network’s data model, consensus system, and finality rules. We also work closely with ecosystem teams and RPC providers to ensure accuracy.
Unlike most subgraph providers, we do not treat every chain the same. Ormi tunes indexing behavior, caching, and runtime settings based on the chain. This approach lets us support a wide range of networks, including EVM, rollups, and non-EVM chains, while keeping consistency and freshness. Choosing not to support as many chains as other providers is a deliberate strategic decision.
Chris: What criteria do you use when deciding to onboard a new chain? Speed? Developer demand? RPC reliability?
Victor: We consider developer traction, ecosystem maturity, RPC reliability, and whether the chain is driving meaningful volume. We are selective. Once a chain meets our technical and ecosystem requirements, we design a native integration tailored to its specific block cadence and data characteristics.
Chris: 0xGraph claims 5× faster indexing and <30ms queries. What innovations allowed such dramatic performance improvements over legacy subgraph providers?
Victor: Our performance gains come from optimizing every stage of the pipeline.
Traditional indexers rely entirely on third-party solutions, from RPCs to managed cloud services. Ormi uses its own RPC proxy, bare-metal compute, and a custom indexing runtime. This reduces network hops, removes throttling, and keeps data synchronized at the tip of the chain. This is what allows indexing to run up to 5x faster with consistent sub-30 ms query times.
In fact, I cover this extensively in a 5-Point Guide that I’ve been distributing online to developers looking to optimize their data access speed.
Chris: What are the most common migration issues teams face, and how does Ormi help solve them?
Victor: We have seen no major issues to date, and migration is fast. Subgraphs are portable through IPFS, and we offer a one-click migration tool that accepts an existing deployment ID to recreate the environment on Ormi. Our validation layer automatically checks compatibility, schema design, and indexing health.
Chris: You’ve highlighted Dolomite’s performance jump after switching to Ormi. What types of dApps see the most dramatic gains — DeFi, RWAs, gaming, AI agents?
Victor: High-frequency DeFi protocols and RWA platforms see the largest gains because their logic depends directly on latency and consistency. Teams report faster queries, fewer sync issues, and real-time accuracy during traffic spikes. Many fast-growing applications, including perps, prediction markets, and other market-driven systems, rely on this level of precision.
Chris: What are the most requested custom features from large protocols or institutions?
Victor: Most large partners choose Custom Environments. These provide isolated deployments with dedicated compute, storage, and indexing pipelines. They often request compliance controls, tunable indexing logic, advanced observability, and custom SLAs. Our engineering team works closely with them to design and tune the infrastructure based on their performance requirements.
Chris: Where do you see Ormi in 5 years — a data layer, a cross-chain intelligence layer, or something even bigger?
Victor: We believe the industry is moving closer to mass adoption as policies mature and institutions enter markets like stablecoins and RWAs. For us, this means building infrastructure that can handle real mainstream traffic, serving billions of users. We stay at the tip of the chain today for high-throughput apps, but the demand will only grow as more users and applications come online. Our job is to stay ahead of that curve and continue pushing performance forward.
To make blockchain truly ubiquitous, someone has to invest in speed and reliability at scale. That is our priority. We are also looking at expanding vertically into new use cases, including data intelligence and streaming, although most demand today remains in subgraphs and our data API. Many data providers cover a broad set of solutions, but few invest deeply in raising performance. That is where we are committed to pushing forward.
About Ormi Labs
Ormi is the next-generation data layer for Web3, purpose-built for real-time, high-throughput applications like DeFi, RWAs, gaming, AI agents, and on-chain infrastructure. Today, Ormi powers ecosystems with over $50B in TVL, fueling the data behind some of Web3’s most critical systems.