You’re building an app on Neon and want your contract data to appear seamlessly in your frontend. How do you get that data fast, reliably, at scale, and with minimal headache?
Pulling directly from the blockchain isn’t a viable option for a production-ready app: you’d end up battling JSON-RPC rate limits, struggling to reconstruct historical data, and hard-coding workarounds that won’t scale. Hence, you need a middleware layer - an engine that queries, structures, and serves blockchain data on demand.
This is the role of blockchain indexers.
Neon’s dual nature - a problem for indexers?
Quick answer - no. The longer answer is:
On Ethereum, indexers like SubSquid, The Graph, and Envio have long been the backbone of many dApps. But only recently have these tools begun tackling Solana’s account-based data model.
What does this mean for Neon data, given that it is a Solana network extension that executes EVM bytecode on Solana?
Here’s how it works in practice: every Neon transaction is registered in Ethereum-style explorers (like Blockscout - read more in this article), as well as in Solana explorers like Solscan whenever a call to Solana programs is involved. A single Neon transaction may thus contain both Ethereum-compatible calls and underlying Solana instructions, and the resulting events can be indexed directly from the Neon network, giving developers the best of both ecosystems.
This design makes Neon indexer-friendly.
By supporting three proven frameworks - SubSquid, The Graph, and Envio - Neon gives developers the flexibility to choose the right approach: SubSquid for high-throughput analytics, The Graph for customizable subgraphs, and Envio for ultra-fast syncing with HyperSync.
Or you can mix and match: use SubSquid to batch-sync history, pipe that data into a subgraph for structured queries, and layer Envio HyperSync on top to stream the latest events in real time.
A bit more detail on Ethereum vs. Solana data indexing
Ethereum contracts store data internally, making logs and state diffs the backbone of indexing. Solana stores most data externally in accounts, which are mutable but ephemeral. Neon EVM extends Solana’s account history to better support tracing services, but it still requires developers to think carefully about how data is captured and persisted.
Indexers solve this by:
- Observing contract events (contracts deployed on Neon EVM are written in Solidity and use all the usual EVM tooling);
- Translating them into structured entities;
- Persisting them in a database;
- Serving them over a developer-friendly API (often GraphQL).
The challenge is less about whether you can get the data and more about how fast, reliable, and cost-effective the pipeline is.
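To make the “translate into structured entities” step concrete, here is a minimal conceptual sketch in TypeScript. The RawLog and TransferEntity types and the toTransferEntity helper are purely illustrative, not part of any specific indexer SDK; each framework below provides its own version of this machinery.

```typescript
// Conceptual sketch: turning a raw EVM log into a structured entity.
// RawLog, TransferEntity, and toTransferEntity are illustrative names only.
interface RawLog {
  transactionHash: string
  logIndex: number
  topics: string[]   // topic0 = event signature hash, then indexed params
  data: string       // ABI-encoded non-indexed params
}

interface TransferEntity {
  id: string         // stable ID, e.g. txHash-logIndex
  src: string
  dst: string
  wad: bigint
}

// After decoding Transfer(src, dst, wad) from the log, build the entity
// that gets persisted and later served over GraphQL.
function toTransferEntity(log: RawLog, src: string, dst: string, wad: bigint): TransferEntity {
  return { id: `${log.transactionHash}-${log.logIndex}`, src, dst, wad }
}
```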
So, let’s have a closer look at each of those indexers.
Walkthrough 1: SubSquid - optimized for throughput
Best for: High-throughput analytics, historical queries, and production-ready dashboards.
SubSquid is a decentralized query engine optimized for batch data extraction. Using its SDK, developers can batch process events, store them in PostgreSQL, and query via GraphQL.
On Neon, this means everything from NEON transfer histories to complex DeFi liquidity flows can be ingested at scale. A single squid (indexer) can ingest event logs, receipts, and traces, then serve them via GraphQL.
Example: a squid tracking WNEON transfers on Neon Devnet
In a nutshell, all you need to do is define a schema, add the contract ABI, write a processor that listens for the events and inserts them into Postgres, and run it against a Postgres database (typically spun up with Docker).
- Setup (Node.js 16+, Docker, and init a project) & install packages:

```bash
npm i @subsquid/evm-processor @subsquid/typeorm-store @subsquid/graphql-server
npm i typescript @subsquid/typeorm-codegen @subsquid/evm-typegen --save-dev
```

- Define schema (schema.graphql):

```graphql
type Transfer @entity {
  id: ID!
  src: String! @index
  dst: String! @index
  wad: BigInt!
}
```

- Generate models with npx squid-typeorm-codegen.
- Add ABI + processor - fetch WNEON’s ABI from NeonScan, then write a processor (main.ts) that listens for Transfer events and inserts them into Postgres (see the sketch after this list).
- Run + query - compile with npx tsc, start the processor, then run npx squid-graphql-server to query transfers at localhost:4350/graphql.
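For orientation, here is a minimal sketch of what such a processor (main.ts) might look like. It assumes the Transfer model generated from the schema above, an ABI module produced by squid-evm-typegen under ./abi/wneon, and a placeholder WNEON address; method names follow recent versions of the SubSquid EVM SDK, so treat it as a starting point rather than the exact tutorial code from the Neon docs.

```typescript
// main.ts - hedged sketch of a SubSquid processor for WNEON transfers.
// The WNEON address is a placeholder; ./abi/wneon is the typegen output,
// and ./model contains the Transfer entity generated from schema.graphql.
import { EvmBatchProcessor } from '@subsquid/evm-processor'
import { TypeormDatabase } from '@subsquid/typeorm-store'
import * as wneon from './abi/wneon'
import { Transfer } from './model'

const WNEON_ADDRESS = '0x...' // WNEON contract address on Neon Devnet (fill in)

const processor = new EvmBatchProcessor()
  .setRpcEndpoint('https://devnet.neonevm.org') // Neon Devnet RPC
  .setFinalityConfirmation(10)
  .addLog({
    address: [WNEON_ADDRESS],
    topic0: [wneon.events.Transfer.topic],
  })

processor.run(new TypeormDatabase(), async (ctx) => {
  const transfers: Transfer[] = []
  for (const block of ctx.blocks) {
    for (const log of block.logs) {
      // Decode Transfer(src, dst, wad) and map it onto the generated entity
      const { src, dst, wad } = wneon.events.Transfer.decode(log)
      transfers.push(new Transfer({ id: log.id, src, dst, wad }))
    }
  }
  await ctx.store.insert(transfers)
})
```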
For more detailed instructions, please go to the official Neon docs.
SubSquid is especially strong when you need to analyze millions of transactions quickly: for example, a DeFi dashboard showing liquidity flows or a research tool analyzing validator behavior.
Walkthrough 2: The Graph - composable & GraphQL-friendly
Best for: dApps needing flexible, community-familiar subgraphs.
The Graph pioneered the “subgraph” model, and Neon extends this workflow with minimal changes.
You create a subgraph that watches the contracts you’re interested in, continuously monitoring the blockchain through your chosen Neon RPC. Whenever a relevant event is emitted, the Graph node captures the log and processes it with a WebAssembly mapping script defined in the subgraph.
The script uses the subgraph’s GraphQL schema file to produce records, called entities, that capture the data you care about. These entities are stored in a database, where they can be queried via API requests.
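As an illustration, a mapping handler for a WNEON-style Transfer event could look like the sketch below. It is written in AssemblyScript (the TypeScript-like language The Graph compiles to WebAssembly) and assumes a data source named WNEON in subgraph.yaml plus a Transfer entity in the schema; the import paths mirror what graph codegen typically generates, so adjust them to your project.

```typescript
// mappings/wneon.ts - AssemblyScript mapping sketch (hypothetical names).
// "WNEON" is the assumed data source name in subgraph.yaml; graph codegen
// generates these bindings from the ABI and the GraphQL schema.
import { Transfer as TransferEvent } from "../generated/WNEON/WNEON"
import { Transfer } from "../generated/schema"

export function handleTransfer(event: TransferEvent): void {
  // Use txHash-logIndex as a stable, unique entity ID
  let id = event.transaction.hash.toHex() + "-" + event.logIndex.toString()
  let entity = new Transfer(id)
  entity.src = event.params.src.toHex()
  entity.dst = event.params.dst.toHex()
  entity.wad = event.params.wad
  entity.save()
}
```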
In the standard flow you:
- Deploy a contract to Neon EVM
- Collect the contract's address and block number
- Configure your subgraph.yaml to collect data on events emitted by the contract, using the address and block number from the previous step
- Deploy your subgraph
Please follow the more detailed instructions in the Neon docs.
The beauty of The Graph is its GraphQL interface, which gives developers a powerful and flexible way to query blockchain data. It can be integrated into broader data pipelines, neatly consumed in the backend, or accessed directly in the frontend, for example, by generating React hooks to pull data straight into a UI.
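As a small, hedged example, any frontend or script can POST a query to the subgraph’s endpoint; the URL and the transfers entity below are placeholders for whatever your deployed subgraph actually exposes.

```typescript
// Query the latest WNEON transfers from a deployed subgraph.
// SUBGRAPH_URL is a placeholder - use the query endpoint of your own subgraph.
const SUBGRAPH_URL = "https://<your-graph-node>/subgraphs/name/<your-subgraph>"

const query = `
  {
    transfers(first: 5, orderBy: wad, orderDirection: desc) {
      id
      src
      dst
      wad
    }
  }
`

async function fetchTransfers() {
  const res = await fetch(SUBGRAPH_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  })
  const { data } = await res.json()
  console.log(data.transfers)
}

fetchTransfers()
```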
On top of that, subgraphs are composable and reusable across projects, the workflow is familiar to Ethereum-native teams, and the ecosystem encourages sharing and forking existing indexing logic, all of which make building on Neon faster and more efficient.
Walkthrough 3: Envio - ultra-fast with HyperSync
Best for: Fast setup, real-time apps, and large-scale historical sync.
Envio is a next-gen indexer with HyperSync, a layer that bypasses JSON-RPC and achieves 100x faster sync for historical data on Neon.
Developers can spin up an indexer in minutes using the Contract Import feature - point it at an ABI, select events, and deploy.
Typical flow:
- Run envio init
- Choose network (Neon mainnet network ID: 245022934)
- Contract Import → provide contract ABI (abi.json)
- Deploy indexer and query via GraphQL
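Under the hood, Contract Import generates handler stubs that you can customize. As a rough sketch - assuming a contract named WNEON with a Transfer event was selected during import, and noting that the generated module and event fields depend on your config and Envio version - a handler might look like this:

```typescript
// src/EventHandlers.ts - hedged sketch of an Envio (HyperIndex) handler.
// "WNEON" and the Transfer entity are assumptions from a hypothetical config;
// check the generated code in your own project for the exact names.
import { WNEON } from "generated"

WNEON.Transfer.handler(async ({ event, context }) => {
  // Persist each transfer as an entity, keyed by chain + block + log index
  context.Transfer.set({
    id: `${event.chainId}_${event.block.number}_${event.logIndex}`,
    src: event.params.src,
    dst: event.params.dst,
    wad: event.params.wad,
  })
})
```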
Because HyperSync skips slow RPC calls, developers can sync years of Neon data in minutes instead of days. This makes Envio ideal for real-time dashboards, NFT marketplaces, or governance analytics where speed is non-negotiable.
Follow the more detailed instructions in the Neon docs.
Choose your fighter or mix-and-match
Enterprises and startups rarely have one-size-fits-all data needs. Some workloads demand scale for historical analytics (SubSquid). Others benefit from flexibility and composability for modular APIs (The Graph). Still others require real-time responsiveness to power live user experiences (Envio).
On Neon, these aren’t exclusive options. Developers can mix and match, using SubSquid to batch-process years of transfer data, The Graph to expose a clean API for frontends, and Envio to deliver instant updates to real-time dashboards. The result is a data layer that adapts to the unique requirements of each application.
Try it out for yourself
Find out more about Envio, SubSquid, and The Graph.
Go to the official Neon docs to experiment with SubSquid, The Graph, or Envio and tell us all about your experiments on Discord.