ENS Indexer is a multi-chain indexer for the Ethereum Name Service (ENS). It pulls ENS data from Ethereum mainnet and L2 chains (Base, Linea, Optimism, Scroll, Arbitrum), processes it, and stores it in a PostgreSQL database, where it can be quickly and efficiently accessed via a RESTful API.
Each deployment runs the same codebase and independently indexes data from configured chains. An operator can run a private indexer, or expose theirs publicly behind a single API hostname like api.enswhois.com.
The indexer is written entirely in JavaScript and runs on Bun, a fast JavaScript runtime. Each service runs as a systemd unit, making deployment and process management straightforward on any Linux server.
This was an intentional decision to avoid overengineering. Rather than building a complex distributed system with message queues, orchestration layers, and microservice frameworks, the indexer uses simple, proven tools: PostgreSQL for storage, Redis for coordination, and plain JavaScript for business logic. The result is a system that's easy to understand, deploy, debug, and modify.
The indexer connects to an Ethereum RPC provider (Quicknode, dRPC, Infura, or Alchemy) to fetch on-chain events from ENS smart contracts. These raw events are stored in PostgreSQL, then processed into queryable domain records. Redis is used for coordination between services and managing processing queues.
Data stores: PostgreSQL (primary database), Redis (queues and coordination)
Contracts indexed (45 total across 6 chains):

| Chain | Contracts |
|---|---|
| Ethereum Mainnet | 23 |
| Base | 11 |
| Linea | 6 |
| Optimism | 3 |
| Scroll | 1 |
| Arbitrum | 1 |
Chains to index are selected via the ENABLED_CHAINS environment variable (default: mainnet).
The fetcher polls configured chains via eth_getLogs RPC calls for new events from
ENS smart contracts. It processes events in batches and writes raw event data to the
ens_events table.
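Batched eth_getLogs polling can be sketched as below. This is a minimal illustration of splitting a block range into fixed-size request batches; the batch size, contract address, and function name are assumptions for illustration, not the indexer's actual configuration.

```javascript
// Sketch: splitting a block range into eth_getLogs JSON-RPC batches.
// Batch size and the contract address are illustrative.
function buildGetLogsBatches(fromBlock, toBlock, addresses, batchSize = 2000) {
  const batches = [];
  for (let start = fromBlock; start <= toBlock; start += batchSize) {
    const end = Math.min(start + batchSize - 1, toBlock);
    batches.push({
      jsonrpc: "2.0",
      id: batches.length + 1,
      method: "eth_getLogs",
      params: [{
        fromBlock: "0x" + start.toString(16),
        toBlock: "0x" + end.toString(16),
        address: addresses, // ENS contract addresses for this chain
      }],
    });
  }
  return batches;
}

// Example: blocks 100..5099 split into three batches of <= 2000 blocks.
const batches = buildGetLogsBatches(100, 5099, ["0x00000000000C2E074eC69A0dFb2997BA6C7d2e1e"]);
console.log(batches.length); // 3
console.log(batches[0].params[0].fromBlock); // "0x64"
```

Each request body would then be POSTed to the configured RPC provider; keeping the range small avoids provider-side result limits on eth_getLogs.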
The processor reads unprocessed events from ens_events and transforms them into
domain records in the domains table. Events are staged in Redis queues and bulk-flushed
to PostgreSQL for efficiency.
Each domain's effective_owner_address is computed using a priority hierarchy:
wrapper owner > registrar owner > registry owner.
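The priority hierarchy above amounts to a simple null-coalescing chain. A minimal sketch, assuming illustrative field names (the indexer's actual column names may differ):

```javascript
// Sketch: effective_owner_address priority hierarchy:
// wrapper owner > registrar owner > registry owner.
function effectiveOwner({ wrapperOwner, registrarOwner, registryOwner }) {
  return wrapperOwner ?? registrarOwner ?? registryOwner ?? null;
}

// For a wrapped name, the registry owner is the NameWrapper contract
// itself, so the wrapper owner must win.
console.log(effectiveOwner({
  wrapperOwner: "0xAlice",
  registryOwner: "0xNameWrapper",
})); // "0xAlice"

console.log(effectiveOwner({ registryOwner: "0xBob" })); // "0xBob"
```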
The address service extracts all Ethereum addresses referenced in events (owners, resolvers,
transaction senders) and stores them in address_table. This enables efficient queries such as "show me all domains owned by this address", and supports efficient reverse-name processing.
During idle time (once at the head of the chain), it also resolves pending domains against the local rainbow table to discover unknown labels.
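The extraction step can be sketched as a deduplicating pass over a batch of events. Event field names here are assumptions for illustration:

```javascript
// Sketch: collecting the unique addresses referenced by a batch of
// events. Addresses are lowercased so differently-cased duplicates
// collapse to one address_table row.
function extractAddresses(events) {
  const seen = new Set();
  for (const ev of events) {
    for (const addr of [ev.owner, ev.resolver, ev.txFrom]) {
      if (addr) seen.add(addr.toLowerCase());
    }
  }
  return [...seen];
}

const addrs = extractAddresses([
  { owner: "0xAAA", txFrom: "0xBBB" },
  { owner: "0xaaa", resolver: "0xCCC" }, // duplicate owner, different case
]);
console.log(addrs); // ["0xaaa", "0xbbb", "0xccc"]
```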
Resolves primary ENS names from reverse node claims. Handles both mainnet (addr.reverse) and L2 chains per ENSIP-19.
Mainnet (addr.reverse) - processes ReverseClaimed events using a tiered strategy:

1. Look for a NameChanged event in the same transaction (zero RPC cost).
2. Otherwise, look for a NewResolver event in the same transaction, then call resolver.name(reverseNode) via RPC at that block.

L2 chains (Base, Linea, Optimism, Scroll, Arbitrum) - each chain has its own ENSIP-19 reverse namespace (e.g. 80002105.reverse for Base). L2 reverse registrars emit NameForAddrChanged events that include both the address and the name directly, so no RPC calls are needed.
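The mainnet tiers can be sketched as a decision over the events in the claiming transaction. Event shapes are illustrative; a "rpc" result means a resolver.name() call is still required:

```javascript
// Sketch: choosing a resolution strategy for a mainnet ReverseClaimed
// event, per the tiered strategy described above.
function planReverseResolution(txEvents) {
  // Tier 1: a NameChanged event in the same transaction carries the
  // name directly, so resolution costs zero RPC calls.
  const nameChanged = txEvents.find((e) => e.type === "NameChanged");
  if (nameChanged) {
    return { strategy: "same-tx-name", name: nameChanged.name };
  }
  // Tier 2: a NewResolver event tells us which resolver to query
  // via resolver.name(reverseNode) at that block.
  const newResolver = txEvents.find((e) => e.type === "NewResolver");
  if (newResolver) {
    return { strategy: "rpc", resolver: newResolver.resolver };
  }
  return { strategy: "unresolved" };
}

console.log(planReverseResolution([
  { type: "ReverseClaimed" },
  { type: "NameChanged", name: "alice.eth" },
])); // { strategy: "same-tx-name", name: "alice.eth" }
```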
Resolved names are written to address_primary_names with the registrar field indicating the source namespace. The API prioritizes addr.reverse over L2-specific registrars when resolving an address's primary name.
The monitor provides a web interface that lets you see the synchronization status of your indexer. It also exposes a RESTful API for querying indexed data.
| Table | Purpose |
|---|---|
| ens_events | Raw blockchain events |
| domains | Processed domain records with computed effective_owner_address |
| ens_event_domains | Junction table linking events to domains |
| address_table | Unique Ethereum addresses with reverse node references |
| address_primary_names | Maps addresses to their primary ENS names, keyed by registrar namespace (addr.reverse, 80002105.reverse, etc.) |
| domain_data | Append-only history of all domain property changes (text records, addresses, contenthash, ownership, resolver, expiry, fuses) with source attribution |
Here's what happens when someone registers a domain, e.g. test.eth:

1. The fetcher picks up the NameRegistered and Transfer events and writes them to ens_events.
2. From the NameRegistered event we can extract the label and the expiry. We know that the parent node is .eth, as this event is emitted by the EthRegistrarController.
3. The processor inserts a row into the domains table.
4. The address service extracts the addresses involved into address_table, which are then associated with the respective rows in the domains table.
5. If a primary name is claimed, it is written to address_primary_names.
6. The domain can now be queried at /api/whois/test.eth.

When a new indexer is deployed, it must process all historical ENS events from the beginning (block 3,327,417 for the earliest mainnet contract). The duration depends on which chains are enabled and your RPC provider.
All five services run in parallel - the processor and address service start working on events as soon as the fetcher writes them, even while the fetcher is still catching up.
```shell
sudo tail -f /var/log/cloud-init-output.log
```
The indexer uses Redis for coordination between services and to handle edge cases:
| Queue | Purpose |
|---|---|
| Pending Domains | In many cases the information discerned from an individual event is not enough to uniquely identify the domain it concerns. Domains waiting for more data (e.g., missing parent domain or label) are added to the 'Pending Domains' queue. |
| Ready Domains | When we have enough data about a given domain, we add it to the 'Ready Domains' queue. Rows in this queue are batched and efficiently inserted into the domains table. |
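The pending-to-ready promotion above can be sketched as an in-memory merge. The readiness rule (a known label and parent node) and the field names are assumptions for illustration; the real implementation coordinates through Redis rather than a local Map:

```javascript
// Sketch: domains accumulate partial data until they are uniquely
// identified, then move to a ready batch for bulk insert.
class DomainQueues {
  constructor() {
    this.pending = new Map(); // node -> partial domain data
    this.ready = [];          // batched for bulk insert into Postgres
  }

  add(node, partial) {
    const merged = { ...(this.pending.get(node) ?? {}), ...partial };
    if (merged.label && merged.parentNode) {
      // Enough data to uniquely identify the domain: promote it.
      this.pending.delete(node);
      this.ready.push({ node, ...merged });
    } else {
      this.pending.set(node, merged);
    }
  }
}

const q = new DomainQueues();
q.add("0x123", { parentNode: "0xeth" }); // still pending: label unknown
q.add("0x123", { label: "test" });       // now ready
console.log(q.pending.size, q.ready.length); // 0 1
```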
| Queue | Purpose |
|---|---|
| Pending Events | If an event hasn't provided us enough data to uniquely identify the domain to which it relates, we queue it here. When we have enough data, i.e. the relevant 'Pending' domain becomes 'Ready' and a domain row has been inserted, associated 'Pending Events' will be linked to their domain via the ens_event_domains junction table. |
| Direct Events | When a row already exists for the domain to which an event relates, we add it to the 'Direct Events' queue. Rows in this queue are batched and efficiently inserted into the ens_event_domains junction table, linking events to their domains. |
We maintain 7 queues for metadata updates to already existing domain rows. This is an optimization: once a row has been created in the domains table, updates to its associated data can be batched as additional events are processed. As a result, if a domain name has changed ownership 10 times throughout history, the initial sync updates the owner column only once, at the end of the processing queue.
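The coalescing behaviour can be sketched as a last-write-wins map keyed by domain and column. Names here are illustrative, not the indexer's actual queue keys:

```javascript
// Sketch: metadata updates are coalesced per (domain, column), so
// only the latest value survives to flush time.
const updates = new Map(); // "node:column" -> latest queued update

function queueUpdate(node, column, value) {
  updates.set(`${node}:${column}`, { node, column, value });
}

// A domain that changed owner 10 times during backfill...
for (let i = 1; i <= 10; i++) {
  queueUpdate("0xtest", "owner", `0xowner${i}`);
}

// ...results in a single UPDATE at flush time, with the final value.
console.log(updates.size); // 1
console.log(updates.get("0xtest:owner").value); // "0xowner10"
```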
The API at api.enswhois.com serves indexed data directly from PostgreSQL and augments it with live on-chain resolution for domains the indexer doesn't fully cover.
When a domain's resolver is not one the indexer tracks (status unindexed or none), the API resolves the data live via the Universal Resolver (supporting ENSIP-10 wildcard resolution and CCIP-Read for L2 names). Live results are also written back to the domain_data table with origin='live' so subsequent lookups can serve history from one table. If live resolution fails, a 502 error is returned rather than stale data.
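The indexed-versus-live decision can be sketched as below. The helper functions are injected stand-ins for illustration (fetchIndexed, resolveLive, writeBack are not the API's real function names):

```javascript
// Sketch: serve from Postgres when the resolver is indexed, otherwise
// resolve live on-chain, write the result back, and fail with 502
// rather than serving stale data.
async function whois(name, { fetchIndexed, resolveLive, writeBack }) {
  const row = await fetchIndexed(name);
  if (row && row.resolverStatus === "indexed") return row;

  // Resolver untracked ("unindexed" or "none"): resolve live.
  const live = await resolveLive(name);
  if (!live) {
    const err = new Error("upstream resolution failed");
    err.status = 502; // never fall back to stale data
    throw err;
  }
  await writeBack(name, live); // stored with origin='live' in domain_data
  return live;
}

// Usage with stubs:
whois("offchain.example.eth", {
  fetchIndexed: async () => ({ resolverStatus: "unindexed" }),
  resolveLive: async () => ({ address: "0xabc", origin: "live" }),
  writeBack: async () => {},
}).then((r) => console.log(r.origin)); // "live"
```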
For bulk requests, all ownership calls are batched into a single Multicall3 call while individual Universal Resolver calls run in parallel (to support CCIP-Read).
Some ENS events only contain a labelhash (the keccak256 hash of the label) rather than
the actual label text. To resolve these, the indexer uses a "rainbow table" - a precomputed
database of ~200M known label-to-hash mappings.
During idle time (once at the head of the chain), the address service automatically checks pending domains against the local rainbow table and resolves any matches. Labels are discovered through multiple mechanisms including LLM inference, web search, and external APIs.
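The lookup itself reduces to a hash-to-label map consulted for each pending domain. A minimal sketch; the hex strings below are placeholders, not real keccak256 digests:

```javascript
// Sketch: a rainbow-table lookup resolving unknown labels for
// pending domains. Entries are illustrative placeholders.
const rainbow = new Map([
  ["0xaaaa", "vitalik"],
  ["0xbbbb", "test"],
]);

function resolvePendingLabels(pendingDomains) {
  const resolved = [];
  for (const d of pendingDomains) {
    const label = rainbow.get(d.labelhash);
    if (label) resolved.push({ ...d, label });
  }
  return resolved; // unmatched domains stay pending
}

const out = resolvePendingLabels([
  { labelhash: "0xbbbb", parentNode: "0xeth" },
  { labelhash: "0xcccc", parentNode: "0xeth" }, // unknown label stays pending
]);
console.log(out.length, out[0].label); // 1 "test"
```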