Near QueryAPI is a fully managed solution to build indexer functions, extract on-chain data, store it in a database, and query it through GraphQL endpoints.
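As a sketch of what that querying side can look like, here is a small example that sends a GraphQL query with `fetch`. The endpoint URL and the `posts` table are hypothetical placeholders, not actual QueryAPI names:

```javascript
// Sketch: querying a QueryAPI-style GraphQL endpoint.
// The endpoint URL and the `posts` table below are illustrative placeholders.
const GRAPHQL_ENDPOINT = "https://example.com/v1/graphql"; // placeholder

// Build the GraphQL request body for the latest posts.
function buildPostsQuery(limit) {
  return {
    query: `
      query LatestPosts($limit: Int!) {
        posts(order_by: { block_height: desc }, limit: $limit) {
          id
          content
          block_height
        }
      }`,
    variables: { limit },
  };
}

// Send the query (Node 18+ and browsers have fetch built in).
async function fetchLatestPosts(limit = 10) {
  const res = await fetch(GRAPHQL_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildPostsQuery(limit)),
  });
  const { data } = await res.json();
  return data.posts;
}
```

The point is that your frontend only speaks GraphQL; the database behind the endpoint is provisioned and managed for you.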
Blockchain indexers are known to be difficult to create, maintain, and operate. Besides the business logic of your indexer, you also have to take care of everything around it, and a dedicated team member may be needed to deal with these challenges.
Common indexing challenges include:
- Design a database schema and provision it with the correct configuration for security, data retention, and performance
- Write and test indexer code that interacts with the database
- Deploy the indexer to a cloud provider, ensuring network permissions, firewalls, VPCs, and other network-related settings are set up correctly
- Create an API endpoint to retrieve data from your database for your frontend applications
- Monitor performance of your database and scale it as needed
- Manage permissions and access to the database as requirements change
- Re-index data after issues and updates, ensuring that production environments don't get disrupted
- Perform database schema migrations
- Scale the API as your application grows
- Keep up with all the underlying blockchain nodes and upgrades
As you can see, running indexers involves a complex and comprehensive set of processes. Near QueryAPI aims to cover most (or all) of these needs by offering an open-source solution for creating, managing, and exploring indexers.
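To make the "indexer logic" half of this concrete, here is a minimal sketch of the kind of per-block handler an indexer runs. The block shape and the `db` helper are simplified stand-ins for illustration, not the exact objects QueryAPI passes to your function:

```javascript
// Simplified sketch of a per-block indexer handler.
// The block shape and the `db` helper are illustrative stand-ins,
// not the exact QueryAPI API.
function makeHandler(db) {
  return async function handleBlock(block) {
    // Pick out the actions this indexer cares about...
    for (const action of block.actions) {
      if (action.method === "post_message") {
        // ...and write a row to the provisioned database.
        await db.insert("messages", {
          block_height: block.height,
          sender: action.signer,
          text: action.args.text,
        });
      }
    }
  };
}
```

A real QueryAPI indexer pairs a function like this with the SQL schema that provisions the table it writes to.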
QueryAPI has a QueryApi.App BOS widget. With this component you can see all the public indexers currently available on the Near blockchain.
If you would like to create a new indexer, simply click Create New Indexer.
Indexers stored on-chain
QueryAPI stores all the indexer logic and schemas used to provision the databases on-chain.
Whenever you interact with the QueryAPI BOS component, it makes an RPC query in the background to a smart contract that stores all of your indexer logic as well as your schemas.
- Currently under closed beta-testing.
- It always takes the latest
- It doesn't support schema migrations.
  - If you have an indexer whose schema needs to change, you may need to create a new indexer and run a historical backfill on that new indexer again.
- There is no way to stop your indexer, or to restart it and truncate all tables.
- Historical backfill works in parallel with real-time indexing.
  - Historical processing won't happen in order (it runs at the same time as the top of the network).
  - Keep this in mind to be sure that you don't have unintended side effects.
- Pagoda currently doesn't charge for storing your indexer code and data, or for running the indexer, but pricing will be introduced soon.
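Because historical backfill runs in parallel with real-time indexing, writes can arrive out of block order. One common guard (a general pattern, not a QueryAPI API) is to make writes idempotent by keeping, for each key, only the row from the highest block height:

```javascript
// Sketch: an upsert that tolerates out-of-order block processing by
// keeping, per key, only the row with the highest block height.
// `store` is a plain Map standing in for a database table.
function upsertLatest(store, key, row) {
  const existing = store.get(key);
  if (!existing || row.block_height > existing.block_height) {
    store.set(key, row);
  }
}
```

In SQL the same idea maps to an upsert of the form `INSERT ... ON CONFLICT ... DO UPDATE ... WHERE EXCLUDED.block_height > table.block_height`, so a late historical write can never clobber newer real-time data.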