Write to the database
The purpose of indexing functions is to write application-ready data to the database. Ponder has two reorg-resistant database interfaces: the store API and raw SQL.
Store API
The store API is a SQL-like query builder optimized for EVM indexing workloads. It's less flexible than raw SQL, but is often several orders of magnitude faster.
The examples below use this ponder.schema.ts file.
ponder.schema.ts
import { onchainTable, primaryKey } from "ponder";
export const accounts = onchainTable("accounts", (t) => ({
address: t.hex().primaryKey(),
balance: t.bigint().notNull(),
nickname: t.text(),
}));
export const allowances = onchainTable(
"allowances",
(t) => ({
owner: t.hex().notNull(),
spender: t.hex().notNull(),
value: t.bigint().notNull(),
}),
(table) => ({
pk: primaryKey({ columns: [table.owner, table.spender] }),
})
);
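All of the snippets in this section run inside an indexing function, where db is shorthand for the store available at context.db. Here's a minimal sketch for orientation; the "ERC20:Transfer" event name and its args are placeholders for whatever your ponder.config.ts defines.
src/index.ts
import { ponder } from "ponder:registry";
import { accounts } from "ponder:schema";

// Hypothetical handler -- the contract and event names depend on your config.
ponder.on("ERC20:Transfer", async ({ event, context }) => {
  const db = context.db;

  await db.insert(accounts).values({
    address: event.args.to,
    balance: event.args.value,
  });
});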
insert
Insert one or many rows into the database. Returns the inserted rows, including any default values that were generated.
import { accounts } from "ponder:schema";
const row = await db.insert(accounts).values({
address: "0x7Df1", balance: 0n
});
const rows = await db.insert(accounts).values([
{ address: "0x7Df2", balance: -50n },
{ address: "0x7Df3", balance: 100n },
]);
If you insert a row that violates a not null constraint, insert will reject with an error.
import { accounts } from "ponder:schema";
const row = await db.insert(accounts).values({
address: "0x7Df1",
});
// Error: Column "balance" is required but not present in the values object.
find
Find a single row by primary key. Returns the row, or null if no matching row is found.
import { accounts } from "ponder:schema";
const row = await db.find(accounts, { address: "0x7Df1" });
If the table has a composite primary key, the second argument is an object including all the primary key values.
import { allowances } from "ponder:schema";
const row = await db.find(allowances, { owner: "0x7Df1", spender: "0x7Df2" });
update
Update a row by primary key. Returns the updated row.
import { accounts } from "ponder:schema";
const row = await db
.update(accounts, { address: "0x7Df1" })
.set({ balance: 100n });
You can also pass a function to set, which receives the existing row and returns the update object.
import { accounts } from "ponder:schema";
const row = await db
.update(accounts, { address: "0x7Df1" })
.set((row) => ({ balance: row.balance + 100n }));
If the target row is not found, update will reject with an error.
import { accounts } from "ponder:schema";
const row = await db
.update(accounts, { address: "0x7Df1" })
.set({ balance: 100n });
// Error: No row found for address "0x7Df1".
If the new row violates a not null constraint, update will reject with an error.
import { accounts } from "ponder:schema";
const row = await db
.update(accounts, { address: "0x7Df1" })
.set({ balance: null });
// Error: Column "balance" is required but not present in the object.
delete
Delete a row by primary key. Returns true if the row was deleted, or false if no matching row was found.
import { accounts } from "ponder:schema";
const deleted = await db.delete(accounts, { address: "0x7Df1" });
Upsert & conflict resolution
If you insert a duplicate row that violates the table's primary key constraint, insert will reject with an error. Use onConflictDoNothing to skip the insert operation if a row with the same primary key already exists.
import { accounts } from "ponder:schema";
const row = await db
.insert(accounts)
.values({ address: "0x7Df1", balance: 0n })
.onConflictDoNothing();
Or, perform an upsert with onConflictDoUpdate.
import { accounts } from "ponder:schema";
const row = await db
.insert(accounts)
.values({ address: "0x7Df1", balance: 0n, activeAt: event.block.timestamp })
.onConflictDoUpdate({ activeAt: event.block.timestamp });
Like update, you can pass a function to onConflictDoUpdate which receives the existing row and returns the update object.
import { accounts } from "ponder:schema";
const row = await db
.insert(accounts)
.values({ address: "0x7Df1", balance: 0n })
.onConflictDoUpdate((row) => ({ balance: row.balance + 100n }));
Both onConflictDoNothing and onConflictDoUpdate also work when inserting many rows at once. The conflict resolution logic gets applied to each row individually.
import { accounts } from "ponder:schema";
const rows = await db
.insert(accounts)
.values([
{ address: "0x7Df1", balance: 0n },
{ address: "0x7Df2", balance: 100n },
])
.onConflictDoNothing();
How it works
EVM indexing workloads often involve a large number of small inserts and updates. To mitigate the performance penalty of many (blocking) database queries, the store API runs in memory during historical indexing.
When the in-memory cache exceeds a certain size, the store flushes all pending data to the database using one COPY statement per table. During development, the store also flushes every 5 seconds regardless of size to ensure that the database state is reasonably up-to-date to support ad-hoc queries.
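Conceptually, the store behaves like a write-behind cache. The sketch below is purely illustrative and is not Ponder's actual implementation; the row-count threshold and the copyInto helper are placeholders.
// Illustrative sketch only -- not Ponder's internals.
type Row = Record<string, unknown>;

// Hypothetical stand-in for issuing one bulk COPY statement for a table.
async function copyInto(table: string, rows: Row[]): Promise<void> {
  console.log(`COPY ${rows.length} rows into "${table}"`);
}

class StoreCache {
  private pending = new Map<string, Row[]>(); // table name -> buffered rows
  private count = 0;

  // Buffer a write in memory; flush once the buffer grows too large.
  write(table: string, row: Row) {
    const rows = this.pending.get(table) ?? [];
    rows.push(row);
    this.pending.set(table, rows);
    // Placeholder threshold; the real limit is based on memory usage, not row count.
    if (++this.count > 10_000) void this.flush();
  }

  // Flush all pending rows, one bulk statement per table.
  async flush() {
    for (const [table, rows] of this.pending) {
      await copyInto(table, rows);
    }
    this.pending.clear();
    this.count = 0;
  }
}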
Raw SQL
Raw SQL queries are much slower than the store API. Avoid raw SQL for indexing logic that runs often.
The constraints of the store API make it difficult to implement complex business logic. In these cases, you can drop down to raw SQL.
Query builder
The db.sql object exposes the raw Drizzle PostgreSQL query builder, including the select, insert, update, and delete functions. Visit the Drizzle docs for more information and a detailed API reference.
Here's an example that uses the raw SQL update function to execute a complex bulk update query.
import { ponder } from "ponder:registry";
import { accounts, tradeEvents } from "ponder:schema";
import { gte, inArray, sql } from "drizzle-orm";
// Add 100 points to all accounts that submitted a trade in the last 24 hours.
ponder.on("EveryDay:block", async ({ event, context }) => {
await context.db.sql
.update(accounts)
.set({ points: sql`${accounts.points} + 100` })
.where(
inArray(
accounts.address,
context.db.sql
.select({ address: tradeEvents.from })
.from(tradeEvents)
.where(
gte(tradeEvents.timestamp, event.block.timestamp - 24n * 60n * 60n)
)
)
);
});
Relational queries
Drizzle's relational query builder (AKA Drizzle Queries) offers a great developer experience for complex SELECT queries that join multiple tables. The db.sql.query object exposes the raw Drizzle relational query builder. Visit the Drizzle Queries docs for more details.
Here's an example that uses the relational query builder in an API function to find the 10 largest trades in the past hour joined with the account that made the trade.
import { ponder } from "ponder:registry";
import { accounts, tradeEvents } from "ponder:schema";
ponder.get("/hot-trades", async (c) => {
const trades = await c.db.query.tradeEvents.findMany({
where: (table, { gt, gte, and }) =>
and(
gt(table.amount, 1_000n),
gte(table.timestamp, BigInt(Math.floor(Date.now() / 1000) - 60 * 60))
),
limit: 10,
with: { account: true },
});
return c.json(trades);
});
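Once registered, the endpoint is served by Ponder's HTTP server. A quick usage sketch, assuming the default dev server port (42069):
// Query the custom endpoint from any HTTP client.
const response = await fetch("http://localhost:42069/hot-trades");
const trades = await response.json();
console.log(trades);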