# Migration guide

## 0.7

The 0.7 release includes several breaking changes.
### Install & run codegen

```bash
pnpm add @ponder/core@0.7
```

To ensure strong type safety during the migration, regenerate `ponder-env.d.ts`.

```bash
pnpm codegen
```
### Migrate `ponder.schema.ts`

Here's a table defined with the new schema definition API, which uses Drizzle under the hood.

```ts
import { onchainTable } from "@ponder/core";

export const account = onchainTable("account", (t) => ({
  address: t.hex().primaryKey(),
  daiBalance: t.bigint().notNull(),
  isAdmin: t.boolean().notNull(),
  graffiti: t.text(),
}));
```
Key changes:

- Declare tables with the `onchainTable` function exported from `@ponder/core`
- Export all table objects from `ponder.schema.ts`
- Use `.primaryKey()` to mark the primary key column
- Columns are nullable by default; use `.notNull()` to add the constraint
The new `onchainTable` function adds several new capabilities.

- Custom primary key column name (other than `id`)
- Composite primary keys
- Default column values (see the sketch after the example below)
Here's a more advanced example with indexes and a composite primary key.

```ts
import { onchainTable, index, primaryKey } from "@ponder/core";

export const transferEvents = onchainTable(
  "transfer_event",
  (t) => ({
    id: t.text().primaryKey(),
    amount: t.bigint().notNull(),
    timestamp: t.integer().notNull(),
    from: t.hex().notNull(),
    to: t.hex().notNull(),
  }),
  (table) => ({
    fromIdx: index().on(table.from),
  })
);

export const allowance = onchainTable(
  "allowance",
  (t) => ({
    owner: t.hex().notNull(),
    spender: t.hex().notNull(),
    amount: t.bigint().notNull(),
  }),
  (table) => ({
    pk: primaryKey({ columns: [table.owner, table.spender] }),
  })
);

export const approvalEvent = onchainTable("approval_event", (t) => ({
  id: t.text().primaryKey(),
  amount: t.bigint().notNull(),
  timestamp: t.integer().notNull(),
  owner: t.hex().notNull(),
  spender: t.hex().notNull(),
}));
```
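The examples above don't use default column values. Here's a minimal sketch, assuming the Drizzle-style `.default()` modifier is available on these column builders; the table and column names are illustrative.

```ts
import { onchainTable } from "@ponder/core";

// A hypothetical table using a default column value. The `.default()`
// modifier shown here is the Drizzle-style API; treat it as an assumption.
export const metadata = onchainTable("metadata", (t) => ({
  key: t.text().primaryKey(),
  // Rows inserted without an explicit `updateCount` start at 0.
  updateCount: t.integer().notNull().default(0),
}));
```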
### Migrate indexing functions

This release updates the indexing function database API to offer a unified SQL experience based on Drizzle.

Here's an indexing function defined with the new API, which uses the table objects exported from `ponder.schema.ts`.

```ts
import { ponder } from "@/generated";
import { account } from "../ponder.schema";

ponder.on("ERC20:Transfer", async ({ event, context }) => {
  await context.db
    .insert(account)
    .values({
      address: event.args.from,
      balance: 0n,
      isOwner: false,
    })
    .onConflictDoUpdate((row) => ({
      balance: row.balance - event.args.amount,
    }));
});
```
Key changes:

- Transition from the ORM pattern `db.Account.create({ ... })` to the query builder pattern `db.insert(account).values({ ... })`
- Import table objects from `ponder.schema.ts`
- Replace `findMany` with `db.sql.select(...)` or `db.sql.query(...)`
Here is a simple migration example to familiarize yourself with the API.

```ts
// 0.6 — create a single allowance
await context.db.Allowance.create({
  id: event.log.id,
  data: {
    owner: event.args.owner,
    spender: event.args.spender,
    amount: event.args.amount,
  },
});
```

```ts
// 0.7 — create a single allowance. The new `allowance` table uses a
// composite primary key on (owner, spender), so there is no `id` column.
import { allowance } from "../ponder.schema";

await context.db.insert(allowance).values({
  owner: event.args.owner,
  spender: event.args.spender,
  amount: event.args.amount,
});
```
Here is a reference for how to migrate each method.

```ts
import { gt } from "@ponder/core";
import { account } from "../ponder.schema";

// create -> insert
await context.db.Account.create({
  id: event.args.from,
  data: { balance: 0n },
});
await context.db.insert(account).values({ address: event.args.from, balance: 0n });

// createMany -> insert
await context.db.Account.createMany({
  data: [
    { id: event.args.from, balance: 0n },
    { id: event.args.to, balance: 0n },
  ],
});
await context.db.insert(account).values([
  { address: event.args.from, balance: 0n },
  { address: event.args.to, balance: 0n },
]);

// findUnique -> find
await context.db.Account.findUnique({ id: event.args.from });
await context.db.find(account, { address: event.args.from });

// update
await context.db.Account.update({
  id: event.args.from,
  data: ({ current }) => ({ balance: current.balance + 100n }),
});
await context.db
  .update(account, { address: event.args.from })
  .set((row) => ({ balance: row.balance + 100n }));

// upsert -> insert ... onConflictDoUpdate
await context.db.Account.upsert({
  id: event.args.from,
  create: { balance: 0n },
  update: ({ current }) => ({ balance: current.balance + 100n }),
});
await context.db
  .insert(account)
  .values({ address: event.args.from, balance: 0n })
  .onConflictDoUpdate((row) => ({ balance: row.balance + 100n }));

// delete
await context.db.Account.delete({ id: event.args.from });
await context.db.delete(account, { address: event.args.from });

// findMany -> select
await context.db.Account.findMany({ where: { balance: { gt: 100n } } });
await context.db.sql.select().from(account).where(gt(account.balance, 100n));
```
Finally, another migration example for an ERC20 Transfer indexing function using `upsert`.

```ts
// 0.6
import { ponder } from "@/generated";

ponder.on("ERC20:Transfer", async ({ event, context }) => {
  const { Account } = context.db;

  await Account.upsert({
    id: event.args.from,
    create: {
      balance: BigInt(0),
      isOwner: false,
    },
    update: ({ current }) => ({
      balance: current.balance - event.args.amount,
    }),
  });
});
```

```ts
// 0.7
import { ponder } from "@/generated";
import { account } from "../ponder.schema";

ponder.on("ERC20:Transfer", async ({ event, context }) => {
  await context.db
    .insert(account)
    .values({
      address: event.args.from,
      balance: 0n,
      isOwner: false,
    })
    .onConflictDoUpdate((row) => ({
      balance: row.balance - event.args.amount,
    }));
});
```
### Direct SQL API

The `context.db.sql` interface replaces the rigid `findMany` method and supports any valid SQL `select` query.

```ts
import { ponder } from "@/generated";
import { desc } from "@ponder/core";
import { account } from "../ponder.schema";

ponder.on("...", async ({ event, context }) => {
  const result = await context.db.sql
    .select()
    .from(account)
    .orderBy(desc(account.balance))
    .limit(1);
});
```
### Migrate API functions

- Removed `c.tables` in favor of importing table objects from `ponder.schema.ts` (see the sketch below)
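Here's a minimal sketch of that change, assuming the Hono-style registration described in the API functions guide; the route path, table, and column names are illustrative.

```ts
import { ponder } from "@/generated";
import { desc } from "@ponder/core";
import { account } from "../ponder.schema";

// Table objects previously came from `c.tables`; now they are plain imports.
ponder.get("/big-accounts", async (c) => {
  const rows = await c.db
    .select()
    .from(account)
    .orderBy(desc(account.balance))
    .limit(10);
  return c.json(rows);
});
```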
## 0.6.0

### Updated `viem` to `>=2`

This release updates the `viem` peer dependency requirement to `>=2`. The `context.client` action `getBytecode` was renamed to `getCode`.

```bash
pnpm add viem@latest
```
### Simplified Postgres schema pattern

Starting with this release, the indexed tables, reorg tables, and metadata table for a Ponder app are contained in one Postgres schema, specified by the user in `ponder.config.ts` (defaults to `public`). This means the shared `ponder` schema is no longer used. (Note: the `ponder_sync` schema is still in use.)

This release also removes the view publishing pattern and the `publishSchema` option from `ponder.config.ts`, which may disrupt production setups using horizontal scaling or direct SQL. If you relied on the publish pattern, please get in touch on Telegram and we'll work to get you unblocked.
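For reference, here's a minimal config sketch that selects the schema. The `database.schema` option name is an assumption based on the description above; check the database docs for the exact shape.

```ts
// ponder.config.ts — a minimal sketch (the `schema` option name is assumed)
import { createConfig } from "@ponder/core";

export default createConfig({
  database: {
    kind: "postgres",
    connectionString: process.env.DATABASE_URL,
    schema: "my_ponder_app", // indexed, reorg, and metadata tables live here
  },
  networks: {}, // networks and contracts omitted for brevity
  contracts: {},
});
```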
### Added `/ready`, updated `/health`

The new `/ready` endpoint returns an HTTP `200` response once the app is ready to serve requests. This means that historical indexing is complete and the app is indexing events in realtime.

The existing `/health` endpoint now returns an HTTP `200` response as soon as the process starts. (This release removes the `maxHealthcheckDuration` option, which previously governed the behavior of `/health`.)

For Railway users, we now recommend using `/ready` as the health check endpoint to enable zero-downtime deployments. If your app takes a while to sync, be sure to set the healthcheck timeout accordingly. Read the Railway deployment guide for more details.
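As an illustration, a deploy script could poll `/ready` before switching traffic. This is a hypothetical helper, not an official tool, and it assumes the app listens on Ponder's default port (42069); adjust the URL for your deployment.

```ts
// A minimal readiness probe sketch.
async function waitUntilReady(url = "http://localhost:42069/ready") {
  for (;;) {
    const res = await fetch(url).catch(() => null);
    if (res?.ok) return; // HTTP 200: historical indexing is complete
    await new Promise((resolve) => setTimeout(resolve, 5_000)); // retry every 5s
  }
}

await waitUntilReady();
console.log("App is ready to serve requests.");
```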
### Metrics updates

Please see the changelog for specifics.
## 0.5.0

### `hono` peer dependency

Breaking: This release adds Hono as a peer dependency. After upgrading, install `hono` in your project.

```bash
pnpm add hono@latest
```

### Introduced API functions

This release added support for API functions. Read more.
## 0.4.0

This release changes the location of database tables when using both SQLite and Postgres. It does not require any changes to your application code, and does not bust the sync cache for SQLite or Postgres.

Please read the new docs on direct SQL for a detailed overview.
### SQLite

Ponder now uses the `.ponder/sqlite/public.db` file for indexed tables. Before, the tables were present as views in `.ponder/sqlite/ponder.db`. Now, the `.ponder/sqlite/ponder.db` file is only used internally by Ponder.
### Postgres

Ponder now creates a table in the `public` schema for each table in `ponder.schema.ts`. Before, Ponder created them as views in the `ponder` schema.

Isolation while running multiple Ponder instances against the same database also works differently. Before, Ponder used a schema with a pseudorandom name if the desired schema was in use. Now, Ponder will fail on startup with an error if it cannot acquire a lock on the desired schema.

This also changes the zero-downtime behavior on platforms like Railway. For more information on how this works in `0.4`, please reference the direct SQL docs mentioned above.
### Postgres table cleanup

After upgrading to `0.4.x`, you can run the following Postgres SQL script to clean up stale tables and views created by `0.3.x` Ponder apps.

Note: This script is destructive, so please read it carefully before executing it.

```sql
DO $$
DECLARE
  view_name TEXT;
  schema_name_var TEXT;
BEGIN
  -- Drop all views from the 'ponder' schema
  FOR view_name IN SELECT table_name FROM information_schema.views WHERE table_schema = 'ponder'
  LOOP
    EXECUTE format('DROP VIEW IF EXISTS ponder.%I CASCADE', view_name);
    RAISE NOTICE 'Dropped view "ponder"."%"', view_name;
  END LOOP;

  -- Drop the 'ponder_cache' schema
  EXECUTE 'DROP SCHEMA IF EXISTS ponder_cache CASCADE';
  RAISE NOTICE 'Dropped schema "ponder_cache"';

  -- Find and drop any 'ponder_instance_*' schemas
  FOR schema_name_var IN SELECT schema_name FROM information_schema.schemata WHERE schema_name LIKE 'ponder_instance_%'
  LOOP
    EXECUTE format('DROP SCHEMA IF EXISTS %I CASCADE', schema_name_var);
    RAISE NOTICE 'Dropped schema "%"', schema_name_var;
  END LOOP;
END $$;
```
## 0.3.0

No breaking API changes.

### Moved SQLite directory

Note: This release busted the SQLite sync cache.

The SQLite database was moved from the `.ponder/store` directory to `.ponder/sqlite`. The old `.ponder/store` directory will still be used by older versions.
### Moved Postgres sync tables

Similar to SQLite, the sync tables for Postgres were moved from the `public` schema to `ponder_sync`. Now, Ponder does not use the `public` schema whatsoever.

This change did NOT bust the sync cache; the tables were actually moved. This process emits some `WARN`-level logs that you should see after upgrading.
## 0.2.0

### Replaced `p.bytes()` with `p.hex()`

Removed `p.bytes()` in favor of a new `p.hex()` primitive column type. `p.hex()` is suitable for Ethereum addresses and other hex-encoded data, including EVM `bytes` types. `p.hex()` values are stored as `bytea` (Postgres) or `blob` (SQLite).

To migrate, replace each occurrence of `p.bytes()` in `ponder.schema.ts` with `p.hex()`, and ensure that any values you pass into hex columns are valid hexadecimal strings. The GraphQL API returns `p.hex()` values as hexadecimal strings, and allows sorting/filtering on `p.hex()` columns using the numeric comparison operators (`gt`, `gte`, `lt`, `lte`).
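For example, a migrated schema might look like this; a minimal sketch assuming the `createSchema`/`p.createTable` API of this era, with illustrative table and column names.

```ts
// ponder.schema.ts
import { createSchema } from "@ponder/core";

export default createSchema((p) => ({
  Account: p.createTable({
    id: p.hex(),       // was: p.bytes()
    bytecode: p.hex(), // was: p.bytes()
  }),
}));
```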
### Cursor pagination

Updated the GraphQL API to use cursor pagination instead of offset pagination. Note that this change also affects the `findMany` database method. See the GraphQL pagination docs for more details.
## 0.1.0

### Config

- In general, `ponder.config.ts` now has much more static validation using TypeScript. This includes network names in `contracts`, ABI event names for the contract `event` and `factory` options, and more.
- The `networks` and `contracts` fields were changed from an array to an object. The network or contract name is now specified using an object property name. The `name` field for both networks and contracts was removed (see the sketch after this list).
- The `filter` field has been removed. To index all events matching a specific signature across all contract addresses, add a contract that specifies the `event` field without specifying an `address`.
- The `abi` field now requires an ABI object that has been asserted as const (cannot use a file path). See the ABIType documentation for more details.
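Here's a minimal sketch of the object-based format, assuming the `createConfig` helper and a viem transport; the network, contract, address, and block numbers are illustrative.

```ts
// ponder.config.ts — a minimal sketch (names and values illustrative)
import { createConfig } from "@ponder/core";
import { http } from "viem";
import { erc20Abi } from "./abis/erc20"; // an ABI object asserted `as const`

export default createConfig({
  networks: {
    // The property name is the network name; the old `name` field is gone.
    mainnet: { chainId: 1, transport: http(process.env.PONDER_RPC_URL_1) },
  },
  contracts: {
    // Likewise, the property name is the contract name.
    ERC20: {
      network: "mainnet",
      abi: erc20Abi,
      address: "0x0000000000000000000000000000000000000000", // placeholder
      startBlock: 1_234_567,
    },
  },
});
```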
### Schema

- The schema definition API was rebuilt from scratch to use a TypeScript file, `ponder.schema.ts`, instead of `schema.graphql`. The `ponder.schema.ts` file has static validation using TypeScript.
- Note that it is possible to convert a `schema.graphql` file into a `ponder.schema.ts` file without introducing any breaking changes to the autogenerated GraphQL API schema (see the sketch after this list).
- Please see the design your schema guide for an overview of the new API.
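Here's a minimal sketch of such a conversion, assuming the `createSchema`/`p.createTable` API; the entity and field names are illustrative.

```ts
// Before, in schema.graphql:
//   type Account @entity {
//     id: String!
//     balance: BigInt!
//   }
//
// After, in ponder.schema.ts:
import { createSchema } from "@ponder/core";

export default createSchema((p) => ({
  Account: p.createTable({
    id: p.string(),
    balance: p.bigint(),
  }),
}));
```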
### Indexing functions

- `event.params` was renamed to `event.args` to better match Ethereum terminology norms.
- If a contract uses the `event` option, only the specified events will be available for registration. Before, all events in the ABI were available.
- `context.models` was renamed to `context.db`.
- Now, a read-only Viem client is available at `context.client`. This client uses the same transport you specify in `ponder.config.ts`, except all methods are cached to speed up subsequent indexing.
- The `context.contracts` object now contains the contract addresses and ABIs specified in `ponder.config.ts`, typed as strictly as possible. (You should not need to copy addresses and ABIs around anymore; just use `context.contracts`.)
- A new `context.network` object was added which contains the network name and chain ID that the current event is from (a sketch combining these changes follows this list).
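Here's a minimal sketch combining these changes; the event name, arguments, and `balanceOf` call are illustrative.

```ts
import { ponder } from "@/generated";

ponder.on("ERC20:Transfer", async ({ event, context }) => {
  // Formerly `event.params`.
  const { to } = event.args;

  // Read from the contract using the cached, read-only client and the
  // strictly-typed address/ABI from `context.contracts`.
  const balance = await context.client.readContract({
    abi: context.contracts.ERC20.abi,
    address: context.contracts.ERC20.address,
    functionName: "balanceOf",
    args: [to],
  });

  // `context.network` identifies where this event came from.
  console.log(`${context.network.name} (chainId ${context.network.chainId}): ${balance}`);
});
```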
### Multi-chain indexing

- The contract `network` field in `ponder.config.ts` was upgraded to support an object of network-specific overrides. This is a much better DX for indexing the same contract on multiple chains.
- The options that you can specify per-network are `address`, `event`, `startBlock`, `endBlock`, and `factory`.
- When you add a contract on multiple networks, Ponder will sync the contract on each network you specify. Any indexing functions you register for the contract will now process events across all networks.
- The `context.network` object is typed according to the networks that the current contract runs on, so you can write network-specific logic like `if (context.network.name === "optimism") { ... }` (see the sketch after this list).
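For instance, here's a minimal sketch of network-specific logic inside an indexing function; the event name and branch bodies are illustrative.

```ts
import { ponder } from "@/generated";

ponder.on("ERC20:Transfer", async ({ event, context }) => {
  // `context.network.name` is typed to this contract's configured networks.
  if (context.network.name === "optimism") {
    // Optimism-specific logic goes here.
  } else {
    // Logic for the other configured networks.
  }
});
```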
### Vite

- Ponder now uses Vite to transform and load your code. This means you can import files from outside the project root directory.
- Vite’s module graph makes it possible to invalidate project files granularly, only reloading the specific parts of your app that need to be updated when a specific file changes. For example, if you save a change to one of your ABI files, `ponder.config.ts` will reload because it imports that file, but your schema will not reload.
- This update also unblocks a path towards concurrent indexing and granular caching of indexing function results.