Official TypeScript ingester SDK for GreptimeDB. gRPC row inserts, streaming inserts, and Arrow Flight bulk writes in one package.
- Three write modes on one `Client`: unary, streaming, and Arrow Flight bulk with LZ4 / ZSTD body compression
- Stage-3 decorators (TS 5) for object mapping, no `reflect-metadata`
- TLS (system / PEM / file), basic auth, gzip transport compression, random-peer load balancing
- Configurable retry (aggressive / conservative) with full-jitter exponential backoff + `AbortSignal`
- Dual ESM + CJS, strict TypeScript, Node.js ≥ 20
```sh
pnpm add @greptime/ingester
# or: npm install @greptime/ingester
# or: yarn add @greptime/ingester
```

```ts
import { Client, DataType, Precision, Table } from '@greptime/ingester';

const client = new Client(Client.create('localhost:4001').withDatabase('public').build());

const table = Table.new('cpu_usage')
  .addTagColumn('host', DataType.String)
  .addFieldColumn('usage', DataType.Float64)
  .addTimestampColumn('ts', Precision.Millisecond)
  .addRow(['server-01', 75.3, Date.now()]);

const result = await client.write(table);
console.log(`inserted ${result.value} rows`);
await client.close();
```

| Mode | API | When |
|---|---|---|
| Unary | `client.write(tables)` | <1k rows/s, mixed schemas, supports auto-create table |
| Streaming | `client.createStreamWriter()` | Sustained writes on a single connection |
| Bulk | `client.createBulkStreamWriter(schema)` | >10k rows/s, Arrow Flight DoPut with parallelism |
```ts
const stream = client.createStreamWriter();
for (const batch of batches) await stream.write(batch);
const { value } = await stream.finish();
```

The stream is not auto-retried; rebuild it on error.
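Since the SDK does not rebuild the stream for you, callers typically wrap it in a resume loop. A minimal sketch, written against a structural `WriterLike` stand-in rather than the SDK's actual `StreamWriter` type; the resume index and attempt cap are illustrative, and because a failed stream may already have applied some written batches, this gives at-least-once delivery:

```typescript
// Structural stand-in for the SDK's stream writer (not part of the package API).
interface WriterLike<T> {
  write(batch: T): Promise<void>;
  finish(): Promise<{ value: number }>;
}

// Rebuild the stream on failure and resume from the first unwritten batch.
async function writeWithRebuild<T>(
  makeStream: () => WriterLike<T>,
  batches: T[],
  maxAttempts = 3,
): Promise<number> {
  let attempt = 0;
  let next = 0; // index of the first batch not yet written on a live stream
  for (;;) {
    const stream = makeStream();
    try {
      for (; next < batches.length; next++) {
        await stream.write(batches[next]);
      }
      return (await stream.finish()).value;
    } catch (err) {
      if (++attempt >= maxAttempts) throw err;
      // The old stream is dead; the loop re-creates it and resumes at `next`.
      // Batches written on the failed stream may duplicate: at-least-once.
    }
  }
}
```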
```ts
// Prerequisite: the table exists with this schema. One unary write auto-creates it.
await client.write(buildTable().addRow([...sampleRow]));

const schema = buildTable().schema();
const bulk = await client.createBulkStreamWriter(schema);
for (const batch of batches) {
  await bulk.writeRows({ kind: 'rows', rows: batch });
}
const { totalAffectedRows } = await bulk.finish();
```

For LZ4 / ZSTD body compression, pass `{ compression: BulkCompression.Lz4 }`. See `examples/05-bulk-compression-lz4.ts`.
For fire-and-forget, use `writeRowsAsync(batch)`: it returns the request id and lets you submit the next batch immediately. Every id must be claimed with `waitForResponse(id)` before `finish()`; unclaimed acks accumulate in an internal map and are not auto-evicted, so skipping the claim in a long-running stream leaks memory. `writeRows(batch)` does the claim for you.
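One way to keep that claim discipline automatic is a bounded in-flight window. A sketch, assuming `writeRowsAsync` resolves to a numeric request id (check the actual signature) and using a structural `BulkWriterLike` stand-in for the SDK's bulk writer:

```typescript
// Structural stand-in for the SDK's bulk writer (assumed shapes, not the real API).
interface BulkWriterLike<T> {
  writeRowsAsync(batch: T): Promise<number>; // resolves to a request id
  waitForResponse(id: number): Promise<void>; // claims the ack for that id
  finish(): Promise<{ totalAffectedRows: number }>;
}

// Pipeline submissions but never let more than `window` ids go unclaimed,
// so the writer's internal ack map stays bounded.
async function pipelinedWrite<T>(
  bulk: BulkWriterLike<T>,
  batches: Iterable<T>,
  window = 4,
): Promise<number> {
  const inFlight: number[] = [];
  for (const batch of batches) {
    inFlight.push(await bulk.writeRowsAsync(batch));
    if (inFlight.length >= window) {
      await bulk.waitForResponse(inFlight.shift()!); // claim the oldest ack
    }
  }
  // Claim every remaining id before finish(), otherwise the acks leak.
  for (const id of inFlight) await bulk.waitForResponse(id);
  return (await bulk.finish()).totalAffectedRows;
}
```

The window trades throughput for memory: larger windows overlap more server round-trips, smaller windows keep fewer acks outstanding.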
Stage-3 decorators. Keep `experimentalDecorators` off in your `tsconfig`.
```ts
import { Client, DataType, Precision, field, tableName, tag, timestamp } from '@greptime/ingester';

@tableName('cpu_usage')
class CpuMetric {
  @tag(DataType.String) host!: string;
  @field(DataType.Float64) usage!: number;
  @timestamp({ precision: Precision.Millisecond }) ts!: number | Date;
}

await client.writeObject([
  Object.assign(new CpuMetric(), { host: 'a', usage: 1.5, ts: Date.now() }),
]);
```

```ts
Client.create('host:port')
  .withDatabase('public')
  .withBasicAuth('user', 'pw')
  .withTls({ kind: 'system' })
  .withRetry({ mode: 'aggressive', maxAttempts: 3 })
  .build();
```

Full reference: docs/configuration.md.
All errors extend `IngesterError`. Non-retriable: `ConfigError`, `SchemaError`, `ValueError`, `StateError`. Retriable or case-by-case: `TransportError` (`.grpcCode`), `ServerError` (`.statusCode`), `TimeoutError`, `AbortedError`, `BulkError`.
Classify with `isRetriable(err, 'aggressive' | 'conservative')`. The default is `aggressive` and mirrors the Rust SDK; `conservative` narrows to transient gRPC codes only.
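If you drive retries yourself rather than using the built-in policy, the classifier slots into an ordinary backoff loop. A sketch with a generic `retriable` predicate standing in for `isRetriable` (you would pass e.g. `(e) => isRetriable(e, 'conservative')`); the full-jitter formula matches the strategy described above, but the base and cap constants here are illustrative:

```typescript
// Full jitter: uniform delay in [0, min(cap, base * 2^attempt)).
// Constants are illustrative, not the SDK's defaults.
function fullJitterDelay(attempt: number, baseMs = 100, capMs = 5_000): number {
  return Math.random() * Math.min(capMs, baseMs * 2 ** attempt);
}

// Retry `op` while the classifier says the error is transient.
async function retry<T>(
  op: () => Promise<T>,
  retriable: (err: unknown) => boolean,
  maxAttempts = 3,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await op();
    } catch (err) {
      if (attempt + 1 >= maxAttempts || !retriable(err)) throw err;
      await new Promise((r) => setTimeout(r, fullJitterDelay(attempt)));
    }
  }
}
```

Full jitter draws the whole delay from a uniform range, which decorrelates concurrent clients retrying against the same server.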
| File | What |
|---|---|
| `examples/01-simple-insert.ts` | Table builder → `client.write` |
| `examples/02-insert-object-decorators.ts` | `@tableName` / `@tag` / `@field` / `@timestamp` + `writeObject` |
| `examples/03-stream-insert.ts` | `StreamWriter` with 10k rows |
| `examples/04-bulk-insert.ts` | Unary bootstrap → bulk 100k rows |
| `examples/05-bulk-compression-lz4.ts` | LZ4 frame compression on the bulk path |
| `examples/06-auth-and-tls.ts` | Basic auth + TLS config |
| `examples/07-multi-endpoint-lb.ts` | Multiple endpoints, random LB |
| `examples/08-abort-and-retry.ts` | `AbortSignal` + conservative retry |

Run any of them with `pnpm example <name>` after `./scripts/run-greptimedb.sh` starts a local server.
Writing to GreptimeDB from Node.js? The bulk path is the fastest option by a wide margin. Median of 3 runs against a local GreptimeDB on an Apple M4 Max, 1M rows with the 4-tag / 5-field CPU schema, default SDK config, parallelism=8:
| JS client | batch=1000 | batch=5000 | Relative |
|---|---|---|---|
| `@greptime/ingester` (bulk) | 789k r/s | 758k r/s | baseline |
| `@opentelemetry/exporter-logs-otlp-proto` | 679k r/s | 638k r/s | 0.85× |
| `@influxdata/influxdb-client` | 494k r/s | 520k r/s | 0.66× |
Same schema, same data generator, same server, same Node.js runtime; each client is driven with its own default configuration. Arrow Flight ships the batch already in columnar form, so the server skips text/proto parsing and per-attribute column mapping.
On the 22-column log schema the bulk path reaches ~137k rows/s (2M rows, batch=5000). Unary and streaming numbers, the exact SDK-usage decisions behind each bench, and reproduction commands: docs/benchmarking.md.
- Tested: Node.js 22.x + GreptimeDB 1.0.0 / latest.
- Node.js 20.x: supported (minimum).
- Bun / Deno (node-compat): best-effort, not CI-gated.
See docs/divergences.md for where the TS SDK intentionally differs from the Rust / Go SDKs.
- Multi-endpoint failover
  - Pluggable `EndpointSelector` (random / round-robin / health-aware with outlier detection), replacing the current fixed random pick
  - Retry-time exclusion of already-failed peers so a single dead endpoint cannot burn the entire retry budget
  - Auto-reconnect wrapper for streaming and bulk writers (re-pick endpoint, rebuild the stream, surface a resumable handle)
- JSON v2 column type (binary JSON encoding)
- OpenTelemetry instrumentation of the SDK itself (write latency, retries, bulk/stream state as metrics + spans)
- Browser build via gRPC-Web in a separate `@greptime/ingester-web` package (unary + streaming only; no bulk)
- Documentation: configuration, benchmarking, divergences
- Contributing
- Security policy
- Changelog
- Issues
Apache-2.0 — see LICENSE.