Serverless driver
The Prisma Postgres serverless driver (@prisma/ppg) is a lightweight client for connecting to Prisma Postgres using raw SQL. It uses HTTP and WebSocket protocols instead of traditional TCP connections, enabling database access in constrained environments where native PostgreSQL drivers cannot run.
The Prisma Postgres serverless driver is currently in Early Access and not yet recommended for production scenarios.
Key features
The serverless driver uses HTTP and WebSocket protocols instead of TCP, enabling database access in environments where traditional PostgreSQL drivers cannot run:
- Compatible with Cloudflare Workers, Vercel Edge Functions, Deno Deploy, AWS Lambda, Bun, and browsers
- Stream results row-by-row to handle large datasets with constant memory usage
- Pipeline multiple queries over a single connection, reducing latency by up to 3x
- SQL template literals with automatic parameterization and full TypeScript support
- Built-in transactions, batch operations, and an extensible type system
- Automatic connection pooling across all available Prisma Postgres regions for optimal performance
Use this driver in edge or serverless environments without full Node.js support, or when working with large result sets that benefit from streaming.
For standard Node.js environments, use the node-postgres driver for lower latency over direct TCP connections.
Prerequisite: Get your connection string
The serverless driver requires a Prisma Postgres Direct TCP connection URL:
postgres://identifier:key@db.prisma.io:5432/postgres?sslmode=require
Find this in the API Keys section of your project in the Prisma Console. The connection string is used only to extract authentication credentials; no direct TCP connection is made from the client.
If you don't have a Prisma Postgres database, create one using the create-db CLI tool:
npx prisma create-db
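The examples on this page read the connection string from an environment variable. The name PRISMA_DIRECT_TCP_URL is simply the convention used in the snippets below; any name works as long as it matches your code. A minimal .env entry might look like this:
PRISMA_DIRECT_TCP_URL="postgres://identifier:key@db.prisma.io:5432/postgres?sslmode=require"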
Installation
Install the appropriate package based on your use case:
- Without Prisma ORM
- With Prisma ORM
npm install @prisma/ppg
npm install @prisma/ppg @prisma/adapter-ppg
Usage
Query with SQL template literals
Use the prismaPostgres() high-level API with SQL template literals and automatic parameterization:
import { prismaPostgres, defaultClientConfig } from "@prisma/ppg"
const ppg = prismaPostgres(
defaultClientConfig(process.env.PRISMA_DIRECT_TCP_URL!)
)
type User = { id: number; name: string; email: string }
// SQL template literals with automatic parameterization
const users = await ppg.sql<User>`
SELECT * FROM users WHERE email = ${'user@example.com'}
`.collect()
console.log(users[0].name)
Use with Prisma ORM
Use the PrismaPostgresAdapter to connect Prisma Client via the serverless driver:
import { PrismaClient } from '../generated/prisma/client'
import { PrismaPostgresAdapter } from '@prisma/adapter-ppg'
const prisma = new PrismaClient({
adapter: new PrismaPostgresAdapter({
connectionString: process.env.PRISMA_DIRECT_TCP_URL,
}),
})
const users = await prisma.user.findMany()
Stream results
Results are returned as a CollectableIterator&lt;T&gt;. Stream rows one at a time for constant memory usage, or collect all rows into an array:
type User = { id: number; name: string; email: string }
// Stream rows one at a time (constant memory usage)
for await (const user of ppg.sql<User>`SELECT * FROM users`) {
console.log(user.name)
}
// Or collect all rows into an array
const allUsers = await ppg.sql<User>`SELECT * FROM users`.collect()
Pipeline queries
Send multiple queries over a single WebSocket connection without waiting for responses. Queries are sent immediately and results arrive in FIFO order:
import { client, defaultClientConfig } from "@prisma/ppg"
const cl = client(defaultClientConfig(process.env.PRISMA_DIRECT_TCP_URL!))
const session = await cl.newSession()
// Send all queries immediately (pipelined)
const [usersResult, ordersResult, productsResult] = await Promise.all([
session.query("SELECT * FROM users"),
session.query("SELECT * FROM orders"),
session.query("SELECT * FROM products")
])
session.close()
With 100ms of network latency, 3 sequential queries take 300ms (3 x RTT), while pipelined queries take only 100ms (1 x RTT).
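For contrast, here is a minimal sketch of the sequential pattern that the pipelined example above avoids. It uses the same session API shown above; only the awaiting pattern differs, so each query pays a full round trip before the next one is sent:
import { client, defaultClientConfig } from "@prisma/ppg"
const cl = client(defaultClientConfig(process.env.PRISMA_DIRECT_TCP_URL!))
const session = await cl.newSession()
// Sequential: each await blocks until the previous response arrives (~3 x RTT total)
const users = await session.query("SELECT * FROM users")
const orders = await session.query("SELECT * FROM orders")
const products = await session.query("SELECT * FROM products")
session.close()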
Parameter streaming
Parameters over 1KB are automatically streamed without buffering in memory. For large binary parameters, you must use boundedByteStreamParameter(), which creates a BoundedByteStreamParameter object that carries the total byte size required by the PostgreSQL protocol:
import { client, defaultClientConfig, boundedByteStreamParameter, BINARY } from "@prisma/ppg"
const cl = client(defaultClientConfig(process.env.PRISMA_DIRECT_TCP_URL!))
// Large binary data (e.g., file content)
const stream = getReadableStream() // Your ReadableStream source
const totalSize = 1024 * 1024 // Total size must be known in advance
// Create a bounded byte stream parameter
const streamParam = boundedByteStreamParameter(stream, BINARY, totalSize)
// Automatically streamed - constant memory usage
await cl.query("INSERT INTO files (data) VALUES ($1)", streamParam)
For Uint8Array data, use byteArrayParameter():
import { client, defaultClientConfig, byteArrayParameter, BINARY } from "@prisma/ppg"
const cl = client(defaultClientConfig(process.env.PRISMA_DIRECT_TCP_URL!))
const bytes = new Uint8Array([1, 2, 3, 4])
const param = byteArrayParameter(bytes, BINARY)
await cl.query("INSERT INTO files (data) VALUES ($1)", param)
The boundedByteStreamParameter() function is provided by the @prisma/ppg library and requires the total byte size to be known in advance due to PostgreSQL protocol requirements.
Transactions and batch operations
Transactions automatically handle BEGIN, COMMIT, and ROLLBACK:
const result = await ppg.transaction(async (tx) => {
await tx.sql.exec`INSERT INTO users (name) VALUES ('Alice')`
const users = await tx.sql<User>`SELECT * FROM users WHERE name = 'Alice'`.collect()
return users[0].name
})
Batch operations execute multiple statements in a single round trip within an automatic transaction:
const [users, affected] = await ppg.batch<[User[], number]>(
{ query: "SELECT * FROM users WHERE id < $1", parameters: [5] },
{ exec: "INSERT INTO users (name) VALUES ($1)", parameters: ["Charlie"] }
)
Type handling
When using defaultClientConfig(), common PostgreSQL types are automatically parsed (boolean, int2, int4, int8, float4, float8, text, varchar, json, jsonb, date, timestamp, timestamptz):
import { prismaPostgres, defaultClientConfig } from "@prisma/ppg"
const ppg = prismaPostgres(defaultClientConfig(process.env.PRISMA_DIRECT_TCP_URL!))
// JSON/JSONB automatically parsed
const rows = await ppg.sql<{ data: { key: string } }>`
SELECT '{"key": "value"}'::jsonb as data
`.collect()
console.log(rows[0].data.key) // "value"
// BigInt parsed to JavaScript BigInt
const bigints = await ppg.sql<{ big: bigint }>`SELECT 9007199254740991::int8 as big`.collect()
// Dates parsed to Date objects
const dates = await ppg.sql<{ created: Date }>`SELECT NOW() as created`.collect()
Custom parsers and serializers
Extend or override the type system with custom parsers (matched by PostgreSQL OID) and serializers (matched by type guard):
import { client, defaultClientConfig } from "@prisma/ppg"
import type { ValueParser } from "@prisma/ppg"
// Custom parser for UUID type
const uuidParser: ValueParser<string> = {
oid: 2950,
parse: (value) => value ? value.toUpperCase() : null
}
const config = defaultClientConfig(process.env.PRISMA_DIRECT_TCP_URL!)
const cl = client({
...config,
parsers: [...config.parsers, uuidParser] // Append to defaults
})
For custom serializers, place them before the defaults so they take precedence:
import type { ValueSerializer } from "@prisma/ppg"
class Point { constructor(public x: number, public y: number) {} }
const pointSerializer: ValueSerializer<Point> = {
supports: (value: unknown): value is Point => value instanceof Point,
serialize: (value: Point) => `(${value.x},${value.y})`
}
const config = defaultClientConfig(process.env.PRISMA_DIRECT_TCP_URL!)
const cl = client({
...config,
serializers: [pointSerializer, ...config.serializers] // Your serializer first
})
await cl.query("INSERT INTO locations (point) VALUES ($1)", new Point(10, 20))
See the npm package documentation for more details.
Platform compatibility
The driver works in any environment with fetch and WebSocket APIs:
| Platform | HTTP transport | WebSocket transport |
|---|---|---|
| Cloudflare Workers | ✅ | ✅ |
| Vercel Edge Functions | ✅ | ✅ |
| AWS Lambda | ✅ | ✅ |
| Deno Deploy | ✅ | ✅ |
| Bun | ✅ | ✅ |
| Node.js 18+ | ✅ | ✅ |
| Browsers | ✅ | ✅ (with CORS enabled) |
Transport modes
- HTTP transport (stateless): Each query is an independent HTTP request. Best for simple queries and edge functions.
- WebSocket transport (stateful): A persistent connection for multiplexed queries. Best for transactions, pipelining, and multiple queries. Create a session with client().newSession() (see the sketch after this list).
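As a rough sketch of how the two modes map onto the APIs shown on this page: a one-off query needs no session state, while a session opened with client().newSession() keeps a stateful connection alive for multiple queries. Which transport the driver uses for a given call is an implementation detail; this only illustrates the two usage styles:
import { prismaPostgres, client, defaultClientConfig } from "@prisma/ppg"
const config = defaultClientConfig(process.env.PRISMA_DIRECT_TCP_URL!)
// One-off query: no session to manage
const ppg = prismaPostgres(config)
const now = await ppg.sql<{ now: Date }>`SELECT NOW() as now`.collect()
// Stateful session: reuse one connection for several queries, then close it
const cl = client(config)
const session = await cl.newSession()
await session.query("SELECT 1")
await session.query("SELECT 2")
session.close()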
API overview
prismaPostgres(config)
High-level API with SQL template literals, transactions, and batch operations. Recommended for most use cases.
client(config)
Low-level API with explicit parameter passing and session management. Use it when you need fine-grained control.
See the npm package for complete API documentation.
Error handling
Structured error types are provided: DatabaseError, HttpResponseError, WebSocketError, ValidationError.
import { DatabaseError } from "@prisma/ppg"
try {
await ppg.sql`SELECT * FROM invalid_table`.collect()
} catch (error) {
if (error instanceof DatabaseError) {
console.log(error.code)
}
}
Connection pooling enabled by default
The serverless driver automatically uses connection pooling across all available Prisma Postgres regions for optimal performance and resource utilization.
Connection pooling is enabled by default and requires no additional configuration.
This ensures efficient database connections regardless of your deployment region, reducing connection overhead and improving query performance.
Limitations
- Requires a Prisma Postgres instance; does not work with local development databases
- Currently in Early Access and not yet recommended for production
Learn more