Error reference

When working with Prisma Postgres, you may encounter errors during development and operations, often highlighted by specific error codes.

It is important to understand the meaning of these errors, why they occur, and how to resolve them in order to ensure the smooth operation of your applications. This guide aims to provide insights and steps to troubleshoot specific error codes encountered with Prisma Postgres.

P6009 (ResponseSizeLimitExceeded)

This error is triggered when the response size from a database query exceeds the configured query response size limit. We've implemented this restriction to safeguard your application performance, as retrieving data over 5MB can significantly slow down your application due to multiple network layers.

Typically, transmitting more than 5MB of data is common when conducting ETL (Extract, Transform, Load) operations. However, it should generally be avoided in other scenarios such as transactional queries, real-time data fetching for user interfaces, bulk data updates, or aggregating large datasets for analytics outside of ETL contexts. These use cases, while essential, can often be optimized to work within the configured query response size limit, ensuring smoother performance and a better user experience.

Possible causes for P6009

Transmitting images/files in response

This error may arise if images or files stored within your table are being fetched, resulting in a large response size. Storing assets directly in the database is generally discouraged because it significantly impacts database performance and scalability. In addition to performance, it makes database backups slow and significantly increases the cost of storing routine backups.

Suggested solution: Configure the query response size limit to be larger. If the limit is still exceeded, consider storing the image or file in a BLOB store like Cloudflare R2, AWS S3, or Cloudinary. These services allow you to store assets optimally and return a URL for access. Instead of storing the asset directly in the database, store the URL, which will substantially reduce the response size.

Over-fetching of data

In certain cases, a large number of records or fields are unintentionally fetched, which results in exceeding the configured query response size limit. This can happen when the `where` clause in the query is incorrect or entirely missing.

Suggested solution: Configure the query response size limit to be larger. If the limit is still exceeded, double-check that the `where` clause is filtering data as expected. To prevent fetching too many records, consider using pagination. Additionally, use the `select` clause to return only the necessary fields, reducing the response size.
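As a sketch of both fixes, the snippet below mimics Prisma Client's `findMany` with `select`, `take`, and a cursor against an in-memory table. The `User` shape, field names, and the `findManyPage` stand-in are illustrative, not an actual Prisma API call:

```typescript
// Illustrative stand-in for a database table; in a real app these rows
// would live in Postgres and be queried via Prisma Client.
type User = { id: number; name: string; bio: string };

const rows: User[] = Array.from({ length: 10 }, (_, i) => ({
  id: i + 1,
  name: `user${i + 1}`,
  bio: "x".repeat(1000), // large column we deliberately do NOT select
}));

// Mimics prisma.user.findMany({ select: { id, name }, take, cursor }):
// returns one small page containing only the fields the caller needs.
function findManyPage(take: number, cursorId?: number) {
  const start = cursorId ? rows.findIndex((r) => r.id === cursorId) + 1 : 0;
  return rows.slice(start, start + take).map(({ id, name }) => ({ id, name }));
}

// Cursor pagination: repeatedly fetch small pages instead of issuing one
// query whose response blows past the size limit.
function fetchAllPaginated(pageSize: number) {
  const result: { id: number; name: string }[] = [];
  let cursorId: number | undefined;
  for (;;) {
    const page = findManyPage(pageSize, cursorId);
    if (page.length === 0) break;
    result.push(...page);
    cursorId = page[page.length - 1].id;
  }
  return result;
}
```

Each page stays small because the large `bio` column is never selected and at most `pageSize` rows travel over the wire per request.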

Fetching a large volume of data

In many data processing workflows, especially those involving ETL (Extract-Transform-Load) processes or scheduled CRON jobs, there's a need to extract large amounts of data from data sources (like databases, APIs, or file systems) for analysis, reporting, or further processing. If you are running an ETL/CRON workload that fetches a huge chunk of data for analytical processing, you might run into this limit.

Suggested solution: Configure the query response size limit to be larger. If the limit is exceeded, consider splitting your query into batches. This approach ensures that each batch fetches only a portion of the data, preventing you from exceeding the size limit for a single operation.
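One way to batch such a workload is to split the table into consecutive primary-key ranges and issue one bounded query per range. This is a sketch assuming integer primary keys; each yielded range would feed a hypothetical `findMany({ where: { id: { gte, lt } } })` call:

```typescript
// Generate consecutive half-open [gte, lt) id ranges covering
// minId..maxId, so one large table read becomes several bounded queries.
function* idRanges(minId: number, maxId: number, batchSize: number) {
  for (let lo = minId; lo <= maxId; lo += batchSize) {
    yield { gte: lo, lt: Math.min(lo + batchSize, maxId + 1) };
  }
}
```

Because the ranges never overlap and together cover the whole key space, every row is fetched exactly once while each individual response stays under the limit.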

P6004 (QueryTimeout)

This error occurs when a database query fails to return a response within the configured query timeout limit. The query timeout limit includes the duration of waiting for a connection from the pool, network latency to the database, and the execution time of the query itself. We enforce this limit to prevent unintentional long-running queries that can overload system resources.

Info

The time for Prisma Postgres's cross-region networking is excluded from the configured query timeout limit.

Possible causes for P6004

This error can be caused by numerous reasons. Some of the most prominent ones are:

High traffic and insufficient connections

If the application is receiving very high traffic and there are not a sufficient number of connections available to the database, then queries need to wait for a connection to become available. This situation can lead to queries waiting longer than the configured query timeout limit for a connection, ultimately triggering a timeout error if they do not get serviced within this duration.

Suggested solution: Review and possibly increase the `connection_limit` specified as a connection string parameter when setting up Accelerate in a platform environment (reference). This limit should align with your database's maximum number of connections.

By default, the connection limit is set to 10 unless a different `connection_limit` is specified in your database connection string.
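For example, a connection string that raises the pool limit to 20 might look like this (placeholder host and credentials; only the `connection_limit` parameter is the point, and its value should stay within your database's maximum connections):

```bash
# .env — placeholder credentials; connection_limit overrides the default of 10
DATABASE_URL="postgresql://user:password@db.example.com:5432/mydb?connection_limit=20"
```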

Long-running queries

Queries may be slow to respond, hitting the configured query timeout limit even when connections are available. This can happen if a very large amount of data is being fetched in a single query or if appropriate indexes are missing from the table.

Suggested solution: Configure the query timeout limit to be larger. If the limit is exceeded, identify the slow-running queries and fetch only the necessary data. Use the `select` clause to retrieve specific fields and avoid fetching unnecessary data. Additionally, consider adding appropriate indexes to improve query efficiency. You might also isolate long-running queries into separate environments to prevent them from affecting transactional queries.
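For instance, if a slow query filters orders by customer and sorts by creation date, a composite index in the Prisma schema can let the database serve it without a full table scan. This is a sketch with a hypothetical `Order` model:

```prisma
model Order {
  id         Int      @id @default(autoincrement())
  customerId Int
  total      Decimal
  createdAt  DateTime @default(now())

  // Serves queries that filter by customerId and sort by createdAt, e.g.
  // findMany({ where: { customerId }, orderBy: { createdAt: "desc" } })
  @@index([customerId, createdAt])
}
```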

Database resource contention

A common yet challenging issue is when other services operating on the same database perform heavy analytics or data processing tasks, significantly consuming database resources. These operations can monopolize database connections and processing power, leading to a scenario where even simple queries cannot be executed in a timely manner. This "busy" or "noisy" database environment can cause queries that are typically fast to run slowly or even time out, particularly during periods of high activity from other services.

Users often rely on CPU and memory usage metrics to gauge database load, which can be misleading. While these are important indicators, they might not fully represent the database's operational state. Direct metrics like the number of reads, writes, and wait times offer a clearer view of the database's performance and should be monitored closely. A noticeable degradation in these metrics, especially in the absence of changes to the queries or data model, suggests that external pressures are affecting database performance.

Suggested solution: If normally quick queries are intermittently slow or timing out without any modifications to them, it's probable that competing queries are exerting pressure on the same database tables. To diagnose this, adopt monitoring tools or leverage your database's inherent capabilities to observe reads, writes, and wait times. Such monitoring will unveil activity patterns or spikes that align with the observed performance dips.

Moreover, it's crucial to periodically scrutinize and refine essential queries and verify that tables are properly indexed. This proactive approach minimizes the vulnerability of these queries to slowdowns caused by competing workloads.

P6008 (ConnectionError | EngineStartError)

This error indicates that Prisma ORM cannot establish a connection to your Prisma Postgres database, potentially due to several reasons.

Possible causes for P6008

Unreachable database host/port

If the database's server address (hostname) and port are incorrect or unreachable, you may encounter this error.

Suggested solution: Verify the hostname/port of the database connection string that was provided while creating the Prisma Accelerate project. Additionally, attempt to connect to the database using a database GUI tool (e.g., Prisma Studio, TablePlus, or DataGrip) for further investigation.

Incorrect username/password/database name

This error can happen when the wrong credentials are provided, preventing a connection to your database from being established.

Suggested solution: Verify the correctness of your database's username, password, and name in the connection string provided to Prisma Accelerate. Ensure that these credentials match those required by your database. Testing the connection using a direct database GUI tool can also help confirm whether the provided credentials are correct.

P5011 (TooManyRequests)

This error occurs when Prisma Postgres detects a high volume of requests that surpasses allowable thresholds. It acts as a protective measure to safeguard both Prisma Postgres and your underlying database from excessive load.

Possible causes for P5011

Aggressive retry loops

If your application retries queries immediately or with minimal delay, especially after receiving certain errors, the rapid accumulation of requests can surpass the threshold.

Suggested solution:

  • Implement an exponential backoff strategy. Rather than retrying immediately or with a fixed delay, gradually increase the delay period after each failed attempt.

  • This gives the system time to recover and reduces the likelihood of overwhelming Prisma Accelerate and your database.
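A minimal sketch of such a backoff, using exponential growth with full jitter; the base delay, cap, and attempt count below are illustrative defaults, not values prescribed by Prisma:

```typescript
// Delay before retry `attempt` (0-based): 100ms, 200ms, 400ms, ...
// capped at maxDelayMs, with full jitter so clients don't retry in lockstep.
function backoffDelay(attempt: number, baseMs = 100, maxDelayMs = 5000): number {
  const cap = Math.min(baseMs * 2 ** attempt, maxDelayMs);
  return Math.floor(Math.random() * cap);
}

// Run `op`, retrying with backoff on failure; rethrows the last error
// once maxAttempts is exhausted.
async function withRetry<T>(op: () => Promise<T>, maxAttempts = 5): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await op();
    } catch (err) {
      lastErr = err;
      await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)));
    }
  }
  throw lastErr;
}
```

In practice you would only retry on transient errors such as P5011, and let non-retryable errors (for example, bad credentials) fail immediately.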

Sudden traffic spikes

Unpredicted traffic surges (for example, during product launches, flash sales, or viral growth events) can cause the threshold to be met and result in P5011.

Suggested solution:

  • Consider proactive scaling strategies for both Prisma Accelerate and your database.

  • Monitor traffic and resource usage. If you anticipate a surge, contact support for capacity planning and potential configuration adjustments.

Prolonged or planned high workloads

Certain processes, such as bulk data imports, ETL operations, or extended CRON jobs, can generate continuous high query volume over time.

Suggested solution:

  • Use batching or chunking techniques to break large operations into smaller parts.

  • Establish throttling or scheduling to distribute the load more evenly.
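The chunking advice above can be sketched as a simple helper; each chunk might then be passed to, say, a separate `createMany` call or scheduled with a delay between batches (illustrative usage, not a specific Prisma API):

```typescript
// Split a large array of items into fixed-size chunks so a bulk
// operation can be issued as several smaller requests.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```

Combining chunking with a small pause between batches spreads the load over time instead of presenting it to the database all at once.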