⭐ feat(core): Add usage billing #268
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: dev
Conversation
**Walkthrough**

This update introduces new billing and usage tracking capabilities to the core package. It adds a PostgreSQL schema for usage tracking with a credits type enum, a corresponding Zod schema for API validation, and example usage objects. The member table's `role` column is converted from text to a native PostgreSQL enum.
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
    participant Client
    participant API
    participant BillingSchema
    participant Database
    Client->>API: Submit usage billing info
    API->>BillingSchema: Validate with Billing.Info (Zod)
    BillingSchema-->>API: Validation result
    API->>Database: Store usage in 'usage' table (credits type enum)
    Database-->>API: Confirmation
    API-->>Client: Success response
```
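The validate-then-store flow in the diagram can be sketched as a small TypeScript stand-in. All names here, such as `submitUsage`, `validateUsage`, and `usageTable`, are illustrative and not part of the PR; the real code validates with the Zod `Billing.Info` schema and persists via Drizzle.

```typescript
// Hypothetical sketch of the flow: validate input, then store usage.
type CreditsType = "gpu" | "bandwidth" | "storage";

interface UsageInput {
  id: string;
  creditsUsed: number;
  type: CreditsType;
}

const CREDIT_TYPES: CreditsType[] = ["gpu", "bandwidth", "storage"];

// Stand-in for the Zod validation step (Billing.Info in the PR).
function validateUsage(input: UsageInput): string[] {
  const errors: string[] = [];
  if (!input.id) errors.push("id is required");
  if (!(input.creditsUsed > 0)) errors.push("creditsUsed must be positive");
  if (!CREDIT_TYPES.includes(input.type)) errors.push("unknown credits type");
  return errors;
}

// Stand-in for the Drizzle insert into the `usage` table.
const usageTable = new Map<string, UsageInput>();

function submitUsage(input: UsageInput): { ok: boolean; errors: string[] } {
  const errors = validateUsage(input);
  if (errors.length > 0) return { ok: false, errors };
  usageTable.set(input.id, input);
  return { ok: true, errors: [] };
}
```

The shape of the success/error result mirrors the "Validation result" and "Success response" steps above.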
Actionable comments posted: 4
🧹 Nitpick comments (6)
packages/core/src/examples.ts (2)
**37-43**: Inconsistent naming convention for examples.

While the examples are functionally correct, `game` and `session` use lowercase naming, which is inconsistent with other examples in this file that use PascalCase (like `Steam`, `User`, `Product`, etc.).

```diff
-export const game = {
+export const Game = {
   id: Id("game")
 }

-export const session = {
+export const Session = {
   id: Id("session")
 }
```
**45-51**: Consider expanding usage examples for all usage types.

The code comment indicates other possible usage types (`bandwidth`, `storage`), but only `gpu` is represented in the example. Consider adding examples for the other usage types or documenting the complete set of valid types.

```diff
 export const Usage = {
   id: Id("usage"),
   creditsUsed: 20,
   type: "gpu" as const, //or bandwidth, storage
   game: [game],
   session: [session]
 }
+
+export const BandwidthUsage = {
+  id: Id("usage"),
+  creditsUsed: 5,
+  type: "bandwidth" as const,
+  game: [game],
+  session: [session]
+}
+
+export const StorageUsage = {
+  id: Id("usage"),
+  creditsUsed: 10,
+  type: "storage" as const,
+  game: [game],
+  session: [session]
+}
```

Also, consider adding a comment explaining why `game` and `session` are arrays rather than single objects, to clarify the data model relationships.

packages/core/src/billing/index.ts (3)
**19-20**: Clarify intention of commented properties

The code has commented placeholders for `game` and `session` properties. If these are intended for future implementation, consider adding TODO comments with context about when and why they should be implemented.

```diff
-  // game:
-  // session:
+  // TODO: Add game relationship when game entity is fully implemented
+  // TODO: Add session relationship when session tracking is implemented
```
**12-15**: Add validation constraints for creditsUsed

The `creditsUsed` property is defined as a number without any constraints. Consider adding validation to ensure it's positive and within an acceptable range.

```diff
-  creditsUsed: z.number().openapi({
+  creditsUsed: z.number().positive().openapi({
     description: "The credits used",
     example: Examples.Usage.creditsUsed
   }),
```
**7-21**: Consider making schema properties explicitly required

While Zod object properties are required by default, it's a good practice to be explicit about it, especially when some fields might be optional in the future. You could modify the schema to use `.required()` on the object itself:

```diff
 export const Info = z.object({
   id: z.string().openapi({
     description: Common.IdDescription,
     example: Examples.Usage.id,
   }),
   creditsUsed: z.number().openapi({
     description: "The credits used",
     example: Examples.Usage.creditsUsed
   }),
   type: z.enum(CreditsType).openapi({
     description: "The type of credits this was billed on"
   }),
   // game:
   // session:
-})
+}).required()
```

packages/core/src/billing/billing.sql.ts (1)
**7-15**: Add indexes for frequently queried columns

Consider adding indexes to columns that will be frequently used in WHERE clauses, such as `team_id`, `type`, and time-related columns, to improve query performance.

```diff
 export const usage = pgTable(
   "usage",
   {
     ...teamID,
     ...timestamps,
     type: creditsEnum("type").notNull(),
     creditsUsed: bigint("credits_used", { mode: "number" }).notNull(),
-  }
+  },
+  (usage) => {
+    return {
+      teamIdx: index("usage_team_id_idx").on(usage.teamID),
+      typeIdx: index("usage_type_idx").on(usage.type),
+      timeCreatedIdx: index("usage_time_created_idx").on(usage.timeCreated),
+    }
+  }
 )
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (6)
- `packages/core/package.json` (1 hunks)
- `packages/core/src/billing/billing.sql.ts` (1 hunks)
- `packages/core/src/billing/index.ts` (1 hunks)
- `packages/core/src/examples.ts` (1 hunks)
- `packages/core/src/member/member.sql.ts` (1 hunks)
- `packages/core/src/utils/id.ts` (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (3)
packages/core/src/member/member.sql.ts (1)
- `packages/core/src/drizzle/types.ts` (3): `teamID` (12-19), `teamID` (16-18), `timestamps` (36-40)
packages/core/src/billing/index.ts (1)
- `packages/core/src/billing/billing.sql.ts` (1): `CreditsType` (4-4)
packages/core/src/billing/billing.sql.ts (1)
- `packages/core/src/drizzle/types.ts` (1): `timestamps` (36-40)
🔇 Additional comments (4)
packages/core/src/utils/id.ts (1)
**13-15**: Clean addition of new ID prefixes.

The new prefixes for usage, game, and session entities follow the existing naming pattern and support the billing functionality being introduced.
packages/core/src/billing/index.ts (1)
**1-27**: Looks good overall

The schema definition is well-structured and uses Zod correctly for validation. The OpenAPI metadata provides good documentation, and the integration with the database schema via the `CreditsType` enum ensures consistency.
packages/core/src/billing/billing.sql.ts (2)
**4-4**: LGTM: Credits type enum looks good

The `CreditsType` enum is well-defined with common resource types (gpu, bandwidth, storage) and correctly typed as a const array.
**13-13**: ❓ Verification inconclusive

**Verify bigint is appropriate for creditsUsed**

Using `bigint` with `{ mode: "number" }` is appropriate if you expect potentially large credit values, but ensure your application correctly handles the potential limitations of JavaScript numbers for very large values.
🌐 Web query:
What are the limitations of using bigint with mode: "number" in Drizzle ORM?
💡 Result:
**Limitations of Using `bigint` with `mode: "number"` in Drizzle ORM**

Using the `bigint` column type with `mode: "number"` in Drizzle ORM is convenient for JavaScript compatibility, but it carries significant limitations due to the nature of JavaScript numbers and database value ranges:

**1. Precision Loss Beyond 2^53**
- JavaScript's `number` type uses the IEEE 754 double-precision floating-point format, which can only safely represent integers between −(2^53 − 1) and 2^53 − 1. PostgreSQL `bigint` (or `int8`) can represent values from −2^63 to 2^63 − 1.
- If your database column stores values outside the JavaScript safe integer range, using `mode: "number"` will lead to silent precision loss, rounding errors, or incorrect data[1].

**2. Not Suitable for Arbitrarily Large Values**
- If you expect to deal with `bigint` values that might exceed JavaScript's safe integer range, you should use `mode: "bigint"` to map directly to JavaScript's `BigInt` type. Using `mode: "number"` in such cases is unsafe, and Drizzle ORM documentation explicitly warns against it[1][8].

**3. Serialization and Integration Issues**
- Using `mode: "bigint"` can cause issues with JSON serialization, since the standard `JSON.stringify` does not support `BigInt` values, leading to runtime errors like `TypeError: Do not know how to serialize a BigInt`[2][6]. As a workaround, some developers switch back to `mode: "number"` to avoid these errors, but this merely masks the underlying precision limitation[2][6].
- If your ecosystem or integration layer expects all numbers to be serializable, mapping to `number` may be tempting but can produce subtle bugs if values exceed the safe integer range.

**4. Implications for Foreign Keys and Auto-Increment Columns**
- For auto-incrementing keys (like `bigserial`), Drizzle may default to `mode: "number"` if values are assumed small. However, if value growth is unbounded, or you interlink with other `bigint` columns, you risk overflows, potentially breaking referential integrity or causing difficult-to-diagnose bugs[2][1].

**5. API and Driver Inconsistencies**
- Different drivers may return `bigint` columns as either JavaScript `number` or `BigInt`, depending on driver configuration. If your Drizzle schema uses `mode: "number"` but the driver returns `BigInt`, you may encounter type mismatches, requiring careful manual handling[10].

**Summary Table**
| Limitation | Description |
| --- | --- |
| Precision limits | Safe only for values between −(2^53 − 1) and 2^53 − 1; above this, precision is lost[1] |
| Data corruption risk | Out-of-range values silently rounded or altered |
| Serialization safety | Using `mode: "bigint"` causes JSON errors; `mode: "number"` avoids this but at the cost of correctness[2][6] |
| Foreign key/autoincrement risk | High values in linked columns or serials could overflow JavaScript's safe range[2] |
| Driver/schema mismatch | Driver may return native `BigInt`, requiring manual conversion[10] |

**Best Practice**
, requiring manual conversion[10]Best Practice
- Use
mode: "number"
only when you are certain that all yourbigint
values will always be within JavaScript’s safe integer range[1][8].- For potentially large values, prefer
mode: "bigint"
, and handle serialization at the application boundary to avoidJSON.stringify
errors[2][6].In summary:
mode: "number"
withbigint
in Drizzle ORM is only safe for values below (2^{53}); exceeding this limit risks silent data corruption, especially for IDs, counters, or monetary amounts where full precision is required[1][8]. Always weigh these trade-offs in your schema design.Citations:
- 1: https://orm.drizzle.team/docs/column-types/pg
- 2: drizzle-team/drizzle-orm#2382
- 3: https://orm.drizzle.team/docs/column-types/mysql
- 4: https://orm.drizzle.team/docs/column-types/sqlite
- 5: https://orm.drizzle.team/docs/latest-releases/drizzle-orm-v0290
- 6: https://www.answeroverflow.com/m/1240639529962967111
- 7: https://orm.drizzle.team/docs/indexes-constraints
- 8: https://wanago.io/2024/08/19/api-nestjs-drizzle-identity-columns-postgresql/
- 9: https://app.studyraid.com/en/read/11288/352146/understanding-data-types-in-drizzle-orm
- 10: drizzle-team/drizzle-orm#1980
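The 2^53 boundary described above is easy to demonstrate directly in plain TypeScript, with no Drizzle involved: above `Number.MAX_SAFE_INTEGER`, distinct integers collapse to the same `number`, while `BigInt` keeps them distinct.

```typescript
// Demonstrates the safe-integer boundary: beyond 2^53 - 1, distinct
// integers are no longer representable as distinct JS numbers.
const max = Number.MAX_SAFE_INTEGER; // 2^53 - 1 = 9007199254740991

// max + 1 and max + 2 both round to 9007199254740992: precision is lost.
const numberCollision = max + 1 === max + 2; // true

// BigInt arithmetic is exact, so the same two values stay distinct.
const bigintCollision = BigInt(max) + 1n === BigInt(max) + 2n; // false
```

This is exactly why `mode: "number"` is only safe when `credits_used` is guaranteed to stay below 2^53.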
**Validate `bigint("credits_used", { mode: "number" })` for creditsUsed**

JavaScript's `number` type can only safely represent integers up to ±(2^53 − 1) (~9 × 10^15). PostgreSQL's `BIGINT` goes far beyond that, so any value above the JS safe range will be silently rounded or corrupted.

Please ensure one of the following:

- Your application's maximum `credits_used` will never exceed 2^53 − 1 (≈9e15).
- If larger values are possible, switch to `mode: "bigint"` (JS `BigInt`) and handle JSON serialization (e.g. convert to string or use a custom replacer).

Locations to review:

- packages/core/src/billing/billing.sql.ts: line 13

Let me know which approach you'd prefer so we can avoid silent precision loss.
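The "custom replacer" workaround mentioned above can be sketched like this; `bigintReplacer` is an illustrative name, not an API from the PR or from Drizzle.

```typescript
// JSON.stringify throws on BigInt, so convert BigInt values to
// strings at the serialization boundary with a custom replacer.
function bigintReplacer(_key: string, value: unknown): unknown {
  return typeof value === "bigint" ? value.toString() : value;
}

// A value just past Number.MAX_SAFE_INTEGER, kept exact as a BigInt.
const row = { creditsUsed: 9007199254740993n };
const json = JSON.stringify(row, bigintReplacer);
```

The consumer then parses `creditsUsed` back with `BigInt(parsed.creditsUsed)` instead of relying on a lossy `number`.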
```
    "@polar-sh/ingestion": "^0.2.2",
    "@polar-sh/sdk": "^0.32.10",
```
💡 Verification agent
🧩 Analysis chain
Verify dependency versions.
Adding the ingestion package and updating the SDK is aligned with adding billing functionality. However, it's worth verifying these are the most appropriate versions for your needs.
🏁 Script executed:
```shell
#!/bin/bash
# Description: Check for the latest versions of @polar-sh packages being used
echo "Checking latest versions of polar-sh packages on npm..."
npm view @polar-sh/ingestion version
npm view @polar-sh/sdk version
echo "Checking if there are any breaking changes in the changelog or release notes..."
curl -s "https://api.github.com/repos/polarsource/polar/releases" | grep -A 3 "\"name\": \"v0.3" || echo "No v0.3+ releases found"
```
Length of output: 526
Update @polar-sh/sdk to the latest patch release
Based on the npm registry and GitHub releases:
- @polar-sh/ingestion @^0.2.2 is already at the published latest (0.2.2).
- @polar-sh/sdk @^0.32.10 is one patch behind; the latest is 0.32.11.
- No v0.3+ releases (major bumps) are available yet.
Please bump the SDK dependency to pick up the newest patch:
```diff
--- a/packages/core/package.json
+++ b/packages/core/package.json
@@ lines 35-36
-    "@polar-sh/ingestion": "^0.2.2",
-    "@polar-sh/sdk": "^0.32.10",
+    "@polar-sh/ingestion": "^0.2.2",
+    "@polar-sh/sdk": "^0.32.11",
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-    "@polar-sh/ingestion": "^0.2.2",
-    "@polar-sh/sdk": "^0.32.10",
+    "@polar-sh/ingestion": "^0.2.2",
+    "@polar-sh/sdk": "^0.32.11",
```
```diff
@@ -1,15 +1,16 @@
 import { teamIndexes } from "../team/team.sql";
 import { timestamps, utc, teamID } from "../drizzle/types";
-import { index, pgTable, text, uniqueIndex, varchar } from "drizzle-orm/pg-core";
+import { index, pgEnum, pgTable, text, uniqueIndex, varchar } from "drizzle-orm/pg-core";
```
💡 Verification agent
🧩 Analysis chain
Good improvement using PostgreSQL native enum type.
Converting from a text column with validation to a native PostgreSQL enum type improves type safety and database integrity. However, this change requires a database migration strategy to handle existing data.
Make sure you have a migration plan in place for this schema change. Existing records with `role` column values will need to be converted to use the new enum type.
Also applies to: 6-6, 13-13
🏁 Script executed:
```shell
#!/bin/bash
# Description: Check if migration files exist for the member table role column change
echo "Looking for migration files related to member table:"
fd -t f "member.*migration" -E "*.md"
echo "Checking if there are any drizzle migration files:"
fd -t f "migration" -E "*.md" | grep -i "member\|role" || echo "No specific member role migration files found"
```
Length of output: 421
**Add migration for member.role enum change**

We didn't find any existing migration files handling the conversion of the `role` column from text to a native PostgreSQL enum. To ensure data integrity and a smooth rollout, please add a migration that:

- Creates the new enum type (e.g. `member_role_enum`) if it doesn't already exist
- Converts existing string values in `member.role` to the new enum type
- Alters the `member` table's `role` column to use the enum type
- (Optionally) Drops or renames the old text column once the migration is complete

Once you've added and tested this migration, the enum change in `packages/core/src/member/member.sql.ts` can be safely deployed.
```typescript
export const usage = pgTable(
  "usage",
  {
    ...teamID,
    ...timestamps,
    type: creditsEnum("type").notNull(),
    creditsUsed: bigint("credits_used", { mode: "number" }).notNull(),
  }
)
```
🛠️ Refactor suggestion
Consider adding references to related entities
The usage table doesn't include references to the user, game, or session that generated the usage. Consider adding foreign key columns to establish these relationships.
```diff
 export const usage = pgTable(
   "usage",
   {
     ...teamID,
     ...timestamps,
     type: creditsEnum("type").notNull(),
     creditsUsed: bigint("credits_used", { mode: "number" }).notNull(),
+    // Reference to the user who generated this usage
+    userID: text("user_id").notNull().references(() => user.id),
+    // References to related entities (uncomment when implemented)
+    // gameID: text("game_id").references(() => game.id),
+    // sessionID: text("session_id").references(() => session.id),
   }
 )
```
**Add a primary key to the usage table**

The `usage` table is missing a primary key column. Every table should have a unique identifier to ensure data integrity and efficient querying.
```diff
 export const usage = pgTable(
   "usage",
   {
+    id: text("id").primaryKey(),
     ...teamID,
     ...timestamps,
     type: creditsEnum("type").notNull(),
     creditsUsed: bigint("credits_used", { mode: "number" }).notNull(),
   }
 )
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
export const usage = pgTable(
  "usage",
  {
    id: text("id").primaryKey(),
    ...teamID,
    ...timestamps,
    type: creditsEnum("type").notNull(),
    creditsUsed: bigint("credits_used", { mode: "number" }).notNull(),
  }
)
```
Description
Summary by CodeRabbit
New Features
Improvements