feat: add KYC data migration script from CSV to Supabase #304
Conversation
**Walkthrough**

Introduces a one-time KYC data migration solution consisting of a TypeScript script and documentation. The script reads CSV data, filters approved records, validates required fields, and upserts transformed KYC profiles into Supabase using the wallet address as the unique identifier. Supports a dry-run mode for validation.
**Sequence Diagram**

```mermaid
sequenceDiagram
    participant CLI as Migration CLI
    participant FS as File System
    participant Parser as CSV Parser
    participant Transform as Data Transformer
    participant Validate as Validator
    participant DB as Supabase DB
    CLI->>FS: Read kyc-data.csv
    FS-->>CLI: CSV content
    CLI->>Parser: Parse CSV rows
    Parser-->>CLI: Parsed records
    loop For each record
        CLI->>Validate: Check required fields
        alt Valid & Approved
            Validate-->>CLI: ✓ Valid
            CLI->>Transform: Map fields (wallet, country, id_type, verified, timestamps)
            Transform-->>CLI: Normalized record
            CLI->>DB: Upsert on conflict(wallet_address)
            DB-->>CLI: Success/Updated
        else Invalid or Rejected
            Validate-->>CLI: ✗ Skip (log reason)
        end
    end
    CLI-->>CLI: Output: dry-run preview or commit results
```
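The validate-and-filter step in the diagram can be sketched as a pure function. Field names here follow the `CsvRow` shape quoted later in this review; treat this as an illustrative sketch, not the script's actual code.

```typescript
// Sketch of the per-record validation step: a record is migrated only if
// it is approved and carries the required wallet identifier.
type CsvRow = {
  job_id: string;
  user_id: string;
  country?: string | null;
  id_type?: string | null;
  result: string;
};

function isMigratable(r: CsvRow): boolean {
  // Required fields must be non-empty; only 'Approved' results migrate.
  return r.user_id.trim() !== '' && r.result === 'Approved';
}

const sample: CsvRow[] = [
  { job_id: 'j1', user_id: '0xAbC', result: 'Approved' },
  { job_id: 'j2', user_id: '', result: 'Approved' },      // missing wallet
  { job_id: 'j3', user_id: '0xDeF', result: 'Rejected' }, // not approved
];

const migratable = sample.filter(isMigratable);
console.log(migratable.length); // 1
```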
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~15 minutes
Actionable comments posted: 10
🧹 Nitpick comments (1)
scripts/migrate-kyc-data.ts (1)
**129-141: Consider batch upserts for better performance.**

The current implementation performs sequential upserts, which will be slow for large datasets. Supabase supports batch operations that can significantly improve performance.
Consider batching the upserts:

```diff
 async function upsertRows(rows: KycProfileRow[]) {
   console.log(`\nUpserting ${rows.length} records into public.user_kyc_profiles...`);
-  let ok = 0, failed = 0;
-
-  for (const row of rows) {
-    const { error } = await supabase
-      .from('user_kyc_profiles')
-      .upsert(row, { onConflict: 'wallet_address' });
-
-    if (error) {
-      console.error(`❌ ${row.wallet_address}: ${error.message}`);
-      failed++;
-    } else {
-      console.log(`✅ ${row.wallet_address}`);
-      ok++;
-    }
-  }
-  console.log(`\n  Summary: OK=${ok}, Failed=${failed}`);
+
+  // Batch upsert in chunks (e.g., 100 records at a time)
+  const BATCH_SIZE = 100;
+  let ok = 0, failed = 0;
+
+  for (let i = 0; i < rows.length; i += BATCH_SIZE) {
+    const batch = rows.slice(i, i + BATCH_SIZE);
+    const { data, error } = await supabase
+      .from('user_kyc_profiles')
+      .upsert(batch, { onConflict: 'wallet_address' });
+
+    if (error) {
+      console.error(`❌ Batch ${Math.floor(i / BATCH_SIZE) + 1} failed: ${error.message}`);
+      failed += batch.length;
+    } else {
+      console.log(`✅ Batch ${Math.floor(i / BATCH_SIZE) + 1}: ${batch.length} records`);
+      ok += batch.length;
+    }
+  }
+  console.log(`\nSummary: OK=${ok}, Failed=${failed}`);
 }
```

Note: With batching, individual row errors won't be logged separately. If per-row error tracking is critical, keep the current approach but be aware of the performance trade-off.
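The batching suggestion above boils down to slicing the rows into fixed-size chunks. The helper below is a generic sketch of that chunking logic (the name `chunk` is ours, not from the script):

```typescript
// Generic chunking helper: splits an array into consecutive slices of
// at most `size` elements, as the batched-upsert loop does inline.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

const batches = chunk([1, 2, 3, 4, 5], 2);
console.log(JSON.stringify(batches)); // [[1,2],[3,4],[5]]
```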
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `scripts/MIGRATION_README.md` (1 hunks)
- `scripts/migrate-kyc-data.ts` (1 hunks)
🧰 Additional context used
🪛 LanguageTool
scripts/MIGRATION_README.md
[style] ~109-~109: As an alternative to the over-used intensifier ‘very’, consider replacing this phrase.
Context: ...QL query in the Supabase SQL Editor. Be very careful with this operation. ```sql -- Example...
(EN_WEAK_ADJECTIVE)
🔇 Additional comments (4)
scripts/migrate-kyc-data.ts (4)
**16-37: LGTM!**

The imports, ESM-safe path resolution, and environment variable validation are well-structured. Using the service role key is appropriate for migration operations that bypass RLS.
**40-43: CSV filename default is inconsistent with documentation.**

The default CSV filename is `kyc-export.csv`, but the README documentation references `kyc-data.csv` throughout. While this has been flagged in the README review, ensure consistency across both files.
**71-100: LGTM!**

The CSV reading logic is robust with proper validation, normalization, and clear logging. The use of `columns: true` and `trim: true` ensures clean data parsing, and the validation step appropriately filters out incomplete records.
**145-172: LGTM!**

The main function flow is well-structured with:
- Dry run support for safe validation
- Clear separation of data loading, filtering, transformation, and upserting
- Appropriate error handling with non-zero exit code
- Informative console logging throughout
The use of `any` for the error type (line 166) is acceptable in catch blocks where the error type is unknown.
```ts
platform: [
  {
    type: 'id',
    identifier: 'smile_id',
    reference: '',
  }
],
```
**Use `job_id` for the platform reference instead of an empty string.**

The `platform.reference` field is hardcoded as an empty string, but the `job_id` from the CSV (the Smile ID job reference) is available and should be stored here for traceability and potential future API lookups.
Apply this diff:

```diff
 platform: [
   {
     type: 'id',
     identifier: 'smile_id',
-    reference: '',
+    reference: r.job_id,
   }
 ],
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
In scripts/migrate-kyc-data.ts around lines 111 to 117, the platform.reference
is currently set to an empty string; replace it so platform.reference is
populated with the CSV's job_id (Smile ID job reference). Ensure you reference
the existing variable containing job_id (or parse it from the CSV row), and if
job_id may be missing use null/undefined or omit the field rather than an empty
string; keep type as string when present so downstream consumers can use it for
traceability and API lookups.
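The prompt above suggests omitting the field when `job_id` is missing rather than storing an empty string. One way that could look (the helper name `platformEntry` is hypothetical, not from the script):

```typescript
// Hypothetical helper: build the platform entry, omitting `reference`
// entirely when no job_id is available instead of storing ''.
type PlatformEntry = { type: string; identifier: string; reference?: string };

function platformEntry(jobId?: string | null): PlatformEntry {
  const entry: PlatformEntry = { type: 'id', identifier: 'smile_id' };
  if (jobId && jobId.trim() !== '') {
    entry.reference = jobId;
  }
  return entry;
}

console.log(platformEntry('smile-job-123')); // reference present
console.log(platformEntry(null));            // reference key omitted
```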
```ts
  }
],
verified: isApproved,
verified_at: isApproved ? nowISO : null,
```
🧩 Analysis chain
🏁 Script executed:

```shell
find . -type f -name "migrate-kyc-data.ts"
```

Repository: paycrest/noblocks — Length of output: 90

🏁 Script executed:

```shell
head -200 scripts/migrate-kyc-data.ts | cat -n
```

Repository: paycrest/noblocks — Length of output: 6143

🏁 Script executed:

```shell
wc -l scripts/migrate-kyc-data.ts
```

Repository: paycrest/noblocks — Length of output: 92
**Add the historical verification timestamp from the CSV instead of the current migration time.**

The CSV export contains a `Timestamp` field (line 51 in `CsvRowRaw`), but it's not being mapped to `CsvRow` during normalization. As a result, `verified_at` is set to the current migration time (`nowISO`) at line 119, losing the original verification date from the CSV data.
To preserve historical accuracy:

```diff
 type CsvRow = {
   job_id: string;
   user_id: string;
   country?: string | null;
   id_type?: string | null;
   result: string;
+  timestamp?: string | null;
 };

 const rows: CsvRow[] = raw.map((r) => ({
   job_id: (r['Job ID'] || '').trim(),
   user_id: (r['User ID'] || '').trim(),
   country: r['Country'] ? r['Country'].trim() : null,
   id_type: r['ID Type'] ? r['ID Type'].trim() : null,
   result: (r['Result'] || '').trim(),
+  timestamp: r['Timestamp'] ? r['Timestamp'].trim() : null,
 }));

 function buildRow(r: CsvRow) {
   const isApproved = r.result === 'Approved';
+  const verifiedAt = isApproved && r.timestamp
+    ? new Date(r.timestamp).toISOString()
+    : (isApproved ? new Date().toISOString() : null);
   return {
     wallet_address: r.user_id.toLowerCase(),
     id_country: r.country || null,
     id_type: r.id_type || null,
     platform: [
       {
         type: 'id',
         identifier: 'smile_id',
         reference: '',
       }
     ],
     verified: isApproved,
-    verified_at: isApproved ? nowISO : null,
+    verified_at: verifiedAt,
-    updated_at: nowISO,
+    updated_at: new Date().toISOString(),
   };
 }
```

📝 Committable suggestion
🤖 Prompt for AI Agents
In scripts/migrate-kyc-data.ts around line 119, verified_at is being set to
nowISO which overwrites the original CSV verification time; instead map the
CSV's Timestamp field into CsvRow during normalization (parse/convert to ISO
when present), then when setting verified_at use that parsed timestamp when
isApproved and the CSV timestamp exists (otherwise null), ensuring the value is
in ISO format and falls back to null if invalid or missing; update the CsvRow
type/mapping and replace nowISO usage at line 119 with the normalized CSV
timestamp variable.
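The fall-back-to-null behavior the prompt describes can be isolated into a small parser. This is a sketch under the prompt's assumptions, not the script's actual code:

```typescript
// Sketch: normalize a CSV timestamp to ISO 8601, returning null when
// the value is missing or unparseable, per the suggestion above.
function toIsoOrNull(raw?: string | null): string | null {
  if (!raw || raw.trim() === '') return null;
  const ms = Date.parse(raw);
  return Number.isNaN(ms) ? null : new Date(ms).toISOString();
}

console.log(toIsoOrNull('2024-01-15T10:30:00Z')); // '2024-01-15T10:30:00.000Z'
console.log(toIsoOrNull('not-a-date'));           // null
console.log(toIsoOrNull(null));                   // null
```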
```ts
}

// Upsert — conflict target: wallet_address (PK)
async function upsertRows(rows: any[]) {
```
🛠️ Refactor suggestion | 🟠 Major

**Replace `any[]` with proper typing.**

The function parameter uses `any[]`, which bypasses TypeScript's type safety. Define a proper type for the row structure.
Apply this diff:

```diff
+type KycProfileRow = {
+  wallet_address: string;
+  id_country: string | null;
+  id_type: string | null;
+  platform: Array<{ type: string; identifier: string; reference: string }>;
+  verified: boolean;
+  verified_at: string | null;
+  updated_at: string;
+};
+
-async function upsertRows(rows: any[]) {
+async function upsertRows(rows: KycProfileRow[]) {
```

📝 Committable suggestion
🤖 Prompt for AI Agents
In scripts/migrate-kyc-data.ts around line 125, the upsertRows parameter is
typed as any[] which disables TypeScript safety; define a Row interface (or type
alias) that matches the actual row shape used in this file (fields like id,
name, email, kycStatus, createdAt, etc. — exactly match the properties accessed
in the function), replace any[] with Row[] in the upsertRows signature, and
update any local variables or function calls to use that Row type (or
export/import it if it belongs in a shared types file) so the compiler enforces
the correct structure.
### 3. CSV Data File

Place a CSV file named `kyc-data.csv` inside the `scripts/` directory. The script expects the file to have a header row with column names that match the `CsvRow` interface in the script.
**Fix CSV filename inconsistency.**

The documentation states the CSV file should be named `kyc-data.csv`, but the actual script defaults to `kyc-export.csv` (line 43 in `migrate-kyc-data.ts`). This mismatch will cause confusion when users follow the documentation.
Update the filename to match the script's default:

```diff
-Place a CSV file named `kyc-data.csv` inside the `scripts/` directory. The script expects the file to have a header row with column names that match the `CsvRow` interface in the script.
+Place a CSV file named `kyc-export.csv` inside the `scripts/` directory. The script expects the file to have a header row with column names that match the `CsvRow` interface in the script.
```

📝 Committable suggestion
🤖 Prompt for AI Agents
scripts/MIGRATION_README.md around line 26: the README instructs users to place
a CSV named `kyc-data.csv` but the script defaults to `kyc-export.csv`; update
the README line to reference `kyc-export.csv` so the documented filename matches
the script default (or alternatively change the script default if you prefer
`kyc-data.csv`) — edit the line to replace `kyc-data.csv` with `kyc-export.csv`
and ensure any other README references use the same filename.
This command will:
- Read and parse `kyc-data.csv`.
**Update CSV filename references in usage instructions.**

These lines also reference `kyc-data.csv`, which is inconsistent with the script's actual default of `kyc-export.csv`.

Also applies to: line 58
🤖 Prompt for AI Agents
In scripts/MIGRATION_README.md around lines 44 and 58, update the usage
instructions that reference "kyc-data.csv" to match the script's actual default
filename "kyc-export.csv"; replace those occurrences (and any other inconsistent
mentions in the README) so the documentation accurately reflects the script's
default input filename.
## How It Works

### 1. Extraction
- The script reads the `kyc-data.csv` file from the `scripts/` directory.
**Update CSV filename reference.**

Another reference to `kyc-data.csv` instead of `kyc-export.csv`.
🤖 Prompt for AI Agents
In scripts/MIGRATION_README.md around line 65, the README incorrectly references
`kyc-data.csv`; update that sentence to reference the correct filename
`kyc-export.csv` so the documentation matches the actual script input filename.
For each row in the CSV:
- It maps the CSV columns to the fields in the `user_kyc_profiles` table.
- `user_id` from the CSV is used as the `wallet_address`.
- It defaults to `tier: 2` for all migrated users.
**Remove incorrect `tier` field documentation.**

The documentation states "It defaults to `tier: 2` for all migrated users," but the script (lines 107-122) does not set a `tier` field in the `buildRow` function. This is misleading.
Remove or correct this statement:
```diff
-For each row in the CSV:
-- It maps the CSV columns to the fields in the `user_kyc_profiles` table.
-- `user_id` from the CSV is used as the `wallet_address`.
-- It defaults to `tier: 2` for all migrated users.
+For each row in the CSV:
+- It maps the CSV columns to the fields in the `user_kyc_profiles` table.
+- `user_id` from the CSV is used as the `wallet_address`.
```

📝 Committable suggestion
🤖 Prompt for AI Agents
In scripts/MIGRATION_README.md around line 72, the sentence saying "It defaults
to `tier: 2` for all migrated users" is incorrect because the migration script
(see buildRow in the script around lines 107-122) does not set a tier field;
either remove that sentence or replace it with an accurate statement such as
"The script does not set a `tier` field; tiers must be assigned separately" —
alternatively, if the intended behavior is to default migrated users to tier 2,
update the buildRow implementation to explicitly set tier: 2 and then update the
README to reflect that explicit default.
| `smile_job_id` | `smile_job_id` | |
| `verified_at` | `verified_at` | Also sets `verified` to `true`. |
| - | `tier` | Hardcoded to `2` for all records. |
**Correct the data mapping table.**

The data mapping table contains inaccuracies:

- `smile_job_id` is not mapped in the actual script (lines 107-122 of `migrate-kyc-data.ts`). The `Job ID` is read but never stored in the output.
- The `tier` field is documented as "Hardcoded to `2`" but is not set anywhere in the script.

Remove these entries or update the script to include them:
```diff
 | CSV Column     | New Schema Field   | Notes                                      |
 |----------------|--------------------|--------------------------------------------|
 | `user_id`      | `wallet_address`   | Primary key, lowercased for consistency.   |
 | `id_type`      | `id_type`          |                                            |
 | `country`      | `id_country`       |                                            |
-| `smile_job_id` | `smile_job_id`     |                                            |
 | `verified_at`  | `verified_at`      | Also sets `verified` to `true`.            |
-| -              | `tier`             | Hardcoded to `2` for all records.          |
```

📝 Committable suggestion
🤖 Prompt for AI Agents
In scripts/MIGRATION_README.md around lines 86-88, the table wrongly documents
`smile_job_id` and `tier` — either remove these two rows from the mapping table
or update the migration implementation (migrate-kyc-data.ts, around lines
107-122) to actually write them: ensure the Job ID read in the script is
assigned to the output `smile_job_id` field and add a `tier` property (set to 2
if intended) to each output record; then update the README to reflect the actual
behavior (or remove the entries if you choose not to persist them).
### `CSV file not found`

**Issue**: The script throws an error `CSV file not found at: <path>`.
**Solution**: Make sure your CSV file is named exactly `kyc-data.csv` and is located in the `/Users/prof/Documents/paycrest/noblocks/scripts/` directory.
**Remove hardcoded absolute path.**

The troubleshooting section contains a hardcoded absolute path specific to a developer's machine: `/Users/prof/Documents/paycrest/noblocks/scripts/`. This should be replaced with a generic reference.
Use a generic path reference:

```diff
-**Solution**: Make sure your CSV file is named exactly `kyc-data.csv` and is located in the `/Users/prof/Documents/paycrest/noblocks/scripts/` directory.
+**Solution**: Make sure your CSV file is named exactly `kyc-export.csv` and is located in the `scripts/` directory.
```

📝 Committable suggestion
🤖 Prompt for AI Agents
In scripts/MIGRATION_README.md around line 94, the troubleshooting section uses
a hardcoded absolute path (/Users/prof/Documents/paycrest/noblocks/scripts/)
which is machine-specific; replace that with a generic, relative or
environment-agnostic path (e.g., ./scripts/ or <project_root>/scripts/) and
update the instruction to say “place the CSV named kyc-data.csv in your
project’s scripts directory (e.g., ./scripts/)” so the README is portable across
machines.
**Issue**: The data in Supabase doesn't look right.
**Solution**:
1. Run the script with `--dry-run` and inspect the JSON output in your console.
2. Check that the column headers in your `kyc-data.csv` file exactly match the expected names (e.g., `user_id`, `phone_number`).
**Remove incorrect field reference.**

Line 104 mentions `phone_number` as an expected column header, but neither the CSV structure (lines 46-60 of the script) nor the script logic includes a `phone_number` field.
Remove this incorrect reference:

```diff
 **Solution**:
 1. Run the script with `--dry-run` and inspect the JSON output in your console.
-2. Check that the column headers in your `kyc-data.csv` file exactly match the expected names (e.g., `user_id`, `phone_number`).
+2. Check that the column headers in your CSV file exactly match the expected names (e.g., `Job ID`, `User ID`, `Country`, `ID Type`, `Result`).
 3. Verify the data formats in the CSV (e.g., dates in `verified_at` are valid).
```

📝 Committable suggestion
🤖 Prompt for AI Agents
In scripts/MIGRATION_README.md around line 104, the README incorrectly lists
`phone_number` as an expected CSV column header; this field does not exist in
the CSV structure or script logic (see script lines ~46-60). Remove the
`phone_number` reference from the sentence and ensure the remaining example
header list exactly matches the actual expected column names used by the script.
Description

This pull request introduces a new script and documentation to support migrating user KYC data from a CSV file into the Supabase `user_kyc_profiles` table. The changes provide a robust, documented workflow for extracting, transforming, and loading KYC data, including support for dry runs and detailed error handling.

New KYC Data Migration Script and Documentation

Migration script implementation:
- `scripts/migrate-kyc-data.ts`, a TypeScript script that reads a CSV file, filters for approved KYC results, transforms each record to match the Supabase schema, and upserts them into the `user_kyc_profiles` table using `wallet_address` as the conflict key. Includes support for dry-run mode and error reporting.

Documentation and usage instructions:
- `scripts/MIGRATION_README.md` with step-by-step instructions for preparing the environment, running the migration (dry run and live), the expected CSV format, troubleshooting, and rollback guidance.

Data extraction, transformation, and loading:
- Maps CSV columns to schema fields (`user_id` → `wallet_address`, `country` → `id_country`), sets `tier: 2` and other default fields, and only migrates rows where `Result` is `"Approved"`. [1] [2]

Error handling and reporting

Rollback instructions

References

closes #284
Testing
There is a dry run feature so you can preview the data to be added.
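A dry-run flag of this sort is typically detected from `process.argv`. The sketch below shows the general pattern; the exact flag name `--dry-run` is taken from the README excerpts above, and the helper name `isDryRun` is ours:

```typescript
// Sketch of dry-run detection: when the flag is present, the CLI would
// only print the transformed rows and commit nothing to Supabase.
function isDryRun(argv: string[]): boolean {
  return argv.includes('--dry-run');
}

console.log(isDryRun(['node', 'migrate-kyc-data.ts', '--dry-run'])); // true
console.log(isDryRun(['node', 'migrate-kyc-data.ts']));              // false
```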
Checklist

By submitting a PR, I agree to Paycrest's Contributor Code of Conduct and Contribution Guide.