171 changes: 171 additions & 0 deletions assessments/FastBridge_Math/FastBridgemapping.md
# FastBridge English Assessment Mapping

![alt text](image.png)

## Score Types for Assessment and Sub-Assessments

### Main Assessment Score Types
The FastBridge English assessment provides several score types for the overall assessment:

1. **Percentile at LEA** - Percentile ranking within the Local Education Agency (district)
2. **Percentile at Nation** - Percentile ranking compared to national norms
3. **Percentile at School** - Percentile ranking within the school
4. **Risk Level** - Risk categorization (Low Risk, Some Risk, High Risk)

### Sub-Assessment Score Types
Each sub-assessment (component skill) within FastBridge English provides the same score types:

1. **Percentile at LEA** - Sub-assessment percentile ranking within the Local Education Agency
2. **Percentile at Nation** - Sub-assessment percentile ranking compared to national norms
3. **Percentile at School** - Sub-assessment percentile ranking within the school
4. **Risk Level** - Sub-assessment risk categorization

### Risk Level Categories (Performance Level)
Risk levels are typically categorized as follows (a minimal lookup sketch appears after the list):
- **Low Risk**: Student performing at or above expected levels
- **Some Risk**: Student showing some areas of concern
- **High Risk**: Student requiring intensive intervention
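
For mapping purposes, the three labels can be treated as a small ordered lookup. The sketch below only illustrates that idea; the key names come from the categories above, and everything else is an assumption rather than part of the published template:

```python
# Hypothetical lookup for illustration; the actual template may represent
# performance levels differently.
RISK_LEVELS = {
    "Low Risk": 1,    # performing at or above expected levels
    "Some Risk": 2,   # some areas of concern
    "High Risk": 3,   # intensive intervention indicated
}

print(RISK_LEVELS["Some Risk"])  # 2
```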

## Additional Item-Level Fields
FastBridge exports additional item-level metrics. These are useful for diagnostic drill-down but can be safely excluded from the assessment mapping; a short worked example follows the list.

- **Error (Total Items − Items Correct)** – raw count
- **IC per minute** – items answered correctly per minute
- **Items Correct** – total correct responses
- **Total Items** – items presented
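
A quick worked example of how these fields relate (illustrative numbers only; the administration time is an assumption, since duration is not one of the exported columns):

```python
# Field names mirror the export columns above; values are made up.
total_items = 30        # Total Items
items_correct = 26      # Items Correct
minutes = 2.0           # assumed administration time (not an export column)

error = total_items - items_correct       # Error (Total Items - Items Correct) -> 4
ic_per_minute = items_correct / minutes   # IC per minute -> 13.0
```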

## Growth Metrics Breakdown
FastBridge files also contain **150+ growth-derived columns**. These are calculated by comparing scores **between assessment windows** rather than within a single window, so they don’t map neatly onto any one period.

**Typical growth metrics (computed across Fall → Winter → Spring → Screening 4 → Screening 5):**

- District Growth Percentile
- Growth Percentile by Start Score
- Growth Score
- National Growth Percentile
- School Growth Percentile

Because they’re window-to-window deltas, they can be **safely excluded** from the mappings.

### Understanding Growth Metrics

#### Growth Score
- **Definition**: Simple arithmetic difference between two assessment scores within the same school year
- **Calculation**: Later Score - Earlier Score

#### Growth Percentile
- **Definition**: Ranking of a student's growth relative to other students who had similar starting scores
- **Purpose**: Shows how well a student grew compared to academic peers (see the sketch below)
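
The relationship between the two metrics can be sketched in a few lines of Python. This is illustrative only: the peer-grouping rule and the numbers are assumptions, not FastBridge's published norming procedure.

```python
def growth_score(earlier: float, later: float) -> float:
    """Growth Score: Later Score - Earlier Score."""
    return later - earlier

def growth_percentile(student_growth: float, peer_growths: list[float]) -> float:
    """Percentile of a student's growth among peers with similar starting scores."""
    below = sum(1 for g in peer_growths if g < student_growth)
    return 100.0 * below / len(peer_growths)

# Example: a student moves from 30 (Fall) to 45 (Winter), a Growth Score of 15,
# and is compared against peers who also started near 30.
print(growth_score(30, 45))                        # 15
print(growth_percentile(15, [5, 10, 12, 15, 20]))  # 60.0
```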


## More Details: FastBridge Assessment Fields Breakdown

## Demographic / Basic Fields
- Assessment
- Assessment Language
- DOB
- District
- FAST ID
- First Name
- Gender
- Grade
- Last Name
- Local ID
- Race
- School
- Special Ed. Status
- State
- State ID

## Assessment Periods
1. Fall
2. Winter
3. Spring
4. Screening Period 4 (Summer)
5. Screening Period 5

## Assessment Data Breakdown

### Shared Fields
- Final Date
- Percentile at LEA
- Percentile at Nation
- Percentile at School
- Risk Level

### Assessments

#### CBMR‑English
- Error (TWR ‑ WRC)
- Median Accuracy
- Total Words Read
- WRC per minute
- Words Read Correct

#### Concepts of Print
- Error (Total Items ‑ Items Correct)
- IC per minute
- Items Correct
- Total Items

#### Decodable Words
- Total Words Read
- WRC per minute
- Words Read Correct

#### Early Reading English
- Composite Score

#### Letter Names
- Total Words Read
- WRC per minute
- Words Read Correct

#### Letter Sounds
- Total Words Read
- WRC per minute
- Words Read Correct

#### Nonsense Words
- Total Words Read
- WRC per minute
- Words Read Correct

#### Onset Sounds
- Error (Total Items ‑ Items Correct)
- IC per minute
- Items Correct
- Total Items

#### Oral Repetition
- Error (Total Items ‑ Items Correct)
- IC per minute
- Items Correct
- Total Items

#### Sentence Reading
- Total Words Read
- WRC per minute
- Words Read Correct

#### Sight Words
- Total Words Read
- WRC per minute
- Words Read Correct

#### Word Blending
- Error (Total Items ‑ Items Correct)
- IC per minute
- Items Correct
- Total Items

#### Word Rhyming
- Error (Total Items ‑ Items Correct)
- IC per minute
- Items Correct
- Total Items

#### Word Segmentation
- Error (Total Items ‑ Items Correct)
- IC per minute
- Items Correct
- Total Items
87 changes: 87 additions & 0 deletions assessments/FastBridge_Math/README.md
## FastBridge Math (Early Math)

* **Title:** FastBridge Math Early Math Assessment Results
* **Description:** This template maps FastBridge Math Early Math Assessment Results, providing a comprehensive evaluation of mathematical skills including composing, counting objects, decomposing, equal partitioning, match quantity, number sequence, numeral identification, place value, quantity discrimination, subitizing, verbal addition, verbal subtraction, and story problems. The bundle includes pre-processing to pivot season-based columns into student assessment rows, handle growth metrics, and map positional error columns.
* **API version:** 5.3
* **Submitter name:** Bruk Woldearegay
* **Submitter organization:** Crocus LLC.

To run this bundle, please add your own source file(s) and column(s):
<details>
This template works with the vendor layout file structure. The pre-execute script transforms the wide CSV format (seasons as columns) into a long format (seasons as rows) suitable for Ed-Fi ingestion. See the sample anonymized file.
</details>

Sample file: `data/Sample_FastBridge_earlyMath_2023_2024_deidentified.csv`

### CLI Parameters

#### Required
- **OUTPUT_DIR**: Where output files will be written
- **STATE_FILE**: Where to store the earthmover runs.csv file
- **INPUT_FILE**: The student assessment file to be mapped
- **STUDENT_ID_NAME**: Which column to use as the Ed-Fi `studentUniqueId`. The default column is 'State ID' from the vendor layout file.
- **SCHOOL_YEAR**: The school year of the assessment file, formatted as 'YYYY' (e.g. '2024').

### Examples

**Step 1: Running the pre-execute script to transform the file structure**
The FastBridge Math CSV comes in a wide format with seasons as columns. The pre-execute script pivots this into a long format and handles growth metrics and error column mapping:

```python
# Entry point of the pre-execute script (python_pre_exec/pre-execute.py)
fast_bridge_math_pre_exec(source_file, output_file)
```

Example:
```python
fast_bridge_math_pre_exec(
    'data/Sample_FastBridge_earlyMath_2023_2024_deidentified.csv',
    'data/Sample_FastBridge_earlyMath_2023_2024_deidentified_pivoted.csv'
)
```

**Step 2: Running earthmover with the transformed file:**
```bash
earthmover run -c ./earthmover.yaml -p '{
"INPUT_FILE": "data/Sample_FastBridge_earlyMath_2023_2024_deidentified_pivoted.csv",
"OUTPUT_DIR": "output/",
"STUDENT_ID_NAME": "State ID",
"SCHOOL_YEAR": "2024"}'
```

### Pre-Execute Script Features

The pre-execute script (`python_pre_exec/pre-execute.py`) performs the following transformations; a simplified sketch of the pivot and normalization steps follows the list:

1. **Season Pivoting**: Converts wide format (seasons as columns) to long format (seasons as rows)
2. **Growth Metrics Processing**: Handles growth columns (e.g., "Composing from Fall to Winter") and pivots them based on ending season
3. **Error Column Mapping**: Maps generic "Error" columns to objective-specific error fields using positional anchors (IC per minute, NRC per minute)
4. **Column Normalization**: Converts all column names to snake_case format for consistency
5. **Data Validation**: Filters out empty rows and ensures data quality
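
A rough sketch of the season pivot (step 1) and column normalization (step 4), assuming a pandas-based approach and an invented wide-column naming convention such as `"Fall Composite Score"`; the real script also handles the growth and error columns (steps 2 and 3) and data validation (step 5):

```python
import re
import pandas as pd

SEASONS = ["Fall", "Winter", "Spring", "Screening Period 4", "Screening Period 5"]

def to_snake_case(name: str) -> str:
    """Normalize a column name, e.g. 'Composite Score' -> 'composite_score'."""
    return re.sub(r"[^a-z0-9]+", "_", name.lower()).strip("_")

def pivot_seasons(df: pd.DataFrame, id_cols: list[str]) -> pd.DataFrame:
    """Turn wide season columns into one row per student per season."""
    chunks = []
    for season in SEASONS:
        season_cols = [c for c in df.columns if c.startswith(season + " ")]
        if not season_cols:
            continue
        chunk = df[id_cols + season_cols].copy()
        # Drop the season prefix so every chunk shares the same column names.
        chunk.columns = id_cols + [c[len(season) + 1:] for c in season_cols]
        chunk["season"] = season
        chunks.append(chunk)
    long_df = pd.concat(chunks, ignore_index=True)
    long_df.columns = [to_snake_case(c) for c in long_df.columns]
    return long_df
```

Pivoting before mapping keeps the earthmover template simple, since every season row then carries the same set of columns.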

### Error Column Handling

The script automatically detects and maps generic "Error" columns that appear positionally after anchor metrics (a rough sketch of the idea follows the list):
- Generic columns named "Error", "Error.1", "Error.2", etc.
- Maps to objective-specific names like "numeral_identification_one_error", "numeral_identification_kg_error"
- Uses "IC per minute" and "NRC per minute" columns as anchors to determine which objective each error belongs to

Once you have inspected the output JSONL for issues, check the settings in `lightbeam.yaml` and transmit the records to your Ed-Fi API with:
```bash
lightbeam validate+send -c ./lightbeam.yaml -p '{
"DATA_DIR": "./output/",
"STATE_DIR": "./tmp/.lightbeam/",
"EDFI_API_BASE_URL": "<yourURL>",
"EDFI_API_CLIENT_ID": "<yourID>",
"EDFI_API_CLIENT_SECRET": "<yourSecret>",
"SCHOOL_YEAR": "<yourAPIYear>"}'
```


### Output Structure

After transformation, the pivoted file will contain:
- One row per student per season with assessment data
- Snake_case column names for consistency
- Growth metrics attached to records based on ending season
- Mapped error columns with objective-specific names (e.g., `numeral_identification_one_error`, `numeral_identification_kg_error`)
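
For orientation only, a header row of the pivoted file might look roughly like this (the exact columns depend on the source file and the objectives it contains; the `...` stands for the remaining metric columns):

```text
state_id,local_id,grade,school,season,risk_level,percentile_at_nation,...,numeral_identification_one_error,numeral_identification_kg_error
```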

**Collaborator:** Similar to reading, please shorten these a bit

**Contributor:** Fixed