OOM errors for large datasets #218

Open
@piotrlaczkowski

Description

If we load a sufficiently large dataset (as a tf.data.Dataset, i.e. TFDS in "not everything in memory" mode), the instance crashes with an OOM error. Since we iterate over the dataset in batches, this should not happen, right?

It therefore looks like the model tries to load the entire dataset into memory. Is this behavior expected? How can we scale this to big-data usage?
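For reference, a minimal sketch of the streaming pattern we expect to work (the dataset name, batch size, and training loop here are placeholders, not our actual pipeline):

```python
import tensorflow as tf
import tensorflow_datasets as tfds

# Load a TFDS dataset as a streaming tf.data.Dataset (no in-memory copy).
# "mnist" is only a placeholder for the real, much larger dataset.
ds = tfds.load("mnist", split="train", as_supervised=True)

# Batch and prefetch so only a few batches should live in memory at a time.
ds = ds.batch(256).prefetch(tf.data.AUTOTUNE)

# Iterating (or passing ds to model.fit) should consume batches lazily,
# yet memory usage still grows until the instance hits OOM.
for images, labels in ds:
    pass  # placeholder for the actual training step
```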

Thanks!
