Hi, I noticed there seem to be two different ways to enable caching in version 0.5, and I'm a bit confused about their differences.
The first method uses environment variables:
export LMMS_EVAL_USE_CACHE=True
export LMMS_EVAL_HOME="/path/to/cache_root" # optional
python -m lmms_eval \
    --model async_openai \
    --model_args model_version=gpt-4o-2024-11-20,base_url=$OPENAI_API_BASE \
    --tasks mmmu_val \
    --batch_size 1 \
    --output_path ./logs/
The second method uses a command-line argument:
--use_cache "$CACHE_PATH"
According to the docstring:
:param use_cache: str, optional
    A path to a sqlite db file for caching model responses. None
    if not caching.
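For reference, here is what I would expect a full invocation with the second method to look like, reusing the same arguments as the first example (the $CACHE_PATH value below is just a placeholder I picked, not a documented default):

export CACHE_PATH="./cache/responses.db"  # placeholder sqlite cache path
python -m lmms_eval \
    --model async_openai \
    --model_args model_version=gpt-4o-2024-11-20,base_url=$OPENAI_API_BASE \
    --tasks mmmu_val \
    --batch_size 1 \
    --use_cache "$CACHE_PATH" \
    --output_path ./logs/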
Could you please clarify the distinction between these two caching mechanisms? Specifically:
- What does LMMS_EVAL_USE_CACHE control, and how does it interact with --use_cache?
- Is LMMS_EVAL_HOME related to the path provided in --use_cache, or are they independent?
- Which one should be preferred for caching model responses?
Thanks for your help!