Brief problem description
My dataset appears to be too large for streamMetabolizer to run on my computer. I am estimating metabolism for the Columbia River. The data set is ~66,000 entries at 15-minute intervals, so about 690 days of metabolism, not too long. The run takes roughly overnight (24,000 seconds) and returns the errors below (errors 1-3 are about tidyverse deprecations). It saves a small output file, but there are no MCMC samples in the file.
What you saw on your computer
4: In metab_fun(specs = specs, data = data, data_daily = data_daily, :
Modeling failed
Warnings:
error in running command
There were 4 chains where the estimated Bayesian Fraction of Missing Information was low. See http://mc-stan.org/misc/warnings.html#bfmi-low
Examine the pairs() plot to diagnose sampling problems
Errors:
vector memory exhausted (limit reached?)
This last error suggests a memory problem. I am running a 2015 MacBook Pro with 16 GB of memory. The run is on 4 cores, and I am sampling 1000 warm-up and 1000 saved steps. When I run a file with about 1/10 of the days and 500 warm-up and saved steps, it works fine and I get the expected results. That the small file works suggests that the data or model output is too big for my computer. So there is no problem per se; rather, this is a PSA that 700 days of metabolism and 1000 steps might be too much for my laptop. Time to run on a cluster.
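Two workarounds short of a cluster may be worth trying. On macOS, R caps vector memory via the R_MAX_VSIZE environment variable, which can be raised in ~/.Renviron. Alternatively, the data could be split into shorter blocks and fit separately; note this changes what the hierarchical K600 pooling sees, since days are only pooled within a single fit. A minimal sketch of the chunked approach, assuming CRall_sm and cr_specs are as defined in the code below and that the 100-day block size is an arbitrary choice:

```r
# Sketch: fit metabolism in ~100-day blocks instead of one 690-day run.
# Assumes CRall_sm has the standard solar.time column and cr_specs is
# defined as in the issue; the 100-day block size is arbitrary.
library(streamMetabolizer)

days  <- as.Date(CRall_sm$solar.time)
block <- ceiling(as.integer(days - min(days) + 1) / 100)

# Fit each block independently (each fit pools K600 only across its own days)
fits <- lapply(split(CRall_sm, block), function(chunk) {
  metab(cr_specs, data = chunk)
})

# Combine the daily predictions from all blocks
preds <- do.call(rbind, lapply(fits, predict_metab))
```

This trades some pooling strength for a much smaller memory footprint per fit, and the blocks can also be farmed out to separate processes.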
Include all code you ran (a minimal example) and all console output, errors, and warnings. Include a data file if needed.
load("CRall_sm.RData")  # load data frame ready to go for sM; Bob can supply if needed
cr_name <- mm_name(type='bayes', pool_K600='normal', err_obs_iid=TRUE, err_proc_iid=TRUE)
cr_specs <- specs(cr_name, K600_daily_meanlog_meanlog=1, K600_daily_meanlog_sdlog=0.75,
                  K600_daily_sdlog_sigma=0.1, burnin_steps=1000, saved_steps=1000)
CRall_fit <- metab(cr_specs, data=CRall_sm, info=c(site='Columbia, date/time correction', source='Bob Hall'))
Session information
Run the following code line (or sessionInfo() if that doesn't work) and paste in your output.
I just ran into the same problem. I found a reference describing this problem and a possible solution, though I haven't fully resolved it yet. I hope it helps; if you find a solution, please share it as well. The original text is as follows:
Bayesian models require the rstan interface to Stan. Sometimes this is as simple as installing Rtools and calling the above install.packages or install_github command, but other times everything seems fine until you try to run a Bayesian model in streamMetabolizer. Symptoms of an imperfect rstan installation are probably diverse. Here's one we've seen:
> bayes_fit <- metab(specs('bayes'), data=mydat)
Warning message:
In metab_fun(specs = specs, data = data, data_daily = data_daily, :
Modeling failed: argument is of length zero
> get_fit(bayes_fit)
...
$warnings
[1] "running command ''/Library/Frameworks/R.framework/Resources/bin/R' CMD config CXX 2>/dev/null' had status 1"
$errors
[1] "argument is of length zero"
In such cases you should refer to the detailed instructions on the rstan website for Mac and Linux or Windows.
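As a quick sanity check before debugging streamMetabolizer itself, the rstan Getting Started guide suggests compiling and sampling a trivial built-in model to confirm the C++ toolchain works. This is the standard check from the rstan documentation, not anything streamMetabolizer-specific:

```r
# Verify the rstan installation by compiling and sampling a trivial model.
# If this step fails, the problem is the C++ toolchain / rstan setup,
# not streamMetabolizer.
library(rstan)
example(stan_model, package = "rstan", run.dontrun = TRUE)
```

If this check fails with a "CMD config CXX" or "argument is of length zero" message like the one above, the compiler configuration is the thing to fix first.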