Bucc resume (for virtualbox) #29

Open · rkoster opened this issue Jul 11, 2017 · 5 comments

@rkoster (Contributor) commented Jul 11, 2017

It would be nice to be able to resume a bucc-deployed VirtualBox VM.
Currently the BOSH jobs won't start after a resume because they are missing their persistent disk: cloudfoundry/bosh-virtualbox-cpi-release#7

Below are some things I have tried so far to work around the above issue, with mixed results:

# wait for the agent to mount /var/vcap/data
df /var/vcap/data | grep /var/vcap/data
# find the attached-but-unmounted disk: a device that appears in fdisk -l
# but not in /etc/mtab shows up only once in the sorted union, so uniq -u
# leaves exactly that device
disk=$(sort <(grep /dev/sd /etc/mtab | cut -d ' ' -f1) <(fdisk -l | grep /dev/sd | grep -v : | grep -v swap | cut -d ' ' -f1) | uniq -u)
# mount it as the persistent disk and start monit so it brings the jobs up
mount ${disk} /var/vcap/store
monit
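
If the mount succeeds, the jobs should come back up on their own; one way to watch that happen (plain monit CLI, assuming monit is on the PATH as in the snippet above; not part of the original workaround):

# poll job status until everything reports 'running'
watch monit summary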



# boot the VM referenced in the bosh create-env state file
vboxmanage startvm $(bosh int state/state.json --path /current_vm_cid) --type headless
bucc ssh
# inside the VM: recreate the by-id symlink the agent uses to locate the
# persistent disk (the device name, sdc here, may differ on your VM)
cd /dev/disk/by-id/ && ln -s ../../sdc 1ATA
@bchalk101

Currently, if my machine shuts down, the Concourse VM goes down and cannot be brought back up. If there were a way to save state, that would be great. The same applies to CredHub: I lost all the credentials I had entered when my machine shut down, and I could not figure out how to bring anything back up without doing a complete bucc up.

@drnic (Contributor) commented Mar 19, 2018

If you remove current_vm_cid from your state.json, then bosh create-env should go through the motions of creating the VM again and move on.
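
A minimal sketch of that edit, assuming jq is installed and state.json sits in the current directory:

# drop the stale VM reference so bosh create-env provisions a fresh VM
jq 'del(.current_vm_cid)' state.json > state.json.tmp && mv state.json.tmp state.json

The temp-file dance is needed because jq cannot edit a file in place.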

@shreddedbacon commented Sep 25, 2018

I have had limited success using the following:

# savestate
vboxmanage controlvm $(bosh int state/state.json --path /current_vm_cid) savestate
# resume
vboxmanage startvm $(bosh int state/state.json --path /current_vm_cid) --type headless
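
The same two commands wrapped as shell functions for convenience (a hypothetical sketch; the function names are made up, and state/state.json is assumed to be resolved relative to your BUCC state directory):

# suspend the VM, preserving its in-memory state on disk
bucc_suspend() {
  vboxmanage controlvm "$(bosh int state/state.json --path /current_vm_cid)" savestate
}
# boot the VM again from the saved state
bucc_resume() {
  vboxmanage startvm "$(bosh int state/state.json --path /current_vm_cid)" --type headless
}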

@shreddedbacon

Just on this: I've been saving and resuming state on my local BUCC for a while now.
I've found that the date is out of sync after resuming; setting the time with the following works.

# set the clock from the Date header of an HTTP response from google.com;
# the inner date reformats the header, the outer sudo date applies it
sudo date --set="$(date -s "$(wget -qSO- --max-redirect=0 google.com 2>&1 | grep Date: | cut -d' ' -f5-8)Z" +"%Y-%m-%d %H:%M:%S")"

I have the following helper commands for resuming:

# start the VM, then write the clock-sync script onto it;
# quoting 'cat > cmd.sh' makes the redirect happen on the VM, not locally
vboxmanage startvm $(bosh int state/state.json --path /current_vm_cid) --type headless
bucc ssh 'cat > cmd.sh' << "EOF"
date --set="$(date -s "$(wget -qSO- --max-redirect=0 google.com 2>&1 | grep Date: | cut -d' ' -f5-8)Z" +"%Y-%m-%d %H:%M:%S")"
EOF
# run the script on the VM as root
bucc ssh sudo bash cmd.sh

@ramonskie (Contributor)

I'm still convinced that this should be solved in the virtualbox-cpi and not in BUCC itself.
We need to raise this with the BOSH developers.
