
Can the document be more detailed? Please! #1

Open
Bangyan-Zhang opened this issue Aug 2, 2022 · 1 comment

Comments

@Bangyan-Zhang

Can you write the document in more detail? I want to run this system on the virtual machine, but I don't know the running order of the files.

@jaharkes
Member

jaharkes commented Aug 8, 2022

The current documentation is definitely not very useful; it contains mostly development notes rather than usage information.
This is partly because everything is very much under development. Aside from access control, which is currently non-existent, I feel that the tier1 (cloud) and tier2 (cloudlet) backends are getting pretty close to usable, but tier3 (the mobile client) is definitely not ready for general use.

After you check out the git repository, use poetry install to set up a virtualenv with the right dependencies.
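Concretely, the initial setup might look like the following sketch (the repository URL is an assumption here; substitute your actual checkout):

```shell
# Clone the repository and enter it (URL is an assumption; use your checkout).
git clone https://github.com/cmusatyalab/sinfonia.git
cd sinfonia

# Create a virtualenv with the pinned dependencies managed by poetry.
poetry install

# Commands can now be run inside that virtualenv with `poetry run ...`.
poetry run sinfonia-tier1 --help
```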

In 'production' usage there is a tier1 component that runs in the cloud. This one is pretty easy to start as poetry run sinfonia-tier1, so that it runs from the previously set up virtualenv.
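As a sketch (host and port are assumptions; substitute whatever address tier1 actually listens on):

```shell
# Start tier1 from the poetry-managed virtualenv (runs in the foreground).
poetry run sinfonia-tier1

# From another terminal, list the cloudlets that have registered so far.
# The port here is a placeholder, not tier1's documented default.
curl http://localhost:5000/api/v1/cloudlets/
```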

The trickier one to run is the tier2 component, because it needs to be deployed alongside (or inside) a kubernetes cluster so that it can deploy backends when requested by tier3 clients. For this there is an ansible script in deploy-tier2 which, given a server/VM image with a basic ubuntu18.04-server installation, will install all necessary dependencies: a single-node k3s kubernetes cluster, prometheus monitoring, the pieces needed to create wireguard tunnel endpoints into the cluster using kilo, optional nvidia GPU drivers, and sinfonia-tier2 itself from the most recent docker image stored in our github container registry. This needs some configuration parameters, namely its own address (so that tier1 can call back to it) and the tier1 url(s) with which it should register. If it comes up successfully and is able to register itself, it should show up in the list of known cloudlets on the tier1 instance, which can be accessed as 'http://tier1-address:port/api/v1/cloudlets/'.
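A hypothetical invocation of that deployment might look like this; the playbook, inventory, and variable names below are assumptions, not the actual names used in deploy-tier2, so check the directory contents before running anything:

```shell
# Hypothetical: the real playbook and variable names live in the
# deploy-tier2 directory of the repository; inspect them first.
cd deploy-tier2
ansible-playbook -i inventory.yml playbook.yml \
  -e tier2_url=http://cloudlet.example.org:5000 \
  -e tier1_urls='["http://tier1.example.org:5000"]'
```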

For development purposes it is also possible to run tier2 outside of the kubernetes cluster, but that cluster does need to have both kilo and prometheus/node-exporter installed. We also need access to the k8s cluster credentials and to the prometheus instance that is running in the cluster. In this scenario I normally copy the credentials to a local file and make sure I can run kubectl when I point it at that file. I then use kubectl proxy to expose the prometheus endpoint locally and run poetry run sinfonia-tier2 --kubeconfig ... --kubecontext ... --prometheus .... I don't give it a tier1 url to register with and simply use the tier2 endpoint for my tier3 clients. There will be no tier1-level decision making about which tier2 is closer/better, because there is only one tier2 instance to use, but this setup is useful when developing an app that depends on sinfonia locally, without having to set up a complete distributed cloud/cloudlet environment.
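The development setup above could be sketched as follows. The credential path, context name, and prometheus proxy URL are all illustrative assumptions (the k3s.yaml location is the usual k3s default, and the proxy URL shape depends on which namespace and service name prometheus was installed under):

```shell
# Copy the cluster credentials to a local file and verify kubectl works
# against them (paths and context name are illustrative).
scp cloudlet:/etc/rancher/k3s/k3s.yaml ~/cloudlet-kubeconfig
kubectl --kubeconfig ~/cloudlet-kubeconfig get nodes

# Expose the in-cluster prometheus endpoint on localhost.
kubectl --kubeconfig ~/cloudlet-kubeconfig proxy &

# Run tier2 against the cluster; all flag values are placeholders.
poetry run sinfonia-tier2 \
  --kubeconfig ~/cloudlet-kubeconfig \
  --kubecontext default \
  --prometheus http://localhost:8001/api/v1/namespaces/monitoring/services/prometheus:9090/proxy
```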

Again all of this should eventually become more simplified and streamlined.

Tier3 is currently even worse usage-wise (sorry). Instead of an application integrating tier3, it is currently more like a wrapper around an existing application. The wrapper requests a backend deployment on tier2, gets back the VPN information, creates a local network namespace with only the VPN as an available network, and then runs the application in that network namespace. When the application exits, the network namespace and VPN endpoint are cleaned up. Creating and destroying the VPN tunnel and network namespace require root privileges, so the wrapper currently relies on sudo; eventually this should be handled by a (setuid root?) helper process, similar to how docker uses containerd for privileged operations.

Tier3 is started as poetry run sinfonia-tier3 <tier1-url> <uuid of backend> <application and arguments>. By default sinfonia doesn't know about many backends: there are openrtist CPU and GPU variants and a 'hello-world' type backend that runs an nginx server, which is the easiest to test with.

poetry run sinfonia-tier3 http://tier2(or1)-address:port/ 00000000-0000-0000-0000-000000000000 bash

If it connects successfully and deploys a backend it will get VPN endpoint information and should prompt for your password so that sudo can get the network namespace set up. Finally it will run the application which in this case is the bash shell. This shell should only have a single network interface that is connected to the same network namespace as the deployed backend in the kubernetes cluster (check with ip addr). The resolver is set to the kubedns resolver so you should be able to run curl -v http://helloworld/ and get back the nginx welcome page. When the shell is exited (^d or exit) it should delete the network namespace and tunnel. After the tunnel is no longer active for some amount of time, the tier2 instance should shut down the backend by removing the kubernetes namespace it was deployed in. Tier2 relies on kilo+prometheus to track if the tunnel has been active.
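Putting the steps above together, a hello-world session would look roughly like this (tier1-address:port is a placeholder as in the command above; the all-zeros UUID is the hello-world backend mentioned earlier):

```shell
# Deploy the hello-world backend and drop into a bash shell that runs
# inside the per-deployment network namespace. sudo will prompt for a
# password to set the namespace up.
poetry run sinfonia-tier3 http://tier1-address:port/ \
    00000000-0000-0000-0000-000000000000 bash

# Inside that shell: only the VPN interface should be visible.
ip addr

# DNS goes through kubedns, so the backend's service name resolves.
curl -v http://helloworld/

# Exiting the shell tears down the network namespace and tunnel.
exit
```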
