flow·er: a label-based networking daemon

Introduction

On most operating systems in use today, programs are allowed to create connections to other systems on the network rather liberally. As this is often undesirable, systems provide additional kernel subsystems, called firewalls, that allow you to restrict this. The downside of firewalls is that they remain bolted onto the system: there is no way for a regular, unprivileged process to programmatically (and portably) place restrictions on the subprocesses it is about to spawn.

Programming interfaces such as the Berkeley sockets API and getaddrinfo() are also strongly coupled to IPv4, IPv6, TCP, UDP and DNS. Operating systems provide little to no support for experimenting with custom transport protocols and domain-specific name services.

With Flower, we're trying to move the responsibility of setting up network connections into a separate daemon, called the Switchboard. Processes can register servers on the Switchboard, to which clients can connect. When establishing a connection, the Switchboard creates a socket pair and uses Unix file descriptor passing to hand out socket endpoints to both processes. Special processes, called ingresses and egresses, can push existing socket file descriptors (e.g., IPv4, IPv6) into the Switchboard.
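The hand-off itself relies on standard POSIX facilities: socketpair() and file descriptor passing over Unix sockets with SCM_RIGHTS. The following sketch is not taken from Flower; it is a minimal illustration of how a broker process could create a socket pair and ship one end to another process over a control channel (client_channel and server_channel are placeholders):

#include <sys/socket.h>
#include <sys/uio.h>
#include <string.h>
#include <unistd.h>

// Ships file descriptor fd_to_pass across the Unix socket channel,
// using an SCM_RIGHTS control message.
int send_fd(int channel, int fd_to_pass) {
  char dummy = 'x';
  struct iovec iov;
  iov.iov_base = &dummy;
  iov.iov_len = 1;

  union {
    struct cmsghdr hdr;
    char buf[CMSG_SPACE(sizeof(int))];
  } control;
  memset(&control, 0, sizeof(control));

  struct msghdr msg;
  memset(&msg, 0, sizeof(msg));
  msg.msg_iov = &iov;
  msg.msg_iovlen = 1;
  msg.msg_control = control.buf;
  msg.msg_controllen = sizeof(control.buf);

  struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
  cmsg->cmsg_level = SOL_SOCKET;
  cmsg->cmsg_type = SCM_RIGHTS;
  cmsg->cmsg_len = CMSG_LEN(sizeof(int));
  memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

  return sendmsg(channel, &msg, 0) < 0 ? -1 : 0;
}

int main() {
  // Create a connected pair of stream sockets.
  int fds[2];
  if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0)
    return 1;
  // A broker would now pass fds[0] to the client and fds[1] to the
  // server over their respective control channels, for example:
  //   send_fd(client_channel, fds[0]);
  //   send_fd(server_channel, fds[1]);
  close(fds[0]);
  close(fds[1]);
  return 0;
}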

Identification through labels

The Switchboard identifies its clients and servers by a set of string key-value pairs (labels). Clients and servers are allowed to establish a connection if there are no labels for which the keys match, but the values contradict. Once connected, both parties obtain a copy of the union of both sets of labels. This allows clients to pass connection metadata (network addresses, hostnames, local usernames) to servers and vice versa.
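The matching rule can be summarised with a small sketch (purely illustrative; this is not the Switchboard's actual implementation): a connection is refused only if some key is present on both sides with different values, and an accepted connection carries the union of both label sets.

#include <map>
#include <optional>
#include <string>

using Labels = std::map<std::string, std::string>;

// Returns the merged label set if the two sets are compatible,
// or std::nullopt if any shared key has contradicting values.
std::optional<Labels> Merge(const Labels &client, const Labels &server) {
  Labels merged = server;
  for (const auto &label : client) {
    auto it = merged.find(label.first);
    if (it != merged.end() && it->second != label.second)
      return std::nullopt;  // Same key, different value: no connection.
    merged.insert(label);   // New key: add it to the union.
  }
  return merged;
}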

An interesting aspect of the Switchboard is that these labels also act as a security mechanism. Handles to the Switchboard have a set of labels attached to them that can only be extended over time. Every time a new label is attached, the size of the space in which it can establish connections is reduced. The Switchboard's security model is capability-based.

Example usage

A Switchboard process can be started simply by running:

flower_switchboard /var/run/flower

Flower ships with a utility similar to nc(1), called flower_cat, that allows you to easily start clients and servers. A simple one-shot server can be started by running:

flower_cat -l \
    /var/run/flower '{"program": "demo", "server_name": "My server v0.1"}'

A client can be started similarly:

flower_cat \
    /var/run/flower '{"program": "demo", "client_name": "My client v0.1"}'

This establishes a connection carrying the following labels:

{
    "client_name": "My client v0.1",
    "program": "demo",
    "server_name": "My server v0.1"
}

Other utilities shipped with Flower include flower_ingress_accept and flower_egress_connect. These utilities act as bindings for accept() and connect(), allowing processes to interact with the local network. For example, the following command shows how incoming network traffic on TCP port 80 can be delivered to a running server:

flower_ingress_accept \
    0.0.0.0:80 \
    /var/run/flower '{"program": "demo"}'

Connection metadata (client/server address/port) is attached as additional labels.
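For example, a connection accepted by the ingress above might end up carrying a label set along the following lines. The key names used for the address metadata here are purely hypothetical, chosen for illustration; the actual names are defined by flower_ingress_accept:

{
    "client_address": "192.0.2.15:61234",
    "program": "demo",
    "server_address": "0.0.0.0:80",
    "server_name": "My server v0.1"
}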

Of course, it makes far more sense to communicate with the Switchboard programmatically than through the utilities above. The Switchboard makes use of ARPC; its protocol is specified in a .proto file.

Motivation

Flower has been developed for use with CloudABI, a POSIX-like runtime environment that is strongly sandboxed. CloudABI doesn't allow programs to open arbitrary network connections (i.e., there is no bind() and connect()). A system like Flower is thus needed to grant processes access to the network in a sensible way.