
Assess the micro VM market with AWS Firecracker

Consider the benefits of AWS Firecracker as it compares to other micro VMs and system containers, but be wary of its limitations before installing this virtualization technology.

AWS Firecracker powers AWS' repertoire of serverless offerings, such as Lambda and Fargate. AWS users can also take advantage of Firecracker's micro VM technology to mix the benefits of containers and virtual machines -- but some limitations, particularly for production workloads, still exist.

Firecracker features and management

Firecracker uses a virtualization technology called Kernel-based Virtual Machine (KVM). This enables developers to bring up micro VMs hosted on a hypervisor at the Linux kernel level. Micro VM technology is similar to containers -- it's fast and lightweight -- but micro VMs are still virtual machines under the hood.

These VMs do not contain entire operating systems and, because they are hosted at the Linux kernel level, they are fast on both cold startup and execution. Compared to containers, micro VMs are also more strongly isolated, which improves security and ensures that whatever runs in the micro VM stays there.

The isolation aspect of Firecracker, in particular, shouldn't go unnoticed. Typically, KVM is used in conjunction with QEMU, a system emulator that runs in user space. This setup poses a problem for privileged system calls that need a higher level of access -- QEMU slows down because it must translate all of these calls between user and system context.

Firecracker replaces QEMU with its own lightweight virtual machine monitor. This move, along with its reliance on KVM, enables Firecracker to bring up VMs in a fraction of a second.

Infrastructure engineers can manage micro VMs via a REST API exposed over a Unix socket. Common command-line tools -- such as curl or PowerShell, with its Invoke-RestMethod cmdlet -- can send instructions to Firecracker and receive output. However, because these micro VMs typically come online and shut down so quickly, they don't require much management while running.
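For example, here is a minimal sketch of querying the API for general instance information with curl. It assumes Firecracker's API socket lives at /tmp/firecracker.socket, the same path used in the walkthrough later in this article:

curl --unix-socket /tmp/firecracker.socket \
    -X GET 'http://localhost/' \
    -H 'Accept: application/json'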

Weigh the alternatives -- and early limitations

Compared to alternatives such as containers or Hyper-V VMs, Firecracker, at least at this time, is not well suited for production workloads. The technology is still relatively new, especially compared to other hosting options such as containers, which are more seasoned and have accumulated many hours of production use.

Also, since Firecracker operates at the Linux kernel level, it can only host Linux, not Windows, micro VMs.

Lastly, the only way to manage Firecracker is through its Unix socket API, which isn't the most user-friendly interface for micro VMs. Again, this is something Amazon could address with time.

Get started

If developers find that AWS Firecracker is a good fit for their requirements, they can follow these steps to get started with the technology.

Because Firecracker relies on KVM, it requires a nonvirtualized, bare-metal server. One way to meet this requirement is to spin up an i3.metal EC2 instance in AWS. For the OS, choose Ubuntu 18.04. After the EC2 instance is set up, grant your user read/write access to the KVM device:

sudo setfacl -m u:${USER}:rw /dev/kvm
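Before moving on, it can help to confirm that access to the KVM device actually works. A quick sanity check, assuming a Bash shell:

[ -r /dev/kvm ] && [ -w /dev/kvm ] && echo "KVM access OK" || echo "KVM access FAILED"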

Next, get the Firecracker binary. Since Firecracker is open source, access this binary via the GitHub repo's releases page. On the EC2 instance, download the binary with curl, rename it as firecracker and make the binary executable:

curl -LOJ https://github.com/firecracker-microvm/firecracker/releases/download/v${latest}/firecracker-v${latest}
mv firecracker-v${latest} firecracker
chmod +x firecracker
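The ${latest} shell variable above is assumed to hold a Firecracker release version, such as 0.21.1. One way to populate it is to follow GitHub's latest-release redirect and strip the leading v; treat this as a sketch, because the exact asset names on the releases page can change between versions:

latest=$(basename $(curl -fsSLI -o /dev/null -w '%{url_effective}' \
    https://github.com/firecracker-microvm/firecracker/releases/latest))
latest=${latest#v}    # for example, v0.21.1 becomes 0.21.1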

With Firecracker now downloaded, it's time to get it up and running. This requires a Linux kernel binary and an ext4 file system image to use as the root file system. Demo copies of these system images are linked from the Firecracker project's getting started guide.
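For example, the downloads might look like the following sketch, which saves the images under the file names the later commands expect. The URLs are hypothetical placeholders -- substitute the actual links from the getting started guide:

# Placeholder URLs -- use the links from the Firecracker getting started guide
curl -fsSL -o hello-vmlinux.bin <kernel-image-url>
curl -fsSL -o hello-rootfs.ext4 <rootfs-image-url>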

Open up a shell and start Firecracker:

./firecracker --api-sock /tmp/firecracker.socket
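If Firecracker is stopped and started again later, the leftover socket file can prevent the new process from binding. A common cleanup step before restarting, assuming the same socket path as above, is:

rm -f /tmp/firecracker.socket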

Next, execute curl commands to tell Firecracker where the guest kernel and guest rootfs (ext4 file system image) are:

curl --unix-socket /tmp/firecracker.socket -i \
    -X PUT 'http://localhost/boot-source'   \
    -H 'Accept: application/json'           \
    -H 'Content-Type: application/json'     \
    -d '{
        "kernel_image_path": "./hello-vmlinux.bin",
        "boot_args": "console=ttyS0 reboot=k panic=1 pci=off"
    }'

curl --unix-socket /tmp/firecracker.socket -i \
    -X PUT 'http://localhost/drives/rootfs' \
    -H 'Accept: application/json'           \
    -H 'Content-Type: application/json'     \
    -d '{
        "drive_id": "rootfs",
        "path_on_host": "./hello-rootfs.ext4",
        "is_root_device": true,
        "is_read_only": false
    }'

Once Firecracker knows where the kernel and rootfs are, bring up a VM:

curl --unix-socket /tmp/firecracker.socket -i \
    -X PUT 'http://localhost/actions'       \
    -H  'Accept: application/json'          \
    -H  'Content-Type: application/json'    \
    -d '{
        "action_type": "InstanceStart"
     }'

In the shell where you brought up Firecracker, you should see a serial TTY prompting you to log in to the virtual machine.

Remember that starting Firecracker opens up a Unix socket. This demo micro VM does not run SSH, so control it via the serial connection shown here, or by sending further calls to the REST API.
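To stop the micro VM from the host, one option is another action request over the same socket. This sketch assumes an x86 guest that responds to Ctrl+Alt+Del:

curl --unix-socket /tmp/firecracker.socket -i \
    -X PUT 'http://localhost/actions'       \
    -H 'Accept: application/json'           \
    -H 'Content-Type: application/json'     \
    -d '{
        "action_type": "SendCtrlAltDel"
    }'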
