This component, known as crosvm, runs untrusted operating systems along with virtualized devices. It only runs VMs through Linux's KVM interface. What makes crosvm unique is a focus on safety within the programming language and a sandbox around the virtual devices to protect the kernel from attack in case of an exploit in the devices.
The channel #crosvm on freenode is used for technical discussion related to crosvm development and integration.
crosvm on Chromium OS is built with Portage, so it follows the same general workflow as any `cros_workon` package. The full package name is `chromeos-base/crosvm`.
See the Chromium OS developer guide for more on how to build and deploy with Portage.
See the README from the `ci` subdirectory to learn how to build and test crosvm in environments outside of the Chrome OS chroot.
NOTE: Building for Linux natively is new and not fully supported.
First, set up depot_tools and use `repo` to sync down the crosvm source tree. This is a subset of the entire Chromium OS manifest with just enough repos to build crosvm.
```
mkdir crosvm
cd crosvm
repo init -g crosvm -u https://chromium.googlesource.com/chromiumos/manifest.git --repo-url=https://chromium.googlesource.com/external/repo.git
repo sync
```
A basic crosvm build links against `libcap`. On a Debian-based system, you can install `libcap-dev`.
Handy Debian one-liner for all build and runtime deps, particularly if you're running Crostini:
```
sudo apt install build-essential libcap-dev libgbm-dev libvirglrenderer-dev libwayland-bin libwayland-dev pkg-config protobuf-compiler python wayland-protocols
```
Known issues:

- Seccomp policy files have hardcoded absolute paths. You can work around some of the errors by setting up: `sudo mkdir /usr/share/policy && sudo ln -s /path/to/crosvm/seccomp/x86_64 /usr/share/policy/crosvm`. We'll eventually build the precompiled policies into the crosvm binary.
- Devices can't be jailed if `/var/empty` doesn't exist. Run `sudo mkdir -p /var/empty` to work around this for now.
- You need read/write permissions for `/dev/kvm` to run tests or other crosvm instances. Usually it's owned by the `kvm` group, so run `sudo usermod -a -G kvm $USER` and then log out and back in again to fix this.
- Some other features (networking) require `CAP_NET_ADMIN`, so those usually need to be run as root.

And that's it! You should be able to `cargo build/run/test`.
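For example (ordinary cargo invocations from the repo root; nothing here is crosvm-specific):

```
$ cargo build
$ cargo test
$ cargo run -- run --help
```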
To see the usage information for your version of crosvm, run `crosvm` or `crosvm run --help`.
To run a very basic VM with just a kernel and default devices:
$ crosvm run "${KERNEL_PATH}"
The uncompressed kernel image, also known as vmlinux, can be found in your kernel build directory; in the case of x86, it is at `arch/x86/boot/compressed/vmlinux`.
In most cases, you will want to give the VM a virtual block device to use as a root file system:
```
$ crosvm run -r "${ROOT_IMAGE}" "${KERNEL_PATH}"
```
The root image must be a path to a disk image formatted in a way that the kernel can read. Typically this is a squashfs image made with `mksquashfs` or an ext4 image made with `mkfs.ext4`. By using the `-r` argument, the kernel is automatically told to use that image as the root, so it can only be given once. More disks can be given with `-d` or `--rwdisk` if a writable disk is desired.
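For example, either of the following produces a usable root image (the file names and the 1 GiB size are illustrative):

```
# Read-only squashfs image built from an existing directory tree:
mksquashfs /path/to/rootfs rootfs.squashfs

# Writable ext4 image:
dd if=/dev/zero of=rootfs.ext4 bs=1M count=1024
mkfs.ext4 rootfs.ext4
```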
To run crosvm with a writable rootfs:
WARNING: Writable disks are at risk of corruption by a malicious or malfunctioning guest OS.
```
crosvm run --rwdisk "${ROOT_IMAGE}" -p "root=/dev/vda" vmlinux
```
NOTE: If more disk arguments are added prior to the desired rootfs image, the `root=/dev/vda` kernel parameter must be adjusted to the appropriate letter, as sketched below.
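For instance, with disks assigned letters in argument order, a read-only data disk placed before the root disk pushes the root to the second device (`${DATA_IMAGE}` is a hypothetical placeholder):

```
$ crosvm run -d "${DATA_IMAGE}" --rwdisk "${ROOT_IMAGE}" -p "root=/dev/vdb" vmlinux
```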
Linux kernel 5.4+ is required for using virtiofs. This is convenient for testing, since a host directory can be shared directly as the guest's root. The file system must be named "mtd*" or "ubi*", as those are the only non-block-device root names the kernel accepts on its command line.
```
crosvm run --shared-dir "/:mtdfake:type=fs:cache=always" \
    -p "rootfstype=virtiofs root=mtdfake" vmlinux
```
If the control socket was enabled with `-s`, the main process can be controlled while crosvm is running. To tell crosvm to stop and exit, for example:
NOTE: If the socket path given is for a directory, a socket name underneath that path will be generated based on crosvm's PID.
```
$ crosvm run -s /run/crosvm.sock ${USUAL_CROSVM_ARGS}
    <in another shell>
$ crosvm stop /run/crosvm.sock
```
WARNING: The guest OS will not be notified or gracefully shutdown.
This will cause the original crosvm process to exit in an orderly fashion, allowing it to clean up any OS resources that might have stuck around if crosvm were terminated early.
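The same socket accepts other control subcommands; for example, pausing and resuming the guest's vcpus (check `crosvm --help` for the set supported by your build):

```
$ crosvm suspend /run/crosvm.sock
$ crosvm resume /run/crosvm.sock
```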
By default crosvm runs in multiprocess mode. Each device that supports running inside of a sandbox will run in a jailed child process of crosvm. The appropriate minijail seccomp policy files must be present either in `/usr/share/policy/crosvm` or in the path specified by the `--seccomp-policy-dir` argument. The sandbox can be disabled for testing with the `--disable-sandbox` option.
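For example, a locally built crosvm can point at the policies in its own source tree (the path is illustrative):

```
$ crosvm run --seccomp-policy-dir=/path/to/crosvm/seccomp/x86_64 ${USUAL_CROSVM_ARGS} vmlinux
```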
Virtio Wayland support requires special support on the part of the guest and as such is unlikely to work out of the box unless you are using a Chrome OS kernel along with a `termina` rootfs.

To use it, ensure that the `XDG_RUNTIME_DIR` environment variable is set and that the path `$XDG_RUNTIME_DIR/wayland-0` points to the socket of the Wayland compositor you would like the guest to use.
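A quick sanity check before launching the VM (this assumes a compositor is already running on the host):

```
$ echo "$XDG_RUNTIME_DIR"            # must be set
$ ls -l "$XDG_RUNTIME_DIR/wayland-0" # should be the compositor's socket
```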
crosvm supports the GDB Remote Serial Protocol, which allows developers to debug the guest kernel via GDB. You can enable the feature with the `--gdb` flag:
```
# Use uncompressed vmlinux
$ crosvm run --gdb <port> ${USUAL_CROSVM_ARGS} vmlinux
```
Then, you can start GDB in another shell.
```
$ gdb vmlinux
(gdb) target remote :<port>
(gdb) hbreak start_kernel
(gdb) c
<start booting in the other shell>
```
For general techniques for debugging the Linux kernel via GDB, see this kernel documentation.
The following are crosvm's default arguments and how to override them.

- 256MB of memory (set with `-m`)
- 1 virtual CPU (set with `-c`)
- no block devices (set with `-r`, `-d`, or `--rwdisk`)
- no network (set with `--host_ip`, `--netmask`, and `--mac`)
- virtio wayland support if the `XDG_RUNTIME_DIR` environment variable is set (disable with `--no-wl`)
- only the kernel arguments necessary to run the VM (additional arguments can be passed with `-p`)
- run in multiprocess mode (run in single process mode with `--disable-sandbox`)
- no control socket (set with `-s`)
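As a sketch, several of these can be overridden at once (all values are arbitrary):

```
$ crosvm run -m 1024 -c 2 --disable-sandbox -r rootfs.ext4 vmlinux
```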
A Linux kernel with KVM support (check for `/dev/kvm`) is required to run crosvm. In order to run certain devices, there are additional system requirements:
- `virtio-wayland` - The `memfd_create` syscall, introduced in Linux 3.17, and a Wayland compositor.
- `vsock` - Host Linux kernel with vhost-vsock support, introduced in Linux 4.8.
- `multiprocess` - Host Linux kernel with seccomp-bpf and Linux namespacing support.
- `virtio-net` - Host Linux kernel with TUN/TAP support (check for `/dev/net/tun`) and running with `CAP_NET_ADMIN` privileges.

Device | Description |
---|---|
CMOS/RTC | Used to get the current calendar time. |
i8042 | Used by the guest kernel to exit crosvm. |
serial | x86 I/O port driven serial devices that print to stdout and take input from stdin. |
virtio-block | Basic read/write block device. |
virtio-net | Device to interface the host and guest networks. |
virtio-rng | Entropy source used to seed guest OS's entropy pool. |
virtio-vsock | Enables vsock for the guests. |
virtio-wayland | Allows the guest to use the host's Wayland socket. |
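For instance, the vsock device is enabled by assigning the guest a context ID (the `--cid` flag; 3 is an arbitrary value):

```
$ crosvm run --cid 3 ${USUAL_CROSVM_ARGS} vmlinux
```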
`test_all`

Crosvm provides docker containers to build and run tests for both x86_64 and aarch64, which can be run with the `./test_all` script. See `ci/README.md` for more details on how to use the containers for local development.
`rustfmt`

All code should be formatted with `rustfmt`. We have a script that applies rustfmt to all Rust code in the crosvm repo: please run `bin/fmt` before checking in a change. This is different from `cargo fmt --all`, which formats multiple crates but only a single workspace; crosvm consists of multiple workspaces.
`clippy`

The `clippy` linter is used to check for common Rust problems. The crosvm project uses a specific set of `clippy` checks; please run `bin/clippy` before checking in a change.
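A typical pre-submit routine is therefore:

```
$ bin/fmt      # rustfmt across every crosvm workspace
$ bin/clippy   # the project's clippy checks
```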
With a few exceptions, external dependencies inside of the `Cargo.toml` files are not allowed, because community-made crates tend to explode the binary size by including dozens of transitive dependencies, all of which must be reviewed to ensure their suitability to the crosvm project. Currently allowed crates are:

- `cc` - Build-time dependency needed to build C source code used in crosvm.
- `libc` - Required to use the standard library; this crate is a simple wrapper around `libc`'s symbols.

The crosvm source code is written in Rust and C. To build, crosvm generally requires the most recent stable version of rustc.
Source code is organized into crates, each with their own unit tests. These crates are:

- `crosvm` - The top-level binary front-end for using crosvm.
- `devices` - Virtual devices exposed to the guest OS.
- `kernel_loader` - Loads elf64 kernel files to a slice of memory.
- `kvm_sys` - Low-level (mostly) auto-generated structures and constants for using KVM.
- `kvm` - Unsafe, low-level wrapper code for using `kvm_sys`.
- `net_sys` - Low-level (mostly) auto-generated structures and constants for creating TUN/TAP devices.
- `net_util` - Wrappers for creating TUN/TAP devices.
- `sys_util` - Mostly safe wrappers for small system facilities such as `eventfd` or `syslog`.
- `syscall_defines` - Lists of syscall numbers in each architecture, used to make syscalls not supported in `libc`.
- `vhost` - Wrappers for creating vhost-based devices.
- `virtio_sys` - Low-level (mostly) auto-generated structures and constants for interfacing with kernel vhost support.
- `vm_control` - IPC for the VM.
- `x86_64` - Support code specific to 64-bit Intel machines.

The `seccomp` folder contains minijail seccomp policy files for each sandboxed device. Because some syscalls vary by architecture, the seccomp policies are split by architecture.
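Each policy file is a minijail seccomp policy: one allowed syscall per line, optionally with an argument filter. An illustrative fragment (not taken from any actual crosvm policy) might look like:

```
read: 1
write: 1
exit_group: 1
```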