Note: nixos-shell must be able to extend the specified system configuration with [certain modules](share/modules).
If your version of nixpkgs provides the extendModules function on system configurations, nixos-shell will use it to inject the required modules; no additional work on your part is needed.
If your version of nixpkgs does not provide extendModules, you must make your system configurations overridable with lib.makeOverridable to use them with nixos-shell:
{
  nixosConfigurations = let
    lib = nixpkgs.lib;
  in {
    vm = lib.makeOverridable lib.nixosSystem {
      # ...
    };
  };
}
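With the configuration made overridable, the VM can then be started with something like nixos-shell --flake .#vm.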
Specifying a non-overridable system configuration will cause nixos-shell to abort with a non-zero exit status.
When using the --flake flag, if no attribute is given, nixos-shell tries the following flake output attributes:
packages.<system>.nixosConfigurations.<vm>
nixosConfigurations.<vm>
nixosModules.<vm>
If an attribute name is given, nixos-shell tries the following flake output attributes:
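packages.<system>.nixosConfigurations.<name>
nixosConfigurations.<name>
nixosModules.<name>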
To forward ports from the virtual machine to the host, use the
virtualisation.forwardPorts NixOS option.
See examples/vm-forward.nix where the ssh server running on port 22 in the
virtual machine is made accessible through port 2222 on the host.
The same can also be achieved by using the QEMU_NET_OPTS environment variable.
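For reference, a forwardPorts entry for that scenario might look like the following sketch (the port numbers mirror the example above):

```nix
{
  # Forward port 2222 on the host to port 22 (sshd) inside the guest.
  virtualisation.forwardPorts = [
    { from = "host"; host.port = 2222; guest.port = 22; }
  ];
}
```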
Your keys are used to enable passwordless login for the root user.
At the moment only ~/.ssh/id_rsa.pub, ~/.ssh/id_ecdsa.pub and ~/.ssh/id_ed25519.pub are
added automatically. Use users.users.root.openssh.authorizedKeys.keyFiles to add more.
Note: sshd is not started by default. It can be enabled by setting
services.openssh.enable = true.
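Putting both together, a minimal sketch (the key path below is only a placeholder):

```nix
{
  # Start sshd inside the VM; it is off by default.
  services.openssh.enable = true;
  # Placeholder path: add any extra public keys that should be authorized for root.
  users.users.root.openssh.authorizedKeys.keyFiles = [ ./my-extra-key.pub ];
}
```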
QEMU is started with user-mode networking by default. To use a bridged network instead,
set virtualisation.qemu.networkingOptions to something like
[ "-nic bridge,br=br0,model=virtio-net-pci,mac=11:11:11:11:11:11,helper=/run/wrappers/bin/qemu-bridge-helper" ].
/run/wrappers/bin/qemu-bridge-helper is a NixOS-specific path for qemu-bridge-helper;
on other Linux distributions it will be different.
QEMU needs to be installed on the host so that qemu-bridge-helper gets the setuid bit
set; otherwise you will need to start the VM as root. On NixOS this can be achieved with
virtualisation.libvirtd.enable = true;
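As a complete module, the bridged setup described above might look like this sketch (the bridge name br0 and the MAC address are placeholders for your own values):

```nix
{
  # Replace QEMU's default user-mode networking with a host bridge.
  # qemu-bridge-helper must have the setuid bit set, or the VM must run as root.
  virtualisation.qemu.networkingOptions = [
    "-nic bridge,br=br0,model=virtio-net-pci,mac=11:11:11:11:11:11,helper=/run/wrappers/bin/qemu-bridge-helper"
  ];
}
```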
To increase the size of the virtual hard drive, e.g. to 20 GB (see the [virtualisation] options at the bottom; the default is 512M):
{ virtualisation.diskSize = 20 * 1024; }
Note that for this option to take effect you may also need to delete the block device file previously created by QEMU (nixos.qcow2).
Note that changes in the Nix store are written to an overlayfs backed by tmpfs rather than to the block device
that is configured by virtualisation.diskSize. This tmpfs can however be disabled with:
{ virtualisation.writableStoreUseTmpfs = false; }
This option is recommended if you plan to use nixos-shell as a remote builder.
There is no explicit option for this right now, but one can use either the $QEMU_OPTS environment variable
or set virtualisation.qemu.options to pass the right QEMU
command-line flags:
{
  # Example: attach the host block device /dev/sdc as an additional hard drive.
  # /dev/sdc also needs to be read-writable by the user executing nixos-shell
  virtualisation.qemu.options = [ "-hda /dev/sdc" ];
}
In many cloud environments KVM is not available, so nixos-shell will fail with: CPU model 'host' requires KVM.
In newer versions of nixpkgs this has been fixed by falling back to emulation.
In older versions one can set virtualisation.qemu.options or set the environment variable QEMU_OPTS:
export QEMU_OPTS="-cpu max"
nixos-shell
A full list of supported qemu cpus can be obtained by running qemu-kvm -cpu help.
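The same fallback can also be set in the VM configuration itself; a minimal sketch:

```nix
{
  # Use QEMU's "max" CPU model so the VM can run under TCG emulation without KVM.
  virtualisation.qemu.options = [ "-cpu max" ];
}
```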
By default VMs have a NIX_PATH configured for nix channels, but no channels are downloaded yet.
To avoid having to download a nix-channel every time the VM is reset, you can use the following nixos configuration:
{ pkgs, ... }: {
  nix.nixPath = [
    "nixpkgs=${pkgs.path}"
  ];
}
This adds the nixpkgs used to build the VM to the NIX_PATH of the login shell.
Embedding nixos-shell in your own nixos-configuration
It’s possible to specify a different architecture using --guest-system.
This requires your host system to have either a remote builder
(e.g. the darwin-builder on macOS)
or to be able to run builds in emulation
for the guest system (boot.binfmt.emulatedSystems on NixOS).
Here is an example for macOS (arm) that will run an aarch64-linux VM:
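As a rough sketch, assuming the macOS host already provides an aarch64-linux builder (for example via nixpkgs' darwin-builder), the VM can then be started with nixos-shell --guest-system aarch64-linux.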
It solves the same problem as things like virtualenv,
RVM and tools like Vagrant: The issue of quickly
being able to enter an environment with all the
dependencies you need for working on your application without
polluting your environment.
You add a configuration.nix file to each of your
applications. Then when you want to work on an
application you navigate to your project and boot a container:
$ cd my-awesome-project
$ sudo nixos-shell
[10.0.2.12:/src]$ echo "I'm in a container"
A container is built as defined in your project’s configuration.nix,
spawned, and you are logged in via SSH. The container has a
private networking namespace so you can start multiple containers
with clashing ports.
You can access things running in the container from the host via
the ip address advertised in the bash prompt.
Your application dir (the path on the host where you ran nixos-shell)
is bind mounted to /src inside the container. This is analogous to
the /vagrant synced folder in Vagrant.
If you want your containers to be able to connect to the internet you will need
to set up NAT on your host by adding something like the following to your
config:
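A typical NAT setup for NixOS containers looks roughly like this (eth0 stands in for your host's actual external interface):

```nix
{
  networking.nat = {
    enable = true;
    # Match the virtual ethernet interfaces created for the containers.
    internalInterfaces = [ "ve-+" ];
    # Replace with the host's real upstream interface.
    externalInterface = "eth0";
  };
}
```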
#### What’s a configuration.nix file?
See the NixOS manual.
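For a rough sense of what a project's configuration.nix can contain (the packages and services chosen here are purely illustrative):

```nix
# configuration.nix - declares everything the project needs to run
{ pkgs, ... }: {
  # Tools available inside the container.
  environment.systemPackages = [ pkgs.git pkgs.nodejs ];
  # Dependent services, started automatically when the container boots.
  services.postgresql.enable = true;
}
```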
#### Isn’t this just nix-shell?
No. nix-shell will drop you into a chroot, with any required build
dependencies, but won’t handle dependent services. nixos-shell will
drop you into a container which is closer to booting a virtual machine
with everything you need.
#### Isn’t this just nixos-container?
Not quite. nixos-shell builds on top of nixos-container to spawn
a temporary environment. That is, it sets up your environment, gets you
logged in, then takes care of tearing it down and tidying up after you when
you log out.