The idea of this post is to set up an environment to play with the Linux kernel. It also serves as a self-reference on the process I followed.

Environment setup

To create a minimal and customized environment I decided to build the kernel from source (kernel.org) and compile my own initramfs using buildroot (buildroot.org) with busybox support (busybox.net). To emulate the entire system I decided to use QEMU (qemu.org).

The following variables should be set in order to follow this guide:

$ export BUILD_PATH="/your/path/build/environment"
$ mkdir -p $BUILD_PATH

Installing QEMU

Simply follow the command line corresponding to your distribution here.
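For reference, a few common package names (from memory; double-check them against your distribution's repositories):

$ sudo apt install qemu-system-x86        # Debian / Ubuntu
$ sudo dnf install qemu-system-x86        # Fedora
$ sudo pacman -S qemu-full                # Arch Linux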

Compiling the kernel

To download the kernel source, go to kernel.org and look for the version you want; in this post I'm going to use 4.20 (link).

Once the archive is extracted, a minimal config can be generated with the allnoconfig make target. In this case I'm going to use the defconfig target to get a fully working kernel and then add some custom settings on top.

$ cd $BUILD_PATH
$ wget -O- https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.20.tar.gz | tar -xzv
$ mv linux-4.20 linux
$ cd linux
$ ARCH=x86_64 make defconfig
#
# configuration written to .config
#
$ make menuconfig

Then enable the following options:

To easily navigate, press the first letter of the option. To search, press / and type your search pattern. When the results appear, press the number shown on the left side of the option to navigate to it.

# Debugging
Kernel hacking ---> Compile-time checks and compiler options ---> Compile the kernel with debug info ---> yes
Kernel hacking ---> Compile-time checks and compiler options ---> Provide GDB scripts for kernel debugging ---> yes
General setup ---> Configure standard kernel features ---> yes
General setup ---> Configure standard kernel features ---> Load all symbols for debugging/ksymoops ---> yes
General setup ---> Configure standard kernel features ---> Include all symbols in kallsyms ---> yes

# Only if 64 bits is selected
Binary Emulations ---> IA32 a.out support ---> yes
Binary Emulations ---> IA32 ABI for 64-bit mode ---> yes

# Make sure the following options are enabled:
General setup ---> Initial RAM filesystem and RAM disk (initramfs/initrd) support ---> yes
General setup ---> Configure standard kernel features ---> Multiple users, groups and capabilities support ---> yes
General setup ---> Configure standard kernel features ---> Sysfs syscall support ---> yes
Device Drivers ---> Generic Driver Options ---> Maintain a devtmpfs filesystem to mount at /dev ---> yes
Device Drivers ---> Generic Driver Options ---> Automount devtmpfs at /dev, after the kernel mounted the rootfs ---> yes
Device Drivers ---> Character devices ---> Enable TTY ---> yes
Device Drivers ---> Character devices ---> Serial drivers ---> 8250/16550 and compatible serial support ---> yes
Device Drivers ---> Character devices ---> Serial drivers ---> Console on 8250/16550 and compatible serial port ---> yes
File systems ---> Pseudo filesystems ---> /proc file system support ---> yes
File systems ---> Pseudo filesystems ---> sysfs file system support ---> yes

I recommend compiling the kernel both with and without the KASan option for debugging purposes:

Kernel hacking ---> Memory Debugging ---> KASan: runtime memory debugger ---> yes

Depending on the exploit or feature you want to test on this kernel, you may want to disable the following countermeasures (they can also be toggled non-interactively; see the scripts/config sketch after the list):

# Allow mappings at address 0 so NULL pointer dereferences can be exploited
Memory Management options ---> Low address space to protect from user allocation ---> 0
# SMAP disabled
Processor type and features ---> Supervisor Mode Access Prevention ---> no
# KASLR, can also be disabled by adding nokaslr to the kernel command line
Processor type and features ---> Build a relocatable kernel ---> no
# Canary disabled
General architecture-dependent options ---> Stack Protector buffer overflow detection ---> no
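If you prefer to set these options non-interactively, the kernel tree ships a scripts/config helper. A sketch (the symbol names below correspond to the menu entries above; double-check them against your tree before relying on it):

$ cd $BUILD_PATH/linux
$ ./scripts/config --enable CONFIG_DEBUG_INFO \
                   --enable CONFIG_GDB_SCRIPTS \
                   --enable CONFIG_KALLSYMS_ALL \
                   --set-val CONFIG_DEFAULT_MMAP_MIN_ADDR 0 \
                   --disable CONFIG_X86_SMAP \
                   --disable CONFIG_RELOCATABLE \
                   --disable CONFIG_STACKPROTECTOR
$ make olddefconfig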

Once the configuration is set and tuned as you want, compile the kernel:

$ nproc
8
$ time ARCH=x86_64 make -j 8
(...)
real    2m52.923s
user    2m32.595s
sys 0m20.537s

Now test the newly compiled kernel with QEMU using the following command (press <Ctrl>a followed by x to exit):

$ qemu-system-x86_64 -kernel arch/x86_64/boot/bzImage -nographic -append "console=ttyS0" -enable-kvm
(...)
[    1.991148] Kernel Offset: 0x5a00000 from 0xc1000000 (relocation range: 0xc0000000-0xc87dffff)
[    1.991649] ---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) ]---

This kernel panic is normal since we are missing the init process from the initramfs or root fs.

ARM

The kernel can be compiled for any architecture, including ARM, using a cross toolchain. This involves exporting the following variables before compiling the kernel (you will probably have to adjust the toolchain prefix):

$ export ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf-
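With those variables exported, a full ARM build looks roughly like this (a sketch; multi_v7_defconfig is just one reasonable starting configuration, and the toolchain prefix must match the cross compiler you have installed):

$ cd $BUILD_PATH/linux
$ make multi_v7_defconfig
$ make -j $(nproc) zImage modules dtbs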

Compiling buildroot

Time to compile the initramfs with some bin/sbin utils including gdbserver and python:

$ cd $BUILD_PATH
$ wget -O- https://buildroot.org/downloads/buildroot-2019.02.2.tar.gz | tar -xzv
$ mv buildroot-2019.02.2 buildroot
$ cd buildroot
$ make menuconfig

Set the following options:

# General
Target options ---> Target Architecture ---> x86_64
Build options ---> Enable compiler cache ---> yes
Build options ---> Compiler cache location ---> $(BUILD_PATH)/.buildroot-ccache

# Must not be newer than the kernel you are going to run (same version is ideal)
Toolchain ---> Kernel Headers ---> Linux 4.20.x kernel headers
Toolchain ---> C library ---> glibc
Toolchain ---> Enable C++ support ---> yes
System configuration ---> Run a getty (login prompt) after boot ---> TTY port ---> ttyS0
System configuration ---> Network interface to configure through DHCP ---> eth0
System configuration ---> Root filesystem overlay directories ---> $(BUILD_PATH)/buildroot/overlay
Target packages ---> Debugging, profiling and benchmark ---> gdb
Target packages ---> Interpreter languages and scripting ---> python
Target packages ---> Networking applications ---> dropbear ---> yes
Filesystem images ---> cpio the root filesystem (for use as an initial RAM filesystem) ---> yes
Filesystem images ---> ext2/3/4 root filesystem ---> ext2/3/4 variant ---> ext4

# Optional
System configuration ---> System hostname ---> nullbyte
System configuration ---> System banner ---> Welcome to nullbyte.cat

I recommend building buildroot with both the ext2/3/4 and the cpio options. This way we get an in-memory filesystem (the initramfs, whose changes are lost on shutdown) as well as a persistent filesystem where we can store files that are kept across reboots.

Create an overlay directory. All the files stored there will be copied into the root of the generated filesystem. Add a minimal init script to it:

$ mkdir $BUILD_PATH/buildroot/overlay
$ cat $BUILD_PATH/buildroot/overlay/init
#!/bin/sh
# devtmpfs does not get automounted for initramfs
/bin/mount -t devtmpfs devtmpfs /dev
exec 0</dev/console
exec 1>/dev/console
exec 2>/dev/console

exec /sbin/init "$@"
$ chmod +x $BUILD_PATH/buildroot/overlay/init

Adding users

With the current configuration, only the root user (with no password) will be created. To create a normal user and update the root password, we need to provide the /etc/passwd and /etc/shadow files:

$ mkdir $BUILD_PATH/buildroot/overlay/etc/
$ cat $BUILD_PATH/buildroot/overlay/etc/shadow
root:$5$AQRgXbdJ$eCko6aRPrhOBegsJGLy36fmmrheNtfkUMBjlKPWEXW9:10000:0:99999:7:::
daemon:*:10933:0:99999:7:::
bin:*:10933:0:99999:7:::
sys:*:10933:0:99999:7:::
sync:*:10933:0:99999:7:::
mail:*:10933:0:99999:7:::
www-data:*:10933:0:99999:7:::
operator:*:10933:0:99999:7:::
nobody:*:10933:0:99999:7:::
user:$5$QAucgwIL$onnijv2MwdMD.Jze4LgPx7z3kksIjU18y3jffH2urv3:10000:0:99999:7:::

The passwords corresponding to the previous hashes are root:root and user:user (user:password).

If you want to generate a custom hash for a new user or to change the previous ones use the following command:

python -c "import random,string,crypt;
randomsalt = ''.join(random.sample(string.ascii_letters,8));
print crypt.crypt('PASSWORD', '\$5\$%s\$' % randomsalt)"
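Note that the one-liner above uses Python 2 print syntax. As an alternative, OpenSSL 1.1.1 or newer can generate the same SHA-256 crypt hashes (a sketch; it picks a random salt for you):

$ openssl passwd -5 PASSWORD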

Update the /etc/passwd accordingly:

$ cat $BUILD_PATH/buildroot/overlay/etc/passwd
root:x:0:0:root:/root:/bin/sh
daemon:x:1:1:daemon:/usr/sbin:/bin/false
bin:x:2:2:bin:/bin:/bin/false
sys:x:3:3:sys:/dev:/bin/false
sync:x:4:100:sync:/bin:/bin/sync
mail:x:8:8:mail:/var/spool/mail:/bin/false
www-data:x:33:33:www-data:/var/www:/bin/false
operator:x:37:37:Operator:/var:/bin/false
nobody:x:65534:65534:nobody:/home:/bin/false
user:x:1000:1000:Linux User,,,:/home/user:/bin/sh

Finally, create the user's home directory and set its permissions in the device table.

$ mkdir -p $BUILD_PATH/buildroot/overlay/home/user
$ echo -e '/home/user\td\t755\t1000\t100\t-\t-\t-\t-\t-' >> $BUILD_PATH/buildroot/system/device_table.txt

Adding the kernel modules

To add the modules compiled during the kernel step to the filesystem we can use the overlay directory as the destination:

$ cd $BUILD_PATH/linux
$ make modules_install INSTALL_MOD_PATH=$BUILD_PATH/buildroot/overlay
(...)
DEPMOD  4.20.0

Finishing up

Once everything is ready, compile buildroot (it will take several minutes the first time):

$ cd $BUILD_PATH/buildroot
$ make source
$ nproc
8
$ make -j 8

Testing the environment

As already mentioned, if you compile buildroot with the ext2/3/4 filesystem you will have permanent storage where you can transfer files via ssh and keep them. However, if you use the cpio variant, the dropbear host key will be regenerated on each boot, so the key cached on the host will no longer match after a reboot.

To solve this you have 2 options:

  • Use the ext2 filesystem.
  • Ignore ~/.ssh/known_hosts by appending the -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no options to the ssh client.

To ssh into the machine we need to forward the ssh port to the host; we can do this by appending hostfwd=tcp::2222-:22 to the -net user option (we can add as many forwards as we need). I will forward three ports: 9999 for a user-space gdbserver, guest port 22 to host port 2222 for ssh, and 8000 as an extra. The full command using ext2 looks like this:

$ cd $BUILD_PATH
$ qemu-system-x86_64 -kernel linux/arch/x86_64/boot/bzImage \
-drive file=buildroot/output/images/rootfs.ext2,format=raw \
-net nic -net user,hostfwd=tcp::2222-:22,hostfwd=tcp::9999-:9999,hostfwd=tcp::8000-:8000 \
-nographic -append "root=/dev/sda console=ttyS0" -enable-kvm

If you want to use the temporary system, replace the -drive line with -initrd buildroot/output/images/rootfs.cpio and remove root=/dev/sda from -append.
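For reference, the cpio variant looks like this (a sketch; since the initramfs lives in RAM, any changes are lost on shutdown):

$ qemu-system-x86_64 -kernel linux/arch/x86_64/boot/bzImage \
-initrd buildroot/output/images/rootfs.cpio \
-net nic -net user,hostfwd=tcp::2222-:22,hostfwd=tcp::9999-:9999,hostfwd=tcp::8000-:8000 \
-nographic -append "console=ttyS0" -enable-kvm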

Once the kernel has started, we can connect to it using root or user and the respective password via ssh.

$ ssh -p 2222 root@localhost
$ ssh -p 2222 user@localhost
# To enable ssh without password copy the keys (use only on the permanent system)
$ ssh-copy-id -p 2222 root@localhost
$ ssh-copy-id -p 2222 user@localhost

Compiling binaries

Since we selected glibc as the C library in the buildroot toolchain configuration, binaries built with our own system compiler will generally run on the target. If you are targeting a different architecture, use the gcc cross compiler from $BUILD_PATH/buildroot/output/host/usr/bin.

Once the binary is compiled, you can upload it via scp (remember to use the ext2 filesystem for persistence).
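For example, for the x86_64 target (hello.c is a placeholder name; static linking is just a convenient way to avoid libc mismatches between host and target):

$ gcc -static -o hello hello.c
$ scp -P 2222 hello user@localhost: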

Debugging binaries

We can use gdbserver to debug user-land binaries inside QEMU by listening on one of the forwarded QEMU ports (bind it to all interfaces so the connection forwarded from the host can reach it).

$ scp -P 2222 binary user@localhost:
$ ssh -p 2222 user@localhost 'gdbserver :9999 ./binary'
$ gdb binary
gdb> target remote localhost:9999

Debugging the kernel

To debug the kernel we can add the -s flag (shorthand for -gdb tcp::1234) and the -S flag (do not start the CPU at startup).

$ cd $BUILD_PATH
$ qemu-system-x86_64 -kernel linux/arch/x86_64/boot/bzImage \
-drive file=buildroot/output/images/rootfs.ext2,format=raw \
-net nic -net user,hostfwd=tcp::2222-:22,hostfwd=tcp::9999-:9999,hostfwd=tcp::8000-:8000 \
-nographic -append "root=/dev/sda console=ttyS0" -s -S -enable-kvm

Then, on the host, we can load the vmlinux binary and attach to the QEMU gdb stub.

$ cd $BUILD_PATH/linux
$ gdb vmlinux
gdb> target remote localhost:1234

If we run gdb from the Linux source directory, we will be able to list the source code and break on it with b mm/slub.c:3770. Otherwise, we can point gdb to the kernel source directory with the dir <source-path> command.
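Since we enabled the "Provide GDB scripts for kernel debugging" option, gdb can also load the helper commands shipped in the kernel tree. A sketch, assuming your gdb was built with Python support: lx-symbols (re)loads vmlinux and module symbols, and lx-dmesg dumps the kernel log buffer.

$ echo "add-auto-load-safe-path $BUILD_PATH/linux" >> ~/.gdbinit
$ cd $BUILD_PATH/linux
$ gdb vmlinux
gdb> target remote localhost:1234
gdb> lx-symbols
gdb> lx-dmesg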

Compiling modules

Use the following Makefile to compile a module from a source file named module.c (a minimal module.c sketch follows the Makefile).

obj-m += module.o

all:
    make -C $(BUILD_PATH)/linux M=$(PWD) modules

clean:
    make -C $(BUILD_PATH)/linux M=$(PWD) clean
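For completeness, a minimal module.c that the Makefile above can build (a hello-world sketch; the function names and messages are arbitrary):

$ cat << 'EOF' > module.c
/* Minimal hello-world module for the Makefile above. */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");

static int __init example_init(void)
{
        pr_info("module: loaded\n");
        return 0;
}

static void __exit example_exit(void)
{
        pr_info("module: unloaded\n");
}

module_init(example_init);
module_exit(example_exit);
EOF
$ make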

We can then upload the compiled module.ko via scp as root and install it with insmod or modprobe. Another option is to add the module to the buildroot overlay directory and load it using an init script.

Debugging kernel modules

Once the module is loaded (confirm it with lsmod in the guest), we can get its load address with cat /proc/modules | grep <module-name> or from cat /sys/module/<module>/sections/.text.

We can then map the module into the gdb session that we have attached to the QEMU stub like this:

gdb> add-symbol-file <module.ko> <address>

We can also specify the sections manually, taking their addresses from /sys/module/<module>/sections/<section> (e.g. .text, .data):

gdb> add-symbol-file <module.ko> -s .text <.text-addr> -s .data <.data-addr>

Snapshots

To quickly restore the VM state (useful when testing a kernel exploit) we need to convert our ext2 image to qcow2:

$ cd $BUILD_PATH/buildroot/output/images/
$ qemu-img convert -O qcow2 rootfs.ext2 rootfs.qcow2

To control the VM we need to append the QEMU monitor flag and change the drive file to rootfs.qcow2:

$ cd $BUILD_PATH
$ qemu-system-x86_64 -kernel linux/arch/x86_64/boot/bzImage \
-drive file=buildroot/output/images/rootfs.qcow2,format=qcow2 \
-net nic -net user,hostfwd=tcp::2222-:22,hostfwd=tcp::9999-:9999,hostfwd=tcp::8000-:8000 \
-nographic -append "root=/dev/sda console=ttyS0" -monitor telnet:127.0.0.1:55555,server,nowait -enable-kvm

Once those options are added we can connect to the QEMU monitor with nc localhost 55555 and send monitor commands such as the following:

  • info snapshots: List all the snapshots
  • savevm <tag>: Creates a new snapshot
  • loadvm <tag>: Loads the <tag> snapshot
  • delvm <tag>: Removes the <tag> snapshot

We can keep the QEMU gdb stub attached and issue loadvm <tag>, which lets us revert to the state prior to executing the exploit.

If you are using -S together with -monitor, remember to issue the c (continue) command in the monitor or the VM will not start.
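A typical snapshot workflow over the monitor looks like this (a sketch; "clean" is just an arbitrary tag name, and savevm/loadvm require the qcow2 image):

$ nc localhost 55555
(qemu) savevm clean
(qemu) info snapshots
(qemu) loadvm clean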

Navigating the kernel source

An easy way to navigate the kernel source is to create the ctags file with make tags inside $BUILD_PATH/linux and browse it with vim using the :tag <tag> command (vim wiki).

EDIT:

Now I'm using cscope together with vim since it allows better cross-referencing. You can create the database with make cscope and navigate as with ctags by adding set cscopetag to your .vimrc. (Take a look at this plugin, which does that and auto-loads the cscope.out database.)

SystemTap

You can cross-compile the stap module from your host and upload it into the VM. Build it with the following:

$ sudo stap -a x86_64 -p 4 -v -m stap.ko --sysroot $BUILD_PATH/linux -r $BUILD_PATH/linux  stap.stp

You will need staprun installed on the guest. I recommend following the next section and installing it via apt install systemtap.
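Once staprun is available on the guest, loading the cross-compiled module looks roughly like this (a sketch; adjust the module file name to whatever the stap command above produced):

$ scp -P 2222 stap.ko root@localhost:
$ ssh -p 2222 root@localhost 'staprun ./stap.ko'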

Ubuntu rootfs

To create an Ubuntu qcow2 bootable image to use with our custom kernel we can create a bootstrap with the following command:

$ sudo debootstrap \
    --include linux-image-generic \
    xenial \
    debootstrap \
    http://archive.ubuntu.com/ubuntu

To install the kernel modules into the bootstrap, run make modules_install INSTALL_MOD_PATH=debootstrap/ from the kernel source tree (adjusting the path to the debootstrap directory).

We can then chroot into it to create our user and change the root password. We also need an fstab entry so the root filesystem is mounted read-write, and a DHCP service enabled at boot:

$ sudo chroot debootstrap
# echo -e 'root\nroot' | passwd
# /usr/sbin/adduser user
# exit
$ # Remount the root filesystem as rw.
$ cat << EOF | sudo tee "debootstrap/etc/fstab"
/dev/sda / ext4 errors=remount-ro,acl 0 1
EOF
$ # Automatically start networking.
$ cat << EOF | sudo tee debootstrap/etc/systemd/system/dhclient.service
[Unit]
Description=DHCP Client
Documentation=man:dhclient(8)
Wants=network.target
Before=network.target
[Service]
Type=forking
PIDFile=/var/run/dhclient.pid
ExecStart=/sbin/dhclient -4 -q
[Install]
WantedBy=multi-user.target
EOF
$ sudo ln -sf /etc/systemd/system/dhclient.service \
    debootstrap/etc/systemd/system/multi-user.target.wants/dhclient.service

And finally create the qcow2 image:

If you want more disk space, change the +1G value (this image should not be used for compiling, only for testing).

$ sudo virt-make-fs \
    --format qcow2 \
    --size +1G \
    --type ext2 \
    debootstrap \
    xenial-debootstrap.ext2.qcow2

You can always resize it later with:

$ qemu-img resize xenial-debootstrap.ext2.qcow2 +5G
$ # Inside the QEMU vm as root:
# resize2fs /dev/sda

Run our compiled kernel with the Ubuntu bootstrap:

$ qemu-system-x86_64 -kernel linux/arch/x86_64/boot/bzImage \
-drive file=xenial-debootstrap.ext2.qcow2 \
-net nic -net user,hostfwd=tcp::2222-:22,hostfwd=tcp::9999-:9999,hostfwd=tcp::8000-:8000 \
-nographic -append "root=/dev/sda console=ttyS0" -monitor telnet:127.0.0.1:55555,server,nowait -enable-kvm

Now you have a fully working custom Linux kernel on top of a system with a package manager and some other tools.

If errors are displayed during boot, they are most likely caused by a kernel config mismatch. You can take the configuration Ubuntu installed from debootstrap/boot/config-*, copy it to .config in the kernel tree, and run make oldconfig.
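A sketch of that config refresh (the exact config-* file name depends on the kernel version pulled in by linux-image-generic):

$ cp debootstrap/boot/config-* $BUILD_PATH/linux/.config
$ cd $BUILD_PATH/linux
$ make oldconfig
$ make -j $(nproc)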

You can also compile the Ubuntu kernel using this method (wiki) and downloading the kernel version you want from the mainline repository (here).

ARM

For ARM the process is similar:

$ sudo debootstrap \
    --arch armhf xenial \
    debootstrap \
    http://ports.ubuntu.com/ubuntu-ports

We need to perform the same steps as in the regular Ubuntu case. This time, in order to chroot into the bootstrapped environment, we will need qemu-user and the corresponding libc for the architecture we are targeting. We will have to configure binfmt and symlink the libraries so we can execute ARM binaries transparently:

$ sudo ln -s /usr/arm-linux-gnueabihf /etc/qemu-binfmt/armhf
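Alternatively, if your distribution ships qemu-user-static with binfmt handlers already registered, copying the static emulator into the chroot is usually enough (a sketch; the chroot commands below then work unchanged):

$ sudo cp /usr/bin/qemu-arm-static debootstrap/usr/bin/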

Now we can chroot into it, create our user, and change the root password; we also need to mount the root filesystem read-write and enable the DHCP service at boot, as done in the x86_64 version.

Notice that this time vda is specified as the root device instead of sda, since we are going to use the virt machine with virtio disks.

$ sudo chroot debootstrap
# echo -e 'root\nroot' | passwd
# /usr/sbin/adduser user
# exit
$ # Remount the root filesystem as rw.
$ cat << EOF | sudo tee "debootstrap/etc/fstab"
/dev/vda / ext4 errors=remount-ro,acl 0 1
EOF
$ # Automatically start networking.
$ cat << EOF | sudo tee debootstrap/etc/systemd/system/dhclient.service
[Unit]
Description=DHCP Client
Documentation=man:dhclient(8)
Wants=network.target
Before=network.target
[Service]
Type=forking
PIDFile=/var/run/dhclient.pid
ExecStart=/sbin/dhclient -4 -q
[Install]
WantedBy=multi-user.target
EOF
$ sudo ln -sf /etc/systemd/system/dhclient.service \
    debootstrap/etc/systemd/system/multi-user.target.wants/dhclient.service

And finally create the qcow2 image:

If you want more disk space, change the +1G value (this image should not be used for compiling, only for testing).

$ sudo virt-make-fs \
    --format qcow2 \
    --size +1G \
    --type ext2 \
    debootstrap \
    xenial-debootstrap-arm.ext2.qcow2

You can always resize it later with:

$ qemu-img resize xenial-debootstrap-arm.ext2.qcow2 +5G
$ # Inside the QEMU vm as root:
# resize2fs /dev/vda

Run our compiled kernel with the Ubuntu ARM bootstrap (I recommend using more than one CPU with -smp <n>, since CPU emulation is slow):

$ qemu-system-arm \
  -M virt -kernel linux/arch/arm/boot/zImage -smp 4 \
  -append 'root=/dev/vda console=ttyAMA0' \
  -drive if=none,file=xenial-debootstrap-arm.ext2.qcow2,format=qcow2,id=hd \
  -device virtio-blk-device,drive=hd \
  -device virtio-net-device,netdev=net0 \
  -netdev user,id=net0,hostfwd=tcp::2222-:22,hostfwd=tcp::9999-:9999 \
  -monitor telnet:127.0.0.1:55555,server,nowait \
  -nographic