translatedcode

QEMU for and on ARM cores

Installing Debian on QEMU’s 64-bit ARM “virt” board


This post is a 64-bit companion to an earlier post of mine where I described how to get Debian running on QEMU emulating a 32-bit ARM “virt” board. Thanks to commenter snak3xe for reminding me that I’d said I’d write this up…

Why the “virt” board?

For 64-bit ARM, QEMU emulates many fewer boards, so “virt” is almost the only choice, unless you specifically know that you want to emulate one of the 64-bit Xilinx boards. “virt” supports PCI, virtio, a recent ARM CPU and large amounts of RAM. The only thing it doesn’t have out of the box is graphics.

Prerequisites and assumptions

I’m going to assume you have a Linux host, and a recent version of QEMU (at least QEMU 2.8). I also use libguestfs to extract files from a QEMU disk image, but you could use a different tool for that step if you prefer.

I’m going to document how to set up a guest which directly boots the kernel. It should also be possible to have QEMU boot a UEFI image which then boots the kernel from a disk image, but that’s not something I’ve looked into doing myself. (There may be tutorials elsewhere on the web.)
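For reference, a rough and untested sketch of the UEFI approach might look like the following; the firmware path is an assumption (on Debian or Ubuntu hosts the qemu-efi-aarch64 package ships the firmware as QEMU_EFI.fd):

```shell
# Untested sketch: boot UEFI firmware, which then finds the kernel on disk.
# The firmware path is an assumption (Debian/Ubuntu: apt install qemu-efi-aarch64).
qemu-system-aarch64 -M virt -m 1024 -cpu cortex-a53 \
  -bios /usr/share/qemu-efi-aarch64/QEMU_EFI.fd \
  -drive if=none,file=hda.qcow2,format=qcow2,id=hd \
  -device virtio-blk-pci,drive=hd \
  -nographic
```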

Getting the installer files

I suggest creating a subdirectory for these and the other files we’re going to create.

wget -O installer-linux http://http.us.debian.org/debian/dists/stretch/main/installer-arm64/current/images/netboot/debian-installer/arm64/linux
wget -O installer-initrd.gz http://http.us.debian.org/debian/dists/stretch/main/installer-arm64/current/images/netboot/debian-installer/arm64/initrd.gz

Saving them locally as installer-linux and installer-initrd.gz means they won’t be confused with the final kernel and initrd that the installation process produces.

(If we were installing on real hardware we would also need a “device tree” file to tell the kernel the details of the exact hardware it’s running on. QEMU’s “virt” board automatically creates a device tree internally and passes it to the kernel, so we don’t need to provide one.)

Installing

First we need to create an empty disk drive to install onto. I picked a 5GB disk but you can make it larger if you like.

qemu-img create -f qcow2 hda.qcow2 5G

(Oops — an earlier version of this blog post created a “qcow” format image, which works but is less efficient. If you created a qcow image by mistake, you can convert it to qcow2 with mv hda.qcow2 old-hda.qcow && qemu-img convert -O qcow2 old-hda.qcow hda.qcow2. Don’t try this while the VM is running! You will then need to update your QEMU command line to say “format=qcow2” rather than “format=qcow”. You can delete old-hda.qcow once you’ve checked that the new qcow2 file works.)

Now we can run the installer:

qemu-system-aarch64 -M virt -m 1024 -cpu cortex-a53 \
  -kernel installer-linux \
  -initrd installer-initrd.gz \
  -drive if=none,file=hda.qcow2,format=qcow2,id=hd \
  -device virtio-blk-pci,drive=hd \
  -netdev user,id=mynet \
  -device virtio-net-pci,netdev=mynet \
  -nographic -no-reboot

The installer will display its messages on the text console (via an emulated serial port). Follow its instructions to install Debian to the virtual disk; it’s straightforward, but if you have any difficulty the Debian installation guide may help.

The actual install process will take a few hours as it downloads packages over the network and writes them to disk. It will occasionally stop to ask you questions.

Late in the process, the installer will print the following warning dialog:

   +-----------------| [!] Continue without boot loader |------------------+
   |                                                                       |
   |                       No boot loader installed                        |
   | No boot loader has been installed, either because you chose not to or |
   | because your specific architecture doesn't support a boot loader yet. |
   |                                                                       |
   | You will need to boot manually with the /vmlinuz kernel on partition  |
   | /dev/vda1 and root=/dev/vda2 passed as a kernel argument.             |
   |                                                                       |
   |                              <Continue>                               |
   |                                                                       |
   +-----------------------------------------------------------------------+  

Press continue for now, and we’ll sort this out later.

Eventually the installer will finish by rebooting — this should cause QEMU to exit (since we used the -no-reboot option).

At this point you might like to make a copy of the hard disk image file, to save the tedium of repeating the install later.
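A plain cp is enough for this, taken while QEMU is not running. The sketch below uses a small scratch file so it is self-contained; on the real system you would run the cp line against hda.qcow2 (the backup filename is just a suggestion):

```shell
# Scratch file standing in for the installed image.
dd if=/dev/zero of=hda-demo.qcow2 bs=1M count=1 status=none
# The actual backup step, plus a paranoia check that the copy matches.
cp hda-demo.qcow2 hda-demo.fresh-install.qcow2
cmp hda-demo.qcow2 hda-demo.fresh-install.qcow2 && echo "backup verified"
```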

Extracting the kernel

The installer warned us that it didn’t know how to arrange to automatically boot the right kernel, so we need to do it manually. For QEMU that means we need to extract the kernel the installer put into the disk image so that we can pass it to QEMU on the command line.

There are various tools you can use for this, but I’m going to recommend libguestfs, because it’s the simplest to use. To check that it works, let’s look at the partitions in our virtual disk image:

$ virt-filesystems -a hda.qcow2 
/dev/sda1
/dev/sda2

If this doesn’t work, then you should sort that out first. A couple of common reasons I’ve seen:

  • if you’re on Ubuntu then your kernels in /boot are installed not-world-readable; you can fix this with sudo chmod 644 /boot/vmlinuz*
  • if you’re running VirtualBox on the same host it will interfere with libguestfs’s attempt to run KVM; you can fix that by exiting VirtualBox

Looking at what’s in our disk we can see the kernel and initrd in /boot:

$ virt-ls -a hda.qcow2 /boot/
System.map-4.9.0-3-arm64
config-4.9.0-3-arm64
initrd.img
initrd.img-4.9.0-3-arm64
initrd.img.old
lost+found
vmlinuz
vmlinuz-4.9.0-3-arm64
vmlinuz.old

and we can copy them out to the host filesystem:

virt-copy-out -a hda.qcow2 /boot/vmlinuz-4.9.0-3-arm64 /boot/initrd.img-4.9.0-3-arm64 .

(We want the longer filenames, because vmlinuz and initrd.img are just symlinks and virt-copy-out won’t copy them.)

An important warning about libguestfs, or any other tools for accessing disk images from the host system: do not try to use them while QEMU is running, or you will get disk corruption when both the guest OS inside QEMU and libguestfs try to update the same image.

If you subsequently upgrade the kernel inside the guest, you’ll need to repeat this step to extract the new kernel and initrd, and then update your QEMU command line appropriately.
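If you do this often, the fiddly part is picking out the newest versioned filename; sort -V handles that. A self-contained sketch, using a made-up post-upgrade /boot listing in place of the real virt-ls output (the 4.9.0-4 name is hypothetical):

```shell
# The printf stands in for `virt-ls -a hda.qcow2 /boot/` on a real image;
# grep keeps only versioned kernels, sort -V picks the newest.
latest=$(printf '%s\n' vmlinuz-4.9.0-3-arm64 vmlinuz-4.9.0-4-arm64 vmlinuz vmlinuz.old \
         | grep '^vmlinuz-[0-9]' | sort -V | tail -n 1)
echo "$latest"   # prints vmlinuz-4.9.0-4-arm64
# Then extract it and the matching initrd:
#   virt-copy-out -a hda.qcow2 "/boot/$latest" "/boot/initrd.img-${latest#vmlinuz-}" .
```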

Running

To run the installed system we need a different command line which boots the installed kernel and initrd, and passes the kernel the command line arguments the installer told us we’d need:

qemu-system-aarch64 -M virt -m 1024 -cpu cortex-a53 \
  -kernel vmlinuz-4.9.0-3-arm64 \
  -initrd initrd.img-4.9.0-3-arm64 \
  -append 'root=/dev/vda2' \
  -drive if=none,file=hda.qcow2,format=qcow2,id=hd \
  -device virtio-blk-pci,drive=hd \
  -netdev user,id=mynet \
  -device virtio-net-pci,netdev=mynet \
  -nographic

This should boot to a login prompt, where you can log in with the user and password you set up during the install.

The installed system has an SSH client, so one easy way to get files in and out is to use “scp” from inside the VM to talk to an SSH server outside it. Or you can use libguestfs to write files directly into the disk image (for instance using virt-copy-in) — but make sure you only use libguestfs when the VM is not running, or you will get disk corruption.
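As a concrete sketch of the scp route: on QEMU’s default user-mode network the host is reachable from the guest at 10.0.2.2, so if the host runs an SSH server (the username and filename here are placeholders):

```shell
# Run inside the guest.  10.0.2.2 is the host's address on QEMU's
# default user-mode network; youruser and myfile.txt are placeholders.
scp myfile.txt youruser@10.0.2.2:/tmp/
```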

Written by pm215

July 24, 2017 at 10:25 am

Posted in linaro, qemu

26 Responses


  1. Nice, thank’s ;)

    snak3xe

    July 24, 2017 at 4:56 pm

  2. hi,
    is there any chance to run Xorg ?

    sacarde

    October 21, 2017 at 4:48 pm

    • I’ve never needed to, so I’ve never tried. There’s no graphics on the virt board by default, but you can try adding a graphics device (probably virtio graphics) with a suitable -device argument, and then get the guest OS to use it.

      pm215

      October 21, 2017 at 5:32 pm

      • with: -vga virtio
        I have error:
        qemu-system-arm: Virtio VGA not available

        sacarde

        October 21, 2017 at 7:33 pm

        • You don’t want vga virtio, and you don’t want to try to use the -vga convenience option (you don’t want any kind of VGA device), you want a -device virtio-something option to give you the non-vga virtio graphics device. But you’ll have to investigate for yourself, because I don’t want to do it in comments here one error message at a time, I’m afraid.

          pm215

          October 21, 2017 at 8:52 pm

  3. Thank you very much! Got working arm64 system.

    Ivan Uskov

    October 28, 2017 at 8:33 pm

  4. Wow that was very very useful thank you! :)

    silveronemi

    January 19, 2018 at 1:14 am

  5. Excellent guide! Thank you!

    Balint Szente

    March 20, 2018 at 10:53 am

  6. How do I shut down? Running halt ends with a “system halted” but qemu never returns. :)

    Simon Templar

    March 25, 2018 at 8:17 pm

  7. Hi Peter, thanks for your posts !!

    I was able to get the “virt” machine installed for 32-bit ARM, but when I try it for 64-bit ARM, I don’t see anything on the terminal whatsoever after I launch the installer command.

    My host-machine is as below :

    #####################################
    uname -a
    Linux latitude-3480 4.13.0-38-generic #43~16.04.1-Ubuntu SMP Wed Mar 14 17:48:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
    #####################################

    Ajay Garg

    April 20, 2018 at 10:30 am

    • The recipes I give in the blog post work for me. I can’t help any further than that.

      pm215

      April 20, 2018 at 10:35 am

  8. really nice tutorial. one question though, couldn’t find it anywhere: cortex-a53 is a cpu that has 1-4 cores, but in the default configuration it reverts to 1 core, which is rather slow. maybe i missed the switch, but do you know how to increase the number of cores to use? thanks!

    ibazulic

    November 20, 2018 at 10:09 am

    • At the time I wrote this, released versions of QEMU only supported emulation of SMP guests on a single host thread, so emulating 4 guest cores would actually be slower than emulating 1, due to the overhead of switching between them. For newer versions of QEMU, starting with QEMU 2.9, use of multiple threads for ARM guests on x86-64 hosts is supported. You can add more guest CPUs with a command line like “-smp 4” — but note that although that is supported on pre-2.9 QEMU versions it will tend to make it slower rather than faster!

      pm215

      November 20, 2018 at 10:16 am

      • Thanks for the quick reply. Yeah, I’m actually using QEMU 2.11 and passing `-smp 4` seems to increase performance, rather than diminish it.

        ibazulic

        November 20, 2018 at 10:50 am

  9. sorry not too familiar with Qemu – I got almost to the end, (installing popularity contest …), and connection broke. How to pick it back up from there, (since I hadn’t done the kernel-injection stuff yet)?
    Thanks

    Levone1

    December 20, 2018 at 10:20 am

    • Can’t really help, but it’s pretty much no different from recovering a half-broken partial install on real hardware: option 1, if you know what you’re doing you can investigate what’s happened and make it resume and complete. Option 2 is almost always much easier and faster overall: just blow away the broken half-install and restart a fresh install attempt from the beginning.

      pm215

      December 20, 2018 at 1:28 pm

      • okay, thanks so much for reply. One more question if you would, (I’m just getting to know this stuff…) –
        I can boot an iso emulating i386 easily on my machine, with a simple ‘qemu-system-i386 -m 1024 -boot d -enable-kvm -smp 3 -net nic -net user -hda (image).img -cdrom (whatever).iso’, with a vnc viewer, and install, and run it later with the same command, minus the cdrom part, (or live boot no problem), but obviously not too fast or smooth on my arm64. If I try to boot an iso using similar commands, but with ‘qemu-system-aarch64’, (or …-arm), when I open the port with vnc viewer, I only get the qemu monitor, and seem unable to boot the iso. I guess it’s limited QEMU arm support that makes it not that easy using ‘qemu-system-aarch64’, (?), so I was glad to find this post, but I wondered if there was a way to use an iso to load the files, instead of using the network. I guess the installer files would have to be modified, (I tried to add cdrom commands to your posted command, but they seemed to be ignored).
        Thanks again, and if I can donate, let me know…

        Levone1

        December 20, 2018 at 2:58 pm

  10. OK, I’ll attempt one more question and leave you alone… Everything went fine – installed, got to end message, extracted files… When I go to run it, I get lots of output, then, ‘…kernel panic, attempting to shut down… ‘. I’ve read up quite a bit, and see that that is a common error that could mean many things. I have tried some variation, but no difference. I copied part of the output here – https://pastebin.com/bW92CMGC. Any ideas / suggestions, etc appreciated.
    Thanks again.

    levone1

    December 26, 2018 at 3:04 am

  11. Thank you! It works like a charm :)

    albertomolina

    January 18, 2019 at 8:19 pm

  12. hi, anyone can share the option to ssh (or in general access servers running on the guest) from the host? i’m trying
    -netdev user,id=mynet,hostfwd=tcp::10022-:22
    but no luck

    santox

    March 8, 2019 at 8:15 am

  13. Hi, thanks for the detailed information on running QEMU on arm64. I built a kernel 4.9 image and “rootfs.cpio” from buildroot-2019.08.2.tar.gz. When I run qemu-system-aarch64 I see kernel logs, but at the end the console gets stuck after the log below. There is no login prompt; could anyone help me fix this?

    [ 2.456507] registered taskstats version 1
    [ 2.481936] input: gpio-keys as /devices/platform/gpio-keys/input/input0
    [ 2.493678] rtc-pl031 9010000.pl031: setting system clock to 2019-11-28 08:27:16 UTC (1574929636)
    [ 2.502405] ALSA device list:
    [ 2.502591] No soundcards found.
    [ 2.506174] uart-pl011 9000000.pl011: no DMA platform data
    [ 2.743025] Freeing unused kernel memory: 7296K
    Starting syslogd: OK
    Starting klogd: OK
    Running sysctl: OK
    Initializing random number generator… [ 3.953597] random: dd: uninitialized urandom read (512 bytes read)
    done.
    Starting network: OK
    Starting dhcpcd…
    no interfaces have a carrier
    forked to background, child pid 1150
    [ 5.041330] random: dhcpcd: uninitialized urandom read (112 bytes read)

    Vvvvv

    November 28, 2019 at 8:52 am

  14. Hi, this method worked nicely for Ubuntu 16, but when I tried the same steps for Ubuntu 19.10 eoan, the boot partition does not include the initrd.img-5.3.0-24-generic, although there is a link to it.

    $ sudo fdisk /dev/nbd0 -l
    Disk /dev/nbd0: 32 GiB, 34359738368 bytes, 67108864 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: 51E4CF69-0E17-459D-B287-554B7D9AC4D6

    Device         Start      End  Sectors  Size Type
    /dev/nbd0p1     2048   499711   497664  243M Linux filesystem
    /dev/nbd0p2   499712 67106815 66607104 31,8G Linux filesystem
    $ mkdir -p $HOME/tmp/mntpoint
    $ sudo mount /dev/nbd0p1 $HOME/tmp/mntpoint
    $ ls -l $HOME/tmp/mntpoint
    total 14936
    -rw------- 1 root root 5285975 nov 14 00:41 System.map-5.3.0-24-generic
    -rw-r--r-- 1 root root  253827 nov 14 00:41 config-5.3.0-24-generic
    lrwxrwxrwx 1 root root      27 dec 17 22:51 initrd.img -> initrd.img-5.3.0-24-generic
    lrwxrwxrwx 1 root root      27 dec 17 22:51 initrd.img.old -> initrd.img-5.3.0-24-generic
    drwx------ 2 root root   16384 dec 17 22:20 lost+found
    lrwxrwxrwx 1 root root      24 dec 17 22:51 vmlinuz -> vmlinuz-5.3.0-24-generic
    -rw------- 1 root root 9732920 nov 14 02:14 vmlinuz-5.3.0-24-generic
    lrwxrwxrwx 1 root root      24 dec 17 22:51 vmlinuz.old -> vmlinuz-5.3.0-24-generic
    $ df -h /dev/nbd0p1
    Filesystem Size Used Avail Use% Mounted on
    /dev/nbd0p1 220M 15M 188M 8% /home/ilg/tmp/mntpoint
    $

    Any idea where it might be?

    Liviu Ionescu (ilg)

    December 17, 2019 at 10:48 pm

  15. Many thanks for the great guide, worked perfectly for me on Ubuntu 19.04 as host and Debian Buster as guest.
    One addition maybe. There is an alternative way of getting the vmlinuz kernel and the initrd.img out of the virtual disk.
    After the installer complaints of not being able to install the bootloader, it drops into the Debian installer menu. From there, select the “Execute a shell” option.

    On the host, execute the following command:
    1) Have netcat (nc) listen port 1234 for an incoming connection and save everything to a file:
    ubuntu:~$ nc -l -p 1234 >vmlinuz

    On the guest do the following:
    2) Look up the IP address of the default GW, in this case 10.0.2.2:
    ~ # ip route show
    default via 10.0.2.2 dev enp0s2
    10.0.2.0/24 dev enp0s2 scope link src 10.0.2.15
    ~ #

    3) Lookup the exact filename of the linux kernel (look for the size), in this case vmlinuz-4.19.0-8-arm64:
    ~ # ls -1s /target/boot
    3675 System.map-4.19.0-8-arm64
    204 config-4.19.0-8-arm64
    0 initrd.img
    24531 initrd.img-4.19.0-8-arm64
    0 initrd.img.old
    12 lost+found
    0 vmlinuz
    18329 vmlinuz-4.19.0-8-arm64
    0 vmlinuz.old
    ~ #

    4) Send the file to the host using netcat (there will be no progress bar or confirmation):
    ~ # nc -w 3 10.0.2.2 1234 </target/boot/vmlinuz-4.19.0-8-arm64
    ~ #

    5) Check that the transfer was successful by comparing MD5 checksums on host and guest:
    ubuntu:~$ md5sum vmlinuz
    e51b6f2c3ffc1beaec507e528a171f85 vmlinuz

    ~ # md5sum /target/boot/vmlinuz-4.19.0-8-arm64
    e51b6f2c3ffc1beaec507e528a171f85 /target/boot/vmlinuz-4.19.0-8-arm64

    6) Repeat the same process for the initrd.img (in this case initrd.img-4.19.0-8-arm64)

    Hope it helps!

    Tobias

    Tobias

    March 17, 2020 at 10:30 pm

  16. hi, I have a question on QEMU’s virt board. I want to load an arm64 kernel on virt using the command “qemu-system-aarch64 -machine virt,virtualization=true,secure=true -cpu cortex-a53 -m 1G -smp 2 -kernel ./build/EAOS64.bin -nographic -s -S”, but I get the wrong EL: the value I read from the CurrentEL register is 0x8, not 0xC. What is wrong with my command? Thanks for your help.

    WanShupeng

    June 24, 2020 at 2:37 am

  17. Hey,
    I know it’s a noob question, but how to connect to the vm using `ssh` ?

    Regards

    ak-47

    October 11, 2020 at 4:20 pm

  18. If you can download the installation iso, an example setup can be done as follows:

    qemu-system-aarch64 -M virt -enable-kvm -cpu host -smp 2 -m 2048 \
    -device nec-usb-xhci -device usb-kbd -device usb-mouse -usb -serial stdio \
    -device usb-storage,drive=install -drive file=debian-10.8.0-arm64-xfce-CD-1.iso,if=none,id=install,media=cdrom,readonly=on \
    -drive if=none,file=hda3.qcow2,format=qcow2,id=hd -device virtio-blk-device,drive=hd \
    -device virtio-gpu-pci,virgl=on,xres=1600,yres=900 -display sdl,gl=on \
    -nic user,model=virtio -bios edk2-aarch64-code.fd

    Ravishankar S

    May 7, 2021 at 6:40 pm

