Installing Debian on QEMU’s 64-bit ARM “virt” board
This post is a 64-bit companion to an earlier post of mine where I described how to get Debian running on QEMU emulating a 32-bit ARM “virt” board. Thanks to commenter snak3xe for reminding me that I’d said I’d write this up…
Why the “virt” board?
For 64-bit ARM, QEMU emulates far fewer boards, so “virt” is almost the only choice, unless you specifically know that you want to emulate one of the 64-bit Xilinx boards. “virt” supports PCI, virtio, a recent ARM CPU and large amounts of RAM. The only thing it doesn’t have out of the box is graphics.
Prerequisites and assumptions
I’m going to assume you have a Linux host, and a recent version of QEMU (at least QEMU 2.8). I also use libguestfs to extract files from a QEMU disk image, but you could use a different tool for that step if you prefer.
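If you’re not sure which version you have, QEMU will tell you:

qemu-system-aarch64 --version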
I’m going to document how to set up a guest which directly boots the kernel. It should also be possible to have QEMU boot a UEFI image which then boots the kernel from a disk image, but that’s not something I’ve looked into doing myself. (There may be tutorials elsewhere on the web.)
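For the adventurous, here is a rough sketch of what that might look like. This is untested by me, and it assumes you have an AArch64 UEFI firmware build to hand, such as the QEMU_EFI.fd shipped in Debian’s qemu-efi package:

# boot UEFI firmware, which then looks for a bootable disk (sketch only)
qemu-system-aarch64 -M virt -m 1024 -cpu cortex-a53 \
  -bios QEMU_EFI.fd \
  -drive if=none,file=hda.qcow2,format=qcow2,id=hd \
  -device virtio-blk-pci,drive=hd \
  -nographic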
Getting the installer files
I suggest creating a subdirectory for these and the other files we’re going to create.
wget -O installer-linux http://http.us.debian.org/debian/dists/stretch/main/installer-arm64/current/images/netboot/debian-installer/arm64/linux
wget -O installer-initrd.gz http://http.us.debian.org/debian/dists/stretch/main/installer-arm64/current/images/netboot/debian-installer/arm64/initrd.gz
Saving them locally as installer-linux and installer-initrd.gz means they won’t be confused with the final kernel and initrd that the installation process produces.
(If we were installing on real hardware we would also need a “device tree” file to tell the kernel the details of the exact hardware it’s running on. QEMU’s “virt” board automatically creates a device tree internally and passes it to the kernel, so we don’t need to provide one.)
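If you’re curious what that auto-generated device tree looks like, QEMU can dump it for you and dtc can decompile it into readable source. A quick aside, assuming you have the device-tree-compiler package installed:

# dump the generated DTB and exit immediately
qemu-system-aarch64 -M virt,dumpdtb=virt.dtb -m 1024 -cpu cortex-a53
# decompile it into human-readable device tree source
dtc -I dtb -O dts -o virt.dts virt.dtb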
Installing
First we need to create an empty disk drive to install onto. I picked a 5GB disk but you can make it larger if you like.
qemu-img create -f qcow2 hda.qcow2 5G
(Oops — an earlier version of this blogpost created a “qcow” format image, which will work but is less efficient. If you created a qcow image by mistake, you can convert it to qcow2 with mv hda.qcow2 old-hda.qcow && qemu-img convert -O qcow2 old-hda.qcow hda.qcow2. Don’t try it while the VM is running! You then need to update your QEMU command line to say “format=qcow2” rather than “format=qcow”. You can delete the old-hda.qcow once you’ve checked that the new qcow2 file works.)
Now we can run the installer:
qemu-system-aarch64 -M virt -m 1024 -cpu cortex-a53 \
  -kernel installer-linux \
  -initrd installer-initrd.gz \
  -drive if=none,file=hda.qcow2,format=qcow2,id=hd \
  -device virtio-blk-pci,drive=hd \
  -netdev user,id=mynet \
  -device virtio-net-pci,netdev=mynet \
  -nographic -no-reboot
The installer will display its messages on the text console (via an emulated serial port). Follow its instructions to install Debian to the virtual disk; it’s straightforward, but if you have any difficulty the Debian installation guide may help.
The actual install process will take a few hours as it downloads packages over the network and writes them to disk. It will occasionally stop to ask you questions.
Late in the process, the installer will print the following warning dialog:
+-----------------| [!] Continue without boot loader |------------------+
|                                                                       |
|                        No boot loader installed                       |
| No boot loader has been installed, either because you chose not to or |
| because your specific architecture doesn't support a boot loader yet. |
|                                                                       |
| You will need to boot manually with the /vmlinuz kernel on partition  |
| /dev/vda1 and root=/dev/vda2 passed as a kernel argument.             |
|                                                                       |
|                               <Continue>                              |
|                                                                       |
+-----------------------------------------------------------------------+
Press continue for now, and we’ll sort this out later.
Eventually the installer will finish by rebooting — this should cause QEMU to exit (since we used the -no-reboot option).
At this point you might like to make a copy of the hard disk image file, to save the tedium of repeating the install later.
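Something as simple as this will do (run it while QEMU is not running; the backup name is just my choice):

cp hda.qcow2 hda-fresh-install.qcow2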
Extracting the kernel
The installer warned us that it didn’t know how to arrange to automatically boot the right kernel, so we need to do it manually. For QEMU that means we need to extract the kernel the installer put into the disk image so that we can pass it to QEMU on the command line.
There are various tools you can use for this, but I’m going to recommend libguestfs, because it’s the simplest to use. To check that it works, let’s look at the partitions in our virtual disk image:
$ virt-filesystems -a hda.qcow2
/dev/sda1
/dev/sda2
If this doesn’t work, then you should sort that out first. A couple of common reasons I’ve seen:
- if you’re on Ubuntu then your kernels in /boot are installed not-world-readable; you can fix this with sudo chmod 644 /boot/vmlinuz*
- if you’re running VirtualBox on the same host it will interfere with libguestfs’s attempt to run KVM; you can fix that by exiting VirtualBox
Looking at what’s in our disk we can see the kernel and initrd in /boot:
$ virt-ls -a hda.qcow2 /boot/
System.map-4.9.0-3-arm64
config-4.9.0-3-arm64
initrd.img
initrd.img-4.9.0-3-arm64
initrd.img.old
lost+found
vmlinuz
vmlinuz-4.9.0-3-arm64
vmlinuz.old
and we can copy them out to the host filesystem:
$ virt-copy-out -a hda.qcow2 /boot/vmlinuz-4.9.0-3-arm64 /boot/initrd.img-4.9.0-3-arm64 .
(We want the longer filenames, because vmlinuz and initrd.img are just symlinks and virt-copy-out won’t copy them.)
An important warning about libguestfs, or any other tools for accessing disk images from the host system: do not try to use them while QEMU is running, or you will get disk corruption when both the guest OS inside QEMU and libguestfs try to update the same image.
If you subsequently upgrade the kernel inside the guest, you’ll need to repeat this step to extract the new kernel and initrd, and then update your QEMU command line appropriately.
Running
To run the installed system we need a different command line which boots the installed kernel and initrd, and passes the kernel the command line arguments the installer told us we’d need:
qemu-system-aarch64 -M virt -m 1024 -cpu cortex-a53 \
  -kernel vmlinuz-4.9.0-3-arm64 \
  -initrd initrd.img-4.9.0-3-arm64 \
  -append 'root=/dev/vda2' \
  -drive if=none,file=hda.qcow2,format=qcow2,id=hd \
  -device virtio-blk-pci,drive=hd \
  -netdev user,id=mynet \
  -device virtio-net-pci,netdev=mynet \
  -nographic
This should boot to a login prompt, where you can log in with the user and password you set up during the install.
The installation has an SSH client, so one easy way to get files in and out is to use “scp” from inside the VM to talk to an SSH server outside it. Or you can use libguestfs to write files directly into the disk image (for instance using virt-copy-in) — but make sure you only use libguestfs when the VM is not running, or you will get disk corruption.
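For example, QEMU’s user-mode networking exposes the host at the magic address 10.0.2.2 inside the guest, so from within the VM something like this should work (assuming an SSH server is running on the host; myfile and hostuser are placeholders):

scp myfile hostuser@10.0.2.2: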
Installing Debian on QEMU’s 32-bit ARM “virt” board
In this post I’m going to describe how to set up Debian on QEMU emulating a 32-bit ARM “virt” board. There are a lot of older tutorials out there which suggest using boards like “versatilepb” or “vexpress-a9”, but these days “virt” is a far better choice for most people, so some documentation of how to use it seems overdue. (I may do a followup post for 64-bit ARM later.)
Update 2017-07-24: I have now written that post about installing a 64-bit ARM guest.
Why the “virt” board?
QEMU has models of nearly 50 different ARM boards, which makes it difficult for new users to pick one which is right for their purposes. This wild profusion reflects a similar diversity in the real hardware world: ARM systems come in many different flavours with very different hardware components and capabilities. A kernel which is expecting to run on one system will likely not run on another. Many of QEMU’s models are annoyingly limited because the real hardware was also limited — there’s no PCI bus on most mobile devices, after all, and a fifteen-year-old development board wouldn’t have had a gigabyte of RAM on it.
My recommendation is that if you don’t know for certain that you want a model of a specific device, you should choose the “virt” board. This is a purely virtual platform designed for use in virtual machines, and it supports PCI, virtio, a recent ARM CPU and large amounts of RAM. The only thing it doesn’t have out of the box is graphics, but graphical programs on a fully emulated system run very slowly anyway so are best avoided.
Why Debian?
Debian has had good support for ARM for a long time, and with the Debian Jessie release it has a “multiplatform” kernel, so there’s no need to build a custom kernel. Because we’re installing a full distribution rather than a cut-down embedded environment, any development tools you need inside the VM will be easy to install later.
Prerequisites and assumptions
I’m going to assume you have a Linux host, and a recent version of QEMU (at least QEMU 2.6). I also use libguestfs to extract files from a QEMU disk image, but you could use a different tool for that step if you prefer.
Getting the installer files
I suggest creating a subdirectory for these and the other files we’re going to create.
To install on QEMU we will want the multiplatform “armmp” kernel and initrd from the Debian website:
wget -O installer-vmlinuz http://http.us.debian.org/debian/dists/jessie/main/installer-armhf/current/images/netboot/vmlinuz
wget -O installer-initrd.gz http://http.us.debian.org/debian/dists/jessie/main/installer-armhf/current/images/netboot/initrd.gz
Saving them locally as installer-vmlinuz and installer-initrd.gz means they won’t be confused with the final kernel and initrd that the installation process produces.
(If we were installing on real hardware we would also need a “device tree” file to tell the kernel the details of the exact hardware it’s running on. QEMU’s “virt” board automatically creates a device tree internally and passes it to the kernel, so we don’t need to provide one.)
Installing
First we need to create an empty disk drive to install onto. I picked a 5GB disk but you can make it larger if you like.
qemu-img create -f qcow2 hda.qcow2 5G
(Oops — an earlier version of this blogpost created a “qcow” format image, which will work but is less efficient. If you created a qcow image by mistake, you can convert it to qcow2 with mv hda.qcow2 old-hda.qcow && qemu-img convert -O qcow2 old-hda.qcow hda.qcow2. Don’t try it while the VM is running! You then need to update your QEMU command line to say “format=qcow2” rather than “format=qcow”. You can delete the old-hda.qcow once you’ve checked that the new qcow2 file works.)
Now we can run the installer:
qemu-system-arm -M virt -m 1024 \
  -kernel installer-vmlinuz \
  -initrd installer-initrd.gz \
  -drive if=none,file=hda.qcow2,format=qcow2,id=hd \
  -device virtio-blk-device,drive=hd \
  -netdev user,id=mynet \
  -device virtio-net-device,netdev=mynet \
  -nographic -no-reboot
(I would have preferred to use QEMU’s PCI virtio devices, but unfortunately the Debian kernel doesn’t support them; a future Debian release very likely will, which would allow you to use virtio-blk-pci and virtio-net-pci instead of virtio-blk-device and virtio-net-device.)
The installer will display its messages on the text console (via an emulated serial port). Follow its instructions to install Debian to the virtual disk; it’s straightforward, but if you have any difficulty the Debian installation guide may help.
(Don’t worry about all the warnings the installer kernel produces about GPIOs when it first boots.)
The actual install process will take a few hours as it downloads packages over the network and writes them to disk. It will occasionally stop to ask you questions.
Late in the process, the installer will print the following warning dialog:
+-----------------| [!] Continue without boot loader |------------------+
|                                                                       |
|                        No boot loader installed                       |
| No boot loader has been installed, either because you chose not to or |
| because your specific architecture doesn't support a boot loader yet. |
|                                                                       |
| You will need to boot manually with the /vmlinuz kernel on partition  |
| /dev/vda1 and root=/dev/vda2 passed as a kernel argument.             |
|                                                                       |
|                               <Continue>                              |
|                                                                       |
+-----------------------------------------------------------------------+
Press continue for now, and we’ll sort this out later.
Eventually the installer will finish by rebooting — this should cause QEMU to exit (since we used the -no-reboot option).
At this point you might like to make a copy of the hard disk image file, to save the tedium of repeating the install later.
Extracting the kernel
The installer warned us that it didn’t know how to arrange to automatically boot the right kernel, so we need to do it manually. For QEMU that means we need to extract the kernel the installer put into the disk image so that we can pass it to QEMU on the command line.
There are various tools you can use for this, but I’m going to recommend libguestfs, because it’s the simplest to use. To check that it works, let’s look at the partitions in our virtual disk image:
$ virt-filesystems -a hda.qcow2
/dev/sda1
/dev/sda2
If this doesn’t work, then you should sort that out first. A couple of common reasons I’ve seen:
- if you’re on Ubuntu then your kernels in /boot are installed not-world-readable; you can fix this with sudo chmod 644 /boot/vmlinuz*
- if you’re running VirtualBox on the same host it will interfere with libguestfs’s attempt to run KVM; you can fix that by exiting VirtualBox
Looking at what’s in our disk we can see the kernel and initrd in /boot:
$ virt-ls -a hda.qcow2 /boot/
System.map-3.16.0-4-armmp-lpae
config-3.16.0-4-armmp-lpae
initrd.img
initrd.img-3.16.0-4-armmp-lpae
lost+found
vmlinuz
vmlinuz-3.16.0-4-armmp-lpae
and we can copy them out to the host filesystem:
$ virt-copy-out -a hda.qcow2 /boot/vmlinuz-3.16.0-4-armmp-lpae /boot/initrd.img-3.16.0-4-armmp-lpae .
(We want the longer filenames, because vmlinuz and initrd.img are just symlinks and virt-copy-out won’t copy them.)
An important warning about libguestfs, or any other tools for accessing disk images from the host system: do not try to use them while QEMU is running, or you will get disk corruption when both the guest OS inside QEMU and libguestfs try to update the same image.
Running
To run the installed system we need a different command line which boots the installed kernel and initrd, and passes the kernel the command line arguments the installer told us we’d need:
qemu-system-arm -M virt -m 1024 \
  -kernel vmlinuz-3.16.0-4-armmp-lpae \
  -initrd initrd.img-3.16.0-4-armmp-lpae \
  -append 'root=/dev/vda2' \
  -drive if=none,file=hda.qcow2,format=qcow2,id=hd \
  -device virtio-blk-device,drive=hd \
  -netdev user,id=mynet \
  -device virtio-net-device,netdev=mynet \
  -nographic
This should boot to a login prompt, where you can log in with the user and password you set up during the install.
The installation has an SSH client, so one easy way to get files in and out is to use “scp” from inside the VM to talk to an SSH server outside it. Or you can use libguestfs to write files directly into the disk image (for instance using virt-copy-in) — but make sure you only use libguestfs when the VM is not running, or you will get disk corruption.
Tricks for Debugging QEMU — savevm snapshots
For the next entry in this occasional series of posts about tricks for debugging QEMU I want to talk about savevm snapshots.
QEMU’s savevm snapshot feature is designed as a user feature, but it’s surprisingly handy as a developer tool too. Suppose you have a guest image which misbehaves when you run a particular userspace program inside the guest. This can be very awkward to debug because it takes so long to get to the point of failure, especially if it requires user interaction along the way. If you take a snapshot of the VM state just before the bug manifests itself, you can create a simpler and shorter test case by making QEMU start execution from the snapshot point. It’s then often practical to use debug techniques like turning on QEMU’s slow and voluminous tracing of all execution, now that you’re only dealing with a short run of execution.
To use savevm snapshots you’ll need to be using a disk image format which supports them, like QCOW2. If you have a different format like a raw disk, you can convert it with qemu-img:
qemu-img convert -f raw -O qcow2 your-disk.img your-disk.qcow2
and then change your command line to use the qcow2 file rather than the old raw image. (As a bonus it should be faster and take less disk space too!)
If the QEMU system you’re trying to debug doesn’t have a disk image at all, you can create a dummy disk which will be used for nothing but snapshots like this:
qemu-img create -f qcow2 dummy.qcow2 32M
and then add this option to your command line:
-drive if=none,format=qcow2,file=dummy.qcow2
(QEMU may warn that the drive is “orphaned” because it’s not connected to anything, but that’s fine.)
To create a snapshot, you use this QEMU monitor command:
savevm some-name
This will save the VM state, and usually takes a second or two. Once it’s done you can type quit at the monitor to exit QEMU. You can make multiple snapshots with different names.
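For example, a short monitor session; info snapshots lists what is stored in the image (the exact output columns vary between QEMU versions, so treat this as a sketch):

(qemu) savevm before-bug
(qemu) info snapshots
(qemu) quit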
Then to make QEMU start automatically from the snapshot add the option:
-loadvm some-name
to your QEMU command line. (You still need to specify all the same device and configuration options you did when you saved the snapshot.)
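Putting the pieces together, a restore run might look something like this (a sketch only; your-kernel stands in for whatever -kernel, -device and other options your original command line used):

qemu-system-arm -M virt -m 1024 \
  -kernel your-kernel \
  -drive if=none,format=qcow2,file=dummy.qcow2 \
  -loadvm some-name \
  -nographic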
Before you dive into debugging your reduced test case, do check that the bug you’re reproducing is still present in the shortened test case. Some bugs don’t reproduce in a snapshot — for instance if the problem is that QEMU has stale information cached in its TLB or translated code cache, then the bug will probably not manifest when the snapshot is loaded, because these caches will be empty. (Not reproducing in a snapshot is interesting diagnostic information in itself, in fact.)
You should also be aware that snapshotting requires support from all the devices in the system QEMU is modelling. This works fine for the x86 PC models, and also for most of the major ARM boards (including ‘virt’, ‘vexpress’ and ‘versatilepb’), but if you’re trying this on a more obscure guest CPU architecture or board you might run into trouble. Missing snapshotting support will manifest as the reloaded system misbehaving (eg device stops working, or perhaps there are no interrupts so nothing responds). I think this debugging technique is valuable enough that it’s worth stopping to fix up missing snapshot support in devices just so you can use it. If you don’t feel up to that, feel free to report the bugs on qemu-devel…
You can automate the process of taking the initial snapshot using the ‘expect’ utility. Here are some command line options that create a monitor session on TCP port 4444 and make QEMU start up in a ‘stopped’ state, so the VM doesn’t run until we ask it to:
-chardev socket,id=monitor,host=127.0.0.1,port=4444,server,nowait,telnet \
-mon chardev=monitor,mode=readline \
-S
And here’s an expect script that connects to the monitor, tells QEMU to start, and then takes a snapshot 0.6 seconds into the run:
#!/usr/bin/expect -f
set timeout -1
spawn telnet localhost 4444
expect "(qemu)"
send "c\r"
sleep 0.6
send "savevm foo\r"
expect "(qemu)"
send "quit\r"
expect eof
I used this recently to debug a problem in early boot that was causing a hang — by adjusting the timeout I was able to get a snapshot very close to the point where the trouble occurred. Even a second of execution can generate enough execution trace to be unmanageable…
Snapshots won’t solve your debugging problem all on their own, but they can cut the problem down to a size where you can apply some of the other tools in your toolbox.
Tricks for debugging QEMU — rr
Over the years I’ve picked up a few tricks for tracking down problems in QEMU, and it seemed worth writing them up. First on the list is a tool I’ve found relatively recently: rr, from the folks at Mozilla.
rr is a record-and-replay tool for C and C++: you run your program under the recorder and provoke the bug you’re interested in. Then you can debug using a replay of the recording. The replay is deterministic and side-effect-free, so you can debug it as many times as you want, knowing that even an intermittent bug will always reveal itself in the same way. Better still, rr recently gained support for reverse-debugging, so you can set a breakpoint or watchpoint and then run time backwards to find the previous occurrence of what you’re looking for. This is fantastic for debugging problems which manifest only a long time after they occur, like memory corruption or stale entries in cache data structures. The idea of record-and-replay is not new; where rr is different is that it’s very low overhead and capable of handling complex programs like QEMU and Mozilla. It’s a usable production quality debug tool, not just a research project. It has a few rough edges, but the developers have been very responsive to bug reports.
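To give a flavour of reverse debugging: inside a replay session the reverse-execution commands are the standard gdb ones, so a hunt for a stale cache entry might go something like this (some_var is a hypothetical variable name):

(gdb) watch -l some_var        # hardware watchpoint on the variable's address
(gdb) reverse-continue         # run backwards to the most recent write to it
(gdb) reverse-stepi            # or step backwards one instruction at a time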
Here’s a worked example with a real-world bug I tracked down last week. (This is a compressed account of the last part of a couple of weeks of head-scratching; I have omitted various wrong turns and false starts…)
I had an image for QEMU’s Zaurus (“spitz”) machine, which managed to boot the guest kernel but then got random segfaults trying to execute userspace. Use of git bisect showed that this regression happened with commit 2f0d8631b7. That change is valid, but it did vastly reduce the number of unnecessary guest TLB flushes we were doing. This suggested that the cause of the segfaults was a bug where we weren’t flushing the TLB properly somewhere, which was only exposed when we stopped flushing the TLB on practically every guest kernel-to-userspace transition.
Insufficient TLB flushing is a little odd for an ARM guest, because in practice we end up flushing all of QEMU’s TLB every time the guest asks for a single page to be flushed. (This is forced on us by having to support the legacy ARMv5 1K page tables, so for most guests which use 4K pages all pages are “huge pages” and take a less efficient path through QEMU’s TLB handling.) So I had a hunch that maybe we weren’t actually doing the flush correctly. OK, change the code to handle the “TLB invalidate by virtual address” guest operations so that they explicitly flush the whole TLB — bug goes away. Take that back out, and put an assert(0) in the cputlb.c function that handles “delete a single entry from the TLB cache”. This should never fire for an ARM guest with 4K pages, and yet it did.
At this point I was pretty sure I was near to tracking down the cause of the bug; but the problem wasn’t likely to be near the assertion, but somewhere further back in execution when the entry got added to the TLB in the first place. Time for rr.
Recording is simple: just rr record qemu-system-arm args.... Then rr replay will start replaying the last recording, and by default will drop you into gdb at the start of the recording. Let’s just let it run forward until the assertion:
(gdb) c
Continuing.
[...]
qemu-system-arm: /home/petmay01/linaro/qemu-from-laptop/qemu/cputlb.c:80: tlb_flush_entry: Assertion `0' failed.

Program received signal SIGABRT, Aborted.
[Switching to Thread 18096.18098]
0x0000000070000018 in ?? ()
Looking back up the stack we find that we were definitely trying to flush a valid TLB entry:
(gdb) frame 13
#13 0x0000555555665eb1 in tlb_flush_page (cpu=0x55555653bea0, addr=1074962432)
    at /home/petmay01/linaro/qemu-from-laptop/qemu/cputlb.c:118
118             tlb_flush_entry(&env->tlb_v_table[mmu_idx][k], addr);
(gdb) print /x env->tlb_v_table[mmu_idx][k]
$2 = {addr_read = 0x4012a000, addr_write = 0x4012a000, addr_code = 0x4012a000,
  addend = 0x2aaa83cf6000, dummy = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff}}
and checking env->tlb_flush_mask and env->tlb_flush_addr shows that QEMU thinks this address is outside the range covered by huge pages. Maybe we miscalculated them when we were adding the page? Let’s go back and find out what happened then:
(gdb) break tlb_set_page_with_attrs if vaddr == 0x4012a000
Breakpoint 1 at 0x5555556663f2: file /home/petmay01/linaro/qemu-from-laptop/qemu/cputlb.c, line 256.
(gdb) rc
Continuing.

Program received signal SIGABRT, Aborted.
0x0000000070000016 in ?? ()
(gdb) rc
Continuing.

Breakpoint 1, tlb_set_page_with_attrs (cpu=0x55555653bea0, vaddr=1074962432,
    paddr=2684485632, attrs=..., prot=7, mmu_idx=0, size=1024)
    at /home/petmay01/linaro/qemu-from-laptop/qemu/cputlb.c:256
(Notice that we hit the assertion again as we went backwards over it, so we just repeat the reverse-continue.) We stop exactly where we want to be to investigate the insertion of the TLB entry. In a normal debug session we could have tried restarting execution from the beginning with a conditional breakpoint, but there would be no guarantee that guest execution was deterministic enough for the guest address to be the same, or that the call we wanted to stop at was the only time we added a TLB entry for this address. Stepping forwards through the TLB code I notice we don’t think this is a huge page at all, and in fact you can see from the function parameters that the size is 1024, not the expected 4096. Where did this come from? Setting a breakpoint in arm_cpu_handle_mmu_fault and doing yet another reverse-continue brings us to the start of the code that’s doing the page table walk, so we can step forwards through it. (You can use rn and rs to step backwards if you like, but personally I find that a little confusing.) Now that rr has led us to the scene of the crime, it’s very obvious that the problem is in our handling of an XScale-specific page table descriptor, which we’re incorrectly claiming to indicate a 1K page rather than 4K. Fix that, and the bug is vanquished.
Without rr this would have been much more tedious to track down. Being able to follow the chain of causation backwards from the failing assertion to the exact point where things diverged from your expectations is priceless. And anybody who’s done much debugging will have had the experience of accidentally stepping or continuing one time too often and zooming irrevocably past the point they wanted to look at — with reverse execution those errors are easily undoable.
I can’t recommend rr highly enough — I think it deserves to become a standard part of the Linux C/C++ developer’s toolkit, as valgrind has done before it.
AArch64 system emulation has landed in QEMU upstream
In my last post I mentioned that we were nearly done with support for emulating an entire AArch64 system in QEMU. Those last few pieces of code have now landed upstream, and Alex Bennée has written a great guide to how to build QEMU and a test image so you can give it a spin.
64-bit ARM usermode emulation in QEMU 2.0.0
The QEMU Project released version 2.0.0 of QEMU last week; this seems like a good time to summarise our progress with ARMv8 QEMU work.
One of the major new ARM related features in this release is support for emulating AArch64 processes in QEMU’s “linux-user” mode; in Linaro we’ve been working on this over the last few months (building on a great foundation established by SUSE) and we just managed to squeeze support for the last few instructions into 2.0.0.
“linux-user” mode is where we run a single Linux guest binary, and QEMU converts the system calls the guest makes into system calls to the host Linux kernel. Typically you’d use this to run an AArch64 binary on a more conveniently available host, usually x86_64, by setting up a cross-architecture chroot and putting QEMU in it. We’ve implemented support for all the mandatory A64 instructions, including floating point and Advanced SIMD, but not the optional instructions in the crypto and CRC extensions.
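For illustration, once you have an arm64 chroot populated, running a guest binary directly looks something like this (the chroot path is an example only):

# -L tells QEMU where to find the guest's ELF interpreter and libraries
qemu-aarch64 -L /srv/chroots/arm64 /srv/chroots/arm64/bin/ls /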
As well as adding an entirely new instruction set for 64 bit support, the ARMv8 architecture included a few new instructions for the 32 bit A32 and T32 instruction sets. QEMU also now implements all the mandatory new instructions, though this will for the moment probably mostly be of use only to people running compiler test suites.
Two other uses for QEMU involve running it on AArch64 hardware. Firstly, you can use it to emulate other CPU architectures on AArch64 hosts, for instance running an x86 kernel in an emulated machine. This was contributed by Huawei last year, and has been supported since the previous release of QEMU (1.7).
You can also use QEMU as the userspace device emulation part of a virtual machine which uses KVM and the hardware’s virtualization extensions to provide fast AArch64-on-AArch64 VMs. This too has been supported since 1.7, though some features are not yet implemented (for instance, VM migration and debugging a guest VM are both not currently supported).
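On suitable hardware the invocation looks roughly like this (a sketch, assuming a host kernel with KVM enabled and a guest kernel Image plus disk image to hand):

qemu-system-aarch64 -enable-kvm -M virt -cpu host -m 1024 \
  -kernel Image -append 'console=ttyAMA0 root=/dev/vda2' \
  -drive if=none,file=guest.qcow2,format=qcow2,id=hd \
  -device virtio-blk-device,drive=hd \
  -nographic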
The final use for QEMU I want to talk about is the only one which isn’t in the 2.0.0 release, but many people have been waiting for it so here’s a status update. AArch64 system emulation is where you emulate a complete system and boot a full system including an AArch64 Linux kernel and user space, typically running on an x86 host. We’re working on this right now, and in fact as soon as QEMU’s git repository reopened for development after the 2.0.0 release we landed a large set of patches which implement all the necessary CPU emulation support. The only remaining missing piece in upstream QEMU master to be able to boot a kernel is to add support for running the “virt” board model with a Cortex-A57 and a GICv2 with an appropriate register layout. This last bit of work should be done shortly.
If you want to try out QEMU 2.0.0 you can build it yourself from the upstream released tarballs. If you’re an Ubuntu user then you’re in luck, because these changes are also in the QEMU shipped in the newly released Ubuntu 14.04 LTS.
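Building from a tarball is the usual configure-and-make routine; something like this (the URL and target list are illustrative; trim the targets to what you need):

wget https://download.qemu.org/qemu-2.0.0.tar.bz2
tar xjf qemu-2.0.0.tar.bz2
cd qemu-2.0.0
./configure --target-list=arm-softmmu,aarch64-linux-user
make -j8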
QEMU KVM on ARMv7 support is upstream!
This week the QEMU support patches for KVM on ARM were accepted into upstream. Since the kernel KVM on ARM patchset was accepted for the 3.9 kernel, this means that there is now enough support in purely upstream kernel and QEMU for basic working KVM on ARMv7 cores with the Virtualization Extensions (specifically, Cortex-A15). There are still a number of features left to be implemented but nonetheless I feel this is an important milestone. Thanks to everybody who’s played a part in getting us this far!