(Copying Rich, Xiang and Gabriel for testing requests below.)

Repo:   https://github.com/lersek/edk2.git
Branch: kernel_before_bootdevs

After the recent series "OvmfPkg, ArmVirtQemu: leaner platform BDS
policy for connecting devices", I'm picking up another earlier idea --
a direct kernel boot does not need devices such as disks and NICs to
be bound by UEFI.

I tested this series extensively on QEMU, in OVMF (IA32X64) and
ArmVirtQemu (AARCH64), both with and without direct kernel boot. I
compared the logs in all sensible relations within a given
architecture.

Rich, can you please test this on ARM64, with guestfish/libguestfs?
Please attach a good number of disks at once on the command line, and
compare the appliance's boot time between (e.g.) RHEL7's
"/usr/share/AAVMF/AAVMF_CODE.fd" and the following binary (after
decompression):

  https://people.redhat.com/lersek/kernel_before_bootdevs-991e2f2f-64cf-4566-b933-919928e2aa6b/QEMU_EFI.fd.padded.xz

(That binary corresponds to the branch linked above, cross-built from
x86_64 with "aarch64-linux-gnu-gcc (GCC) 6.1.1 20160621 (Red Hat Cross
6.1.1-2)", using the following options:

  export GCC5_AARCH64_PREFIX=aarch64-linux-gnu-
  build --cmd-len=65536 --hash -t GCC5 -b DEBUG -a AARCH64 \
    -p ArmVirtPkg/ArmVirtQemu.dsc -D DEBUG_PRINT_ERROR_LEVEL=0x80000000

and then it was padded with zeroes to 64MB.)

If you have good results, please respond with your Tested-by (which
I'll apply to patch 1/5, since that's the one that matters for the
ARM64 target).

Xiang, if you use guestfish (or else direct kernel boot) occasionally,
then similar testing would be very welcome from your side too.

Gabriel, I'm CC'ing you on patch 4/5, because it affects code that you
had originally written. Can you please regression-test this series
with your usual OVMF environment / guest(s)?

Cc: "Gabriel L. Somlo" <gsomlo@gmail.com>
Cc: "Richard W.M. Jones" <rjones@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Xiang Zheng <xiang.zheng@linaro.org>

Thanks everyone!
Laszlo

Laszlo Ersek (5):
  ArmVirtPkg/PlatformBootManagerLib: return to "-kernel before boot
    devices"
  OvmfPkg/PlatformBootManagerLib: wrap overlong lines in "BdsPlatform.c"
  OvmfPkg/PlatformBootManagerLib: rejuvenate old-style function comments
  OvmfPkg/PlatformBootManagerLib: hoist PciAcpiInitialization()
  OvmfPkg/PlatformBootManagerLib: process "-kernel" before boot devices

 ArmVirtPkg/Library/PlatformBootManagerLib/PlatformBm.c |  16 +-
 OvmfPkg/Library/PlatformBootManagerLib/BdsPlatform.c   | 262 ++++++++++----------
 2 files changed, 136 insertions(+), 142 deletions(-)

--
2.14.1.3.gb7cf6e02401b
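For reference, the padding step mentioned in the cover letter can be
done along the following lines. This is only a sketch: the build-output
path and the QEMU invocation are illustrative assumptions, not taken
from this thread.

  # Extend a copy of the ~2MB build output with zeroes up to the 64MB
  # flash size that QEMU's "virt" machine expects.
  cp Build/ArmVirtQemu-AARCH64/DEBUG_GCC5/FV/QEMU_EFI.fd QEMU_EFI.fd.padded
  truncate --size=64M QEMU_EFI.fd.padded

  # Illustrative smoke test: boot the padded image as the firmware.
  qemu-system-aarch64 -M virt -cpu cortex-a57 -nographic \
    -bios QEMU_EFI.fd.padded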
On Thu, Mar 15, 2018 at 08:02:53PM +0100, Laszlo Ersek wrote:
> Please attach a good number of disks at once on the command line,

I read this bit then forgot to do it :-/

The test suite has a test for adding a large number of drives:

  https://github.com/libguestfs/libguestfs/tree/master/tests/disks

so it's quite easy for me to test this:

  $ time ./test-add-disks -n 100

Results with Fedora's edk2-aarch64:

  real    8m25.353s
  user    0m2.393s
  sys     0m2.657s

Results with your file:

  real    8m15.285s
  user    0m0.178s
  sys     0m0.381s

So there's not really a great difference here, but boot time is not a
significant factor for this test, unlike the boot benchmark tests.

Rich.

--
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-builder quickly builds VMs from scratch
http://libguestfs.org/virt-builder.1.html
On 03/16/18 10:59, Richard W.M. Jones wrote:
> On Thu, Mar 15, 2018 at 08:02:53PM +0100, Laszlo Ersek wrote:
>> Please attach a good number of disks at once on the command line,
>
> I read this bit then forgot to do it :-/
>
> The test suite has a test for adding a large number of drives:
>
>   https://github.com/libguestfs/libguestfs/tree/master/tests/disks
>
> so it's quite easy for me to test this:
>
>   $ time ./test-add-disks -n 100

What kind of disks does this test add, virtio-blk or virtio-scsi?

And I assume PCI, not virtio-mmio?

It's possible that with a hundred disks, you hit a limit in the
firmware before all the time was spent that would have been necessary
to bind all hundred disks. I mean, the point is exactly to prevent the
firmware from spending time on those disks, but it should be due to a
controlled restriction, not resource exhaustion, and the comparison
should use a test case where the firmware does manage to bind them all
(without the patches).

... We could go into the various limits here, regarding PCI bus number
space, virtio-scsi targets and LUNs, but I think it would be premature
before seeing a domain XML or a QEMU command line. (Also, my apologies
for being vague with "good number of ...".)

Furthermore:

> Results with Fedora's edk2-aarch64:
>
>   real    8m25.353s
>   user    0m2.393s
>   sys     0m2.657s
>
> Results with your file:
>
>   real    8m15.285s
>   user    0m0.178s
>   sys     0m0.381s
>
> So there's not really a great difference here, but boot time is not a
> significant factor for this test, unlike the boot benchmark tests.

indeed this patch should only improve boot time.

In your other email, you mention "libguestfs-boot-benchmark". Does
that include kernel boot time as well? ... OTOH, it probably should
too. An interactive user definitely cares about the time between (a)
hitting Enter on the "guestfish" command and (b) getting the guestfish
prompt. Where the boot transitions from firmware to kernel is
irrelevant to the user.

Sorry to bother you with another request, but I currently have no
access to my aarch64 hardware (so that I could test on aarch64 KVM);
could you please compare boot benchmark results using, say, eight
disks? Something like:

  time guestfish --ro \
    -a disk1.img \
    ... \
    -a disk8.img \
    launch : quit

(I hope this test case is not totally bogus.)

If you don't have time, I'll try to run this test myself tonight. I'd
trust your results more though!

Thanks!
Laszlo
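As a concrete sketch of the comparison suggested above, the eight disk
images can be created first; the raw format and 1G size here are
arbitrary assumptions, only to make the commands runnable:

  # Create eight throwaway sparse scratch disks.
  for i in 1 2 3 4 5 6 7 8; do
    qemu-img create -f raw disk$i.img 1G
  done

  # Time a bare appliance launch + shutdown with all eight attached
  # read-only.
  time guestfish --ro \
    -a disk1.img -a disk2.img -a disk3.img -a disk4.img \
    -a disk5.img -a disk6.img -a disk7.img -a disk8.img \
    launch : quit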
On Fri, Mar 16, 2018 at 03:02:57PM +0100, Laszlo Ersek wrote:
> On 03/16/18 10:59, Richard W.M. Jones wrote:
> > On Thu, Mar 15, 2018 at 08:02:53PM +0100, Laszlo Ersek wrote:
> >> Please attach a good number of disks at once on the command line,
> >
> > I read this bit then forgot to do it :-/
> >
> > The test suite has a test for adding a large number of drives:
> >
> >   https://github.com/libguestfs/libguestfs/tree/master/tests/disks
> >
> > so it's quite easy for me to test this:
> >
> >   $ time ./test-add-disks -n 100
>
> What kind of disks does this test add, virtio-blk or virtio-scsi?
>
> And I assume PCI, not virtio-mmio?

virtio-scsi & PCI. I think we are still using virtio-mmio in RHEL 7?
But anyway the tests were performed with the latest upstream stuff.

> It's possible that with a hundred disks, you hit a limit in the
> firmware before all the time was spent that would have been necessary
> to bind all hundred disks. I mean, the point is exactly to prevent
> the firmware from spending time on those disks, but it should be due
> to a controlled restriction, not resource exhaustion, and the
> comparison should use a test case where the firmware does manage to
> bind them all (without the patches).

There's actually a load of problems on x86, mainly with the BIOS &
kernel and how long it takes to enumerate the disks. I didn't look
closely at what it's trying to do on aarch64.

> ... We could go into the various limits here, regarding PCI bus
> number space, virtio-scsi targets and LUNs, but I think it would be
> premature before seeing a domain XML or a QEMU command line. (Also,
> my apologies for being vague with "good number of ...".)

An easy way to reproduce this is:

  # dnf install /usr/bin/virt-rescue
  $ virt-rescue --scratch=100

  ><rescue> ls /dev/sd
  Display all 101 possibilities? (y or n)
  sda   sdah  sdap  sdax  sdbe  sdbm  sdbu  sdcb  sdcj  sdcr  sdf  sdn  sdv
  sdaa  sdai  sdaq  sday  sdbf  sdbn  sdbv  sdcc  sdck  sdcs  sdg  sdo  sdw
  sdab  sdaj  sdar  sdaz  sdbg  sdbo  sdbw  sdcd  sdcl  sdct  sdh  sdp  sdx
  sdac  sdak  sdas  sdb   sdbh  sdbp  sdbx  sdce  sdcm  sdcu  sdi  sdq  sdy
  sdad  sdal  sdat  sdba  sdbi  sdbq  sdby  sdcf  sdcn  sdcv  sdj  sdr  sdz
  sdae  sdam  sdau  sdbb  sdbj  sdbr  sdbz  sdcg  sdco  sdcw  sdk  sds
  sdaf  sdan  sdav  sdbc  sdbk  sdbs  sdc   sdch  sdcp  sdd   sdl  sdt
  sdag  sdao  sdaw  sdbd  sdbl  sdbt  sdca  sdci  sdcq  sde   sdm  sdu

> indeed this patch should only improve boot time. In your other email,
> you mention "libguestfs-boot-benchmark". Does that include kernel
> boot time as well?

libguestfs-boot-benchmark just starts the appliance (and kernel and
userspace) and shuts it down, so it's as close to testing raw boot
performance as you can get. However it only uses I think 1 or 2 disks,
so it's not testing lots of devices.

> ... OTOH, it probably should too. An interactive user definitely
> cares about the time between (a) hitting Enter on the "guestfish"
> command and (b) getting the guestfish prompt. Where the boot
> transitions from firmware to kernel is irrelevant to the user.
>
> Sorry to bother you with another request, but I currently have no
> access to my aarch64 hardware (so that I could test on aarch64 KVM);
> could you please compare boot benchmark results using, say, eight
> disks? Something like:
>
>   time guestfish --ro \
>     -a disk1.img \
>     ... \
>     -a disk8.img \
>     launch : quit
>
> (I hope this test case is not totally bogus.)

An easier way is as follows. I missed out the --just-add option in my
previous tests, which causes the test to do a lot more testing. With
the option it just adds them, launches the appliance and shuts down.
  $ time ./test-add-disks --just-add -n 8

Results:

  with Fedora AAVMF:   0m11.197s  0m10.075s  0m10.033s
  with your firmware:  0m5.395s   0m5.385s   0m5.354s

Rich.

--
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
libguestfs lets you edit virtual machines. Supports shell scripting,
bindings from many languages. http://libguestfs.org
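To collect multiple samples per firmware, as in the three results
above, a simple loop over the same command works; a sketch:

  for run in 1 2 3; do
    time ./test-add-disks --just-add -n 8
  done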
On 03/16/18 15:47, Richard W.M. Jones wrote:
>   # dnf install /usr/bin/virt-rescue
>   $ virt-rescue --scratch=100

OK, I had to read up on virt-rescue for this, and I'm (again) impressed
that the virt-* tools are the DeLuxe utilities in this space. :) "You
can get virt-rescue to give you scratch disk(s) to play with. This is
useful for testing out Linux utilities (see --scratch)". Convenient!

>> time guestfish --ro \
>>   -a disk1.img \
>>   ... \
>>   -a disk8.img \
>>   launch : quit
>>
>> (I hope this test case is not totally bogus.)
>
> An easier way is as follows. I missed out the --just-add option in my
> previous tests, which causes the test to do a lot more testing. With
> the option it just adds them, launches the appliance and shuts down.
>
>   $ time ./test-add-disks --just-add -n 8
>
> Results:
>
>   with Fedora AAVMF:   0m11.197s  0m10.075s  0m10.033s
>   with your firmware:  0m5.395s   0m5.385s   0m5.354s

That's great; now I feel confident about picking up your T-b!

Thank you for your help!
Laszlo
On Thu, Mar 15, 2018 at 08:02:53PM +0100, Laszlo Ersek wrote:
> (Copying Rich, Xiang and Gabriel for testing requests below.)
>
> Repo:   https://github.com/lersek/edk2.git
> Branch: kernel_before_bootdevs
>
> After the recent series "OvmfPkg, ArmVirtQemu: leaner platform BDS
> policy for connecting devices", I'm picking up another earlier idea --
> a direct kernel boot does not need devices such as disks and NICs to
> be bound by UEFI.
>
> I tested this series extensively on QEMU, in OVMF (IA32X64) and
> ArmVirtQemu (AARCH64), both with and without direct kernel boot. I
> compared the logs in all sensible relations within a given
> architecture.
>
> Rich, can you please test this on ARM64, with guestfish/libguestfs?
> Please attach a good number of disks at once on the command line, and
> compare the appliance's boot time between (e.g.) RHEL7's
> "/usr/share/AAVMF/AAVMF_CODE.fd" and the following binary (after
> decompression):
>
>   https://people.redhat.com/lersek/kernel_before_bootdevs-991e2f2f-64cf-4566-b933-919928e2aa6b/QEMU_EFI.fd.padded.xz
>
> (That binary corresponds to the branch linked above, cross-built from
> x86_64 with "aarch64-linux-gnu-gcc (GCC) 6.1.1 20160621 (Red Hat Cross
> 6.1.1-2)", using the following options:
>
>   export GCC5_AARCH64_PREFIX=aarch64-linux-gnu-
>   build --cmd-len=65536 --hash -t GCC5 -b DEBUG -a AARCH64 \
>     -p ArmVirtPkg/ArmVirtQemu.dsc -D DEBUG_PRINT_ERROR_LEVEL=0x80000000
>
> and then it was padded with zeroes to 64MB.)
>
> If you have good results, please respond with your Tested-by (which
> I'll apply to patch 1/5, since that's the one that matters for the
> ARM64 target).
>
> Xiang, if you use guestfish (or else direct kernel boot) occasionally,
> then similar testing would be very welcome from your side too.
>
> Gabriel, I'm CC'ing you on patch 4/5, because it affects code that you
> had originally written. Can you please regression-test this series
> with your usual OVMF environment / guest(s)?

With the series applied on top of all my out-of-tree macboot patches,
OSX (10.12) boots just as well as before, so no regressions as far as
I'm able to tell!

whole series:

Tested-by: Gabriel Somlo <gsomlo@gmail.com>

Thanks,
--G

> Cc: "Gabriel L. Somlo" <gsomlo@gmail.com>
> Cc: "Richard W.M. Jones" <rjones@redhat.com>
> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> Cc: Jordan Justen <jordan.l.justen@intel.com>
> Cc: Xiang Zheng <xiang.zheng@linaro.org>
>
> Thanks everyone!
> Laszlo
>
> Laszlo Ersek (5):
>   ArmVirtPkg/PlatformBootManagerLib: return to "-kernel before boot
>     devices"
>   OvmfPkg/PlatformBootManagerLib: wrap overlong lines in "BdsPlatform.c"
>   OvmfPkg/PlatformBootManagerLib: rejuvenate old-style function comments
>   OvmfPkg/PlatformBootManagerLib: hoist PciAcpiInitialization()
>   OvmfPkg/PlatformBootManagerLib: process "-kernel" before boot devices
>
>  ArmVirtPkg/Library/PlatformBootManagerLib/PlatformBm.c |  16 +-
>  OvmfPkg/Library/PlatformBootManagerLib/BdsPlatform.c   | 262 ++++++++++----------
>  2 files changed, 136 insertions(+), 142 deletions(-)
>
> --
> 2.14.1.3.gb7cf6e02401b
On 03/16/18 16:29, Gabriel L. Somlo wrote:
> On Thu, Mar 15, 2018 at 08:02:53PM +0100, Laszlo Ersek wrote:
>> Gabriel, I'm CC'ing you on patch 4/5, because it affects code that
>> you had originally written. Can you please regression-test this
>> series with your usual OVMF environment / guest(s)?
>
> With the series applied on top of all my out-of-tree macboot patches,
> OSX (10.12) boots just as well as before, so no regressions as far as
> I'm able to tell!
>
> whole series:
>
> Tested-by: Gabriel Somlo <gsomlo@gmail.com>

Thank you, Gabriel! I'll add that to patches 2 through 5 (the code
subject to patch 1 is not built into the OVMF binary):

>>   ArmVirtPkg/PlatformBootManagerLib: return to "-kernel before boot
>>     devices"
>>   OvmfPkg/PlatformBootManagerLib: wrap overlong lines in "BdsPlatform.c"
>>   OvmfPkg/PlatformBootManagerLib: rejuvenate old-style function comments
>>   OvmfPkg/PlatformBootManagerLib: hoist PciAcpiInitialization()
>>   OvmfPkg/PlatformBootManagerLib: process "-kernel" before boot devices

Thanks!
Laszlo
On 03/15/18 20:02, Laszlo Ersek wrote:
> (Copying Rich, Xiang and Gabriel for testing requests below.)
>
> Repo:   https://github.com/lersek/edk2.git
> Branch: kernel_before_bootdevs
>
> After the recent series "OvmfPkg, ArmVirtQemu: leaner platform BDS
> policy for connecting devices", I'm picking up another earlier idea --
> a direct kernel boot does not need devices such as disks and NICs to
> be bound by UEFI.

Thank you all, series pushed: d0976b9acced..a34a88696256.

Laszlo
On Thu, Mar 15, 2018 at 08:02:53PM +0100, Laszlo Ersek wrote:
> (Copying Rich, Xiang and Gabriel for testing requests below.)
>
> Repo:   https://github.com/lersek/edk2.git
> Branch: kernel_before_bootdevs
>
> After the recent series "OvmfPkg, ArmVirtQemu: leaner platform BDS
> policy for connecting devices", I'm picking up another earlier idea --
> a direct kernel boot does not need devices such as disks and NICs to
> be bound by UEFI.
>
> I tested this series extensively on QEMU, in OVMF (IA32X64) and
> ArmVirtQemu (AARCH64), both with and without direct kernel boot. I
> compared the logs in all sensible relations within a given
> architecture.
>
> Rich, can you please test this on ARM64, with guestfish/libguestfs?
> Please attach a good number of disks at once on the command line, and
> compare the appliance's boot time between (e.g.) RHEL7's
> "/usr/share/AAVMF/AAVMF_CODE.fd" and the following binary (after
> decompression):
>
>   https://people.redhat.com/lersek/kernel_before_bootdevs-991e2f2f-64cf-4566-b933-919928e2aa6b/QEMU_EFI.fd.padded.xz

I tested this on Fedora Rawhide (aarch64) with:

  kernel-core-4.16.0-0.rc5.git1.2.fc29.aarch64 (host & guest)
  qemu-2.11.0-5.fc29.aarch64
  edk2-aarch64-20171011git92d07e4-2.fc28.noarch
  libguestfs-1.39.1-1.fc29.aarch64

I used the /usr/bin/libguestfs-boot-benchmark tool from
libguestfs-benchmarking-1.39.1-1.fc29.aarch64.

As a baseline, on my mid-range Intel i7 laptop (note that this number
is NOT comparable to the aarch64 numbers, it's just to give a flavour
of what is possible):

  Result: 1384.5ms ±9.2ms

On aarch64 using edk2-aarch64 from Fedora:

  Result: 8844.0ms ±30.7ms

On aarch64 using your supplied build of AAVMF:

  Result: 4156.6ms ±1.3ms

I also confirmed (using libguestfs-test-tool) that it was working and
using the right AAVMF_CODE.fd file.

Therefore:

Tested-by: Richard W.M. Jones <rjones@redhat.com>

Rich.

--
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-df lists disk usage of guests without needing to install any
software inside the virtual machine. Supports Linux and Windows.
http://people.redhat.com/~rjones/virt-df/
On 15 March 2018 at 19:02, Laszlo Ersek <lersek@redhat.com> wrote:
> (Copying Rich, Xiang and Gabriel for testing requests below.)
>
> Repo:   https://github.com/lersek/edk2.git
> Branch: kernel_before_bootdevs
>
> After the recent series "OvmfPkg, ArmVirtQemu: leaner platform BDS
> policy for connecting devices", I'm picking up another earlier idea --
> a direct kernel boot does not need devices such as disks and NICs to
> be bound by UEFI.
>
> I tested this series extensively on QEMU, in OVMF (IA32X64) and
> ArmVirtQemu (AARCH64), both with and without direct kernel boot. I
> compared the logs in all sensible relations within a given
> architecture.
>
> Rich, can you please test this on ARM64, with guestfish/libguestfs?
> Please attach a good number of disks at once on the command line, and
> compare the appliance's boot time between (e.g.) RHEL7's
> "/usr/share/AAVMF/AAVMF_CODE.fd" and the following binary (after
> decompression):
>
>   https://people.redhat.com/lersek/kernel_before_bootdevs-991e2f2f-64cf-4566-b933-919928e2aa6b/QEMU_EFI.fd.padded.xz
>
> (That binary corresponds to the branch linked above, cross-built from
> x86_64 with "aarch64-linux-gnu-gcc (GCC) 6.1.1 20160621 (Red Hat Cross
> 6.1.1-2)", using the following options:
>
>   export GCC5_AARCH64_PREFIX=aarch64-linux-gnu-
>   build --cmd-len=65536 --hash -t GCC5 -b DEBUG -a AARCH64 \
>     -p ArmVirtPkg/ArmVirtQemu.dsc -D DEBUG_PRINT_ERROR_LEVEL=0x80000000
>
> and then it was padded with zeroes to 64MB.)
>
> If you have good results, please respond with your Tested-by (which
> I'll apply to patch 1/5, since that's the one that matters for the
> ARM64 target).
>
> Xiang, if you use guestfish (or else direct kernel boot) occasionally,
> then similar testing would be very welcome from your side too.
>
> Gabriel, I'm CC'ing you on patch 4/5, because it affects code that you
> had originally written. Can you please regression-test this series
> with your usual OVMF environment / guest(s)?
>
> Cc: "Gabriel L. Somlo" <gsomlo@gmail.com>
> Cc: "Richard W.M. Jones" <rjones@redhat.com>
> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> Cc: Jordan Justen <jordan.l.justen@intel.com>
> Cc: Xiang Zheng <xiang.zheng@linaro.org>
>
> Thanks everyone!
> Laszlo
>
> Laszlo Ersek (5):
>   ArmVirtPkg/PlatformBootManagerLib: return to "-kernel before boot
>     devices"
>   OvmfPkg/PlatformBootManagerLib: wrap overlong lines in "BdsPlatform.c"
>   OvmfPkg/PlatformBootManagerLib: rejuvenate old-style function comments
>   OvmfPkg/PlatformBootManagerLib: hoist PciAcpiInitialization()
>   OvmfPkg/PlatformBootManagerLib: process "-kernel" before boot devices
>
>  ArmVirtPkg/Library/PlatformBootManagerLib/PlatformBm.c |  16 +-
>  OvmfPkg/Library/PlatformBootManagerLib/BdsPlatform.c   | 262 ++++++++++----------
>  2 files changed, 136 insertions(+), 142 deletions(-)

For the series

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>