[edk2] [PATCH v3 00/21] OvmfPkg/Virtio: introduce IOMMU-like member functions

Brijesh Singh posted 21 patches 7 years, 4 months ago
[edk2] [PATCH v3 00/21] OvmfPkg/Virtio: introduce IOMMU-like member functions
Posted by Brijesh Singh 7 years, 4 months ago
Currently, the virtio drivers provide system physical addresses to the
device. However, some systems may feature an IOMMU that requires the
drivers to pass device addresses to the device, which the IOMMU then
translates into physical addresses in memory. This patch series
introduces new member functions in VIRTIO_DEVICE_PROTOCOL which can be
used to map a system physical address to a device address.

The approach that this patch series takes is to map the system physical
addresses to device addresses for all buffers involved: the rings, the
device-specific requests and responses pointed to by the vring
descriptors, and any further memory referenced by those requests and
responses.
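
The new member functions follow the Map()/Unmap() pattern of
EFI_PCI_IO_PROTOCOL. As a rough, self-contained sketch of that pattern
(the names and the identity translation here are illustrative
assumptions, not the actual edk2 signatures):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Toy sketch of the IOMMU-like Map/Unmap pattern the series adds to
 * VIRTIO_DEVICE_PROTOCOL. VirtioMapSharedBuffer() and MAPPING are
 * hypothetical names; a real transport would consult the IOMMU
 * instead of identity-mapping. */

typedef struct {
  void     *HostAddress;    /* CPU-visible buffer                */
  uint64_t  DeviceAddress;  /* address programmed into the vring */
} MAPPING;

/* Map a host buffer for device access; the returned token is later
 * passed to the unmap function. Here the translation is the identity. */
static MAPPING *VirtioMapSharedBuffer(void *Host, size_t Bytes,
                                      uint64_t *Device) {
  (void)Bytes;
  MAPPING *Map = malloc(sizeof *Map);
  if (Map == NULL) return NULL;
  Map->HostAddress   = Host;
  Map->DeviceAddress = (uint64_t)(uintptr_t)Host;  /* identity map */
  *Device = Map->DeviceAddress;
  return Map;
}

static void VirtioUnmapSharedBuffer(MAPPING *Map) { free(Map); }
```

The driver stores only the device address in descriptors, and keeps the
mapping token so the buffer can be unmapped after the device completes
the request.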

Patches 1 - 4:
 Define and implement new member functions to map a system physical
 address to a device address. These patches implement Laszlo's
 suggestion [1].

[1] http://mid.mail-archive.com/841bec5f-6f6e-8b1f-25ba-0fd37a915b72@redhat.com

Patches 5 - 12:
 Add some helper functions and allocate the vring using the newly added
 member functions.

Patch 13:
 Update the virtio-rng driver to use the newly added member functions to
 map the addresses.
 Verified using the following QEMU command lines:

 # $QEMU \
    -device virtio-rng-pci

 # $QEMU \
    -device virtio-rng-pci,disable-legacy=on

 # $QEMU \
    -device virtio-rng-pci,disable-legacy=on,iommu_platform=true

 And successfully ran RngTest.efi from SecurityPkg/Application, and also
 verified that /dev/hwrng gets created after Linux guest boot.

Patch 14:
 Update the virtio-blk driver to use the newly added member functions to
 map the addresses.
 Verified using the following QEMU command lines:

 # $QEMU \
    -drive file=${IMAGE},if=none,id=disk0 \
    -device virtio-blk-pci,drive=disk0

 # $QEMU \
    -drive file=${IMAGE},if=none,id=disk0 \
    -device virtio-blk-pci,drive=disk0,disable-legacy=on

 # $QEMU \
    -drive file=${IMAGE},if=none,id=disk0 \
    -device virtio-blk-pci,drive=disk0,disable-legacy=on,iommu_platform=true

Patch 15:
 Update the virtio-scsi driver to use the newly added member functions
 to map the addresses.
 Verified using the following QEMU command lines:

 # $QEMU \
    -drive file=${IMAGE},if=none,id=disk0 \
    -device scsi-hd,drive=disk0 \
    -device virtio-scsi-pci,id=scsi 

 # $QEMU \
    -drive file=${IMAGE},if=none,id=disk0 \
    -device scsi-hd,drive=disk0 \
    -device virtio-scsi-pci,id=scsi,disable-legacy=on 

 # $QEMU \
    -drive file=${IMAGE},if=none,id=disk0 \
    -device scsi-hd,drive=disk0 \
    -device virtio-scsi-pci,id=scsi,disable-legacy=on,iommu_platform=true

Patches 16 - 19:
 Update the virtio-net driver to use the newly added member functions to
 map the addresses.
 Verified using the following QEMU command lines:

 # $QEMU \
    -netdev type=tap,id=net0 \
    -device virtio-net-pci,netdev=net0,romfile=

 # $QEMU \
    -netdev type=tap,id=net0 \
    -device virtio-net-pci,netdev=net0,disable-legacy=on,romfile=

 # $QEMU \
    -netdev type=tap,id=net0 \
    -device virtio-net-pci,netdev=net0,disable-legacy=on,iommu_platform=true,romfile=

Patches 20 - 23:
 Add support for the VIRTIO_F_IOMMU_PLATFORM feature bit
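
For reference, VIRTIO_F_IOMMU_PLATFORM is feature bit 33 in the
virtio 1.0 feature space (the bit value comes from the virtio spec; the
exact macro name used by the series may differ). A minimal sketch of
keeping the bit during negotiation:

```c
#include <assert.h>
#include <stdint.h>

/* VIRTIO_F_IOMMU_PLATFORM is feature bit 33 per the virtio 1.0 spec;
 * the edk2 macro spelling is an assumption here. */
#define VIRTIO_F_IOMMU_PLATFORM (1ULL << 33)

/* A driver that maps its buffers through the new member functions can
 * keep the bit when the device offers it; a legacy-only driver would
 * have to reject such a device instead. */
static uint64_t NegotiateFeatures(uint64_t DeviceFeatures,
                                  uint64_t DriverSupported) {
  return DeviceFeatures & DriverSupported;
}
```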

Repo: https://github.com/codomania/edk2
Branch: virtio-support-v3

Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>

TODO:
 * add VirtioGpuDxe (I will take Laszlo up on his offer to help with
   this driver)
 * I did minimal testing on aarch64 - I was running into some Linux
   bootup issues with the Fedora aarch64 ISO. The issue does not appear
   to be related to the virtio changes. If anyone can help with
   additional testing on their aarch64 images, that would be great!
   Thanks.

Changes since v2:
 * changes to address v2 feedback
 * split the iommu_platform support into multiple patches

Changes since v1:
 * changes to address v1 feedback
 * add VIRTIO_F_IOMMU_PLATFORM feature bit

Brijesh Singh (23):
  OvmfPkg: introduce IOMMU-like member functions to
    VIRTIO_DEVICE_PROTOCOL
  OvmfPkg/Virtio10Dxe: implement IOMMU-like member functions
  OvmfPkg/VirtioPciDeviceDxe: implement IOMMU-like member functions
  OvmfPkg/VirtioMmioDeviceLib: implement IOMMU-like member functions
  OvmfPkg/VirtioLib: add VirtioMapAllBytesInSharedBuffer() helper
    function
  OvmfPkg/VirtioLib: take VirtIo instance in
    VirtioRingInit/VirtioRingUninit
  OvmfPkg/Virtio: take RingBaseShift in SetQueueAddress()
  OvmfPkg/Virtio10Dxe: add the RingBaseShift offset
  OvmfPkg/VirtioLib: add function to map VRING
  OvmfPkg/VirtioLib: alloc VRING buffer with AllocateSharedPages()
  OvmfPkg/VirtioLib: change the parameter of VirtioAppendDesc() to
    UINT64
  OvmfPkg/VirtioRngDxe: map host address to device address
  OvmfPkg/VirtioBlkDxe: map host address to device address
  OvmfPkg/VirtioScsiDxe: map host address to device address
  OvmfPkg/VirtioNetDxe: alloc Tx and Rx rings using AllocateSharedPage()
  OvmfPkg/VirtioNetDxe: alloc RxBuf using AllocateSharedPages()
  OvmfPkg/VirtioNetDxe: dynamically alloc transmit header
  OvmfPkg/VirtioNetDxe: map transmit buffer host address to device
    address
  OvmfPkg/Virtio10: define VIRTIO_F_IOMMU_PLATFORM feature bit
  OvmfPkg/VirtioRngDxe: negotiate VIRTIO_F_IOMMU_PLATFORM
  OvmfPkg/VirtioBlkDxe: negotiate VIRTIO_F_IOMMU_PLATFORM
  OvmfPkg/VirtioScsiDxe: negotiate VIRTIO_F_IOMMU_PLATFORM
  OvmfPkg/VirtioNetDxe: negotiate VIRTIO_F_IOMMU_PLATFORM

 OvmfPkg/Library/VirtioLib/VirtioLib.inf                         |   1 -
 OvmfPkg/Include/IndustryStandard/Virtio10.h                     |   4 +-
 OvmfPkg/Include/Library/VirtioLib.h                             | 132 +++++++++---
 OvmfPkg/Include/Protocol/VirtioDevice.h                         | 162 +++++++++++++-
 OvmfPkg/Library/VirtioMmioDeviceLib/VirtioMmioDevice.h          |  39 +++-
 OvmfPkg/VirtioBlkDxe/VirtioBlk.h                                |   1 +
 OvmfPkg/VirtioNetDxe/VirtioNet.h                                |  25 ++-
 OvmfPkg/VirtioPciDeviceDxe/VirtioPciDevice.h                    |  37 +++-
 OvmfPkg/VirtioRngDxe/VirtioRng.h                                |   1 +
 OvmfPkg/VirtioScsiDxe/VirtioScsi.h                              |   1 +
 OvmfPkg/Library/VirtioLib/VirtioLib.c                           | 204 +++++++++++++++---
 OvmfPkg/Library/VirtioMmioDeviceLib/VirtioMmioDevice.c          |   8 +-
 OvmfPkg/Library/VirtioMmioDeviceLib/VirtioMmioDeviceFunctions.c |  62 +++++-
 OvmfPkg/Virtio10Dxe/Virtio10.c                                  | 128 ++++++++++-
 OvmfPkg/VirtioBlkDxe/VirtioBlk.c                                | 213 ++++++++++++++++---
 OvmfPkg/VirtioGpuDxe/Commands.c                                 |  13 +-
 OvmfPkg/VirtioNetDxe/Events.c                                   |  19 ++
 OvmfPkg/VirtioNetDxe/SnpGetStatus.c                             |  29 ++-
 OvmfPkg/VirtioNetDxe/SnpInitialize.c                            | 195 ++++++++++++++---
 OvmfPkg/VirtioNetDxe/SnpSharedHelpers.c                         | 133 +++++++++++-
 OvmfPkg/VirtioNetDxe/SnpShutdown.c                              |   7 +-
 OvmfPkg/VirtioNetDxe/SnpTransmit.c                              |  30 ++-
 OvmfPkg/VirtioPciDeviceDxe/VirtioPciDevice.c                    |   7 +-
 OvmfPkg/VirtioPciDeviceDxe/VirtioPciFunctions.c                 |  63 +++++-
 OvmfPkg/VirtioRngDxe/VirtioRng.c                                |  93 ++++++--
 OvmfPkg/VirtioScsiDxe/VirtioScsi.c                              | 222 +++++++++++++++++---
 26 files changed, 1635 insertions(+), 194 deletions(-)

-- 
2.7.4

_______________________________________________
edk2-devel mailing list
edk2-devel@lists.01.org
https://lists.01.org/mailman/listinfo/edk2-devel
Re: [edk2] [PATCH v3 00/21] OvmfPkg/Virtio: introduce IOMMU-like member functions
Posted by Laszlo Ersek 7 years, 4 months ago
Hi Brijesh,

so here's what I'd like to do with v3:

On 08/23/17 14:22, Brijesh Singh wrote:
> Brijesh Singh (23):
>   OvmfPkg: introduce IOMMU-like member functions to
>     VIRTIO_DEVICE_PROTOCOL
>   OvmfPkg/Virtio10Dxe: implement IOMMU-like member functions
>   OvmfPkg/VirtioPciDeviceDxe: implement IOMMU-like member functions
>   OvmfPkg/VirtioMmioDeviceLib: implement IOMMU-like member functions
>   OvmfPkg/VirtioLib: add VirtioMapAllBytesInSharedBuffer() helper
>     function
>   OvmfPkg/VirtioLib: take VirtIo instance in
>     VirtioRingInit/VirtioRingUninit
>   OvmfPkg/Virtio: take RingBaseShift in SetQueueAddress()
>   OvmfPkg/Virtio10Dxe: add the RingBaseShift offset
>   OvmfPkg/VirtioLib: add function to map VRING
>   OvmfPkg/VirtioLib: alloc VRING buffer with AllocateSharedPages()
>   OvmfPkg/VirtioLib: change the parameter of VirtioAppendDesc() to
>     UINT64

(1) I'll take the first 11 patches (which work on the transports and
VirtioLib), fix up the trivial stuff I've found in the v3 review, and
add my R-b tags.


>   OvmfPkg/VirtioRngDxe: map host address to device address

(2) I'll do the same for the VirtioRngDxe driver.


(3) I'll test this initial sequence in various scenarios. I think that
the protocol / transport / VirtioLib changes are good at this point, and
the simplest driver (VirtioRngDxe) demonstrates how to put those new
features to use. It also enables regression testing.

Importantly, I also plan to regression-test the remaining (not yet
converted) drivers at this point. Those drivers are affected only by the
"alloc VRING buffer with AllocateSharedPages" patch, as follows:

- On legacy virtio-pci and virtio-mmio transports, the change is a
no-op, because the AllocateSharedPages() and FreeSharedPages() VirtIo
members are backed by MemoryAllocationLib's AllocatePages() /
FreePages(). So the behavior of VirtioRingInit() / VirtioRingUninit()
remains identical.

- On the virtio-1.0 transport, the direct AllocatePages() call in
VirtioRingInit() will be replaced with VirtIo.AllocateSharedPages ->
PciIo.AllocateBuffer -> PciRootBridgeIo.AllocateBuffer.

- The last function will either branch to gBS->AllocatePages -- if
there's no IoMmu protocol, i.e. no SEV -- which is identical to current
behavior, or branch to IoMmu.AllocateBuffer.

- In IoMmuAllocateBuffer(), we'll allocate StashBuffer on the side
(which is no problem), and the actual allocation (for HostAddress) will
be done with gBS->AllocatePages().

The end result is that at this point, the unconverted drivers won't yet
work on SEV, but they will continue working if SEV is absent. The only
difference is (dependent on transport) longer call chains to memory
allocation and freeing, and larger memory footprint. But, no regressions
in functionality.
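
The branch described above (IoMmu protocol present vs. absent) can be
sketched as follows; the function names are illustrative stand-ins, not
the actual edk2 ones:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Toy sketch of the allocation branch described above: with no IOMMU
 * protocol installed, shared-page allocation degenerates to a plain
 * page allocation, so unconverted drivers keep working without SEV.
 * PlainAllocatePages() stands in for gBS->AllocatePages(). */

typedef void *(*IOMMU_ALLOC)(size_t Pages);

static void *PlainAllocatePages(size_t Pages) {
  return calloc(Pages, 4096);  /* 4 KiB pages, zeroed */
}

static void *AllocateSharedPages(IOMMU_ALLOC IoMmuAlloc, size_t Pages) {
  if (IoMmuAlloc != NULL) {
    /* SEV case: route through IoMmu.AllocateBuffer, which also sets
     * up the stash/bounce buffering on the side. */
    return IoMmuAlloc(Pages);
  }
  /* No IOMMU protocol: identical to the pre-series behavior. */
  return PlainAllocatePages(Pages);
}
```

The point of the sketch is only the dispatch: the non-SEV path differs
from the old code solely in call-chain length, not in behavior.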

If I'm satisfied with the above test results, I'll add my
Regression-tested-by tags as well, and push the first 12 patches.

This will provide a foundation for further driver work (incl. my
VirtioGpuDxe work).


>   OvmfPkg/VirtioBlkDxe: map host address to device address
>   OvmfPkg/VirtioScsiDxe: map host address to device address

(4) I've looked at these patches briefly. They are possibly fine now,
but they've grown way too big. I struggled with the verification of the
VirtioRngDxe driver patch, and each of these two is more than twice as big.

So please, split *each* of these two patches in two parts (a logical
build-up):
- the first part should map and unmap the vring (all relevant contexts),
- the second part should map the requests.


(5) (I think this is my most important point), with the foundation in
place, I suggest we switch to one series per driver. For each driver, I
have to look at the virtio spec, and "page-in" how the requests are
structured, what they do etc. It's not a mechanical process. All that
virtio stuff is easier to re-digest if we proceed device by device.

Should we need later tweaks for the foundation, then at this point I
prefer small incremental patches for that.


>   OvmfPkg/VirtioNetDxe: alloc Tx and Rx rings using AllocateSharedPage()
>   OvmfPkg/VirtioNetDxe: alloc RxBuf using AllocateSharedPages()
>   OvmfPkg/VirtioNetDxe: dynamically alloc transmit header
>   OvmfPkg/VirtioNetDxe: map transmit buffer host address to device
>     address

(6) This is obviously the most complex driver. I've only snuck a peek. I
have one comment at this point: *if* we have to do random lookups, then
lists aren't optimal.

Please consider using the following library class:

  MdePkg/Include/Library/OrderedCollectionLib.h

It is already resolved in the OVMF DSC files to the following instance:

  MdePkg/Library/BaseOrderedCollectionRedBlackTreeLib/

Examples for use:
- various modules in OvmfPkg,
- AppPkg/Applications/OrderedCollectionTest


>   OvmfPkg/Virtio10: define VIRTIO_F_IOMMU_PLATFORM feature bit
>   OvmfPkg/VirtioRngDxe: negotiate VIRTIO_F_IOMMU_PLATFORM

(7) I would have liked to include these two patches in my "initial
push", but minimally the second patch needs fixes from you.

After I'm done with point (3), please repost these patches *only*.


>   OvmfPkg/VirtioBlkDxe: negotiate VIRTIO_F_IOMMU_PLATFORM
>   OvmfPkg/VirtioScsiDxe: negotiate VIRTIO_F_IOMMU_PLATFORM
>   OvmfPkg/VirtioNetDxe: negotiate VIRTIO_F_IOMMU_PLATFORM

(8) After these patches are fixed up, I suggest that you please post
each one of them at the end of the matching driver series.


> TODO:
>  * add VirtioGpuDxe (i will take Laszlo's offer that he can help with
>    this driver)

OK, will do, thank you!

In this work I'll also seek to follow the series layout proposed above.

>  * I did minimal test on aarch64 - I was running into some Linux
>    bootup issues with Fedora aarch64 iso. The issue does not appear to
>    be releated to virtio changes. If anyone can help doing additional
>    test with their aarch images that will be great ! thanks

I'll test on aarch64 too.

Thanks!
Laszlo
Re: [edk2] [PATCH v3 00/21] OvmfPkg/Virtio: introduce IOMMU-like member functions
Posted by Brijesh Singh 7 years, 4 months ago
Hi Laszlo,


On 8/23/17 7:26 PM, Laszlo Ersek wrote:
> Hi Brijesh,
>
> so here's what I'd like to do with v3:
>
> On 08/23/17 14:22, Brijesh Singh wrote:
>> Brijesh Singh (23):
>>   OvmfPkg: introduce IOMMU-like member functions to
>>     VIRTIO_DEVICE_PROTOCOL
>>   OvmfPkg/Virtio10Dxe: implement IOMMU-like member functions
>>   OvmfPkg/VirtioPciDeviceDxe: implement IOMMU-like member functions
>>   OvmfPkg/VirtioMmioDeviceLib: implement IOMMU-like member functions
>>   OvmfPkg/VirtioLib: add VirtioMapAllBytesInSharedBuffer() helper
>>     function
>>   OvmfPkg/VirtioLib: take VirtIo instance in
>>     VirtioRingInit/VirtioRingUninit
>>   OvmfPkg/Virtio: take RingBaseShift in SetQueueAddress()
>>   OvmfPkg/Virtio10Dxe: add the RingBaseShift offset
>>   OvmfPkg/VirtioLib: add function to map VRING
>>   OvmfPkg/VirtioLib: alloc VRING buffer with AllocateSharedPages()
>>   OvmfPkg/VirtioLib: change the parameter of VirtioAppendDesc() to
>>     UINT64
> (1) I'll take the first 11 patches (which work on the transports and
> VirtioLib), fix up the trivial stuff I've found in the v3 review, and
> add my R-b tags.

Thanks

>
>>   OvmfPkg/VirtioRngDxe: map host address to device address
> (2) I'll do the same for the VirtioRngDxe driver.
>

Thanks

> (3) I'll test this initial sequence in various scenarios. I think that
> the protocol / transport / VirtioLib changes are good at this point, and
> the simplest driver (VirtioRngDxe) demonstrates how to put those new
> features to use. It also enables regression testing.
>
> Importantly, I also plan to regression-test the remaining (not yet
> converted) drivers at this point. Those drivers are affected only by the
> "alloc VRING buffer with AllocateSharedPages" patch, as follows:
>
> - On legacy virtio-pci and virtio-mmio transports, the change is a
> no-op, because the AllocateSharedPages() and FreeSharedPages() VirtIo
> members are backed by MemoryAllocationLib's AllocatePages() /
> FreePages(). So the behavior of VirtioRingInit() / VirtioRingUninit()
> remains identical.
>
> - On the virtio-1.0 transport, the direct AllocatePages() call in
> VirtioRingInit() will be replaced with VirtIo.AllocateSharedPages ->
> PciIo.AllocateBuffer -> PciRootBridgeIo.AllocateBuffer.
>
> - The last function will either branch to gBS->AllocatePages -- if
> there's no IoMmu protocol, i.e. no SEV -- which is identical to current
> behavior, or branch to IoMmu.AllocateBuffer.
>
> - in IoMmuAllocateBuffer(), we'll allocate StashBuffer on the side
> (which is no problem), and the actual allocation (for HostAddress) will
> be done with gBS->AllocatePages().
>
> The end result is that at this point, the unconverted drivers won't yet
> work on SEV, but they will continue working if SEV is absent. The only
> difference is (dependent on transport) longer call chains to memory
> allocation and freeing, and larger memory footprint. But, no regressions
> in functionality.
>
> If I'm satisfied with the above test results, I'll add my
> Regression-tested-by tags as well, and push the first 12 patches.
>
> This will provide a foundation for further driver work (incl. my
> VirtioGpuDxe work).

I agree with you. Sounds like a good plan.
>
>>   OvmfPkg/VirtioBlkDxe: map host address to device address
>>   OvmfPkg/VirtioScsiDxe: map host address to device address
> (4) I've looked at these patches briefly. They are possibly fine now,
> but they've grown way too big. I struggled with the verification of the
> VirtioRngDxe driver patch, and each of these two is more than twice as big.
Great, thanks - I agree the series is getting bigger. After the v1
feedback, I was almost tempted to suggest that we work on enabling the
base first and then do one driver at a time. I was repeating the same
mistake in each patch, which caused trouble for you and also forced me
to rework all the drivers.

> So please, split *each* of these two patches in two parts (a logical
> build-up):
> - the first part should map and unmap the vring (all relevant contexts),
> - the second part should map the requests.
>
>
> (5) (I think this is my most important point), with the foundation in
> place, I suggest we switch to one series per driver. For each driver, I
> have to look at the virtio spec, and "page-in" how the requests are
> structured, what they do etc. It's not a mechanical process. All that
> virtio stuff is easier to re-digest if we proceed device by device.
>
> Should we need later tweaks for the foundation, then at this point I
> prefer small incremental patches for that.
>
>
>>   OvmfPkg/VirtioNetDxe: alloc Tx and Rx rings using AllocateSharedPage()
>>   OvmfPkg/VirtioNetDxe: alloc RxBuf using AllocateSharedPages()
>>   OvmfPkg/VirtioNetDxe: dynamically alloc transmit header
>>   OvmfPkg/VirtioNetDxe: map transmit buffer host address to device
>>     address
> (6) This is obviously the most complex driver. I've only snuck a peek. I
> have one comment at this point: *if* we have to do random lookups, then
> lists aren't optimal.
>
> Please consider using the following library class:
>
>   MdePkg/Include/Library/OrderedCollectionLib.h
>
> It is already resolved in the OVMF DSC files to the following instance:
>
>   MdePkg/Library/BaseOrderedCollectionRedBlackTreeLib/
>
> Examples for use:
> - various modules in OvmfPkg,
> - AppPkg/Applications/OrderedCollectionTest

Okay, I will look into it - thanks for the tip. I actually wanted to use
a simple array (because we know the maximum number of buffers we can
queue) but was not sure about your preferences, hence I went with a
list. If you are okay with it, I can use arrays too.
>
>>   OvmfPkg/Virtio10: define VIRTIO_F_IOMMU_PLATFORM feature bit
>>   OvmfPkg/VirtioRngDxe: negotiate VIRTIO_F_IOMMU_PLATFORM
> (7) I would have liked to include these two patches in my "initial
> push", but minimally the second patch needs fixes from you.
>
> After I'm done with point (3), please repost these patches *only*.

Sure, will do. Thanks.
>
>>   OvmfPkg/VirtioBlkDxe: negotiate VIRTIO_F_IOMMU_PLATFORM
>>   OvmfPkg/VirtioScsiDxe: negotiate VIRTIO_F_IOMMU_PLATFORM
>>   OvmfPkg/VirtioNetDxe: negotiate VIRTIO_F_IOMMU_PLATFORM
> (8) After these patches are fixed up, I suggest that you please post
> each one of them at the end of the matching driver series.
>

Will do.

>> TODO:
>>  * add VirtioGpuDxe (i will take Laszlo's offer that he can help with
>>    this driver)
> OK, will do, thank you!
>
> In this work I'll also seek to follow the series layout proposed above.
>
>>  * I did minimal test on aarch64 - I was running into some Linux
>>    bootup issues with Fedora aarch64 iso. The issue does not appear to
>>    be releated to virtio changes. If anyone can help doing additional
>>    test with their aarch images that will be great ! thanks
> I'll test on aarch64 too.

thank you.

-Brijesh
Re: [edk2] [PATCH v3 00/21] OvmfPkg/Virtio: introduce IOMMU-like member functions
Posted by Laszlo Ersek 7 years, 4 months ago
On 08/24/17 02:54, Brijesh Singh wrote:
> On 8/23/17 7:26 PM, Laszlo Ersek wrote:
>> On 08/23/17 14:22, Brijesh Singh wrote:

>>>   OvmfPkg/VirtioBlkDxe: map host address to device address
>>>   OvmfPkg/VirtioScsiDxe: map host address to device address

>> (4) I've looked at these patches briefly. They are possibly fine now,
>> but they've grown way too big. I struggled with the verification of the
>> VirtioRngDxe driver patch, and each of these two is more than twice as big.

> Great thanks, I agree the series is getting bigger. After v1 feedbacks,
> I was almost tempted to say that lets work to enable the base first then
> will do one driver at a time. I was repeating the same mistake in each
> patch and that was causing trouble to you and also I had to rework the
> all drivers.

Unfortunately (for both of us! :) ), this churn is unavoidable in the
first few versions of a large set -- the foundation is not dictated by
some "divine design", it is dictated only by what the higher level
drivers need. And we only find out what the higher level drivers need if
you at least prototype the patches for them, and I at least skim-review
those patches.

It's regrettable that it requires so much wasted work, but I don't know
how to do it better, save for knowing the future. :)

I trust at this point the base has stabilized enough, and if we need
further changes, we can solve that with small incremental patches.

>> (6) This is obviously the most complex driver. I've only snuck a peek. I
>> have one comment at this point: *if* we have to do random lookups, then
>> lists aren't optimal.
>>
>> Please consider using the following library class:
>>
>>   MdePkg/Include/Library/OrderedCollectionLib.h
>>
>> It is already resolved in the OVMF DSC files to the following instance:
>>
>>   MdePkg/Library/BaseOrderedCollectionRedBlackTreeLib/
>>
>> Examples for use:
>> - various modules in OvmfPkg,
>> - AppPkg/Applications/OrderedCollectionTest
> 
> Okay, I will look into it - thanks for the tip. I wanted to actually use
> the Simple array (because we know the maximum number of buffer we can
> queue) but was  not sure about your preferences hence I went to with
> list. If you are okay then I can use array's too.

My concern isn't about memory consumption (if our queue is full (64
elements, IIRC), we just reject the transmit request and let the caller
deal with the error).

Instead, I'd like to avoid an O(n) lookup time (where n is the number of
pending requests), for whatever lookup is necessary now (I haven't
checked the specifics yet). BaseOrderedCollectionRedBlackTreeLib would
give you O(log n) lookup time.

Without having looked at the details, I think an array would have O(n)
lookup time as well (linear search, right?).

(Let's not try to do O(1) with a hash function -- that's quite out of
scope for now, and with a hash table, one has to think about collisions
too, etc. When I contributed BaseOrderedCollectionRedBlackTreeLib, I
wanted it to become *the* go-to associative data structure for edk2 --
at least in such DXE and UEFI driver contexts where frequent pool
allocations and releases are not a problem. Plus, RB trees double as
fast priority queues, can be used for one-off sorting, have worst-case
(not just on-average) O(log n) time guarantees... I think they're
versatile.)
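
The lookup-cost point can be illustrated with plain C: a linear scan
over the pending requests is O(n), while a search over ordered keys is
O(log n) - which is what the RB tree gives, while also staying O(log n)
under inserts and deletes (unlike a sorted array):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustration only: finding a pending transmit request by its device
 * address. bsearch() over sorted keys is O(log n), matching the lookup
 * bound of BaseOrderedCollectionRedBlackTreeLib; a list or unsorted
 * array forces an O(n) scan instead. */

static int CmpU64(const void *A, const void *B) {
  uint64_t X = *(const uint64_t *)A;
  uint64_t Y = *(const uint64_t *)B;
  return (X > Y) - (X < Y);
}

/* Returns 1 if Key is among the pending request addresses.
 * Keys must be sorted in ascending order. */
static int FindPending(const uint64_t *Keys, size_t Count, uint64_t Key) {
  return bsearch(&Key, Keys, Count, sizeof *Keys, CmpU64) != NULL;
}
```

The RB tree keeps this bound while requests are continuously added and
completed, which a sorted array would only achieve with O(n) shifting
per insert or delete.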

Cheers
Laszlo
Re: [edk2] [PATCH v3 00/21] OvmfPkg/Virtio: introduce IOMMU-like member functions
Posted by Brijesh Singh 7 years, 4 months ago

On 8/23/17 8:22 PM, Laszlo Ersek wrote:
>> Okay, I will look into it - thanks for the tip. I wanted to actually use
>> the Simple array (because we know the maximum number of buffer we can
>> queue) but was  not sure about your preferences hence I went to with
>> list. If you are okay then I can use array's too.
> My concern isn't about memory consumption (if our queue is full (64
> elements, IIRC), we just reject the transmit request and let the caller
> deal with the error).
>
> Instead, I'd like to avoid an O(n) lookup time (where n is the number of
> pending requests), for whatever lookup is necessary now (I haven't
> checked the specifics yet). BaseOrderedCollectionRedBlackTreeLib would
> give you O(log n) lookup time.
>
> Without having looked at the details, I think an array would have O(n)
> lookup time as well (linear search, right?).
>
> (Let's not try to do O(1) with a hash function -- that's quite out of
> scope for now, and with a hash table, one has to think about collisions
> too, etc. When I contributed BaseOrderedCollectionRedBlackTreeLib, I
> wanted it to become *the* go-to associative data structure for edk2 --
> at least in such DXE and UEFI driver contexts where frequent pool
> allocations and releases are not a problem. Plus, RB trees double as
> fast priority queues, can be used for one-off sorting, have worst-case
> (not just on-average) O(log n) time guarantees... I think they're
> versatile.)

To my understanding, the network packets enqueued in the transmit ring
will be completed in the same order, hence I was thinking of a queue
data structure and was not concerned about the search time. I was mostly
concerned about memory consumption, but if my understanding is wrong
then I agree that we need to pick the right data structure. Since the
red-black tree is already implemented, I have no issue using it. Thanks.

-Brijesh
 

Re: [edk2] [PATCH v3 00/21] OvmfPkg/Virtio: introduce IOMMU-like member functions
Posted by Laszlo Ersek 7 years, 4 months ago
On 08/24/17 04:06, Brijesh Singh wrote:
> 
> 
> On 8/23/17 8:22 PM, Laszlo Ersek wrote:
>>> Okay, I will look into it - thanks for the tip. I wanted to actually use
>>> the Simple array (because we know the maximum number of buffer we can
>>> queue) but was  not sure about your preferences hence I went to with
>>> list. If you are okay then I can use array's too.
>> My concern isn't about memory consumption (if our queue is full (64
>> elements, IIRC), we just reject the transmit request and let the caller
>> deal with the error).
>>
>> Instead, I'd like to avoid an O(n) lookup time (where n is the number of
>> pending requests), for whatever lookup is necessary now (I haven't
>> checked the specifics yet). BaseOrderedCollectionRedBlackTreeLib would
>> give you O(log n) lookup time.
>>
>> Without having looked at the details, I think an array would have O(n)
>> lookup time as well (linear search, right?).
>>
>> (Let's not try to do O(1) with a hash function -- that's quite out of
>> scope for now, and with a hash table, one has to think about collisions
>> too, etc. When I contributed BaseOrderedCollectionRedBlackTreeLib, I
>> wanted it to become *the* go-to associative data structure for edk2 --
>> at least in such DXE and UEFI driver contexts where frequent pool
>> allocations and releases are not a problem. Plus, RB trees double as
>> fast priority queues, can be used for one-off sorting, have worst-case
>> (not just on-average) O(log n) time guarantees... I think they're
>> versatile.)
> 
> To my understanding the network packet enqueued in the transmit ring
> will be completed in same order,

The current code does not make this assumption -- please see the very
last paragraph in "OvmfPkg/VirtioNetDxe/TechNotes.txt" --, and it would
be good to keep it that way.

> hence I was thinking about the queue
> data structure and was not concern about the search time. I was mostly
> concerned about the memory consumption but if my understanding is wrong
> then I agree that we need to pick right data structure. Since the
> RedBlack tree is already implemented hence I have no issue using it. thanks

OK, thanks!
Laszlo
Re: [edk2] [PATCH v3 00/21] OvmfPkg/Virtio: introduce IOMMU-like member functions
Posted by Laszlo Ersek 7 years, 4 months ago
On 08/24/17 02:26, Laszlo Ersek wrote:
> Hi Brijesh,
>
> so here's what I'd like to do with v3:
>
> On 08/23/17 14:22, Brijesh Singh wrote:
>> Brijesh Singh (23):
>>   OvmfPkg: introduce IOMMU-like member functions to
>>     VIRTIO_DEVICE_PROTOCOL
>>   OvmfPkg/Virtio10Dxe: implement IOMMU-like member functions
>>   OvmfPkg/VirtioPciDeviceDxe: implement IOMMU-like member functions
>>   OvmfPkg/VirtioMmioDeviceLib: implement IOMMU-like member functions
>>   OvmfPkg/VirtioLib: add VirtioMapAllBytesInSharedBuffer() helper
>>     function
>>   OvmfPkg/VirtioLib: take VirtIo instance in
>>     VirtioRingInit/VirtioRingUninit
>>   OvmfPkg/Virtio: take RingBaseShift in SetQueueAddress()
>>   OvmfPkg/Virtio10Dxe: add the RingBaseShift offset
>>   OvmfPkg/VirtioLib: add function to map VRING
>>   OvmfPkg/VirtioLib: alloc VRING buffer with AllocateSharedPages()
>>   OvmfPkg/VirtioLib: change the parameter of VirtioAppendDesc() to
>>     UINT64
>
> (1) I'll take the first 11 patches (which work on the transports and
> VirtioLib), fix up the trivial stuff I've found in the v3 review, and
> add my R-b tags.
>
>
>>   OvmfPkg/VirtioRngDxe: map host address to device address
>
> (2) I'll do the same for the VirtioRngDxe driver.
>
>
> (3) I'll test this initial sequence in various scenarios. I think that
> the protocol / transport / VirtioLib changes are good at this point,
> and the simplest driver (VirtioRngDxe) demonstrates how to put those
> new features to use. It also enables regression testing.
>
> Importantly, I also plan to regression-test the remaining (not yet
> converted) drivers at this point. Those drivers are affected only by
> the "alloc VRING buffer with AllocateSharedPages" patch, as follows:
>
> - On legacy virtio-pci and virtio-mmio transports, the change is a
> no-op, because the AllocateSharedPages() and FreeSharedPages() VirtIo
> members are backed by MemoryAllocationLib's AllocatePages() /
> FreePages(). So the behavior of VirtioRingInit() / VirtioRingUninit()
> remains identical.
>
> - On the virtio-1.0 transport, the direct AllocatePages() call in
> VirtioRingInit() will be replaced with VirtIo.AllocateSharedPages ->
> PciIo.AllocateBuffer -> PciRootBridgeIo.AllocateBuffer.
>
> - The last function will either branch to gBS->AllocatePages -- if
> there's no IoMmu protocol, i.e. no SEV -- which is identical to
> current behavior, or branch to IoMmu.AllocateBuffer.
>
> - In IoMmuAllocateBuffer(), we'll allocate StashBuffer on the side
> (which is no problem), and the actual allocation (for HostAddress)
> will be done with gBS->AllocatePages().
>
> The end result is that at this point, the unconverted drivers won't
> yet work on SEV, but they will continue working if SEV is absent. The
> only difference is (dependent on transport) longer call chains to
> memory allocation and freeing, and larger memory footprint. But, no
> regressions in functionality.
>
> If I'm satisfied with the above test results, I'll add my
> Regression-tested-by tags as well, and push the first 12 patches.

For patches v3 1-12:

Regression-tested-by: Laszlo Ersek <lersek@redhat.com>

I've pushed those patches now; commit range c69071bd7e21..0a568ccbcbd1.

The testing took incredibly long. For AARCH64 testing, I used my Mustang
(KVM). For IA32 and X64, I used my laptop (also KVM). This was all
without SEV (hence "regression testing"). Here's the matrix and some
remarks:

                            virtio-mmio  legacy PCI        modern PCI
                            -----------  ----------------- --------------------------
  device  test scenario     AARCH64      IA32     X64      IA32     X64      AARCH64
  ------  ----------------  -----------  -------- -------- -------- -------- --------
  rng     shell RECONNECT   PASS         PASS     PASS     PASS     PASS     PASS

  rng     RngTest.efi       PASS         PASS     PASS     PASS     PASS     PASS

  rng     ExitBootServices  PASS         PASS     PASS     PASS     PASS     PASS

  blk     shell RECONNECT   PASS         PASS     PASS     PASS     PASS     PASS

  blk     shell LS/TYPE     PASS         PASS     PASS     PASS     PASS     PASS

  blk     ExitBootServices  PASS         PASS     PASS     PASS     PASS     PASS

  scsi    shell RECONNECT   PASS         PASS     PASS     PASS     PASS     PASS

  scsi    OS boot           PASS         PASS     PASS     PASS     PASS     PASS

  scsi    ExitBootServices  PASS         PASS     PASS     PASS     PASS     PASS

  net     shell RECONNECT   PASS         PASS     PASS     PASS     PASS     PASS
          (parent, child)

  net     DHCP +            PASS[06]     SKIP[08] SKIP[10] SKIP[08] SKIP[10] PASS[03]
          DataSink.efi

  net     DHCP +            PASS[05]     SKIP[07] PASS[11] SKIP[07] PASS[09] PASS[04]
          DataSource.efi

  net     DHCP + PXE boot   PASS         PASS     PASS     PASS     PASS     PASS

  net     ExitBootServices  PASS         PASS     PASS     PASS     PASS     PASS

  gpu     shell RECONNECT   N/A[01]      N/A[01]  N/A[01]  N/A[02]  N/A[02]  PASS
          (parent, child)

  gpu     shell MODE        N/A[01]      N/A[01]  N/A[01]  N/A[02]  N/A[02]  PASS

  gpu     ExitBootServices  N/A[01]      N/A[01]  N/A[01]  N/A[02]  N/A[02]  PASS

Remarks:

[01] virtio-gpu is modern-only

[02] Libvirt maps the "virtio" video model to "virtio-vga" on IA32/X64,
     which is bound by QemuVideoDxe, and to "virtio-gpu-pci" on AARCH64,
     which is bound by VirtioGpuDxe.

[03] transferred 200 MB, average speed 1.3-1.6 MB/s

[04] transferred 20 MB, average speed 280 KB/s

[05] transferred 20 MB, average speed 96 KB/s

[06] transferred 80 MB, average speed 374 KB/s

[07] 295 KB were transferred, but when DataSource.efi wanted to print
     the first stats line, it crashed with a divide error:

     !!!! IA32 Exception Type - 00(#DE - Divide Error)  CPU Apic ID - 00000000 !!!!
     EIP  - 7CD2077A, CS  - 00000010, EFLAGS - 00010246
     EAX  - 00000000, ECX - 00000000, EDX - 00000000, EBX - 00000000
     ESP  - 7EEBE2EC, EBP - 7EEBE328, ESI - 000004D2, EDI - 00000000
     DS   - 00000008, ES  - 00000008, FS  - 00000008, GS  - 00000008, SS - 00000008
     CR0  - 00000033, CR2 - 00000000, CR3 - 00000000, CR4 - 00000640
     DR0  - 00000000, DR1 - 00000000, DR2 - 00000000, DR3 - 00000000
     DR6  - FFFF0FF0, DR7 - 00000400
     GDTR - 7EE07A90 00000047, IDTR - 7DDB7010 000007FF
     LDTR - 00000000, TR - 00000000
     FXSAVE_STATE - 7EEBE030
     !!!! Find image
     Build/AppPkg/NOOPT_GCC48/IA32/AppPkg/Applications/Sockets/DataSource/DataSource/DEBUG/DataSource.dll
     (ImageBase=000000007CCF3000, EntryPoint=000000007CCF3220) !!!!

     0002d770 <InternalMathDivRemU64x32>:
        2d770:       8b 4c 24 0c             mov    0xc(%esp),%ecx
        2d774:       8b 44 24 08             mov    0x8(%esp),%eax
        2d778:       31 d2                   xor    %edx,%edx
     -> 2d77a:       f7 f1                   div    %ecx
        2d77c:       50                      push   %eax
        2d77d:       8b 44 24 08             mov    0x8(%esp),%eax
        2d781:       f7 f1                   div    %ecx
        2d783:       8b 4c 24 14             mov    0x14(%esp),%ecx
        2d787:       e3 02                   jecxz  2d78b <InternalMathDivRemU64x32.0>
        2d789:       89 11                   mov    %edx,(%ecx)

     The same symptom reproduces without applying Brijesh's patches.

[08] after the connection is established, DataSink.efi crashes

     !!!! IA32 Exception Type - 00(#DE - Divide Error)  CPU Apic ID - 00000000 !!!!
     EIP  - 7CD211CA, CS  - 00000010, EFLAGS - 00010246
     EAX  - 00000000, ECX - 00000000, EDX - 00000000, EBX - 00000000
     ESP  - 7EEBE23C, EBP - 7EEBE278, ESI - 00000002, EDI - 00000000
     DS   - 00000008, ES  - 00000008, FS  - 00000008, GS  - 00000008, SS - 00000008
     CR0  - 00000033, CR2 - 00000000, CR3 - 00000000, CR4 - 00000640
     DR0  - 00000000, DR1 - 00000000, DR2 - 00000000, DR3 - 00000000
     DR6  - FFFF0FF0, DR7 - 00000400
     GDTR - 7EE07A90 00000047, IDTR - 7DDB7010 000007FF
     LDTR - 00000000, TR - 00000000
     FXSAVE_STATE - 7EEBDF80
     !!!! Find image
     Build/AppPkg/NOOPT_GCC48/IA32/AppPkg/Applications/Sockets/DataSink/DataSink/DEBUG/DataSink.dll
     (ImageBase=000000007CCFA000, EntryPoint=000000007CCFA220) !!!!

     000271c0 <InternalMathDivRemU64x32>:
        271c0:       8b 4c 24 0c             mov    0xc(%esp),%ecx
        271c4:       8b 44 24 08             mov    0x8(%esp),%eax
        271c8:       31 d2                   xor    %edx,%edx
     -> 271ca:       f7 f1                   div    %ecx
        271cc:       50                      push   %eax
        271cd:       8b 44 24 08             mov    0x8(%esp),%eax
        271d1:       f7 f1                   div    %ecx
        271d3:       8b 4c 24 14             mov    0x14(%esp),%ecx
        271d7:       e3 02                   jecxz  271db <InternalMathDivRemU64x32.0>
        271d9:       89 11                   mov    %edx,(%ecx)

     The same symptom reproduces without applying Brijesh's patches.

[09] transferred 30 MB, average speed 282 KB/s

[10] DataSink.efi hangs. The connection is established (according to
     "netstat"), but no data is transferred. The OVMF log contains
     repeated messages

     TcpInput: sequence acceptance test failed for segment of TCB 7CF52998
     Tcp4Input: Discard a packet

     The same symptom reproduces without applying Brijesh's patches.

[11] transferred 10.2 MB, average speed 282 KB/s


The DataSource.efi and DataSink.efi applications seem quite brittle to
me, so I think in the future I might switch to different tools for
testing virtio-net with TCP. HTTP boot looks like a good candidate
<https://github.com/tianocore/tianocore.github.io/wiki/HTTP-Boot>;
perhaps I'll try it out.

Brijesh, can you please proceed with step (7) below?

Thank you,
Laszlo

>
> This will provide a foundation for further driver work (incl. my
> VirtioGpuDxe work).
>
>
>>   OvmfPkg/VirtioBlkDxe: map host address to device address
>>   OvmfPkg/VirtioScsiDxe: map host address to device address
>
> (4) I've looked at these patches briefly. They are possibly fine now,
> but they've grown way too big. I struggled with the verification of
> the VirtioRngDxe driver patch, and each of these two is more than
> twice as big.
>
> So please, split *each* of these two patches in two parts (a logical
> build-up):
> - the first part should map and unmap the vring (all relevant
>   contexts),
> - the second part should map the requests.
>
>
> (5) (I think this is my most important point), with the foundation in
> place, I suggest we switch to one series per driver. For each driver,
> I have to look at the virtio spec, and "page-in" how the requests are
> structured, what they do etc. It's not a mechanical process. All that
> virtio stuff is easier to re-digest if we proceed device by device.
>
> Should we need later tweaks for the foundation, then at this point I
> prefer small incremental patches for that.
>
>
>>   OvmfPkg/VirtioNetDxe: alloc Tx and Rx rings using AllocateSharedPage()
>>   OvmfPkg/VirtioNetDxe: alloc RxBuf using AllocateSharedPages()
>>   OvmfPkg/VirtioNetDxe: dynamically alloc transmit header
>>   OvmfPkg/VirtioNetDxe: map transmit buffer host address to device
>>     address
>
> (6) This is obviously the most complex driver. I've only snuck a peek.
> I have one comment at this point: *if* we have to do random lookups,
> then lists aren't optimal.
>
> Please consider using the following library class:
>
>   MdePkg/Include/Library/OrderedCollectionLib.h
>
> It is already resolved in the OVMF DSC files to the following
> instance:
>
>   MdePkg/Library/BaseOrderedCollectionRedBlackTreeLib/
>
> Examples for use:
> - various modules in OvmfPkg,
> - AppPkg/Applications/OrderedCollectionTest
>
>
>>   OvmfPkg/Virtio10: define VIRITO_F_IOMMU_PLATFORM feature bit
>>   OvmfPkg/VirtioRngDxe: negotiate VIRITO_F_IOMMU_PLATFORM
>
> (7) I would have liked to include these two patches in my "initial
> push", but minimally the second patch needs fixes from you.
>
> After I'm done with point (3), please repost these patches *only*.
>
>
>>   OvmfPkg/VirtioBlkDxe: negotiate VIRITO_F_IOMMU_PLATFORM
>>   OvmfPkg/VirtioScsiDxe: negotiate VIRITO_F_IOMMU_PLATFORM
>>   OvmfPkg/VirtioNetDxe: negotiate VIRITO_F_IOMMU_PLATFORM
>
> (8) After these patches are fixed up, I suggest that you please post
> each one of them at the end of the matching driver series.
>
>
>> TODO:
>>  * add VirtioGpuDxe (I will take Laszlo's offer that he can help with
>>    this driver)
>
> OK, will do, thank you!
>
> In this work I'll also seek to follow the series layout proposed
> above.
>
>>  * I did minimal testing on aarch64 -- I was running into some Linux
>>    boot-up issues with the Fedora aarch64 ISO. The issue does not
>>    appear to be related to the virtio changes. If anyone can help with
>>    additional testing on their aarch64 images, that would be great!
>>    Thanks
>
> I'll test on aarch64 too.
>
> Thanks!
> Laszlo
