[RFC v2 00/13] Dynamically switch to vhost shadow virtqueues at vdpa net migration

Eugenio Pérez posted 13 patches 1 year, 3 months ago
[RFC v2 00/13] Dynamically switch to vhost shadow virtqueues at vdpa net migration
Posted by Eugenio Pérez 1 year, 3 months ago
It's possible to migrate vdpa net devices if they are shadowed from the
start.  But always shadowing the dataplane effectively breaks its host
passthrough, so it's not convenient in vDPA scenarios.

This series enables dynamically switching to shadow mode only at
migration time.  This allows full data virtqueue passthrough whenever
qemu is not migrating.

Successfully tested with vdpa_sim_net (it needs some patches that I
will send soon) and with qemu's emulated device through vp_vdpa, with
some restrictions:
* No CVQ.
* VIRTIO_RING_F_STATE patches.
* Expose _F_SUSPEND, but ignore it and suspend on ring state fetch like
  DPDK.

Comments are welcome, especially on the patches marked RFC in the message.

v2:
- Use a migration listener instead of a memory listener to know when
  the migration starts.
- Add changes not picked up with the ASID patches, like enabling rings
  after DRIVER_OK.
- Rewind on the migration source, not on the destination.
- v1 at https://lists.gnu.org/archive/html/qemu-devel/2022-08/msg01664.html
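As a rough illustration of the migration-listener approach described above, here is a self-contained C sketch of the pattern: a device registers a callback that fires on migration state changes, so it can flip its data virtqueues to shadow mode when migration starts and back to passthrough when it ends. All names here are invented for the example; this is not QEMU's actual notifier API.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of a migration state notifier chain. */
typedef enum { MIG_NONE, MIG_SETUP, MIG_ACTIVE, MIG_COMPLETED } MigState;

typedef struct Notifier Notifier;
struct Notifier {
    void (*notify)(Notifier *n, MigState state);
    Notifier *next;
};

static Notifier *notifiers;

static void add_migration_notifier(Notifier *n)
{
    n->next = notifiers;
    notifiers = n;
}

static void migration_set_state(MigState state)
{
    for (Notifier *n = notifiers; n; n = n->next) {
        n->notify(n, state);
    }
}

/* Toy vdpa device: shadow the vqs on MIG_SETUP, restore passthrough
 * once migration is over. Notifier is the first member so the cast
 * below is valid. */
typedef struct {
    Notifier n;
    int shadow_vqs_enabled;
} VdpaDev;

static void vdpa_mig_notify(Notifier *n, MigState state)
{
    VdpaDev *dev = (VdpaDev *)n;
    if (state == MIG_SETUP) {
        dev->shadow_vqs_enabled = 1;   /* start shadowing data vqs */
    } else if (state == MIG_NONE || state == MIG_COMPLETED) {
        dev->shadow_vqs_enabled = 0;   /* back to host passthrough */
    }
}
```

Compared to v1's memory listener, hooking migration state directly avoids guessing the migration phase from mapping activity.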

Eugenio Pérez (13):
  vdpa: fix VHOST_BACKEND_F_IOTLB_ASID flag check
  vdpa net: move iova tree creation from init to start
  vdpa: copy cvq shadow_data from data vqs, not from x-svq
  vdpa: rewind at get_base, not set_base
  vdpa net: add migration blocker if cannot migrate cvq
  vhost: delay set_vring_ready after DRIVER_OK
  vdpa: delay set_vring_ready after DRIVER_OK
  vdpa: Negotiate _F_SUSPEND feature
  vdpa: add feature_log parameter to vhost_vdpa
  vdpa net: allow VHOST_F_LOG_ALL
  vdpa: add vdpa net migration state notifier
  vdpa: preemptive kick at enable
  vdpa: Conditionally expose _F_LOG in vhost_net devices

 include/hw/virtio/vhost-backend.h |   4 +
 include/hw/virtio/vhost-vdpa.h    |   1 +
 hw/net/vhost_net.c                |  25 ++-
 hw/virtio/vhost-vdpa.c            |  64 +++++---
 hw/virtio/vhost.c                 |   3 +
 net/vhost-vdpa.c                  | 247 +++++++++++++++++++++++++-----
 6 files changed, 278 insertions(+), 66 deletions(-)
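The "delay set_vring_ready after DRIVER_OK" patches in the list above enforce an ordering that a toy model can make concrete: the device status must carry DRIVER_OK before individual rings are enabled, since some vdpa parent drivers reject the opposite order. The struct and helpers below are hypothetical; only the status bit value mirrors virtio.

```c
#include <assert.h>

#define VIRTIO_CONFIG_S_DRIVER_OK 4   /* virtio status bit */

typedef struct {
    unsigned char status;
    int vring_ready[2];
} ToyVdpa;

/* Refuse to enable a ring before DRIVER_OK, modeling a parent
 * driver that only supports enabling rings after that point. */
static int toy_set_vring_ready(ToyVdpa *d, int idx)
{
    if (!(d->status & VIRTIO_CONFIG_S_DRIVER_OK)) {
        return -1;
    }
    d->vring_ready[idx] = 1;
    return 0;
}

static void toy_set_status(ToyVdpa *d, unsigned char status)
{
    d->status = status;
}
```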

-- 
2.31.1

Re: [RFC v2 00/13] Dynamically switch to vhost shadow virtqueues at vdpa net migration
Posted by Si-Wei Liu 1 year, 2 months ago

On 1/12/2023 9:24 AM, Eugenio Pérez wrote:
> It's possible to migrate vdpa net devices if they are shadowed from the
> start.  But always shadowing the dataplane effectively breaks its host
> passthrough, so it's not convenient in vDPA scenarios.
>
> This series enables dynamically switching to shadow mode only at
> migration time.  This allows full data virtqueue passthrough whenever
> qemu is not migrating.
>
> Successfully tested with vdpa_sim_net (it needs some patches that I
> will send soon) and with qemu's emulated device through vp_vdpa, with
> some restrictions:
> * No CVQ.
> * VIRTIO_RING_F_STATE patches.
What are these patches? (I'm not sure I follow VIRTIO_RING_F_STATE; is it
a new feature that other vdpa drivers would need for live migration?)

-Siwei

>
> * Expose _F_SUSPEND, but ignore it and suspend on ring state fetch like
>   DPDK.
>
> [...]

Re: [RFC v2 00/13] Dynamically switch to vhost shadow virtqueues at vdpa net migration
Posted by Eugenio Perez Martin 1 year, 2 months ago
On Thu, Feb 2, 2023 at 2:00 AM Si-Wei Liu <si-wei.liu@oracle.com> wrote:
>
> On 1/12/2023 9:24 AM, Eugenio Pérez wrote:
> > [...]
> > * VIRTIO_RING_F_STATE patches.
> What are these patches? (I'm not sure I follow VIRTIO_RING_F_STATE; is it
> a new feature that other vdpa drivers would need for live migration?)
>

Not really.

Since vp_vdpa wraps a virtio-net-pci device to give it vdpa
capabilities, it needs a virtio in-band method to set and fetch the
virtqueue state. Jason sent a proposal some time ago [1], and I
implemented it in qemu's virtio emulated device.

I can send those patches as an RFC, but I didn't worry about making
them pretty, nor do I think they should be merged at the moment. vdpa
parent drivers should follow the vdpa_sim changes instead.
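The in-band state idea can be sketched with a toy model (all names are
invented for this example, only loosely inspired by the proposal in the
linked thread): the state, essentially the last avail index, is fetched on
the source once the device is suspended, and restored on the destination
before the rings are started.

```c
#include <assert.h>

/* Illustrative virtqueue state exchanged across migration. */
typedef struct {
    unsigned short last_avail_idx;
} VqState;

typedef struct {
    int suspended;
    unsigned short avail_idx;
} ToyVq;

/* The state is only stable once the device is suspended, so refuse
 * to report it while the vq may still be processing buffers. */
static int toy_get_vq_state(const ToyVq *vq, VqState *out)
{
    if (!vq->suspended) {
        return -1;
    }
    out->last_avail_idx = vq->avail_idx;
    return 0;
}

/* On the destination, restore the state before starting the rings. */
static void toy_set_vq_state(ToyVq *vq, const VqState *s)
{
    vq->avail_idx = s->last_avail_idx;
}
```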

Thanks!

[1] https://lists.oasis-open.org/archives/virtio-comment/202103/msg00036.html

> -Siwei
>
> > [...]
Re: [RFC v2 00/13] Dynamically switch to vhost shadow virtqueues at vdpa net migration
Posted by Si-Wei Liu 1 year, 2 months ago

On 2/2/2023 3:27 AM, Eugenio Perez Martin wrote:
> On Thu, Feb 2, 2023 at 2:00 AM Si-Wei Liu <si-wei.liu@oracle.com> wrote:
>> On 1/12/2023 9:24 AM, Eugenio Pérez wrote:
>>> [...]
>>> * VIRTIO_RING_F_STATE patches.
>> What are these patches? (I'm not sure I follow VIRTIO_RING_F_STATE; is it
>> a new feature that other vdpa drivers would need for live migration?)
>>
> Not really.
>
> Since vp_vdpa wraps a virtio-net-pci device to give it vdpa
> capabilities, it needs a virtio in-band method to set and fetch the
> virtqueue state. Jason sent a proposal some time ago [1], and I
> implemented it in qemu's virtio emulated device.
>
> I can send those patches as an RFC, but I didn't worry about making
> them pretty, nor do I think they should be merged at the moment. vdpa
> parent drivers should follow the vdpa_sim changes instead.
Got it. No need to send the RFC for now; I think it's limited to
virtio-backed vdpa providers only. Thanks for the clarifications.

-Siwei

>
> Thanks!
>
> [1] https://lists.oasis-open.org/archives/virtio-comment/202103/msg00036.html
>
>> -Siwei
>>
>>> [...]