From: David Woodhouse
To: Peter Maydell, qemu-devel@nongnu.org
Cc: Paolo Bonzini, Paul Durrant, Joao Martins, Ankur Arora, Philippe Mathieu-Daudé, Thomas Huth, Alex Bennée, Juan Quintela, "Dr. David Alan Gilbert", Claudio Fontana, Julien Grall, "Michael S. Tsirkin", Marcel Apfelbaum, armbru@redhat.com, Stefano Stabellini, vikram.garhwal@amd.com
Subject: [PATCH v15 52/60] hw/xen: Add basic ring handling to xenstore
Date: Wed, 1 Mar 2023 13:52:15 +0000
Message-Id: <20230301135223.988336-53-dwmw2@infradead.org>
In-Reply-To: <20230301135223.988336-1-dwmw2@infradead.org>
References: <20230301135223.988336-1-dwmw2@infradead.org>
From: David Woodhouse

Extract requests, return ENOSYS to all of them. This is enough to allow
older Linux guests to boot, as they need *something* back but it doesn't
matter much what. A full implementation of a single-tenant internal
XenStore copy-on-write tree with transactions and watches is waiting in
the wings to be sent in a subsequent round of patches along with hooking
up the actual PV disk back end in qemu, but this is enough to get guests
booting for now.
Signed-off-by: David Woodhouse
Reviewed-by: Paul Durrant
---
 hw/i386/kvm/xen_xenstore.c | 254 ++++++++++++++++++++++++++++++++++++-
 1 file changed, 251 insertions(+), 3 deletions(-)

diff --git a/hw/i386/kvm/xen_xenstore.c b/hw/i386/kvm/xen_xenstore.c
index e8abddae57..14193ef3f9 100644
--- a/hw/i386/kvm/xen_xenstore.c
+++ b/hw/i386/kvm/xen_xenstore.c
@@ -192,18 +192,266 @@ uint16_t xen_xenstore_get_port(void)
     return s->guest_port;
 }
 
+static bool req_pending(XenXenstoreState *s)
+{
+    struct xsd_sockmsg *req = (struct xsd_sockmsg *)s->req_data;
+
+    return s->req_offset == XENSTORE_HEADER_SIZE + req->len;
+}
+
+static void reset_req(XenXenstoreState *s)
+{
+    memset(s->req_data, 0, sizeof(s->req_data));
+    s->req_offset = 0;
+}
+
+static void reset_rsp(XenXenstoreState *s)
+{
+    s->rsp_pending = false;
+
+    memset(s->rsp_data, 0, sizeof(s->rsp_data));
+    s->rsp_offset = 0;
+}
+
+static void process_req(XenXenstoreState *s)
+{
+    struct xsd_sockmsg *req = (struct xsd_sockmsg *)s->req_data;
+    struct xsd_sockmsg *rsp = (struct xsd_sockmsg *)s->rsp_data;
+    const char enosys[] = "ENOSYS";
+
+    assert(req_pending(s));
+    assert(!s->rsp_pending);
+
+    rsp->type = XS_ERROR;
+    rsp->req_id = req->req_id;
+    rsp->tx_id = req->tx_id;
+    rsp->len = sizeof(enosys);
+    memcpy((void *)&rsp[1], enosys, sizeof(enosys));
+
+    s->rsp_pending = true;
+    reset_req(s);
+}
+
+static unsigned int copy_from_ring(XenXenstoreState *s, uint8_t *ptr,
+                                   unsigned int len)
+{
+    if (!len) {
+        return 0;
+    }
+
+    XENSTORE_RING_IDX prod = qatomic_read(&s->xs->req_prod);
+    XENSTORE_RING_IDX cons = qatomic_read(&s->xs->req_cons);
+    unsigned int copied = 0;
+
+    /* Ensure the ring contents don't cross the req_prod access. */
+    smp_rmb();
+
+    while (len) {
+        unsigned int avail = prod - cons;
+        unsigned int offset = MASK_XENSTORE_IDX(cons);
+        unsigned int copylen = avail;
+
+        if (avail > XENSTORE_RING_SIZE) {
+            error_report("XenStore ring handling error");
+            s->fatal_error = true;
+            break;
+        } else if (avail == 0) {
+            break;
+        }
+
+        if (copylen > len) {
+            copylen = len;
+        }
+        if (copylen > XENSTORE_RING_SIZE - offset) {
+            copylen = XENSTORE_RING_SIZE - offset;
+        }
+
+        memcpy(ptr, &s->xs->req[offset], copylen);
+        copied += copylen;
+
+        ptr += copylen;
+        len -= copylen;
+
+        cons += copylen;
+    }
+
+    /*
+     * Not sure this ever mattered except on Alpha, but this barrier
+     * is to ensure that the update to req_cons is globally visible
+     * only after we have consumed all the data from the ring, and we
+     * don't end up seeing data written to the ring *after* the other
+     * end sees the update and writes more to the ring. Xen's own
+     * xenstored has the same barrier here (although with no comment
+     * at all, obviously, because it's Xen code).
+     */
+    smp_mb();
+
+    qatomic_set(&s->xs->req_cons, cons);
+
+    return copied;
+}
+
+static unsigned int copy_to_ring(XenXenstoreState *s, uint8_t *ptr,
+                                 unsigned int len)
+{
+    if (!len) {
+        return 0;
+    }
+
+    XENSTORE_RING_IDX cons = qatomic_read(&s->xs->rsp_cons);
+    XENSTORE_RING_IDX prod = qatomic_read(&s->xs->rsp_prod);
+    unsigned int copied = 0;
+
+    /*
+     * This matches the barrier in copy_from_ring() (or the guest's
+     * equivalent) between writing the data to the ring and updating
+     * rsp_prod. It protects against the pathological case (which
+     * again I think never happened except on Alpha) where our
+     * subsequent writes to the ring could *cross* the read of
+     * rsp_cons and the guest could see the new data when it was
+     * intending to read the old.
+     */
+    smp_mb();
+
+    while (len) {
+        unsigned int avail = cons + XENSTORE_RING_SIZE - prod;
+        unsigned int offset = MASK_XENSTORE_IDX(prod);
+        unsigned int copylen = len;
+
+        if (avail > XENSTORE_RING_SIZE) {
+            error_report("XenStore ring handling error");
+            s->fatal_error = true;
+            break;
+        } else if (avail == 0) {
+            break;
+        }
+
+        if (copylen > avail) {
+            copylen = avail;
+        }
+        if (copylen > XENSTORE_RING_SIZE - offset) {
+            copylen = XENSTORE_RING_SIZE - offset;
+        }
+
+        memcpy(&s->xs->rsp[offset], ptr, copylen);
+        copied += copylen;
+
+        ptr += copylen;
+        len -= copylen;
+
+        prod += copylen;
+    }
+
+    /* Ensure the ring contents are seen before rsp_prod update. */
+    smp_wmb();
+
+    qatomic_set(&s->xs->rsp_prod, prod);
+
+    return copied;
+}
+
+static unsigned int get_req(XenXenstoreState *s)
+{
+    unsigned int copied = 0;
+
+    if (s->fatal_error) {
+        return 0;
+    }
+
+    assert(!req_pending(s));
+
+    if (s->req_offset < XENSTORE_HEADER_SIZE) {
+        void *ptr = s->req_data + s->req_offset;
+        unsigned int len = XENSTORE_HEADER_SIZE;
+        unsigned int copylen = copy_from_ring(s, ptr, len);
+
+        copied += copylen;
+        s->req_offset += copylen;
+    }
+
+    if (s->req_offset >= XENSTORE_HEADER_SIZE) {
+        struct xsd_sockmsg *req = (struct xsd_sockmsg *)s->req_data;
+
+        if (req->len > (uint32_t)XENSTORE_PAYLOAD_MAX) {
+            error_report("Illegal XenStore request");
+            s->fatal_error = true;
+            return 0;
+        }
+
+        void *ptr = s->req_data + s->req_offset;
+        unsigned int len = XENSTORE_HEADER_SIZE + req->len - s->req_offset;
+        unsigned int copylen = copy_from_ring(s, ptr, len);
+
+        copied += copylen;
+        s->req_offset += copylen;
+    }
+
+    return copied;
+}
+
+static unsigned int put_rsp(XenXenstoreState *s)
+{
+    if (s->fatal_error) {
+        return 0;
+    }
+
+    assert(s->rsp_pending);
+
+    struct xsd_sockmsg *rsp = (struct xsd_sockmsg *)s->rsp_data;
+    assert(s->rsp_offset < XENSTORE_HEADER_SIZE + rsp->len);
+
+    void *ptr = s->rsp_data + s->rsp_offset;
+    unsigned int len = XENSTORE_HEADER_SIZE + rsp->len - s->rsp_offset;
+    unsigned int copylen = copy_to_ring(s, ptr, len);
+
+    s->rsp_offset += copylen;
+
+    /* Have we produced a complete response? */
+    if (s->rsp_offset == XENSTORE_HEADER_SIZE + rsp->len) {
+        reset_rsp(s);
+    }
+
+    return copylen;
+}
+
 static void xen_xenstore_event(void *opaque)
 {
     XenXenstoreState *s = opaque;
     evtchn_port_t port = xen_be_evtchn_pending(s->eh);
+    unsigned int copied_to, copied_from;
+    bool processed, notify = false;
+
     if (port != s->be_port) {
         return;
     }
-    printf("xenstore event\n");
+    /* We know this is a no-op. */
     xen_be_evtchn_unmask(s->eh, port);
-    qemu_hexdump(stdout, "", s->xs, sizeof(*s->xs));
-    xen_be_evtchn_notify(s->eh, s->be_port);
+
+    do {
+        copied_to = copied_from = 0;
+        processed = false;
+
+        if (s->rsp_pending) {
+            copied_to = put_rsp(s);
+        }
+
+        if (!req_pending(s)) {
+            copied_from = get_req(s);
+        }
+
+        if (req_pending(s) && !s->rsp_pending) {
+            process_req(s);
+            processed = true;
+        }
+
+        notify |= copied_to || copied_from;
+    } while (copied_to || copied_from || processed);
+
+    if (notify) {
+        xen_be_evtchn_notify(s->eh, s->be_port);
+    }
 }
 
 static void alloc_guest_port(XenXenstoreState *s)
-- 
2.39.0