From: David Woodhouse
To: Peter Maydell, qemu-devel@nongnu.org
Cc: Paolo Bonzini, Paul Durrant, Joao Martins, Ankur Arora,
    Philippe Mathieu-Daudé, Thomas Huth, Alex Bennée, Juan Quintela,
    "Dr. David Alan Gilbert", Claudio Fontana, Julien Grall,
    "Michael S. Tsirkin", Marcel Apfelbaum, armbru@redhat.com
Subject: [PATCH v10 32/59] hw/xen: Implement EVTCHNOP_bind_virq
Date: Wed, 1 Feb 2023 14:31:21 +0000
Message-Id: <20230201143148.1744093-33-dwmw2@infradead.org>
In-Reply-To: <20230201143148.1744093-1-dwmw2@infradead.org>
References: <20230201143148.1744093-1-dwmw2@infradead.org>

Add the array of virq ports to each vCPU so that we can deliver timers,
debug ports, etc. Global virqs are allocated against vCPU 0 initially,
but can be migrated to other vCPUs (when we implement that).

The kernel needs to know about VIRQ_TIMER in order to accelerate timers,
so tell it via KVM_XEN_VCPU_ATTR_TYPE_TIMER. Also save/restore the value
of the singleshot timer across migration, as the kernel will handle the
hypercalls automatically now.

Signed-off-by: David Woodhouse
Reviewed-by: Paul Durrant
---
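Note (for reference): the guest-visible argument structure for
EVTCHNOP_bind_virq, as declared in Xen's public event_channel.h. This is
a sketch; evtchn_port_t is uint32_t, which gives the 12-byte size that
the qemu_build_assert() in the hunk below checks:

    struct evtchn_bind_virq {
        uint32_t virq;       /* IN: the VIRQ to bind, e.g. VIRQ_TIMER */
        uint32_t vcpu;       /* IN: target vCPU; must be 0 for global VIRQs */
        evtchn_port_t port;  /* OUT: the allocated event channel port */
    };

A guest binds its per-vCPU timer VIRQ roughly as follows (a sketch in
Linux-guest style; HYPERVISOR_event_channel_op() is the usual hypercall
wrapper, and this_cpu is a hypothetical variable holding the calling
vCPU's id):

    struct evtchn_bind_virq bind = {
        .virq = VIRQ_TIMER,
        .vcpu = this_cpu,  /* hypothetical: the binding vCPU */
    };
    if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_virq, &bind) == 0) {
        /* bind.port is now a live event channel for this VIRQ */
    }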

 hw/i386/kvm/xen_evtchn.c  | 85 ++++++++++++++++++++++++++++++++++++
 hw/i386/kvm/xen_evtchn.h  |  2 +
 include/sysemu/kvm_xen.h  |  1 +
 target/i386/cpu.h         |  4 ++
 target/i386/kvm/xen-emu.c | 91 +++++++++++++++++++++++++++++++++++++++
 target/i386/machine.c     |  2 +
 6 files changed, 185 insertions(+)

diff --git a/hw/i386/kvm/xen_evtchn.c b/hw/i386/kvm/xen_evtchn.c
index deea7de027..da2f5711dd 100644
--- a/hw/i386/kvm/xen_evtchn.c
+++ b/hw/i386/kvm/xen_evtchn.c
@@ -244,6 +244,11 @@ static bool valid_port(evtchn_port_t port)
     }
 }
 
+static bool valid_vcpu(uint32_t vcpu)
+{
+    return !!qemu_get_cpu(vcpu);
+}
+
 int xen_evtchn_status_op(struct evtchn_status *status)
 {
     XenEvtchnState *s = xen_evtchn_singleton;
@@ -494,6 +499,43 @@ static void free_port(XenEvtchnState *s, evtchn_port_t port)
     clear_port_pending(s, port);
 }
 
+static int allocate_port(XenEvtchnState *s, uint32_t vcpu, uint16_t type,
+                         uint16_t val, evtchn_port_t *port)
+{
+    evtchn_port_t p = 1;
+
+    for (p = 1; valid_port(p); p++) {
+        if (s->port_table[p].type == EVTCHNSTAT_closed) {
+            s->port_table[p].vcpu = vcpu;
+            s->port_table[p].type = type;
+            s->port_table[p].type_val = val;
+
+            *port = p;
+
+            if (s->nr_ports < p + 1) {
+                s->nr_ports = p + 1;
+            }
+
+            return 0;
+        }
+    }
+    return -ENOSPC;
+}
+
+static bool virq_is_global(uint32_t virq)
+{
+    switch (virq) {
+    case VIRQ_TIMER:
+    case VIRQ_DEBUG:
+    case VIRQ_XENOPROF:
+    case VIRQ_XENPMU:
+        return false;
+
+    default:
+        return true;
+    }
+}
+
 static int close_port(XenEvtchnState *s, evtchn_port_t port)
 {
     XenEvtchnPort *p = &s->port_table[port];
@@ -502,6 +544,11 @@ static int close_port(XenEvtchnState *s, evtchn_port_t port)
     case EVTCHNSTAT_closed:
         return -ENOENT;
 
+    case EVTCHNSTAT_virq:
+        kvm_xen_set_vcpu_virq(virq_is_global(p->type_val) ? 0 : p->vcpu,
+                              p->type_val, 0);
+        break;
+
     default:
         break;
     }
@@ -553,3 +600,41 @@ int xen_evtchn_unmask_op(struct evtchn_unmask *unmask)
 
     return ret;
 }
+
+int xen_evtchn_bind_virq_op(struct evtchn_bind_virq *virq)
+{
+    XenEvtchnState *s = xen_evtchn_singleton;
+    int ret;
+
+    if (!s) {
+        return -ENOTSUP;
+    }
+
+    if (virq->virq >= NR_VIRQS) {
+        return -EINVAL;
+    }
+
+    /* Global VIRQ must be allocated on vCPU0 first */
+    if (virq_is_global(virq->virq) && virq->vcpu != 0) {
+        return -EINVAL;
+    }
+
+    if (!valid_vcpu(virq->vcpu)) {
+        return -ENOENT;
+    }
+
+    qemu_mutex_lock(&s->port_lock);
+
+    ret = allocate_port(s, virq->vcpu, EVTCHNSTAT_virq, virq->virq,
+                        &virq->port);
+    if (!ret) {
+        ret = kvm_xen_set_vcpu_virq(virq->vcpu, virq->virq, virq->port);
+        if (ret) {
+            free_port(s, virq->port);
+        }
+    }
+
+    qemu_mutex_unlock(&s->port_lock);
+
+    return ret;
+}
diff --git a/hw/i386/kvm/xen_evtchn.h b/hw/i386/kvm/xen_evtchn.h
index 69c6b0d743..0ea13dda3a 100644
--- a/hw/i386/kvm/xen_evtchn.h
+++ b/hw/i386/kvm/xen_evtchn.h
@@ -18,8 +18,10 @@ int xen_evtchn_set_callback_param(uint64_t param);
 struct evtchn_status;
 struct evtchn_close;
 struct evtchn_unmask;
+struct evtchn_bind_virq;
 int xen_evtchn_status_op(struct evtchn_status *status);
 int xen_evtchn_close_op(struct evtchn_close *close);
 int xen_evtchn_unmask_op(struct evtchn_unmask *unmask);
+int xen_evtchn_bind_virq_op(struct evtchn_bind_virq *virq);
 
 #endif /* QEMU_XEN_EVTCHN_H */
diff --git a/include/sysemu/kvm_xen.h b/include/sysemu/kvm_xen.h
index 0c0efbe699..297630cd87 100644
--- a/include/sysemu/kvm_xen.h
+++ b/include/sysemu/kvm_xen.h
@@ -23,6 +23,7 @@ int kvm_xen_soft_reset(void);
 uint32_t kvm_xen_get_caps(void);
 void *kvm_xen_get_vcpu_info_hva(uint32_t vcpu_id);
 void kvm_xen_inject_vcpu_callback_vector(uint32_t vcpu_id, int type);
+int kvm_xen_set_vcpu_virq(uint32_t vcpu_id, uint16_t virq, uint16_t port);
 
 #define kvm_xen_has_cap(cap) (!!(kvm_xen_get_caps() & \
                                  KVM_XEN_HVM_CONFIG_ ## cap))
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index c9b12e7476..dba8732fc6 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -27,6 +27,8 @@
 #include "qapi/qapi-types-common.h"
 #include "qemu/cpu-float.h"
 
+#define XEN_NR_VIRQS 24
+
 /* The x86 has a strong memory model with some store-after-load re-ordering */
 #define TCG_GUEST_DEFAULT_MO (TCG_MO_ALL & ~TCG_MO_ST_LD)
 
@@ -1795,6 +1797,8 @@ typedef struct CPUArchState {
     uint64_t xen_vcpu_time_info_gpa;
     uint64_t xen_vcpu_runstate_gpa;
     uint8_t xen_vcpu_callback_vector;
+    uint16_t xen_virq[XEN_NR_VIRQS];
+    uint64_t xen_singleshot_timer_ns;
 #endif
 #if defined(CONFIG_HVF)
     HVFX86LazyFlags hvf_lflags;
diff --git a/target/i386/kvm/xen-emu.c b/target/i386/kvm/xen-emu.c
index 418028b04f..0c4988ad63 100644
--- a/target/i386/kvm/xen-emu.c
+++ b/target/i386/kvm/xen-emu.c
@@ -352,6 +352,53 @@ void kvm_xen_inject_vcpu_callback_vector(uint32_t vcpu_id, int type)
     }
 }
 
+static int kvm_xen_set_vcpu_timer(CPUState *cs)
+{
+    X86CPU *cpu = X86_CPU(cs);
+    CPUX86State *env = &cpu->env;
+
+    struct kvm_xen_vcpu_attr va = {
+        .type = KVM_XEN_VCPU_ATTR_TYPE_TIMER,
+        .u.timer.port = env->xen_virq[VIRQ_TIMER],
+        .u.timer.priority = KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL,
+        .u.timer.expires_ns = env->xen_singleshot_timer_ns,
+    };
+
+    return kvm_vcpu_ioctl(cs, KVM_XEN_VCPU_SET_ATTR, &va);
+}
+
+static void do_set_vcpu_timer_virq(CPUState *cs, run_on_cpu_data data)
+{
+    kvm_xen_set_vcpu_timer(cs);
+}
+
+int kvm_xen_set_vcpu_virq(uint32_t vcpu_id, uint16_t virq, uint16_t port)
+{
+    CPUState *cs = qemu_get_cpu(vcpu_id);
+
+    if (!cs) {
+        return -ENOENT;
+    }
+
+    /* cpu.h doesn't include the actual Xen header. */
+    qemu_build_assert(NR_VIRQS == XEN_NR_VIRQS);
+
+    if (virq >= NR_VIRQS) {
+        return -EINVAL;
+    }
+
+    if (port && X86_CPU(cs)->env.xen_virq[virq]) {
+        return -EEXIST;
+    }
+
+    X86_CPU(cs)->env.xen_virq[virq] = port;
+    if (virq == VIRQ_TIMER && kvm_xen_has_cap(EVTCHN_SEND)) {
+        async_run_on_cpu(cs, do_set_vcpu_timer_virq,
+                         RUN_ON_CPU_HOST_INT(port));
+    }
+    return 0;
+}
+
 static void do_set_vcpu_time_info_gpa(CPUState *cs, run_on_cpu_data data)
 {
     X86CPU *cpu = X86_CPU(cs);
@@ -384,6 +431,8 @@ static void do_vcpu_soft_reset(CPUState *cs, run_on_cpu_data data)
     env->xen_vcpu_time_info_gpa = INVALID_GPA;
     env->xen_vcpu_runstate_gpa = INVALID_GPA;
     env->xen_vcpu_callback_vector = 0;
+    env->xen_singleshot_timer_ns = 0;
+    memset(env->xen_virq, 0, sizeof(env->xen_virq));
 
     set_vcpu_info(cs, INVALID_GPA);
     kvm_xen_set_vcpu_attr(cs, KVM_XEN_VCPU_ATTR_TYPE_VCPU_TIME_INFO,
@@ -392,6 +441,7 @@ static void do_vcpu_soft_reset(CPUState *cs, run_on_cpu_data data)
                           INVALID_GPA);
     if (kvm_xen_has_cap(EVTCHN_SEND)) {
         kvm_xen_set_vcpu_callback_vector(cs);
+        kvm_xen_set_vcpu_timer(cs);
     }
 
 }
@@ -826,6 +876,21 @@ static bool kvm_xen_hcall_evtchn_op(struct kvm_xen_exit *exit, X86CPU *cpu,
         err = xen_evtchn_unmask_op(&unmask);
         break;
     }
+    case EVTCHNOP_bind_virq: {
+        struct evtchn_bind_virq virq;
+
+        qemu_build_assert(sizeof(virq) == 12);
+        if (kvm_copy_from_gva(cs, arg, &virq, sizeof(virq))) {
+            err = -EFAULT;
+            break;
+        }
+
+        err = xen_evtchn_bind_virq_op(&virq);
+        if (!err && kvm_copy_to_gva(cs, arg, &virq, sizeof(virq))) {
+            err = -EFAULT;
+        }
+        break;
+    }
     default:
         return false;
     }
@@ -1057,6 +1122,12 @@ int kvm_put_xen_state(CPUState *cs)
         }
     }
 
+    if (env->xen_virq[VIRQ_TIMER]) {
+        ret = kvm_xen_set_vcpu_timer(cs);
+        if (ret < 0) {
+            return ret;
+        }
+    }
     return 0;
 }
 
@@ -1065,6 +1136,7 @@ int kvm_get_xen_state(CPUState *cs)
     X86CPU *cpu = X86_CPU(cs);
     CPUX86State *env = &cpu->env;
     uint64_t gpa;
+    int ret;
 
     /*
      * The kernel does not mark vcpu_info as dirty when it delivers interrupts
@@ -1086,5 +1158,24 @@ int kvm_get_xen_state(CPUState *cs)
         }
     }
 
+    if (!kvm_xen_has_cap(EVTCHN_SEND)) {
+        return 0;
+    }
+
+    /*
+     * If the kernel is accelerating timers, read out the current value of the
+     * singleshot timer deadline.
+     */
+    if (env->xen_virq[VIRQ_TIMER]) {
+        struct kvm_xen_vcpu_attr va = {
+            .type = KVM_XEN_VCPU_ATTR_TYPE_TIMER,
+        };
+        ret = kvm_vcpu_ioctl(cs, KVM_XEN_VCPU_GET_ATTR, &va);
+        if (ret < 0) {
+            return ret;
+        }
+        env->xen_singleshot_timer_ns = va.u.timer.expires_ns;
+    }
+
     return 0;
 }
diff --git a/target/i386/machine.c b/target/i386/machine.c
index a4874eda90..603a1077e3 100644
--- a/target/i386/machine.c
+++ b/target/i386/machine.c
@@ -1275,6 +1275,8 @@ static const VMStateDescription vmstate_xen_vcpu = {
         VMSTATE_UINT64(env.xen_vcpu_time_info_gpa, X86CPU),
         VMSTATE_UINT64(env.xen_vcpu_runstate_gpa, X86CPU),
         VMSTATE_UINT8(env.xen_vcpu_callback_vector, X86CPU),
+        VMSTATE_UINT16_ARRAY(env.xen_virq, X86CPU, XEN_NR_VIRQS),
+        VMSTATE_UINT64(env.xen_singleshot_timer_ns, X86CPU),
         VMSTATE_END_OF_LIST()
     }
 };
-- 
2.39.0
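
Note (for reference): kvm_xen_set_vcpu_timer() above drives the kernel's
timer acceleration through the KVM_XEN_VCPU_SET_ATTR/GET_ATTR ioctls. An
abridged sketch of the uAPI structure involved, from Linux's
<linux/kvm.h>; only the members this patch touches are shown, other
attribute payloads are elided:

    struct kvm_xen_vcpu_attr {
        __u16 type;    /* KVM_XEN_VCPU_ATTR_TYPE_TIMER here */
        __u16 pad[3];
        union {
            struct {
                __u32 port;        /* event channel port bound to VIRQ_TIMER */
                __u32 priority;    /* KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL */
                __u64 expires_ns;  /* pending singleshot deadline, 0 if none */
            } timer;
            /* ... other payloads elided ... */
        } u;
    };

Writing a zero port disarms the kernel timer, which is what close_port()
and the soft-reset path achieve via kvm_xen_set_vcpu_virq(..., 0), while
kvm_put_xen_state() re-arms it on the migration destination from the
saved xen_singleshot_timer_ns.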