From: David Woodhouse
To: Peter Maydell, qemu-devel@nongnu.org
Cc: Paolo Bonzini, Paul Durrant, Joao Martins, Ankur Arora,
    Philippe Mathieu-Daudé, Thomas Huth, Alex Bennée, Juan Quintela,
    "Dr. David Alan Gilbert", Claudio Fontana, Julien Grall,
    "Michael S. Tsirkin", Marcel Apfelbaum, armbru@redhat.com
Subject: [PATCH v10 47/59] i386/xen: handle PV timer hypercalls
Date: Wed, 1 Feb 2023 14:31:36 +0000
Message-Id: <20230201143148.1744093-48-dwmw2@infradead.org>
In-Reply-To: <20230201143148.1744093-1-dwmw2@infradead.org>
References: <20230201143148.1744093-1-dwmw2@infradead.org>
X-Mailer: git-send-email 2.39.0
From: Joao Martins

Introduce support for the one-shot and periodic modes of the Xen PV
timer. Timer interrupts are delivered through a special virq event
channel, with deadlines set through either:

 1) the set_timer_op hypercall (one-shot only), or

 2) the vcpu_op hypercall, using the
    {set,stop}_{singleshot,periodic}_timer commands.
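For illustration, this is roughly how a guest exercises the two paths
above. This is a sketch only, not part of this patch: it assumes the
Xen public headers and the usual guest-side hypercall wrappers
HYPERVISOR_set_timer_op() and HYPERVISOR_vcpu_op(), as provided by
e.g. Linux or mini-os:

    #include <stdint.h>
    #include <xen/xen.h>
    #include <xen/vcpu.h>

    /* Path 1: one-shot deadline in absolute Xen system time (ns).
     * A timeout of 0 cancels any pending single-shot timer. */
    static void arm_oneshot_timer(uint64_t deadline_ns)
    {
        HYPERVISOR_set_timer_op(deadline_ns);
    }

    /* Path 2: the same single-shot timer via vcpu_op, plus the
     * periodic variant. */
    static void arm_vcpu_timers(int vcpu, uint64_t deadline_ns,
                                uint64_t period_ns)
    {
        struct vcpu_set_singleshot_timer sst = {
            .timeout_abs_ns = deadline_ns,
            /* Fail with -ETIME rather than fire if already past: */
            .flags = VCPU_SSHOTTMR_future,
        };
        struct vcpu_set_periodic_timer spt = {
            .period_ns = period_ns,   /* rejected if below 1ms */
        };

        HYPERVISOR_vcpu_op(VCPUOP_set_singleshot_timer, vcpu, &sst);
        HYPERVISOR_vcpu_op(VCPUOP_set_periodic_timer, vcpu, &spt);

        /* VCPUOP_stop_{singleshot,periodic}_timer cancel them. */
    }

Note that the single-shot operations are only accepted for the calling
vCPU itself, while the periodic timer may be armed for another vCPU.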
Signed-off-by: Joao Martins
Signed-off-by: David Woodhouse
---
 hw/i386/kvm/xen_evtchn.c  |  31 +++++
 hw/i386/kvm/xen_evtchn.h  |   2 +
 target/i386/cpu.h         |   5 +
 target/i386/kvm/xen-emu.c | 245 +++++++++++++++++++++++++++++++++++++-
 target/i386/machine.c     |   1 +
 5 files changed, 282 insertions(+), 2 deletions(-)

diff --git a/hw/i386/kvm/xen_evtchn.c b/hw/i386/kvm/xen_evtchn.c
index 5d5996641d..06572b3e10 100644
--- a/hw/i386/kvm/xen_evtchn.c
+++ b/hw/i386/kvm/xen_evtchn.c
@@ -1220,6 +1220,37 @@ int xen_evtchn_send_op(struct evtchn_send *send)
     return ret;
 }
 
+int xen_evtchn_set_port(uint16_t port)
+{
+    XenEvtchnState *s = xen_evtchn_singleton;
+    XenEvtchnPort *p;
+    int ret = -EINVAL;
+
+    if (!s) {
+        return -ENOTSUP;
+    }
+
+    if (!valid_port(port)) {
+        return -EINVAL;
+    }
+
+    qemu_mutex_lock(&s->port_lock);
+
+    p = &s->port_table[port];
+
+    /* QEMU has no business sending to anything but these */
+    if (p->type == EVTCHNSTAT_virq ||
+        (p->type == EVTCHNSTAT_interdomain &&
+         (p->type_val & PORT_INFO_TYPEVAL_REMOTE_QEMU))) {
+        set_port_pending(s, port);
+        ret = 0;
+    }
+
+    qemu_mutex_unlock(&s->port_lock);
+
+    return ret;
+}
+
 EvtchnInfoList *qmp_xen_event_list(Error **errp)
 {
     XenEvtchnState *s = xen_evtchn_singleton;
diff --git a/hw/i386/kvm/xen_evtchn.h b/hw/i386/kvm/xen_evtchn.h
index b03c3108bc..24611478b8 100644
--- a/hw/i386/kvm/xen_evtchn.h
+++ b/hw/i386/kvm/xen_evtchn.h
@@ -20,6 +20,8 @@ int xen_evtchn_set_callback_param(uint64_t param);
 void xen_evtchn_connect_gsis(qemu_irq *system_gsis);
 void xen_evtchn_set_callback_level(int level);
 
+int xen_evtchn_set_port(uint16_t port);
+
 struct evtchn_status;
 struct evtchn_close;
 struct evtchn_unmask;
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index e8718c31e5..b579f0f0f8 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -26,6 +26,7 @@
 #include "exec/cpu-defs.h"
 #include "qapi/qapi-types-common.h"
 #include "qemu/cpu-float.h"
+#include "qemu/timer.h"
 
 #define XEN_NR_VIRQS 24
 
@@ -1800,6 +1801,10 @@ typedef struct CPUArchState {
     bool xen_callback_asserted;
     uint16_t xen_virq[XEN_NR_VIRQS];
     uint64_t xen_singleshot_timer_ns;
+    QEMUTimer *xen_singleshot_timer;
+    uint64_t xen_periodic_timer_period;
+    QEMUTimer *xen_periodic_timer;
+    QemuMutex xen_timers_lock;
 #endif
 #if defined(CONFIG_HVF)
     HVFX86LazyFlags hvf_lflags;
diff --git a/target/i386/kvm/xen-emu.c b/target/i386/kvm/xen-emu.c
index 44fa0de784..b537d03be7 100644
--- a/target/i386/kvm/xen-emu.c
+++ b/target/i386/kvm/xen-emu.c
@@ -38,6 +38,9 @@
 
 #include "xen-compat.h"
 
+static void xen_vcpu_singleshot_timer_event(void *opaque);
+static void xen_vcpu_periodic_timer_event(void *opaque);
+
 #ifdef TARGET_X86_64
 #define hypercall_compat32(longmode) (!(longmode))
 #else
@@ -201,6 +204,23 @@ int kvm_xen_init_vcpu(CPUState *cs)
     env->xen_vcpu_time_info_gpa = INVALID_GPA;
     env->xen_vcpu_runstate_gpa = INVALID_GPA;
 
+    qemu_mutex_init(&env->xen_timers_lock);
+    env->xen_singleshot_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
+                                             xen_vcpu_singleshot_timer_event,
+                                             cpu);
+    if (!env->xen_singleshot_timer) {
+        return -ENOMEM;
+    }
+    env->xen_singleshot_timer->opaque = cs;
+
+    env->xen_periodic_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
+                                           xen_vcpu_periodic_timer_event,
+                                           cpu);
+    if (!env->xen_periodic_timer) {
+        return -ENOMEM;
+    }
+    env->xen_periodic_timer->opaque = cs;
+
     return 0;
 }
 
@@ -232,7 +252,8 @@ static bool kvm_xen_hcall_xen_version(struct kvm_xen_exit *exit, X86CPU *cpu,
                      1 << XENFEAT_writable_descriptor_tables |
                      1 << XENFEAT_auto_translated_physmap |
                      1 << XENFEAT_supervisor_mode_kernel |
-                     1 << XENFEAT_hvm_callback_vector;
+                     1 << XENFEAT_hvm_callback_vector |
+                     1 << XENFEAT_hvm_safe_pvclock;
     }
 
     err = kvm_copy_to_gva(CPU(cpu), arg, &fi, sizeof(fi));
@@ -875,13 +896,192 @@ static int vcpuop_register_runstate_info(CPUState *cs, CPUState *target,
     return 0;
 }
 
+static uint64_t kvm_get_current_ns(void)
+{
+    struct kvm_clock_data data;
+    int ret;
+
+    ret = kvm_vm_ioctl(kvm_state, KVM_GET_CLOCK, &data);
+    if (ret < 0) {
+        fprintf(stderr, "KVM_GET_CLOCK failed: %s\n", strerror(ret));
+        abort();
+    }
+
+    return data.clock;
+}
+
+static void xen_vcpu_singleshot_timer_event(void *opaque)
+{
+    CPUState *cpu = opaque;
+    CPUX86State *env = &X86_CPU(cpu)->env;
+    uint16_t port = env->xen_virq[VIRQ_TIMER];
+
+    if (likely(port)) {
+        xen_evtchn_set_port(port);
+    }
+
+    qemu_mutex_lock(&env->xen_timers_lock);
+    env->xen_singleshot_timer_ns = 0;
+    qemu_mutex_unlock(&env->xen_timers_lock);
+}
+
+static void xen_vcpu_periodic_timer_event(void *opaque)
+{
+    CPUState *cpu = opaque;
+    CPUX86State *env = &X86_CPU(cpu)->env;
+    uint16_t port = env->xen_virq[VIRQ_TIMER];
+    int64_t qemu_now;
+
+    if (likely(port)) {
+        xen_evtchn_set_port(port);
+    }
+
+    qemu_mutex_lock(&env->xen_timers_lock);
+
+    qemu_now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
+    timer_mod_ns(env->xen_periodic_timer,
+                 qemu_now + env->xen_periodic_timer_period);
+
+    qemu_mutex_unlock(&env->xen_timers_lock);
+}
+
+static int do_set_periodic_timer(CPUState *target, uint64_t period_ns)
+{
+    CPUX86State *tenv = &X86_CPU(target)->env;
+    int64_t qemu_now;
+
+    timer_del(tenv->xen_periodic_timer);
+
+    qemu_mutex_lock(&tenv->xen_timers_lock);
+
+    qemu_now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
+    timer_mod_ns(tenv->xen_periodic_timer, qemu_now + period_ns);
+    tenv->xen_periodic_timer_period = period_ns;
+
+    qemu_mutex_unlock(&tenv->xen_timers_lock);
+    return 0;
+}
+
+#define MILLISECS(_ms) ((int64_t)((_ms) * 1000000ULL))
+#define MICROSECS(_us) ((int64_t)((_us) * 1000ULL))
+#define STIME_MAX ((time_t)((int64_t)~0ull >> 1))
+/* Chosen so (NOW() + delta) won't overflow without an uptime of 200 years */
+#define STIME_DELTA_MAX ((int64_t)((uint64_t)~0ull >> 2))
+
+static int vcpuop_set_periodic_timer(CPUState *cs, CPUState *target,
+                                     uint64_t arg)
+{
+    struct vcpu_set_periodic_timer spt;
+
+    qemu_build_assert(sizeof(spt) == 8);
+    if (kvm_copy_from_gva(cs, arg, &spt, sizeof(spt))) {
+        return -EFAULT;
+    }
+
+    if (spt.period_ns < MILLISECS(1) || spt.period_ns > STIME_DELTA_MAX) {
+        return -EINVAL;
+    }
+
+    return do_set_periodic_timer(target, spt.period_ns);
+}
+
+static int vcpuop_stop_periodic_timer(CPUState *target)
+{
+    CPUX86State *tenv = &X86_CPU(target)->env;
+
+    qemu_mutex_lock(&tenv->xen_timers_lock);
+
+    timer_del(tenv->xen_periodic_timer);
+    tenv->xen_periodic_timer_period = 0;
+
+    qemu_mutex_unlock(&tenv->xen_timers_lock);
+    return 0;
+}
+
+static int do_set_singleshot_timer(CPUState *cs, uint64_t timeout_abs,
+                                   bool future, bool linux_wa)
+{
+    CPUX86State *env = &X86_CPU(cs)->env;
+    int64_t now = kvm_get_current_ns();
+    int64_t qemu_now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
+    int64_t delta = timeout_abs - now;
+
+    if (future && timeout_abs < now) {
+        return -ETIME;
+    }
+
+    if (linux_wa && unlikely((int64_t)timeout_abs < 0 ||
+                             (delta > 0 && (uint32_t)(delta >> 50) != 0))) {
+        /*
+         * Xen has a 'Linux workaround' in do_set_timer_op() which checks
+         * for negative absolute timeout values (caused by integer
+         * overflow), and for values about 13 days in the future (2^50ns)
+         * which would be caused by jiffies overflow. For those cases, it
+         * sets the timeout 100ms in the future (not *too* soon, since if
+         * a guest really did set a long timeout on purpose we don't want
+         * to keep churning CPU time by waking it up).
+         */
+        delta = (100 * SCALE_MS);
+        timeout_abs = now + delta;
+    }
+
+    qemu_mutex_lock(&env->xen_timers_lock);
+
+    timer_mod_ns(env->xen_singleshot_timer, qemu_now + delta);
+    env->xen_singleshot_timer_ns = now + delta;
+
+    qemu_mutex_unlock(&env->xen_timers_lock);
+    return 0;
+}
+
+static int vcpuop_set_singleshot_timer(CPUState *cs, uint64_t arg)
+{
+    struct vcpu_set_singleshot_timer sst;
+
+    qemu_build_assert(sizeof(sst) == 16);
+    if (kvm_copy_from_gva(cs, arg, &sst, sizeof(sst))) {
+        return -EFAULT;
+    }
+
+    return do_set_singleshot_timer(cs, sst.timeout_abs_ns,
+                                   !!(sst.flags & VCPU_SSHOTTMR_future),
+                                   false);
+}
+
+static int vcpuop_stop_singleshot_timer(CPUState *cs)
+{
+    CPUX86State *env = &X86_CPU(cs)->env;
+
+    qemu_mutex_lock(&env->xen_timers_lock);
+
+    timer_del(env->xen_singleshot_timer);
+    env->xen_singleshot_timer_ns = 0;
+
+    qemu_mutex_unlock(&env->xen_timers_lock);
+    return 0;
+}
+
+static int kvm_xen_hcall_set_timer_op(struct kvm_xen_exit *exit, X86CPU *cpu,
+                                      uint64_t timeout)
+{
+    if (unlikely(timeout == 0)) {
+        return vcpuop_stop_singleshot_timer(CPU(cpu));
+    } else {
+        return do_set_singleshot_timer(CPU(cpu), timeout, false, true);
+    }
+}
+
 static bool kvm_xen_hcall_vcpu_op(struct kvm_xen_exit *exit, X86CPU *cpu,
                                   int cmd, int vcpu_id, uint64_t arg)
 {
-    CPUState *dest = qemu_get_cpu(vcpu_id);
     CPUState *cs = CPU(cpu);
+    CPUState *dest = cs->cpu_index == vcpu_id ? cs : qemu_get_cpu(vcpu_id);
     int err;
 
+    if (!dest) {
+        return -ENOENT;
+    }
+
     switch (cmd) {
     case VCPUOP_register_runstate_memory_area:
         err = vcpuop_register_runstate_info(cs, dest, arg);
@@ -892,6 +1092,26 @@ static bool kvm_xen_hcall_vcpu_op(struct kvm_xen_exit *exit, X86CPU *cpu,
     case VCPUOP_register_vcpu_info:
         err = vcpuop_register_vcpu_info(cs, dest, arg);
         break;
+    case VCPUOP_set_singleshot_timer: {
+        if (cs->cpu_index != vcpu_id) {
+            return -EINVAL;
+        }
+        err = vcpuop_set_singleshot_timer(dest, arg);
+        break;
+    }
+    case VCPUOP_stop_singleshot_timer:
+        if (cs->cpu_index != vcpu_id) {
+            return -EINVAL;
+        }
+        err = vcpuop_stop_singleshot_timer(dest);
+        break;
+    case VCPUOP_set_periodic_timer: {
+        err = vcpuop_set_periodic_timer(cs, dest, arg);
+        break;
+    }
+    case VCPUOP_stop_periodic_timer:
+        err = vcpuop_stop_periodic_timer(dest);
+        break;
 
     default:
         return false;
@@ -1246,6 +1466,9 @@ static bool do_kvm_xen_handle_exit(X86CPU *cpu, struct kvm_xen_exit *exit)
     }
 
     switch (code) {
+    case __HYPERVISOR_set_timer_op:
+        return kvm_xen_hcall_set_timer_op(exit, cpu,
+                                          exit->u.hcall.params[0]);
     case __HYPERVISOR_grant_table_op:
         return kvm_xen_hcall_gnttab_op(exit, cpu, exit->u.hcall.params[0],
                                        exit->u.hcall.params[1],
@@ -1355,7 +1578,25 @@ int kvm_put_xen_state(CPUState *cs)
         }
     }
 
+    if (env->xen_periodic_timer_period) {
+        ret = do_set_periodic_timer(cs, env->xen_periodic_timer_period);
+        if (ret < 0) {
+            return ret;
+        }
+    }
+
     if (!kvm_xen_has_cap(EVTCHN_SEND)) {
+        /*
+         * If the kernel has EVTCHN_SEND support then it handles timers too,
+         * so the timer will be restored by kvm_xen_set_vcpu_timer() below.
+         */
+        if (env->xen_singleshot_timer_ns) {
+            ret = do_set_singleshot_timer(cs, env->xen_singleshot_timer_ns,
+                                          false, false);
+            if (ret < 0) {
+                return ret;
+            }
+        }
         return 0;
     }
 
diff --git a/target/i386/machine.c b/target/i386/machine.c
index 603a1077e3..c7ac8084b2 100644
--- a/target/i386/machine.c
+++ b/target/i386/machine.c
@@ -1277,6 +1277,7 @@ static const VMStateDescription vmstate_xen_vcpu = {
         VMSTATE_UINT8(env.xen_vcpu_callback_vector, X86CPU),
         VMSTATE_UINT16_ARRAY(env.xen_virq, X86CPU, XEN_NR_VIRQS),
         VMSTATE_UINT64(env.xen_singleshot_timer_ns, X86CPU),
+        VMSTATE_UINT64(env.xen_periodic_timer_period, X86CPU),
         VMSTATE_END_OF_LIST()
     }
 };
-- 
2.39.0