From: Alexei Filippov <alexei.filippov@syntacore.com>
To:
CC:
Subject: [PATCH v2] target/riscv: Add support for machine-specific PMU events
Date: Tue, 25 Jun 2024 17:46:43 +0300
Message-ID: <20240625144643.34733-1-alexei.filippov@syntacore.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Add callbacks for machine-specific PMU events. Simplify the counter
monitor functions by adding a new hash table that maps a counter number
to its event index. Add read/write callbacks to make it easier to
support events that are expected to behave differently.

Signed-off-by: Alexei Filippov <alexei.filippov@syntacore.com>
---
Changes since v1:
- rebased to latest master

 target/riscv/cpu.h |   9 +++
 target/riscv/csr.c |  43 +++++++++-----
 target/riscv/pmu.c | 139 ++++++++++++++++++++++-----------------------
 target/riscv/pmu.h |  11 ++--
 4 files changed, 115 insertions(+), 87 deletions(-)

diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index 6fe0d712b4..fbf82b050b 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -374,6 +374,13 @@ struct CPUArchState {
     uint64_t (*rdtime_fn)(void *);
     void *rdtime_fn_arg;
 
+    /* machine specific pmu callback */
+    void (*pmu_ctr_write)(PMUCTRState *counter, uint32_t event_idx,
+                          target_ulong val, bool high_half);
+    target_ulong (*pmu_ctr_read)(PMUCTRState *counter, uint32_t event_idx,
+                                 bool high_half);
+    bool (*pmu_vendor_support)(uint32_t event_idx);
+
     /* machine specific AIA ireg read-modify-write callback */
 #define AIA_MAKE_IREG(__isel, __priv, __virt, __vgein, __xlen) \
     ((((__xlen) & 0xff) << 24) | \
@@ -455,6 +462,8 @@ struct ArchCPU {
     uint32_t pmu_avail_ctrs;
     /* Mapping of events to counters */
     GHashTable *pmu_event_ctr_map;
+    /* Mapping of counters to events */
+    GHashTable *pmu_ctr_event_map;
     const GPtrArray *decoders;
 };
 
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index 58ef7079dc..b541852c84 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -875,20 +875,25 @@ static RISCVException write_mhpmcounter(CPURISCVState *env, int csrno,
     int ctr_idx = csrno - CSR_MCYCLE;
     PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
     uint64_t mhpmctr_val = val;
+    int event_idx;
 
     counter->mhpmcounter_val = val;
-    if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
-        riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
-        counter->mhpmcounter_prev = get_ticks(false);
-        if (ctr_idx > 2) {
+    event_idx = riscv_pmu_get_event_by_ctr(env, ctr_idx);
+
+    if (event_idx != RISCV_PMU_EVENT_NOT_PRESENTED) {
+        if (RISCV_PMU_CTR_IS_HPM(ctr_idx) && env->pmu_ctr_write) {
+            env->pmu_ctr_write(counter, event_idx, val, false);
+        } else {
+            counter->mhpmcounter_prev = get_ticks(false);
+        }
+        if (RISCV_PMU_CTR_IS_HPM(ctr_idx)) {
             if (riscv_cpu_mxl(env) == MXL_RV32) {
                 mhpmctr_val = mhpmctr_val |
                               ((uint64_t)counter->mhpmcounterh_val << 32);
             }
             riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
         }
-    } else {
-        /* Other counters can keep incrementing from the given value */
+    } else {
         counter->mhpmcounter_prev = val;
     }
 
@@ -902,13 +907,19 @@ static RISCVException write_mhpmcounterh(CPURISCVState *env, int csrno,
     PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
     uint64_t mhpmctr_val = counter->mhpmcounter_val;
     uint64_t mhpmctrh_val = val;
+    int event_idx;
 
     counter->mhpmcounterh_val = val;
     mhpmctr_val = mhpmctr_val | (mhpmctrh_val << 32);
-    if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
-        riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
-        counter->mhpmcounterh_prev = get_ticks(true);
-        if (ctr_idx > 2) {
+    event_idx = riscv_pmu_get_event_by_ctr(env, ctr_idx);
+
+    if (event_idx != RISCV_PMU_EVENT_NOT_PRESENTED) {
+        if (RISCV_PMU_CTR_IS_HPM(ctr_idx) && env->pmu_ctr_write) {
+            env->pmu_ctr_write(counter, event_idx, val, true);
+        } else {
+            counter->mhpmcounterh_prev = get_ticks(true);
+        }
+        if (RISCV_PMU_CTR_IS_HPM(ctr_idx)) {
             riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
         }
     } else {
@@ -926,6 +937,7 @@ static RISCVException riscv_pmu_read_ctr(CPURISCVState *env, target_ulong *val,
                                          counter->mhpmcounter_prev;
     target_ulong ctr_val = upper_half ? counter->mhpmcounterh_val :
                                         counter->mhpmcounter_val;
+    int event_idx;
 
     if (get_field(env->mcountinhibit, BIT(ctr_idx))) {
         /*
@@ -946,9 +958,14 @@ static RISCVException riscv_pmu_read_ctr(CPURISCVState *env, target_ulong *val,
      * The kernel computes the perf delta by subtracting the current value from
      * the value it initialized previously (ctr_val).
      */
-    if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
-        riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
-        *val = get_ticks(upper_half) - ctr_prev + ctr_val;
+    event_idx = riscv_pmu_get_event_by_ctr(env, ctr_idx);
+    if (event_idx != RISCV_PMU_EVENT_NOT_PRESENTED) {
+        if (RISCV_PMU_CTR_IS_HPM(ctr_idx) && env->pmu_ctr_read) {
+            *val = env->pmu_ctr_read(counter, event_idx,
+                                     upper_half);
+        } else {
+            *val = get_ticks(upper_half) - ctr_prev + ctr_val;
+        }
     } else {
         *val = ctr_val;
     }
diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c
index 0e7d58b8a5..c3b6b20337 100644
--- a/target/riscv/pmu.c
+++ b/target/riscv/pmu.c
@@ -88,7 +88,7 @@ static bool riscv_pmu_counter_valid(RISCVCPU *cpu, uint32_t ctr_idx)
     }
 }
 
-static bool riscv_pmu_counter_enabled(RISCVCPU *cpu, uint32_t ctr_idx)
+bool riscv_pmu_counter_enabled(RISCVCPU *cpu, uint32_t ctr_idx)
 {
     CPURISCVState *env = &cpu->env;
 
@@ -207,59 +207,28 @@ int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx)
     return ret;
 }
 
-bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
-                                        uint32_t target_ctr)
+int riscv_pmu_get_event_by_ctr(CPURISCVState *env,
+                               uint32_t target_ctr)
 {
     RISCVCPU *cpu;
     uint32_t event_idx;
-    uint32_t ctr_idx;
 
-    /* Fixed instret counter */
-    if (target_ctr == 2) {
-        return true;
+    if (target_ctr < 3) {
+        return target_ctr;
     }
 
     cpu = env_archcpu(env);
-    if (!cpu->pmu_event_ctr_map) {
-        return false;
+    if (!cpu->pmu_ctr_event_map || !cpu->pmu_event_ctr_map) {
+        return RISCV_PMU_EVENT_NOT_PRESENTED;
     }
 
-    event_idx = RISCV_PMU_EVENT_HW_INSTRUCTIONS;
-    ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
-                               GUINT_TO_POINTER(event_idx)));
-    if (!ctr_idx) {
-        return false;
+    event_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_ctr_event_map,
+                                 GUINT_TO_POINTER(target_ctr)));
+    if (!event_idx) {
+        return RISCV_PMU_EVENT_NOT_PRESENTED;
     }
 
-    return target_ctr == ctr_idx ? true : false;
-}
-
-bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env, uint32_t target_ctr)
-{
-    RISCVCPU *cpu;
-    uint32_t event_idx;
-    uint32_t ctr_idx;
-
-    /* Fixed mcycle counter */
-    if (target_ctr == 0) {
-        return true;
-    }
-
-    cpu = env_archcpu(env);
-    if (!cpu->pmu_event_ctr_map) {
-        return false;
-    }
-
-    event_idx = RISCV_PMU_EVENT_HW_CPU_CYCLES;
-    ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
-                               GUINT_TO_POINTER(event_idx)));
-
-    /* Counter zero is not used for event_ctr_map */
-    if (!ctr_idx) {
-        return false;
-    }
-
-    return (target_ctr == ctr_idx) ? true : false;
+    return event_idx;
 }
 
 static gboolean pmu_remove_event_map(gpointer key, gpointer value,
@@ -268,6 +237,12 @@ static gboolean pmu_remove_event_map(gpointer key, gpointer value,
     return (GPOINTER_TO_UINT(value) == GPOINTER_TO_UINT(udata)) ? true : false;
 }
 
+static gboolean pmu_remove_ctr_map(gpointer key, gpointer value,
+                                   gpointer udata)
+{
+    return (GPOINTER_TO_UINT(key) == GPOINTER_TO_UINT(udata)) ? true : false;
+}
+
 static int64_t pmu_icount_ticks_to_ns(int64_t value)
 {
     int64_t ret = 0;
@@ -286,8 +261,11 @@ int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
 {
     uint32_t event_idx;
     RISCVCPU *cpu = env_archcpu(env);
+    bool machine_specific = false;
 
-    if (!riscv_pmu_counter_valid(cpu, ctr_idx) || !cpu->pmu_event_ctr_map) {
+    if (!riscv_pmu_counter_valid(cpu, ctr_idx) ||
+        !cpu->pmu_event_ctr_map ||
+        !cpu->pmu_ctr_event_map) {
         return -1;
     }
 
@@ -299,6 +277,9 @@ int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
         g_hash_table_foreach_remove(cpu->pmu_event_ctr_map,
                                     pmu_remove_event_map,
                                     GUINT_TO_POINTER(ctr_idx));
+        g_hash_table_foreach_remove(cpu->pmu_ctr_event_map,
+                                    pmu_remove_ctr_map,
+                                    GUINT_TO_POINTER(ctr_idx));
         return 0;
     }
 
@@ -308,40 +289,39 @@ int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
         return 0;
     }
 
-    switch (event_idx) {
-    case RISCV_PMU_EVENT_HW_CPU_CYCLES:
-    case RISCV_PMU_EVENT_HW_INSTRUCTIONS:
-    case RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS:
-    case RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS:
-    case RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS:
-        break;
-    default:
-        /* We don't support any raw events right now */
-        return -1;
+    if (RISCV_PMU_CTR_IS_HPM(ctr_idx) && env->pmu_vendor_support) {
+        machine_specific = env->pmu_vendor_support(event_idx);
+    }
+
+    if (!machine_specific) {
+        switch (event_idx) {
+        case RISCV_PMU_EVENT_HW_CPU_CYCLES:
+        case RISCV_PMU_EVENT_HW_INSTRUCTIONS:
+        case RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS:
+        case RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS:
+        case RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS:
+            break;
+        default:
+            return -1;
+        }
     }
     g_hash_table_insert(cpu->pmu_event_ctr_map, GUINT_TO_POINTER(event_idx),
                         GUINT_TO_POINTER(ctr_idx));
+    g_hash_table_insert(cpu->pmu_ctr_event_map, GUINT_TO_POINTER(ctr_idx),
+                        GUINT_TO_POINTER(event_idx));
 
     return 0;
 }
 
 static void pmu_timer_trigger_irq(RISCVCPU *cpu,
-                                  enum riscv_pmu_event_idx evt_idx)
+                                  uint32_t ctr_idx)
 {
-    uint32_t ctr_idx;
     CPURISCVState *env = &cpu->env;
     PMUCTRState *counter;
     target_ulong *mhpmevent_val;
     uint64_t of_bit_mask;
     int64_t irq_trigger_at;
 
-    if (evt_idx != RISCV_PMU_EVENT_HW_CPU_CYCLES &&
-        evt_idx != RISCV_PMU_EVENT_HW_INSTRUCTIONS) {
-        return;
-    }
-
-    ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
-                               GUINT_TO_POINTER(evt_idx)));
     if (!riscv_pmu_counter_enabled(cpu, ctr_idx)) {
         return;
     }
@@ -349,7 +329,7 @@ static void pmu_timer_trigger_irq(RISCVCPU *cpu,
     if (riscv_cpu_mxl(env) == MXL_RV32) {
         mhpmevent_val = &env->mhpmeventh_val[ctr_idx];
         of_bit_mask = MHPMEVENTH_BIT_OF;
-    } else {
+    } else {
         mhpmevent_val = &env->mhpmevent_val[ctr_idx];
         of_bit_mask = MHPMEVENT_BIT_OF;
     }
@@ -372,14 +352,25 @@ static void pmu_timer_trigger_irq(RISCVCPU *cpu,
     }
 }
 
+static void riscv_pmu_timer_trigger_irq(gpointer ctr, gpointer event_idx,
+                                        gpointer opaque)
+{
+    RISCVCPU *cpu = opaque;
+
+    pmu_timer_trigger_irq(cpu, GPOINTER_TO_UINT(ctr));
+}
+
 /* Timer callback for instret and cycle counter overflow */
 void riscv_pmu_timer_cb(void *priv)
 {
     RISCVCPU *cpu = priv;
 
-    /* Timer event was triggered only for these events */
-    pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_CPU_CYCLES);
-    pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_INSTRUCTIONS);
+    if (!cpu->pmu_ctr_event_map || !cpu->pmu_event_ctr_map) {
+        return;
+    }
+    g_hash_table_foreach(cpu->pmu_ctr_event_map,
+                         riscv_pmu_timer_trigger_irq,
+                         cpu);
 }
 
 int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value, uint32_t ctr_idx)
@@ -388,6 +379,7 @@ int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value, uint32_t ctr_idx)
     int64_t overflow_ns, overflow_left = 0;
     RISCVCPU *cpu = env_archcpu(env);
     PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
+    uint32_t event_idx;
 
     if (!riscv_pmu_counter_valid(cpu, ctr_idx) || !cpu->cfg.ext_sscofpmf) {
         return -1;
@@ -408,8 +400,9 @@ int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value, uint32_t ctr_idx)
         overflow_left = overflow_delta - INT64_MAX;
     }
 
-    if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
-        riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
+    event_idx = riscv_pmu_get_event_by_ctr(env, ctr_idx);
+
+    if (event_idx != RISCV_PMU_EVENT_NOT_PRESENTED) {
         overflow_ns = pmu_icount_ticks_to_ns((int64_t)overflow_delta);
         overflow_left = pmu_icount_ticks_to_ns(overflow_left) ;
     } else {
@@ -443,7 +436,13 @@ void riscv_pmu_init(RISCVCPU *cpu, Error **errp)
 
     cpu->pmu_event_ctr_map = g_hash_table_new(g_direct_hash, g_direct_equal);
     if (!cpu->pmu_event_ctr_map) {
-        error_setg(errp, "Unable to allocate PMU event hash table");
+        error_setg(errp, "Unable to allocate first PMU event hash table");
+        return;
+    }
+
+    cpu->pmu_ctr_event_map = g_hash_table_new(g_direct_hash, g_direct_equal);
+    if (!cpu->pmu_ctr_event_map) {
+        error_setg(errp, "Unable to allocate second PMU event hash table");
         return;
     }
 
diff --git a/target/riscv/pmu.h b/target/riscv/pmu.h
index 7c0ad661e0..b99a5f58d4 100644
--- a/target/riscv/pmu.h
+++ b/target/riscv/pmu.h
@@ -22,10 +22,12 @@
 #include "cpu.h"
 #include "qapi/error.h"
 
-bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
-                                        uint32_t target_ctr);
-bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env,
-                                  uint32_t target_ctr);
+#define RISCV_PMU_EVENT_NOT_PRESENTED -1
+
+#define RISCV_PMU_CTR_IS_HPM(x) (x > 2)
+
+int riscv_pmu_get_event_by_ctr(CPURISCVState *env,
+                               uint32_t target_ctr);
 void riscv_pmu_timer_cb(void *priv);
 void riscv_pmu_init(RISCVCPU *cpu, Error **errp);
 int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
@@ -34,5 +36,6 @@ int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx);
 void riscv_pmu_generate_fdt_node(void *fdt, uint32_t cmask, char *pmu_name);
 int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value,
                           uint32_t ctr_idx);
+bool riscv_pmu_counter_enabled(RISCVCPU *cpu, uint32_t ctr_idx);
 
 #endif /* RISCV_PMU_H */
-- 
2.34.1
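
[Editor's note] The patch only adds the hook points on CPUARCHState; it does not show a consumer. The sketch below illustrates how a vendor CPU model might install the new callbacks. It is not part of the patch: the event number, the function names and the choice to back the counter with the existing mhpmcounter_val/mhpmcounterh_val fields of PMUCTRState are assumptions made purely for illustration.

#include "qemu/osdep.h"
#include "cpu.h"
#include "pmu.h"

/* Hypothetical vendor-defined event encoding, for illustration only. */
#define VENDOR_PMU_EVENT_L2_MISS 0x10001

static bool vendor_pmu_event_supported(uint32_t event_idx)
{
    /* Report to riscv_pmu_update_event_map() which extra events exist. */
    return event_idx == VENDOR_PMU_EVENT_L2_MISS;
}

static void vendor_pmu_ctr_write(PMUCTRState *counter, uint32_t event_idx,
                                 target_ulong val, bool high_half)
{
    if (event_idx != VENDOR_PMU_EVENT_L2_MISS) {
        return;
    }
    /* Store the written value directly instead of snapshotting ticks. */
    if (high_half) {
        counter->mhpmcounterh_val = val;
    } else {
        counter->mhpmcounter_val = val;
    }
}

static target_ulong vendor_pmu_ctr_read(PMUCTRState *counter,
                                        uint32_t event_idx, bool high_half)
{
    /* Return the stored value; a real model would compute it here. */
    return high_half ? counter->mhpmcounterh_val : counter->mhpmcounter_val;
}

/* Called from the vendor CPU model's init hook (name hypothetical). */
static void vendor_cpu_pmu_hooks_init(RISCVCPU *cpu)
{
    CPURISCVState *env = &cpu->env;

    env->pmu_vendor_support = vendor_pmu_event_supported;
    env->pmu_ctr_write = vendor_pmu_ctr_write;
    env->pmu_ctr_read = vendor_pmu_ctr_read;
}

With hooks like these installed, write_mhpmcounter()/write_mhpmcounterh() and riscv_pmu_read_ctr() divert programmable (HPM) counters whose event was accepted by pmu_vendor_support() to the vendor callbacks, while cycle/instret and the generic events keep the existing get_ticks() path.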