From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org
Subject: [PATCH v2 1/9] accel/tcg: Store some tlb flags in CPUTLBEntryFull
Date: Wed, 21 Jun 2023 14:18:54 +0200
Message-Id: <20230621121902.1392277-2-richard.henderson@linaro.org>
In-Reply-To: <20230621121902.1392277-1-richard.henderson@linaro.org>
References: <20230621121902.1392277-1-richard.henderson@linaro.org>

We have run out of bits we can use within the CPUTLBEntry comparators,
as TLB_FLAGS_MASK cannot overlap alignment.

Store slow_flags[] in CPUTLBEntryFull, and merge with the flags from
the comparator.  A new TLB_FORCE_SLOW bit is set within the comparator
as an indication that the slow path must be used.

Move TLB_BSWAP to TLB_SLOW_FLAGS_MASK.  Since we are out of bits,
we cannot create a new bit without moving an old one.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 include/exec/cpu-all.h  | 21 +++++++--
 include/exec/cpu-defs.h |  6 +++
 accel/tcg/cputlb.c      | 96 ++++++++++++++++++++++++-----------------
 3 files changed, 80 insertions(+), 43 deletions(-)
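As an aside for readers following the change: the flag-split scheme is
easy to see in miniature.  The C sketch below is not QEMU code -- Entry,
F_MMIO, F_FORCE_SLOW, S_BSWAP and the bit positions are invented for
illustration, and QEMU's generated fast path actually compares the
comparator with its flag bits intact so that any flag forces the slow
path, where this sketch masks them out -- but the shape is the same:
fast flags ride in the comparator's sub-page bits, slow flags live in a
side field, and a marker bit in the comparator records that the side
field is non-empty.

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS       12
#define PAGE_MASK       (~(uint64_t)0 << PAGE_BITS)

/* "Fast" flags: stored in the comparator's sub-page bits. */
#define F_MMIO          (1 << 11)
#define F_FORCE_SLOW    (1 << 10)   /* marker: more flags live elsewhere */
#define FAST_FLAGS_MASK (F_MMIO | F_FORCE_SLOW)

/* "Slow" flags: stored beside the entry, merged only on the slow path. */
#define S_BSWAP         (1 << 0)
#define SLOW_FLAGS_MASK S_BSWAP

/* The two sets must not overlap, as in the patch's QEMU_BUILD_BUG_ON. */
_Static_assert((FAST_FLAGS_MASK & SLOW_FLAGS_MASK) == 0, "flags overlap");

typedef struct {
    uint64_t comparator;    /* page address | fast flags */
    uint8_t slow_flags;     /* consulted only if F_FORCE_SLOW is set */
} Entry;

/* Mirrors the shape of tlb_set_compare() in the patch. */
static void entry_set(Entry *e, uint64_t page, int flags, bool enable)
{
    if (enable) {
        uint64_t cmp = page | (flags & FAST_FLAGS_MASK);
        flags &= SLOW_FLAGS_MASK;
        if (flags) {
            cmp |= F_FORCE_SLOW;    /* entry carries out-of-line flags */
        }
        e->comparator = cmp;
        e->slow_flags = flags;
    } else {
        e->comparator = (uint64_t)-1;
        e->slow_flags = 0;
    }
}

/* The hit test stays one masked compare; flags merge afterwards. */
static bool entry_lookup(const Entry *e, uint64_t addr, int *flags)
{
    if ((addr & PAGE_MASK) != (e->comparator & PAGE_MASK)) {
        return false;
    }
    *flags = (int)(e->comparator & (FAST_FLAGS_MASK & ~F_FORCE_SLOW))
             | e->slow_flags;
    return true;
}

int main(void)
{
    Entry e;
    int flags;

    entry_set(&e, 0x1000, F_MMIO | S_BSWAP, true);
    assert(entry_lookup(&e, 0x1234, &flags));
    assert(flags == (F_MMIO | S_BSWAP));
    printf("flags=%#x mmio=%d bswap=%d\n",
           flags, !!(flags & F_MMIO), !!(flags & S_BSWAP));
    return 0;
}

The design keeps the hot path a single load-and-compare per access;
only pages that need a rare behaviour such as byte swapping pay for the
extra slow_flags load on the slow path.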
diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index 09bf4c0cc6..4422f4bb07 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -327,17 +327,30 @@ CPUArchState *cpu_copy(CPUArchState *env);
 #define TLB_MMIO            (1 << (TARGET_PAGE_BITS_MIN - 3))
 /* Set if TLB entry contains a watchpoint.  */
 #define TLB_WATCHPOINT      (1 << (TARGET_PAGE_BITS_MIN - 4))
-/* Set if TLB entry requires byte swap.  */
-#define TLB_BSWAP           (1 << (TARGET_PAGE_BITS_MIN - 5))
+/* Set if the slow path must be used; more flags in CPUTLBEntryFull. */
+#define TLB_FORCE_SLOW      (1 << (TARGET_PAGE_BITS_MIN - 5))
 /* Set if TLB entry writes ignored.  */
 #define TLB_DISCARD_WRITE   (1 << (TARGET_PAGE_BITS_MIN - 6))
 
-/* Use this mask to check interception with an alignment mask
+/*
+ * Use this mask to check interception with an alignment mask
  * in a TCG backend.
  */
 #define TLB_FLAGS_MASK \
     (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO \
-    | TLB_WATCHPOINT | TLB_BSWAP | TLB_DISCARD_WRITE)
+    | TLB_WATCHPOINT | TLB_FORCE_SLOW | TLB_DISCARD_WRITE)
+
+/*
+ * Flags stored in CPUTLBEntryFull.slow_flags[x].
+ * TLB_FORCE_SLOW must be set in CPUTLBEntry.addr_idx[x].
+ */
+/* Set if TLB entry requires byte swap.  */
+#define TLB_BSWAP           (1 << 0)
+
+#define TLB_SLOW_FLAGS_MASK TLB_BSWAP
+
+/* The two sets of flags must not overlap. */
+QEMU_BUILD_BUG_ON(TLB_FLAGS_MASK & TLB_SLOW_FLAGS_MASK);
 
 /**
  * tlb_hit_page: return true if page aligned @addr is a hit against the
diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index 4cb77c8dec..c174d5371a 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -124,6 +124,12 @@ typedef struct CPUTLBEntryFull {
     /* @lg_page_size contains the log2 of the page size. */
     uint8_t lg_page_size;
 
+    /*
+     * Additional tlb flags for use by the slow path. If non-zero,
+     * the corresponding CPUTLBEntry comparator must have TLB_FORCE_SLOW.
+     */
+    uint8_t slow_flags[3];
+
     /*
      * Allow target-specific additions to this structure.
      * This may be used to cache items from the guest cpu
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 14ce97c33b..b40ce5ea0f 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1110,6 +1110,24 @@ static void tlb_add_large_page(CPUArchState *env, int mmu_idx,
     env_tlb(env)->d[mmu_idx].large_page_mask = lp_mask;
 }
 
+static inline void tlb_set_compare(CPUTLBEntryFull *full, CPUTLBEntry *ent,
+                                   target_ulong address, int flags,
+                                   MMUAccessType access_type, bool enable)
+{
+    if (enable) {
+        address |= flags & TLB_FLAGS_MASK;
+        flags &= TLB_SLOW_FLAGS_MASK;
+        if (flags) {
+            address |= TLB_FORCE_SLOW;
+        }
+    } else {
+        address = -1;
+        flags = 0;
+    }
+    ent->addr_idx[access_type] = address;
+    full->slow_flags[access_type] = flags;
+}
+
 /*
  * Add a new TLB entry. At most one entry for a given virtual address
  * is permitted. Only a single TARGET_PAGE_SIZE region is mapped, the
@@ -1125,9 +1143,7 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
     CPUTLB *tlb = env_tlb(env);
     CPUTLBDesc *desc = &tlb->d[mmu_idx];
     MemoryRegionSection *section;
-    unsigned int index;
-    target_ulong address;
-    target_ulong write_address;
+    unsigned int index, read_flags, write_flags;
     uintptr_t addend;
     CPUTLBEntry *te, tn;
     hwaddr iotlb, xlat, sz, paddr_page;
@@ -1156,13 +1172,13 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
               " prot=%x idx=%d\n",
               vaddr, full->phys_addr, prot, mmu_idx);
 
-    address = vaddr_page;
+    read_flags = 0;
     if (full->lg_page_size < TARGET_PAGE_BITS) {
         /* Repeat the MMU check and TLB fill on every access.  */
-        address |= TLB_INVALID_MASK;
+        read_flags |= TLB_INVALID_MASK;
     }
     if (full->attrs.byte_swap) {
-        address |= TLB_BSWAP;
+        read_flags |= TLB_BSWAP;
     }
 
     is_ram = memory_region_is_ram(section->mr);
@@ -1176,7 +1192,7 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
         addend = 0;
     }
 
-    write_address = address;
+    write_flags = read_flags;
     if (is_ram) {
         iotlb = memory_region_get_ram_addr(section->mr) + xlat;
         /*
@@ -1185,9 +1201,9 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
          */
         if (prot & PAGE_WRITE) {
             if (section->readonly) {
-                write_address |= TLB_DISCARD_WRITE;
+                write_flags |= TLB_DISCARD_WRITE;
             } else if (cpu_physical_memory_is_clean(iotlb)) {
-                write_address |= TLB_NOTDIRTY;
+                write_flags |= TLB_NOTDIRTY;
             }
         }
     } else {
@@ -1198,9 +1214,9 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
          * Reads to romd devices go through the ram_ptr found above,
          * but of course reads to I/O must go through MMIO.
          */
-        write_address |= TLB_MMIO;
+        write_flags |= TLB_MMIO;
         if (!is_romd) {
-            address = write_address;
+            read_flags = write_flags;
         }
     }
 
@@ -1253,36 +1269,30 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
      * vaddr we add back in io_readx()/io_writex()/get_page_addr_code().
      */
     desc->fulltlb[index] = *full;
-    desc->fulltlb[index].xlat_section = iotlb - vaddr_page;
-    desc->fulltlb[index].phys_addr = paddr_page;
+    full = &desc->fulltlb[index];
+    full->xlat_section = iotlb - vaddr_page;
+    full->phys_addr = paddr_page;
 
     /* Now calculate the new entry */
     tn.addend = addend - vaddr_page;
-    if (prot & PAGE_READ) {
-        tn.addr_read = address;
-        if (wp_flags & BP_MEM_READ) {
-            tn.addr_read |= TLB_WATCHPOINT;
-        }
-    } else {
-        tn.addr_read = -1;
-    }
 
-    if (prot & PAGE_EXEC) {
-        tn.addr_code = address;
-    } else {
-        tn.addr_code = -1;
-    }
+    tlb_set_compare(full, &tn, vaddr_page, read_flags,
+                    MMU_INST_FETCH, prot & PAGE_EXEC);
 
-    tn.addr_write = -1;
-    if (prot & PAGE_WRITE) {
-        tn.addr_write = write_address;
-        if (prot & PAGE_WRITE_INV) {
-            tn.addr_write |= TLB_INVALID_MASK;
-        }
-        if (wp_flags & BP_MEM_WRITE) {
-            tn.addr_write |= TLB_WATCHPOINT;
-        }
+    if (wp_flags & BP_MEM_READ) {
+        read_flags |= TLB_WATCHPOINT;
     }
+    tlb_set_compare(full, &tn, vaddr_page, read_flags,
+                    MMU_DATA_LOAD, prot & PAGE_READ);
+
+    if (prot & PAGE_WRITE_INV) {
+        write_flags |= TLB_INVALID_MASK;
+    }
+    if (wp_flags & BP_MEM_WRITE) {
+        write_flags |= TLB_WATCHPOINT;
+    }
+    tlb_set_compare(full, &tn, vaddr_page, write_flags,
+                    MMU_DATA_STORE, prot & PAGE_WRITE);
 
     copy_tlb_helper_locked(te, &tn);
     tlb_n_used_entries_inc(env, mmu_idx);
@@ -1512,7 +1522,8 @@ static int probe_access_internal(CPUArchState *env, target_ulong addr,
     CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
     target_ulong tlb_addr = tlb_read_idx(entry, access_type);
     target_ulong page_addr = addr & TARGET_PAGE_MASK;
-    int flags = TLB_FLAGS_MASK;
+    int flags = TLB_FLAGS_MASK & ~TLB_FORCE_SLOW;
+    CPUTLBEntryFull *full;
 
     if (!tlb_hit_page(tlb_addr, page_addr)) {
         if (!victim_tlb_hit(env, mmu_idx, index, access_type, page_addr)) {
@@ -1541,7 +1552,8 @@ static int probe_access_internal(CPUArchState *env, target_ulong addr,
     }
     flags &= tlb_addr;
 
-    *pfull = &env_tlb(env)->d[mmu_idx].fulltlb[index];
+    *pfull = full = &env_tlb(env)->d[mmu_idx].fulltlb[index];
+    flags |= full->slow_flags[access_type];
 
     /* Fold all "mmio-like" bits into TLB_MMIO.  This is not RAM.  */
     if (unlikely(flags & ~(TLB_WATCHPOINT | TLB_NOTDIRTY))) {
@@ -1764,6 +1776,8 @@ static bool mmu_lookup1(CPUArchState *env, MMULookupPageData *data,
     CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
     target_ulong tlb_addr = tlb_read_idx(entry, access_type);
     bool maybe_resized = false;
+    CPUTLBEntryFull *full;
+    int flags;
 
     /* If the TLB entry is for a different page, reload and try again. */
     if (!tlb_hit(tlb_addr, addr)) {
@@ -1777,8 +1791,12 @@ static bool mmu_lookup1(CPUArchState *env, MMULookupPageData *data,
         tlb_addr = tlb_read_idx(entry, access_type) & ~TLB_INVALID_MASK;
     }
 
-    data->flags = tlb_addr & TLB_FLAGS_MASK;
-    data->full = &env_tlb(env)->d[mmu_idx].fulltlb[index];
+    full = &env_tlb(env)->d[mmu_idx].fulltlb[index];
+    flags = tlb_addr & (TLB_FLAGS_MASK & ~TLB_FORCE_SLOW);
+    flags |= full->slow_flags[access_type];
+
+    data->full = full;
+    data->flags = flags;
     /* Compute haddr speculatively; depending on flags it might be invalid. */
     data->haddr = (void *)((uintptr_t)addr + entry->addend);
 
-- 
2.34.1