From nobody Fri May 17 06:30:36 2024
From: Nicholas Piggin
To: qemu-ppc@nongnu.org
Cc: Nicholas Piggin, Philippe Mathieu-Daudé, Richard Henderson,
 Paolo Bonzini, Daniel Henrique Barboza, qemu-devel@nongnu.org,
 Peter Maydell, qemu-arm@nongnu.org, qemu-riscv@nongnu.org,
 qemu-s390x@nongnu.org
Subject: [PATCH v2 1/3] target/ppc: Fix broadcast tlbie synchronisation
Date: Fri, 5 Apr 2024 22:53:36 +1000
Message-ID: <20240405125340.380828-2-npiggin@gmail.com>
In-Reply-To: <20240405125340.380828-1-npiggin@gmail.com>
References: <20240405125340.380828-1-npiggin@gmail.com>

With mttcg, broadcast tlbie instructions do not wait until other vCPUs
have been kicked out of TCG execution before they complete (including
the necessary subsequent tlbsync, etc., instructions). This is contrary
to the ISA, and it permits other vCPUs to use translations after the
TLB flush. For example:

  // *memP is initially 0, memV maps to memP with *pte

  CPU0
  *pte = 0;
  ptesync ; tlbie ; eieio ; tlbsync ; ptesync
  *memP = 1;

  CPU1
  assert(*memV == 0);

It is possible for the assertion to fail, because CPU1 can still
translate memV using the stale TLB entry after CPU0 has stored 1 to
the underlying memory. The correct behaviour is no assertion failure,
and possibly a page fault, because the pte has been cleared.

This race was observed with a careful test case in which the CPU1
checks run in a large TB making multiple loads, so it can execute for
the entire CPU0 window between clearing the pte and storing to the
memory; host vCPU thread preemption could, however, cause the race to
hit anywhere.

As explained in commit 4ddc104689b ("target/ppc: Fix tlbie"), it is not
enough to just use tlb_flush_all_cpus_synced(), because that flush does
not execute until the calling CPU has finished its TB. It is also
required that the TB is ended, which guarantees that all CPUs have run
the flush work before the next instruction is executed.
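
In translator terms, the required pattern looks roughly like the
following sketch. It condenses the hunks below using existing
target/ppc symbols (DisasContext, tcg_env, DISAS_EXIT_UPDATE and the
check_tlb_flush helpers); it is an illustration of the idea, not a
function taken from the tree:

  static void gen_broadcast_tlb_flush(DisasContext *ctx)
  {
      /*
       * Queue the global flush. It is implemented with async work, so
       * it only completes after every vCPU has left its current TB.
       */
      gen_helper_check_tlb_flush_global(tcg_env);

      /*
       * Stop translating and exit to the execution loop here, so the
       * queued flush has run before the next guest instruction.
       */
      ctx->base.is_jmp = DISAS_EXIT_UPDATE;
  }

The hunks below apply exactly this at the two places that emit
broadcast flushes: gen_check_tlb_flush() and do_tlbie().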

Reviewed-by: Philippe Mathieu-Daudé
Signed-off-by: Nicholas Piggin
---
 target/ppc/helper_regs.c                     | 2 +-
 target/ppc/mmu_helper.c                      | 2 +-
 target/ppc/translate.c                       | 7 +++++++
 target/ppc/translate/storage-ctrl-impl.c.inc | 7 +++++++
 4 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/target/ppc/helper_regs.c b/target/ppc/helper_regs.c
index 25258986e3..9094ae5004 100644
--- a/target/ppc/helper_regs.c
+++ b/target/ppc/helper_regs.c
@@ -334,7 +334,7 @@ void check_tlb_flush(CPUPPCState *env, bool global)
     if (global && (env->tlb_need_flush & TLB_NEED_GLOBAL_FLUSH)) {
         env->tlb_need_flush &= ~TLB_NEED_GLOBAL_FLUSH;
         env->tlb_need_flush &= ~TLB_NEED_LOCAL_FLUSH;
-        tlb_flush_all_cpus(cs);
+        tlb_flush_all_cpus_synced(cs);
         return;
     }
 
diff --git a/target/ppc/mmu_helper.c b/target/ppc/mmu_helper.c
index c071b4d5e2..aaa5bfc62a 100644
--- a/target/ppc/mmu_helper.c
+++ b/target/ppc/mmu_helper.c
@@ -533,7 +533,7 @@ void helper_tlbie_isa300(CPUPPCState *env, target_ulong rb, target_ulong rs,
     if (local) {
         tlb_flush_page(env_cpu(env), addr);
     } else {
-        tlb_flush_page_all_cpus(env_cpu(env), addr);
+        tlb_flush_page_all_cpus_synced(env_cpu(env), addr);
     }
     return;
 
diff --git a/target/ppc/translate.c b/target/ppc/translate.c
index 93ffec787c..4ac8af2058 100644
--- a/target/ppc/translate.c
+++ b/target/ppc/translate.c
@@ -3495,6 +3495,13 @@ static inline void gen_check_tlb_flush(DisasContext *ctx, bool global)
         gen_helper_check_tlb_flush_local(tcg_env);
     }
     gen_set_label(l);
+    if (global) {
+        /*
+         * Global TLB flush uses async-work which must run before the
+         * next instruction, so this must be the last in the TB.
+         */
+        ctx->base.is_jmp = DISAS_EXIT_UPDATE;
+    }
 }
 #else
 static inline void gen_check_tlb_flush(DisasContext *ctx, bool global) { }
diff --git a/target/ppc/translate/storage-ctrl-impl.c.inc b/target/ppc/translate/storage-ctrl-impl.c.inc
index 74c23a4191..b8b4454663 100644
--- a/target/ppc/translate/storage-ctrl-impl.c.inc
+++ b/target/ppc/translate/storage-ctrl-impl.c.inc
@@ -224,6 +224,13 @@ static bool do_tlbie(DisasContext *ctx, arg_X_tlbie *a, bool local)
                          a->prs << TLBIE_F_PRS_SHIFT |
                          a->r << TLBIE_F_R_SHIFT |
                          local << TLBIE_F_LOCAL_SHIFT));
+    if (!local) {
+        /*
+         * Global TLB flush uses async-work which must run before the
+         * next instruction, so this must be the last in the TB.
+         */
+        ctx->base.is_jmp = DISAS_EXIT_UPDATE;
+    }
     return true;
 #endif
 
-- 
2.43.0

From nobody Fri May 17 06:30:36 2024
From: Nicholas Piggin
To: qemu-ppc@nongnu.org
Cc: Nicholas Piggin, Philippe Mathieu-Daudé, Richard Henderson,
 Paolo Bonzini, Daniel Henrique Barboza, qemu-devel@nongnu.org,
 Peter Maydell, qemu-arm@nongnu.org, qemu-riscv@nongnu.org,
 qemu-s390x@nongnu.org
Subject: [PATCH v2 2/3] tcg/cputlb: Remove non-synced variants of global TLB flushes
Date: Fri, 5 Apr 2024 22:53:37 +1000
Message-ID: <20240405125340.380828-3-npiggin@gmail.com>
In-Reply-To: <20240405125340.380828-1-npiggin@gmail.com>
References: <20240405125340.380828-1-npiggin@gmail.com>

These are no longer used.

tlb_flush_all_cpus: removed by previous commit.
tlb_flush_page_all_cpus: removed by previous commit.
tlb_flush_page_by_mmuidx_all_cpus: never used.
tlb_flush_page_bits_by_mmuidx_all_cpus: never used, thus:
tlb_flush_range_by_mmuidx_all_cpus: never used.
tlb_flush_by_mmuidx_all_cpus: never used.
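
The guarantee the surviving _synced variants provide is the one the
docs/devel change below describes: the source vCPU's share of the
flush is queued as "safe work", which only runs once every vCPU has
left its current TB. As a rough sketch of the difference (paraphrasing
accel/tcg/cputlb.c with the tlb_debug() calls omitted, so not a
verbatim quote of the tree):

  /* Removed: runs the local flush immediately and merely queues the
   * others, so it returns before the flush is globally visible. */
  void tlb_flush_by_mmuidx_all_cpus(CPUState *src_cpu, uint16_t idxmap)
  {
      const run_on_cpu_func fn = tlb_flush_by_mmuidx_async_work;

      flush_all_helper(src_cpu, fn, RUN_ON_CPU_HOST_INT(idxmap));
      fn(src_cpu, RUN_ON_CPU_HOST_INT(idxmap));
  }

  /* Kept: the source vCPU's flush becomes "safe work", which only runs
   * once all vCPUs have exited their current TBs. */
  void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *src_cpu, uint16_t idxmap)
  {
      const run_on_cpu_func fn = tlb_flush_by_mmuidx_async_work;

      flush_all_helper(src_cpu, fn, RUN_ON_CPU_HOST_INT(idxmap));
      async_safe_run_on_cpu(src_cpu, fn, RUN_ON_CPU_HOST_INT(idxmap));
  }

The removed form returns as soon as the local flush is done, which is
exactly the window the previous patch had to close.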

Reviewed-by: Philippe Mathieu-Daudé
Signed-off-by: Nicholas Piggin
Reviewed-by: Richard Henderson
---
 docs/devel/multi-thread-tcg.rst |  13 ++--
 include/exec/exec-all.h         |  97 +++++------------------------
 accel/tcg/cputlb.c              | 103 --------------------------------
 3 files changed, 19 insertions(+), 194 deletions(-)

diff --git a/docs/devel/multi-thread-tcg.rst b/docs/devel/multi-thread-tcg.rst
index 1420789fff..d706c27ea7 100644
--- a/docs/devel/multi-thread-tcg.rst
+++ b/docs/devel/multi-thread-tcg.rst
@@ -205,15 +205,10 @@ DESIGN REQUIREMENTS:
 
 (Current solution)
 
-We have updated cputlb.c to defer operations when a cross-vCPU
-operation with async_run_on_cpu() which ensures each vCPU sees a
-coherent state when it next runs its work (in a few instructions
-time).
-
-A new set up operations (tlb_flush_*_all_cpus) take an additional flag
-which when set will force synchronisation by setting the source vCPUs
-work as "safe work" and exiting the cpu run loop. This ensure by the
-time execution restarts all flush operations have completed.
+A new set of tlb flush operations (tlb_flush_*_all_cpus_synced) force
+synchronisation by setting the source vCPUs work as "safe work" and
+exiting the cpu run loop. This ensures that by the time execution
+restarts all flush operations have completed.
 
 TLB flag updates are all done atomically and are also protected by
 the corresponding page lock.
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index 3e53501691..7cf9faa63f 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -66,24 +66,15 @@ void tlb_destroy(CPUState *cpu);
  */
 void tlb_flush_page(CPUState *cpu, vaddr addr);
 /**
- * tlb_flush_page_all_cpus:
+ * tlb_flush_page_all_cpus_synced:
  * @cpu: src CPU of the flush
  * @addr: virtual address of page to be flushed
  *
- * Flush one page from the TLB of the specified CPU, for all
+ * Flush one page from the TLB of all CPUs, for all
  * MMU indexes.
- */
-void tlb_flush_page_all_cpus(CPUState *src, vaddr addr);
-/**
- * tlb_flush_page_all_cpus_synced:
- * @cpu: src CPU of the flush
- * @addr: virtual address of page to be flushed
  *
- * Flush one page from the TLB of the specified CPU, for all MMU
- * indexes like tlb_flush_page_all_cpus except the source vCPUs work
- * is scheduled as safe work meaning all flushes will be complete once
- * the source vCPUs safe work is complete. This will depend on when
- * the guests translation ends the TB.
+ * When this function returns, no CPUs will subsequently perform
+ * translations using the flushed TLBs.
  */
 void tlb_flush_page_all_cpus_synced(CPUState *src, vaddr addr);
 /**
@@ -96,19 +87,14 @@ void tlb_flush_page_all_cpus_synced(CPUState *src, vaddr addr);
  * use one of the other functions for efficiency.
  */
 void tlb_flush(CPUState *cpu);
-/**
- * tlb_flush_all_cpus:
- * @cpu: src CPU of the flush
- */
-void tlb_flush_all_cpus(CPUState *src_cpu);
 /**
  * tlb_flush_all_cpus_synced:
  * @cpu: src CPU of the flush
  *
- * Like tlb_flush_all_cpus except this except the source vCPUs work is
- * scheduled as safe work meaning all flushes will be complete once
- * the source vCPUs safe work is complete. This will depend on when
- * the guests translation ends the TB.
+ * Flush the entire TLB for all CPUs, for all MMU indexes.
+ *
+ * When this function returns, no CPUs will subsequently perform
+ * translations using the flushed TLBs.
  */
 void tlb_flush_all_cpus_synced(CPUState *src_cpu);
 /**
@@ -123,27 +109,16 @@ void tlb_flush_all_cpus_synced(CPUState *src_cpu);
 void tlb_flush_page_by_mmuidx(CPUState *cpu, vaddr addr,
                               uint16_t idxmap);
 /**
- * tlb_flush_page_by_mmuidx_all_cpus:
+ * tlb_flush_page_by_mmuidx_all_cpus_synced:
  * @cpu: Originating CPU of the flush
 * @addr: virtual address of page to be flushed
  * @idxmap: bitmap of MMU indexes to flush
  *
  * Flush one page from the TLB of all CPUs, for the specified
  * MMU indexes.
- */
-void tlb_flush_page_by_mmuidx_all_cpus(CPUState *cpu, vaddr addr,
-                                       uint16_t idxmap);
-/**
- * tlb_flush_page_by_mmuidx_all_cpus_synced:
- * @cpu: Originating CPU of the flush
- * @addr: virtual address of page to be flushed
- * @idxmap: bitmap of MMU indexes to flush
  *
- * Flush one page from the TLB of all CPUs, for the specified MMU
- * indexes like tlb_flush_page_by_mmuidx_all_cpus except the source
- * vCPUs work is scheduled as safe work meaning all flushes will be
- * complete once the source vCPUs safe work is complete. This will
- * depend on when the guests translation ends the TB.
+ * When this function returns, no CPUs will subsequently perform
+ * translations using the flushed TLBs.
  */
 void tlb_flush_page_by_mmuidx_all_cpus_synced(CPUState *cpu, vaddr addr,
                                               uint16_t idxmap);
@@ -158,24 +133,15 @@ void tlb_flush_page_by_mmuidx_all_cpus_synced(CPUState *cpu, vaddr addr,
  */
 void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap);
 /**
- * tlb_flush_by_mmuidx_all_cpus:
+ * tlb_flush_by_mmuidx_all_cpus_synced:
  * @cpu: Originating CPU of the flush
 * @idxmap: bitmap of MMU indexes to flush
  *
- * Flush all entries from all TLBs of all CPUs, for the specified
+ * Flush all entries from the TLB of all CPUs, for the specified
  * MMU indexes.
- */
-void tlb_flush_by_mmuidx_all_cpus(CPUState *cpu, uint16_t idxmap);
-/**
- * tlb_flush_by_mmuidx_all_cpus_synced:
- * @cpu: Originating CPU of the flush
- * @idxmap: bitmap of MMU indexes to flush
  *
- * Flush all entries from all TLBs of all CPUs, for the specified
- * MMU indexes like tlb_flush_by_mmuidx_all_cpus except except the source
- * vCPUs work is scheduled as safe work meaning all flushes will be
- * complete once the source vCPUs safe work is complete. This will
- * depend on when the guests translation ends the TB.
+ * When this function returns, no CPUs will subsequently perform
+ * translations using the flushed TLBs.
  */
 void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *cpu, uint16_t idxmap);
 
@@ -192,8 +158,6 @@ void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, vaddr addr,
                                    uint16_t idxmap, unsigned bits);
 
 /* Similarly, with broadcast and syncing. */
-void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *cpu, vaddr addr,
-                                            uint16_t idxmap, unsigned bits);
 void tlb_flush_page_bits_by_mmuidx_all_cpus_synced
     (CPUState *cpu, vaddr addr, uint16_t idxmap, unsigned bits);
 
@@ -213,9 +177,6 @@ void tlb_flush_range_by_mmuidx(CPUState *cpu, vaddr addr,
                                unsigned bits);
 
 /* Similarly, with broadcast and syncing. */
-void tlb_flush_range_by_mmuidx_all_cpus(CPUState *cpu, vaddr addr,
-                                        vaddr len, uint16_t idxmap,
-                                        unsigned bits);
 void tlb_flush_range_by_mmuidx_all_cpus_synced(CPUState *cpu,
                                                vaddr addr,
                                                vaddr len,
@@ -288,18 +249,12 @@ static inline void tlb_destroy(CPUState *cpu)
 static inline void tlb_flush_page(CPUState *cpu, vaddr addr)
 {
 }
-static inline void tlb_flush_page_all_cpus(CPUState *src, vaddr addr)
-{
-}
 static inline void tlb_flush_page_all_cpus_synced(CPUState *src, vaddr addr)
 {
 }
 static inline void tlb_flush(CPUState *cpu)
 {
 }
-static inline void tlb_flush_all_cpus(CPUState *src_cpu)
-{
-}
 static inline void tlb_flush_all_cpus_synced(CPUState *src_cpu)
 {
 }
@@ -311,20 +266,11 @@ static inline void tlb_flush_page_by_mmuidx(CPUState *cpu,
 static inline void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap)
 {
 }
-static inline void tlb_flush_page_by_mmuidx_all_cpus(CPUState *cpu,
-                                                     vaddr addr,
-                                                     uint16_t idxmap)
-{
-}
 static inline void tlb_flush_page_by_mmuidx_all_cpus_synced(CPUState *cpu,
                                                             vaddr addr,
                                                             uint16_t idxmap)
 {
 }
-static inline void tlb_flush_by_mmuidx_all_cpus(CPUState *cpu, uint16_t idxmap)
-{
-}
-
 static inline void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *cpu,
                                                        uint16_t idxmap)
 {
@@ -335,12 +281,6 @@ static inline void tlb_flush_page_bits_by_mmuidx(CPUState *cpu,
                                                  unsigned bits)
 {
 }
-static inline void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *cpu,
-                                                          vaddr addr,
-                                                          uint16_t idxmap,
-                                                          unsigned bits)
-{
-}
 static inline void
 tlb_flush_page_bits_by_mmuidx_all_cpus_synced(CPUState *cpu, vaddr addr,
                                               uint16_t idxmap, unsigned bits)
 {
@@ -351,13 +291,6 @@ static inline void tlb_flush_range_by_mmuidx(CPUState *cpu, vaddr addr,
                                              unsigned bits)
 {
 }
-static inline void tlb_flush_range_by_mmuidx_all_cpus(CPUState *cpu,
-                                                      vaddr addr,
-                                                      vaddr len,
-                                                      uint16_t idxmap,
-                                                      unsigned bits)
-{
-}
 static inline void tlb_flush_range_by_mmuidx_all_cpus_synced(CPUState *cpu,
                                                              vaddr addr,
                                                              vaddr len,
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 93b1ca810b..8ff3aa5e50 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -379,21 +379,6 @@ void tlb_flush(CPUState *cpu)
     tlb_flush_by_mmuidx(cpu, ALL_MMUIDX_BITS);
 }
 
-void tlb_flush_by_mmuidx_all_cpus(CPUState *src_cpu, uint16_t idxmap)
-{
-    const run_on_cpu_func fn = tlb_flush_by_mmuidx_async_work;
-
-    tlb_debug("mmu_idx: 0x%"PRIx16"\n", idxmap);
-
-    flush_all_helper(src_cpu, fn, RUN_ON_CPU_HOST_INT(idxmap));
-    fn(src_cpu, RUN_ON_CPU_HOST_INT(idxmap));
-}
-
-void tlb_flush_all_cpus(CPUState *src_cpu)
-{
-    tlb_flush_by_mmuidx_all_cpus(src_cpu, ALL_MMUIDX_BITS);
-}
-
 void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *src_cpu, uint16_t idxmap)
 {
     const run_on_cpu_func fn = tlb_flush_by_mmuidx_async_work;
@@ -604,46 +589,6 @@ void tlb_flush_page(CPUState *cpu, vaddr addr)
     tlb_flush_page_by_mmuidx(cpu, addr, ALL_MMUIDX_BITS);
 }
 
-void tlb_flush_page_by_mmuidx_all_cpus(CPUState *src_cpu, vaddr addr,
-                                       uint16_t idxmap)
-{
-    tlb_debug("addr: %016" VADDR_PRIx " mmu_idx:%"PRIx16"\n", addr, idxmap);
-
-    /* This should already be page aligned */
-    addr &= TARGET_PAGE_MASK;
-
-    /*
-     * Allocate memory to hold addr+idxmap only when needed.
-     * See tlb_flush_page_by_mmuidx for details.
-     */
-    if (idxmap < TARGET_PAGE_SIZE) {
-        flush_all_helper(src_cpu, tlb_flush_page_by_mmuidx_async_1,
-                         RUN_ON_CPU_TARGET_PTR(addr | idxmap));
-    } else {
-        CPUState *dst_cpu;
-
-        /* Allocate a separate data block for each destination cpu. */
-        CPU_FOREACH(dst_cpu) {
-            if (dst_cpu != src_cpu) {
-                TLBFlushPageByMMUIdxData *d
-                    = g_new(TLBFlushPageByMMUIdxData, 1);
-
-                d->addr = addr;
-                d->idxmap = idxmap;
-                async_run_on_cpu(dst_cpu, tlb_flush_page_by_mmuidx_async_2,
-                                 RUN_ON_CPU_HOST_PTR(d));
-            }
-        }
-    }
-
-    tlb_flush_page_by_mmuidx_async_0(src_cpu, addr, idxmap);
-}
-
-void tlb_flush_page_all_cpus(CPUState *src, vaddr addr)
-{
-    tlb_flush_page_by_mmuidx_all_cpus(src, addr, ALL_MMUIDX_BITS);
-}
-
 void tlb_flush_page_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
                                               vaddr addr,
                                               uint16_t idxmap)
@@ -835,54 +780,6 @@ void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, vaddr addr,
     tlb_flush_range_by_mmuidx(cpu, addr, TARGET_PAGE_SIZE, idxmap, bits);
 }
 
-void tlb_flush_range_by_mmuidx_all_cpus(CPUState *src_cpu,
-                                        vaddr addr, vaddr len,
-                                        uint16_t idxmap, unsigned bits)
-{
-    TLBFlushRangeData d;
-    CPUState *dst_cpu;
-
-    /*
-     * If all bits are significant, and len is small,
-     * this devolves to tlb_flush_page.
-     */
-    if (bits >= TARGET_LONG_BITS && len <= TARGET_PAGE_SIZE) {
-        tlb_flush_page_by_mmuidx_all_cpus(src_cpu, addr, idxmap);
-        return;
-    }
-    /* If no page bits are significant, this devolves to tlb_flush. */
-    if (bits < TARGET_PAGE_BITS) {
-        tlb_flush_by_mmuidx_all_cpus(src_cpu, idxmap);
-        return;
-    }
-
-    /* This should already be page aligned */
-    d.addr = addr & TARGET_PAGE_MASK;
-    d.len = len;
-    d.idxmap = idxmap;
-    d.bits = bits;
-
-    /* Allocate a separate data block for each destination cpu. */
-    CPU_FOREACH(dst_cpu) {
-        if (dst_cpu != src_cpu) {
-            TLBFlushRangeData *p = g_memdup(&d, sizeof(d));
-            async_run_on_cpu(dst_cpu,
-                             tlb_flush_range_by_mmuidx_async_1,
-                             RUN_ON_CPU_HOST_PTR(p));
-        }
-    }
-
-    tlb_flush_range_by_mmuidx_async_0(src_cpu, d);
-}
-
-void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *src_cpu,
-                                            vaddr addr, uint16_t idxmap,
-                                            unsigned bits)
-{
-    tlb_flush_range_by_mmuidx_all_cpus(src_cpu, addr, TARGET_PAGE_SIZE,
-                                       idxmap, bits);
-}
-
 void tlb_flush_range_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
                                                vaddr addr,
                                                vaddr len,
-- 
2.43.0

From nobody Fri May 17 06:30:36 2024
From: Nicholas Piggin
To: qemu-ppc@nongnu.org
Cc: Nicholas Piggin, Philippe Mathieu-Daudé, Richard Henderson,
 Paolo Bonzini, Daniel Henrique Barboza, qemu-devel@nongnu.org,
 Peter Maydell, qemu-arm@nongnu.org, qemu-riscv@nongnu.org,
 qemu-s390x@nongnu.org
Subject: [PATCH v2 3/3] tcg/cputlb: remove other-cpu capability from TLB flushing
Date: Fri, 5 Apr 2024 22:53:38 +1000
Message-ID: <20240405125340.380828-4-npiggin@gmail.com>
In-Reply-To: <20240405125340.380828-1-npiggin@gmail.com>
References: <20240405125340.380828-1-npiggin@gmail.com>

Some TLB flush operations can flush other CPUs. The problem with this
is that they used the non-synced variants of the flushes (i.e., ones
that return before the destination has completed its flush). Since all
TLB flush users need the _synced variants, and the last user (ppc) of
a non-synced flush was buggy, this is a footgun waiting to go off.
There do not seem to be any callers left that flush other CPUs, so
remove the capability.

Signed-off-by: Nicholas Piggin
Reviewed-by: Richard Henderson
---
 accel/tcg/cputlb.c | 42 +++++++++---------------------------------
 1 file changed, 9 insertions(+), 33 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 8ff3aa5e50..1fe6def280 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -366,12 +366,9 @@ void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap)
 {
     tlb_debug("mmu_idx: 0x%" PRIx16 "\n", idxmap);
 
-    if (cpu->created && !qemu_cpu_is_self(cpu)) {
-        async_run_on_cpu(cpu, tlb_flush_by_mmuidx_async_work,
-                         RUN_ON_CPU_HOST_INT(idxmap));
-    } else {
-        tlb_flush_by_mmuidx_async_work(cpu, RUN_ON_CPU_HOST_INT(idxmap));
-    }
+    assert_cpu_is_self(cpu);
+
+    tlb_flush_by_mmuidx_async_work(cpu, RUN_ON_CPU_HOST_INT(idxmap));
 }
 
 void tlb_flush(CPUState *cpu)
@@ -560,28 +557,12 @@ void tlb_flush_page_by_mmuidx(CPUState *cpu, vaddr addr, uint16_t idxmap)
 {
     tlb_debug("addr: %016" VADDR_PRIx " mmu_idx:%" PRIx16 "\n", addr, idxmap);
 
+    assert_cpu_is_self(cpu);
+
     /* This should already be page aligned */
     addr &= TARGET_PAGE_MASK;
 
-    if (qemu_cpu_is_self(cpu)) {
-        tlb_flush_page_by_mmuidx_async_0(cpu, addr, idxmap);
-    } else if (idxmap < TARGET_PAGE_SIZE) {
-        /*
-         * Most targets have only a few mmu_idx. In the case where
-         * we can stuff idxmap into the low TARGET_PAGE_BITS, avoid
-         * allocating memory for this operation.
-         */
-        async_run_on_cpu(cpu, tlb_flush_page_by_mmuidx_async_1,
-                         RUN_ON_CPU_TARGET_PTR(addr | idxmap));
-    } else {
-        TLBFlushPageByMMUIdxData *d = g_new(TLBFlushPageByMMUIdxData, 1);
-
-        /* Otherwise allocate a structure, freed by the worker. */
-        d->addr = addr;
-        d->idxmap = idxmap;
-        async_run_on_cpu(cpu, tlb_flush_page_by_mmuidx_async_2,
-                         RUN_ON_CPU_HOST_PTR(d));
-    }
+    tlb_flush_page_by_mmuidx_async_0(cpu, addr, idxmap);
 }
 
 void tlb_flush_page(CPUState *cpu, vaddr addr)
@@ -744,6 +725,8 @@ void tlb_flush_range_by_mmuidx(CPUState *cpu, vaddr addr,
 {
     TLBFlushRangeData d;
 
+    assert_cpu_is_self(cpu);
+
     /*
      * If all bits are significant, and len is small,
      * this devolves to tlb_flush_page.
@@ -764,14 +747,7 @@ void tlb_flush_range_by_mmuidx(CPUState *cpu, vaddr addr,
     d.idxmap = idxmap;
     d.bits = bits;
 
-    if (qemu_cpu_is_self(cpu)) {
-        tlb_flush_range_by_mmuidx_async_0(cpu, d);
-    } else {
-        /* Otherwise allocate a structure, freed by the worker. */
-        TLBFlushRangeData *p = g_memdup(&d, sizeof(d));
-        async_run_on_cpu(cpu, tlb_flush_range_by_mmuidx_async_1,
-                         RUN_ON_CPU_HOST_PTR(p));
-    }
+    tlb_flush_range_by_mmuidx_async_0(cpu, d);
 }
 
 void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, vaddr addr,
-- 
2.43.0
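
With the series applied, the rule for callers is simple: a vCPU may
flush its own TLB directly, and any cross-vCPU invalidation must go
through a _synced broadcast. The helper below is a hypothetical
illustration of that rule (it is not part of these patches), built on
the public API in include/exec/exec-all.h:

  static void invalidate_guest_page(CPUState *cs, vaddr addr, bool broadcast)
  {
      if (!broadcast) {
          /*
           * Local flush: must be issued from this vCPU's own thread
           * (cputlb now asserts this) and takes effect immediately.
           */
          tlb_flush_page(cs, addr);
      } else {
          /*
           * Cross-vCPU invalidation: only the _synced variant remains.
           * The flush is queued as safe work and is complete once all
           * vCPUs, including this one, have left their current TBs, so
           * a TCG target should also end the TB it is translating.
           */
          tlb_flush_page_all_cpus_synced(cs, addr);
      }
  }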