From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, cota@braap.org
Subject: [PATCH 11/27] tcg: Use tcg_temp_ebb_new_* in tcg/
Date: Mon, 30 Jan 2023 10:59:19 -1000
Message-Id: <20230130205935.1157347-13-richard.henderson@linaro.org>
In-Reply-To: <20230130205935.1157347-1-richard.henderson@linaro.org>
References: <20230130205935.1157347-1-richard.henderson@linaro.org>

All of these have obvious and quite local scope.
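In each case the temporary is created, used and freed with no branch or
label in between, i.e. its lifetime is confined to a single extended basic
block, which is the property the tcg_temp_ebb_new_* allocators assume.
A sketch of the pattern, taken from one of the hunks below (shown
post-conversion):

    static void tcg_gen_shl_mod_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
    {
        TCGv_i64 t = tcg_temp_ebb_new_i64();  /* was tcg_temp_new_i64() */

        tcg_gen_andi_i64(t, b, 63);           /* t is written ...        */
        tcg_gen_shl_i64(d, a, t);             /* ... read ...            */
        tcg_temp_free_i64(t);                 /* ... and freed, same EBB */
    }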
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé
---
 tcg/tcg-op-gvec.c | 270 +++++++++++++++++++++++-----------------------
 tcg/tcg-op.c      | 258 ++++++++++++++++++++++----------------------
 tcg/tcg.c         |   2 +-
 3 files changed, 265 insertions(+), 265 deletions(-)

diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c
index 079a761b04..d895011d6b 100644
--- a/tcg/tcg-op-gvec.c
+++ b/tcg/tcg-op-gvec.c
@@ -117,8 +117,8 @@ void tcg_gen_gvec_2_ool(uint32_t dofs, uint32_t aofs,
     TCGv_ptr a0, a1;
     TCGv_i32 desc = tcg_constant_i32(simd_desc(oprsz, maxsz, data));
 
-    a0 = tcg_temp_new_ptr();
-    a1 = tcg_temp_new_ptr();
+    a0 = tcg_temp_ebb_new_ptr();
+    a1 = tcg_temp_ebb_new_ptr();
 
     tcg_gen_addi_ptr(a0, cpu_env, dofs);
     tcg_gen_addi_ptr(a1, cpu_env, aofs);
@@ -138,8 +138,8 @@ void tcg_gen_gvec_2i_ool(uint32_t dofs, uint32_t aofs, TCGv_i64 c,
     TCGv_ptr a0, a1;
     TCGv_i32 desc = tcg_constant_i32(simd_desc(oprsz, maxsz, data));
 
-    a0 = tcg_temp_new_ptr();
-    a1 = tcg_temp_new_ptr();
+    a0 = tcg_temp_ebb_new_ptr();
+    a1 = tcg_temp_ebb_new_ptr();
 
     tcg_gen_addi_ptr(a0, cpu_env, dofs);
     tcg_gen_addi_ptr(a1, cpu_env, aofs);
@@ -158,9 +158,9 @@ void tcg_gen_gvec_3_ool(uint32_t dofs, uint32_t aofs, uint32_t bofs,
     TCGv_ptr a0, a1, a2;
     TCGv_i32 desc = tcg_constant_i32(simd_desc(oprsz, maxsz, data));
 
-    a0 = tcg_temp_new_ptr();
-    a1 = tcg_temp_new_ptr();
-    a2 = tcg_temp_new_ptr();
+    a0 = tcg_temp_ebb_new_ptr();
+    a1 = tcg_temp_ebb_new_ptr();
+    a2 = tcg_temp_ebb_new_ptr();
 
     tcg_gen_addi_ptr(a0, cpu_env, dofs);
     tcg_gen_addi_ptr(a1, cpu_env, aofs);
@@ -181,10 +181,10 @@ void tcg_gen_gvec_4_ool(uint32_t dofs, uint32_t aofs, uint32_t bofs,
     TCGv_ptr a0, a1, a2, a3;
     TCGv_i32 desc = tcg_constant_i32(simd_desc(oprsz, maxsz, data));
 
-    a0 = tcg_temp_new_ptr();
-    a1 = tcg_temp_new_ptr();
-    a2 = tcg_temp_new_ptr();
-    a3 = tcg_temp_new_ptr();
+    a0 = tcg_temp_ebb_new_ptr();
+    a1 = tcg_temp_ebb_new_ptr();
+    a2 = tcg_temp_ebb_new_ptr();
+    a3 = tcg_temp_ebb_new_ptr();
 
     tcg_gen_addi_ptr(a0, cpu_env, dofs);
     tcg_gen_addi_ptr(a1, cpu_env, aofs);
@@ -207,11 +207,11 @@ void tcg_gen_gvec_5_ool(uint32_t dofs, uint32_t aofs, uint32_t bofs,
     TCGv_ptr a0, a1, a2, a3, a4;
     TCGv_i32 desc = tcg_constant_i32(simd_desc(oprsz, maxsz, data));
 
-    a0 = tcg_temp_new_ptr();
-    a1 = tcg_temp_new_ptr();
-    a2 = tcg_temp_new_ptr();
-    a3 = tcg_temp_new_ptr();
-    a4 = tcg_temp_new_ptr();
+    a0 = tcg_temp_ebb_new_ptr();
+    a1 = tcg_temp_ebb_new_ptr();
+    a2 = tcg_temp_ebb_new_ptr();
+    a3 = tcg_temp_ebb_new_ptr();
+    a4 = tcg_temp_ebb_new_ptr();
 
     tcg_gen_addi_ptr(a0, cpu_env, dofs);
     tcg_gen_addi_ptr(a1, cpu_env, aofs);
@@ -237,8 +237,8 @@ void tcg_gen_gvec_2_ptr(uint32_t dofs, uint32_t aofs,
     TCGv_ptr a0, a1;
     TCGv_i32 desc = tcg_constant_i32(simd_desc(oprsz, maxsz, data));
 
-    a0 = tcg_temp_new_ptr();
-    a1 = tcg_temp_new_ptr();
+    a0 = tcg_temp_ebb_new_ptr();
+    a1 = tcg_temp_ebb_new_ptr();
 
     tcg_gen_addi_ptr(a0, cpu_env, dofs);
     tcg_gen_addi_ptr(a1, cpu_env, aofs);
@@ -258,9 +258,9 @@ void tcg_gen_gvec_3_ptr(uint32_t dofs, uint32_t aofs, uint32_t bofs,
     TCGv_ptr a0, a1, a2;
     TCGv_i32 desc = tcg_constant_i32(simd_desc(oprsz, maxsz, data));
 
-    a0 = tcg_temp_new_ptr();
-    a1 = tcg_temp_new_ptr();
-    a2 = tcg_temp_new_ptr();
+    a0 = tcg_temp_ebb_new_ptr();
+    a1 = tcg_temp_ebb_new_ptr();
+    a2 = tcg_temp_ebb_new_ptr();
 
     tcg_gen_addi_ptr(a0, cpu_env, dofs);
     tcg_gen_addi_ptr(a1, cpu_env, aofs);
@@ -283,10 +283,10 @@ void tcg_gen_gvec_4_ptr(uint32_t dofs, uint32_t aofs, uint32_t bofs,
     TCGv_ptr a0, a1, a2, a3;
     TCGv_i32 desc = tcg_constant_i32(simd_desc(oprsz, maxsz, data));
 
-    a0 = tcg_temp_new_ptr();
-    a1 = tcg_temp_new_ptr();
-    a2 = tcg_temp_new_ptr();
-    a3 = tcg_temp_new_ptr();
+    a0 = tcg_temp_ebb_new_ptr();
+    a1 = tcg_temp_ebb_new_ptr();
+    a2 = tcg_temp_ebb_new_ptr();
+    a3 = tcg_temp_ebb_new_ptr();
 
     tcg_gen_addi_ptr(a0, cpu_env, dofs);
     tcg_gen_addi_ptr(a1, cpu_env, aofs);
@@ -311,11 +311,11 @@ void tcg_gen_gvec_5_ptr(uint32_t dofs, uint32_t aofs, uint32_t bofs,
     TCGv_ptr a0, a1, a2, a3, a4;
     TCGv_i32 desc = tcg_constant_i32(simd_desc(oprsz, maxsz, data));
 
-    a0 = tcg_temp_new_ptr();
-    a1 = tcg_temp_new_ptr();
-    a2 = tcg_temp_new_ptr();
-    a3 = tcg_temp_new_ptr();
-    a4 = tcg_temp_new_ptr();
+    a0 = tcg_temp_ebb_new_ptr();
+    a1 = tcg_temp_ebb_new_ptr();
+    a2 = tcg_temp_ebb_new_ptr();
+    a3 = tcg_temp_ebb_new_ptr();
+    a4 = tcg_temp_ebb_new_ptr();
 
     tcg_gen_addi_ptr(a0, cpu_env, dofs);
     tcg_gen_addi_ptr(a1, cpu_env, aofs);
@@ -576,16 +576,16 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
            be simple enough.  */
         if (TCG_TARGET_REG_BITS == 64
             && (vece != MO_32 || !check_size_impl(oprsz, 4))) {
-            t_64 = tcg_temp_new_i64();
+            t_64 = tcg_temp_ebb_new_i64();
             tcg_gen_extu_i32_i64(t_64, in_32);
             tcg_gen_dup_i64(vece, t_64, t_64);
         } else {
-            t_32 = tcg_temp_new_i32();
+            t_32 = tcg_temp_ebb_new_i32();
             tcg_gen_dup_i32(vece, t_32, in_32);
         }
     } else if (in_64) {
         /* We are given a 64-bit variable input. */
-        t_64 = tcg_temp_new_i64();
+        t_64 = tcg_temp_ebb_new_i64();
         tcg_gen_dup_i64(vece, t_64, in_64);
     } else {
         /* We are given a constant input. */
@@ -620,7 +620,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
     }
 
     /* Otherwise implement out of line. */
-    t_ptr = tcg_temp_new_ptr();
+    t_ptr = tcg_temp_ebb_new_ptr();
     tcg_gen_addi_ptr(t_ptr, cpu_env, dofs);
 
     /*
@@ -636,7 +636,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
         if (in_32) {
             t_val = in_32;
         } else if (in_64) {
-            t_val = tcg_temp_new_i32();
+            t_val = tcg_temp_ebb_new_i32();
             tcg_gen_extrl_i64_i32(t_val, in_64);
         } else {
             t_val = tcg_constant_i32(in_c);
@@ -671,7 +671,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
     if (in_32) {
         fns[vece](t_ptr, t_desc, in_32);
     } else if (in_64) {
-        t_32 = tcg_temp_new_i32();
+        t_32 = tcg_temp_ebb_new_i32();
         tcg_gen_extrl_i64_i32(t_32, in_64);
         fns[vece](t_ptr, t_desc, t_32);
         tcg_temp_free_i32(t_32);
@@ -705,8 +705,8 @@ static void expand_clr(uint32_t dofs, uint32_t maxsz)
 static void expand_2_i32(uint32_t dofs, uint32_t aofs, uint32_t oprsz,
                          bool load_dest, void (*fni)(TCGv_i32, TCGv_i32))
 {
-    TCGv_i32 t0 = tcg_temp_new_i32();
-    TCGv_i32 t1 = tcg_temp_new_i32();
+    TCGv_i32 t0 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t1 = tcg_temp_ebb_new_i32();
     uint32_t i;
 
     for (i = 0; i < oprsz; i += 4) {
@@ -725,8 +725,8 @@ static void expand_2i_i32(uint32_t dofs, uint32_t aofs, uint32_t oprsz,
                           int32_t c, bool load_dest,
                           void (*fni)(TCGv_i32, TCGv_i32, int32_t))
 {
-    TCGv_i32 t0 = tcg_temp_new_i32();
-    TCGv_i32 t1 = tcg_temp_new_i32();
+    TCGv_i32 t0 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t1 = tcg_temp_ebb_new_i32();
     uint32_t i;
 
     for (i = 0; i < oprsz; i += 4) {
@@ -745,8 +745,8 @@ static void expand_2s_i32(uint32_t dofs, uint32_t aofs, uint32_t oprsz,
                           TCGv_i32 c, bool scalar_first,
                           void (*fni)(TCGv_i32, TCGv_i32, TCGv_i32))
 {
-    TCGv_i32 t0 = tcg_temp_new_i32();
-    TCGv_i32 t1 = tcg_temp_new_i32();
+    TCGv_i32 t0 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t1 = tcg_temp_ebb_new_i32();
     uint32_t i;
 
     for (i = 0; i < oprsz; i += 4) {
@@ -767,9 +767,9 @@ static void expand_3_i32(uint32_t dofs, uint32_t aofs,
                          uint32_t bofs, uint32_t oprsz, bool load_dest,
                          void (*fni)(TCGv_i32, TCGv_i32, TCGv_i32))
 {
-    TCGv_i32 t0 = tcg_temp_new_i32();
-    TCGv_i32 t1 = tcg_temp_new_i32();
-    TCGv_i32 t2 = tcg_temp_new_i32();
+    TCGv_i32 t0 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t1 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t2 = tcg_temp_ebb_new_i32();
     uint32_t i;
 
     for (i = 0; i < oprsz; i += 4) {
@@ -790,9 +790,9 @@ static void expand_3i_i32(uint32_t dofs, uint32_t aofs, uint32_t bofs,
                           uint32_t oprsz, int32_t c, bool load_dest,
                           void (*fni)(TCGv_i32, TCGv_i32, TCGv_i32, int32_t))
 {
-    TCGv_i32 t0 = tcg_temp_new_i32();
-    TCGv_i32 t1 = tcg_temp_new_i32();
-    TCGv_i32 t2 = tcg_temp_new_i32();
+    TCGv_i32 t0 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t1 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t2 = tcg_temp_ebb_new_i32();
     uint32_t i;
 
     for (i = 0; i < oprsz; i += 4) {
@@ -814,10 +814,10 @@ static void expand_4_i32(uint32_t dofs, uint32_t aofs, uint32_t bofs,
                          uint32_t cofs, uint32_t oprsz, bool write_aofs,
                          void (*fni)(TCGv_i32, TCGv_i32, TCGv_i32, TCGv_i32))
 {
-    TCGv_i32 t0 = tcg_temp_new_i32();
-    TCGv_i32 t1 = tcg_temp_new_i32();
-    TCGv_i32 t2 = tcg_temp_new_i32();
-    TCGv_i32 t3 = tcg_temp_new_i32();
+    TCGv_i32 t0 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t1 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t2 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t3 = tcg_temp_ebb_new_i32();
     uint32_t i;
 
     for (i = 0; i < oprsz; i += 4) {
@@ -841,10 +841,10 @@ static void expand_4i_i32(uint32_t dofs, uint32_t aofs, uint32_t bofs,
                           void (*fni)(TCGv_i32, TCGv_i32, TCGv_i32, TCGv_i32,
                                       int32_t))
 {
-    TCGv_i32 t0 = tcg_temp_new_i32();
-    TCGv_i32 t1 = tcg_temp_new_i32();
-    TCGv_i32 t2 = tcg_temp_new_i32();
-    TCGv_i32 t3 = tcg_temp_new_i32();
+    TCGv_i32 t0 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t1 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t2 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t3 = tcg_temp_ebb_new_i32();
     uint32_t i;
 
     for (i = 0; i < oprsz; i += 4) {
@@ -864,8 +864,8 @@ static void expand_4i_i32(uint32_t dofs, uint32_t aofs, uint32_t bofs,
 static void expand_2_i64(uint32_t dofs, uint32_t aofs, uint32_t oprsz,
                          bool load_dest, void (*fni)(TCGv_i64, TCGv_i64))
 {
-    TCGv_i64 t0 = tcg_temp_new_i64();
-    TCGv_i64 t1 = tcg_temp_new_i64();
+    TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t1 = tcg_temp_ebb_new_i64();
     uint32_t i;
 
     for (i = 0; i < oprsz; i += 8) {
@@ -884,8 +884,8 @@ static void expand_2i_i64(uint32_t dofs, uint32_t aofs, uint32_t oprsz,
                           int64_t c, bool load_dest,
                           void (*fni)(TCGv_i64, TCGv_i64, int64_t))
 {
-    TCGv_i64 t0 = tcg_temp_new_i64();
-    TCGv_i64 t1 = tcg_temp_new_i64();
+    TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t1 = tcg_temp_ebb_new_i64();
     uint32_t i;
 
     for (i = 0; i < oprsz; i += 8) {
@@ -904,8 +904,8 @@ static void expand_2s_i64(uint32_t dofs, uint32_t aofs, uint32_t oprsz,
                           TCGv_i64 c, bool scalar_first,
                           void (*fni)(TCGv_i64, TCGv_i64, TCGv_i64))
 {
-    TCGv_i64 t0 = tcg_temp_new_i64();
-    TCGv_i64 t1 = tcg_temp_new_i64();
+    TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t1 = tcg_temp_ebb_new_i64();
     uint32_t i;
 
     for (i = 0; i < oprsz; i += 8) {
@@ -926,9 +926,9 @@ static void expand_3_i64(uint32_t dofs, uint32_t aofs,
                          uint32_t bofs, uint32_t oprsz, bool load_dest,
                          void (*fni)(TCGv_i64, TCGv_i64, TCGv_i64))
 {
-    TCGv_i64 t0 = tcg_temp_new_i64();
-    TCGv_i64 t1 = tcg_temp_new_i64();
-    TCGv_i64 t2 = tcg_temp_new_i64();
+    TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t1 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t2 = tcg_temp_ebb_new_i64();
     uint32_t i;
 
     for (i = 0; i < oprsz; i += 8) {
@@ -949,9 +949,9 @@ static void expand_3i_i64(uint32_t dofs, uint32_t aofs, uint32_t bofs,
                           uint32_t oprsz, int64_t c, bool load_dest,
                           void (*fni)(TCGv_i64, TCGv_i64, TCGv_i64, int64_t))
 {
-    TCGv_i64 t0 = tcg_temp_new_i64();
-    TCGv_i64 t1 = tcg_temp_new_i64();
-    TCGv_i64 t2 = tcg_temp_new_i64();
+    TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t1 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t2 = tcg_temp_ebb_new_i64();
     uint32_t i;
 
     for (i = 0; i < oprsz; i += 8) {
@@ -973,10 +973,10 @@ static void expand_4_i64(uint32_t dofs, uint32_t aofs, uint32_t bofs,
                          uint32_t cofs, uint32_t oprsz, bool write_aofs,
                          void (*fni)(TCGv_i64, TCGv_i64, TCGv_i64, TCGv_i64))
 {
-    TCGv_i64 t0 = tcg_temp_new_i64();
-    TCGv_i64 t1 = tcg_temp_new_i64();
-    TCGv_i64 t2 = tcg_temp_new_i64();
-    TCGv_i64 t3 = tcg_temp_new_i64();
+    TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t1 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t2 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t3 = tcg_temp_ebb_new_i64();
     uint32_t i;
 
     for (i = 0; i < oprsz; i += 8) {
@@ -1000,10 +1000,10 @@ static void expand_4i_i64(uint32_t dofs, uint32_t aofs, uint32_t bofs,
                           void (*fni)(TCGv_i64, TCGv_i64, TCGv_i64, TCGv_i64,
                                       int64_t))
 {
-    TCGv_i64 t0 = tcg_temp_new_i64();
-    TCGv_i64 t1 = tcg_temp_new_i64();
-    TCGv_i64 t2 = tcg_temp_new_i64();
-    TCGv_i64 t3 = tcg_temp_new_i64();
+    TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t1 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t2 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t3 = tcg_temp_ebb_new_i64();
     uint32_t i;
 
     for (i = 0; i < oprsz; i += 8) {
@@ -1386,13 +1386,13 @@ void tcg_gen_gvec_2s(uint32_t dofs, uint32_t aofs, uint32_t oprsz,
         tcg_temp_free_vec(t_vec);
         tcg_swap_vecop_list(hold_list);
     } else if (g->fni8 && check_size_impl(oprsz, 8)) {
-        TCGv_i64 t64 = tcg_temp_new_i64();
+        TCGv_i64 t64 = tcg_temp_ebb_new_i64();
 
         tcg_gen_dup_i64(g->vece, t64, c);
         expand_2s_i64(dofs, aofs, oprsz, t64, g->scalar_first, g->fni8);
         tcg_temp_free_i64(t64);
     } else if (g->fni4 && check_size_impl(oprsz, 4)) {
-        TCGv_i32 t32 = tcg_temp_new_i32();
+        TCGv_i32 t32 = tcg_temp_ebb_new_i32();
 
         tcg_gen_extrl_i64_i32(t32, c);
         tcg_gen_dup_i32(g->vece, t32, t32);
@@ -1735,7 +1735,7 @@ void tcg_gen_gvec_dup_mem(unsigned vece, uint32_t dofs, uint32_t aofs,
         do_dup_store(type, dofs, oprsz, maxsz, t_vec);
         tcg_temp_free_vec(t_vec);
     } else if (vece <= MO_32) {
-        TCGv_i32 in = tcg_temp_new_i32();
+        TCGv_i32 in = tcg_temp_ebb_new_i32();
         switch (vece) {
         case MO_8:
             tcg_gen_ld8u_i32(in, cpu_env, aofs);
@@ -1750,7 +1750,7 @@ void tcg_gen_gvec_dup_mem(unsigned vece, uint32_t dofs, uint32_t aofs,
         do_dup(vece, dofs, oprsz, maxsz, in, NULL, 0);
         tcg_temp_free_i32(in);
     } else {
-        TCGv_i64 in = tcg_temp_new_i64();
+        TCGv_i64 in = tcg_temp_ebb_new_i64();
         tcg_gen_ld_i64(in, cpu_env, aofs);
         do_dup(vece, dofs, oprsz, maxsz, NULL, in, 0);
         tcg_temp_free_i64(in);
@@ -1769,8 +1769,8 @@ void tcg_gen_gvec_dup_mem(unsigned vece, uint32_t dofs, uint32_t aofs,
             }
             tcg_temp_free_vec(in);
         } else {
-            TCGv_i64 in0 = tcg_temp_new_i64();
-            TCGv_i64 in1 = tcg_temp_new_i64();
+            TCGv_i64 in0 = tcg_temp_ebb_new_i64();
+            TCGv_i64 in1 = tcg_temp_ebb_new_i64();
 
             tcg_gen_ld_i64(in0, cpu_env, aofs);
             tcg_gen_ld_i64(in1, cpu_env, aofs + 8);
@@ -1815,7 +1815,7 @@ void tcg_gen_gvec_dup_mem(unsigned vece, uint32_t dofs, uint32_t aofs,
             int j;
 
             for (j = 0; j < 4; ++j) {
-                in[j] = tcg_temp_new_i64();
+                in[j] = tcg_temp_ebb_new_i64();
                 tcg_gen_ld_i64(in[j], cpu_env, aofs + j * 8);
             }
             for (i = (aofs == dofs) * 32; i < oprsz; i += 32) {
@@ -1860,9 +1860,9 @@ void tcg_gen_gvec_not(unsigned vece, uint32_t dofs, uint32_t aofs,
    the 64-bit operation. */
 static void gen_addv_mask(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b, TCGv_i64 m)
 {
-    TCGv_i64 t1 = tcg_temp_new_i64();
-    TCGv_i64 t2 = tcg_temp_new_i64();
-    TCGv_i64 t3 = tcg_temp_new_i64();
+    TCGv_i64 t1 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t2 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t3 = tcg_temp_ebb_new_i64();
 
     tcg_gen_andc_i64(t1, a, m);
     tcg_gen_andc_i64(t2, b, m);
@@ -1885,9 +1885,9 @@ void tcg_gen_vec_add8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 void tcg_gen_vec_add8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
 {
     TCGv_i32 m = tcg_constant_i32((int32_t)dup_const(MO_8, 0x80));
-    TCGv_i32 t1 = tcg_temp_new_i32();
-    TCGv_i32 t2 = tcg_temp_new_i32();
-    TCGv_i32 t3 = tcg_temp_new_i32();
+    TCGv_i32 t1 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t2 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t3 = tcg_temp_ebb_new_i32();
 
     tcg_gen_andc_i32(t1, a, m);
     tcg_gen_andc_i32(t2, b, m);
@@ -1909,8 +1909,8 @@ void tcg_gen_vec_add16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 
 void tcg_gen_vec_add16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
 {
-    TCGv_i32 t1 = tcg_temp_new_i32();
-    TCGv_i32 t2 = tcg_temp_new_i32();
+    TCGv_i32 t1 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t2 = tcg_temp_ebb_new_i32();
 
     tcg_gen_andi_i32(t1, a, ~0xffff);
     tcg_gen_add_i32(t2, a, b);
@@ -1923,8 +1923,8 @@ void tcg_gen_vec_add16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
 
 void tcg_gen_vec_add32_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 t1 = tcg_temp_new_i64();
-    TCGv_i64 t2 = tcg_temp_new_i64();
+    TCGv_i64 t1 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t2 = tcg_temp_ebb_new_i64();
 
     tcg_gen_andi_i64(t1, a, ~0xffffffffull);
     tcg_gen_add_i64(t2, a, b);
@@ -2043,9 +2043,9 @@ void tcg_gen_gvec_subs(unsigned vece, uint32_t dofs, uint32_t aofs,
    Compare gen_addv_mask above. */
 static void gen_subv_mask(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b, TCGv_i64 m)
 {
-    TCGv_i64 t1 = tcg_temp_new_i64();
-    TCGv_i64 t2 = tcg_temp_new_i64();
-    TCGv_i64 t3 = tcg_temp_new_i64();
+    TCGv_i64 t1 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t2 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t3 = tcg_temp_ebb_new_i64();
 
     tcg_gen_or_i64(t1, a, m);
     tcg_gen_andc_i64(t2, b, m);
@@ -2068,9 +2068,9 @@ void tcg_gen_vec_sub8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 void tcg_gen_vec_sub8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
 {
     TCGv_i32 m = tcg_constant_i32((int32_t)dup_const(MO_8, 0x80));
-    TCGv_i32 t1 = tcg_temp_new_i32();
-    TCGv_i32 t2 = tcg_temp_new_i32();
-    TCGv_i32 t3 = tcg_temp_new_i32();
+    TCGv_i32 t1 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t2 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t3 = tcg_temp_ebb_new_i32();
 
     tcg_gen_or_i32(t1, a, m);
     tcg_gen_andc_i32(t2, b, m);
@@ -2092,8 +2092,8 @@ void tcg_gen_vec_sub16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 
 void tcg_gen_vec_sub16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
 {
-    TCGv_i32 t1 = tcg_temp_new_i32();
-    TCGv_i32 t2 = tcg_temp_new_i32();
+    TCGv_i32 t1 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t2 = tcg_temp_ebb_new_i32();
 
     tcg_gen_andi_i32(t1, b, ~0xffff);
     tcg_gen_sub_i32(t2, a, b);
@@ -2106,8 +2106,8 @@ void tcg_gen_vec_sub16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
 
 void tcg_gen_vec_sub32_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 t1 = tcg_temp_new_i64();
-    TCGv_i64 t2 = tcg_temp_new_i64();
+    TCGv_i64 t1 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t2 = tcg_temp_ebb_new_i64();
 
     tcg_gen_andi_i64(t1, b, ~0xffffffffull);
     tcg_gen_sub_i64(t2, a, b);
@@ -2468,8 +2468,8 @@ void tcg_gen_gvec_umax(unsigned vece, uint32_t dofs, uint32_t aofs,
    Compare gen_subv_mask above. */
 static void gen_negv_mask(TCGv_i64 d, TCGv_i64 b, TCGv_i64 m)
 {
-    TCGv_i64 t2 = tcg_temp_new_i64();
-    TCGv_i64 t3 = tcg_temp_new_i64();
+    TCGv_i64 t2 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t3 = tcg_temp_ebb_new_i64();
 
     tcg_gen_andc_i64(t3, m, b);
     tcg_gen_andc_i64(t2, b, m);
@@ -2494,8 +2494,8 @@ void tcg_gen_vec_neg16_i64(TCGv_i64 d, TCGv_i64 b)
 
 void tcg_gen_vec_neg32_i64(TCGv_i64 d, TCGv_i64 b)
 {
-    TCGv_i64 t1 = tcg_temp_new_i64();
-    TCGv_i64 t2 = tcg_temp_new_i64();
+    TCGv_i64 t1 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t2 = tcg_temp_ebb_new_i64();
 
     tcg_gen_andi_i64(t1, b, ~0xffffffffull);
     tcg_gen_neg_i64(t2, b);
@@ -2540,7 +2540,7 @@ void tcg_gen_gvec_neg(unsigned vece, uint32_t dofs, uint32_t aofs,
 
 static void gen_absv_mask(TCGv_i64 d, TCGv_i64 b, unsigned vece)
 {
-    TCGv_i64 t = tcg_temp_new_i64();
+    TCGv_i64 t = tcg_temp_ebb_new_i64();
     int nbit = 8 << vece;
 
     /* Create -1 for each negative element. */
@@ -2749,7 +2749,7 @@ static const GVecGen2s gop_ands = {
 void tcg_gen_gvec_ands(unsigned vece, uint32_t dofs, uint32_t aofs,
                        TCGv_i64 c, uint32_t oprsz, uint32_t maxsz)
 {
-    TCGv_i64 tmp = tcg_temp_new_i64();
+    TCGv_i64 tmp = tcg_temp_ebb_new_i64();
     tcg_gen_dup_i64(vece, tmp, c);
     tcg_gen_gvec_2s(dofs, aofs, oprsz, maxsz, tmp, &gop_ands);
     tcg_temp_free_i64(tmp);
@@ -2773,7 +2773,7 @@ static const GVecGen2s gop_xors = {
 void tcg_gen_gvec_xors(unsigned vece, uint32_t dofs, uint32_t aofs,
                        TCGv_i64 c, uint32_t oprsz, uint32_t maxsz)
 {
-    TCGv_i64 tmp = tcg_temp_new_i64();
+    TCGv_i64 tmp = tcg_temp_ebb_new_i64();
     tcg_gen_dup_i64(vece, tmp, c);
     tcg_gen_gvec_2s(dofs, aofs, oprsz, maxsz, tmp, &gop_xors);
     tcg_temp_free_i64(tmp);
@@ -2797,7 +2797,7 @@ static const GVecGen2s gop_ors = {
 void tcg_gen_gvec_ors(unsigned vece, uint32_t dofs, uint32_t aofs,
                       TCGv_i64 c, uint32_t oprsz, uint32_t maxsz)
 {
-    TCGv_i64 tmp = tcg_temp_new_i64();
+    TCGv_i64 tmp = tcg_temp_ebb_new_i64();
    tcg_gen_dup_i64(vece, tmp, c);
     tcg_gen_gvec_2s(dofs, aofs, oprsz, maxsz, tmp, &gop_ors);
     tcg_temp_free_i64(tmp);
@@ -2944,7 +2944,7 @@ void tcg_gen_vec_sar8i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)
 {
     uint64_t s_mask = dup_const(MO_8, 0x80 >> c);
     uint64_t c_mask = dup_const(MO_8, 0xff >> c);
-    TCGv_i64 s = tcg_temp_new_i64();
+    TCGv_i64 s = tcg_temp_ebb_new_i64();
 
     tcg_gen_shri_i64(d, a, c);
     tcg_gen_andi_i64(s, d, s_mask);  /* isolate (shifted) sign bit */
@@ -2958,7 +2958,7 @@ void tcg_gen_vec_sar16i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)
 {
     uint64_t s_mask = dup_const(MO_16, 0x8000 >> c);
     uint64_t c_mask = dup_const(MO_16, 0xffff >> c);
-    TCGv_i64 s = tcg_temp_new_i64();
+    TCGv_i64 s = tcg_temp_ebb_new_i64();
 
     tcg_gen_shri_i64(d, a, c);
     tcg_gen_andi_i64(s, d, s_mask);  /* isolate (shifted) sign bit */
@@ -2972,7 +2972,7 @@ void tcg_gen_vec_sar8i_i32(TCGv_i32 d, TCGv_i32 a, int32_t c)
 {
     uint32_t s_mask = dup_const(MO_8, 0x80 >> c);
     uint32_t c_mask = dup_const(MO_8, 0xff >> c);
-    TCGv_i32 s = tcg_temp_new_i32();
+    TCGv_i32 s = tcg_temp_ebb_new_i32();
 
     tcg_gen_shri_i32(d, a, c);
     tcg_gen_andi_i32(s, d, s_mask);  /* isolate (shifted) sign bit */
@@ -2986,7 +2986,7 @@ void tcg_gen_vec_sar16i_i32(TCGv_i32 d, TCGv_i32 a, int32_t c)
 {
     uint32_t s_mask = dup_const(MO_16, 0x8000 >> c);
     uint32_t c_mask = dup_const(MO_16, 0xffff >> c);
-    TCGv_i32 s = tcg_temp_new_i32();
+    TCGv_i32 s = tcg_temp_ebb_new_i32();
 
     tcg_gen_shri_i32(d, a, c);
     tcg_gen_andi_i32(s, d, s_mask);  /* isolate (shifted) sign bit */
@@ -3180,7 +3180,7 @@ do_gvec_shifts(unsigned vece, uint32_t dofs, uint32_t aofs, TCGv_i32 shift,
         TCGv_vec v_shift = tcg_temp_new_vec(type);
 
         if (vece == MO_64) {
-            TCGv_i64 sh64 = tcg_temp_new_i64();
+            TCGv_i64 sh64 = tcg_temp_ebb_new_i64();
             tcg_gen_extu_i32_i64(sh64, shift);
             tcg_gen_dup_i64_vec(MO_64, v_shift, sh64);
             tcg_temp_free_i64(sh64);
@@ -3221,14 +3221,14 @@ do_gvec_shifts(unsigned vece, uint32_t dofs, uint32_t aofs, TCGv_i32 shift,
     if (vece == MO_32 && check_size_impl(oprsz, 4)) {
         expand_2s_i32(dofs, aofs, oprsz, shift, false, g->fni4);
     } else if (vece == MO_64 && check_size_impl(oprsz, 8)) {
-        TCGv_i64 sh64 = tcg_temp_new_i64();
+        TCGv_i64 sh64 = tcg_temp_ebb_new_i64();
         tcg_gen_extu_i32_i64(sh64, shift);
         expand_2s_i64(dofs, aofs, oprsz, sh64, false, g->fni8);
         tcg_temp_free_i64(sh64);
     } else {
-        TCGv_ptr a0 = tcg_temp_new_ptr();
-        TCGv_ptr a1 = tcg_temp_new_ptr();
-        TCGv_i32 desc = tcg_temp_new_i32();
+        TCGv_ptr a0 = tcg_temp_ebb_new_ptr();
+        TCGv_ptr a1 = tcg_temp_ebb_new_ptr();
+        TCGv_i32 desc = tcg_temp_ebb_new_i32();
 
         tcg_gen_shli_i32(desc, shift, SIMD_DATA_SHIFT);
         tcg_gen_ori_i32(desc, desc, simd_desc(oprsz, maxsz, 0));
@@ -3360,7 +3360,7 @@ static void tcg_gen_shlv_mod_vec(unsigned vece, TCGv_vec d,
 
 static void tcg_gen_shl_mod_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
 {
-    TCGv_i32 t = tcg_temp_new_i32();
+    TCGv_i32 t = tcg_temp_ebb_new_i32();
 
     tcg_gen_andi_i32(t, b, 31);
     tcg_gen_shl_i32(d, a, t);
@@ -3369,7 +3369,7 @@ static void tcg_gen_shl_mod_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
 
 static void tcg_gen_shl_mod_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 t = tcg_temp_new_i64();
+    TCGv_i64 t = tcg_temp_ebb_new_i64();
 
     tcg_gen_andi_i64(t, b, 63);
     tcg_gen_shl_i64(d, a, t);
@@ -3423,7 +3423,7 @@ static void tcg_gen_shrv_mod_vec(unsigned vece, TCGv_vec d,
 
 static void tcg_gen_shr_mod_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
 {
-    TCGv_i32 t = tcg_temp_new_i32();
+    TCGv_i32 t = tcg_temp_ebb_new_i32();
 
     tcg_gen_andi_i32(t, b, 31);
     tcg_gen_shr_i32(d, a, t);
@@ -3432,7 +3432,7 @@ static void tcg_gen_shr_mod_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
 
 static void tcg_gen_shr_mod_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 t = tcg_temp_new_i64();
+    TCGv_i64 t = tcg_temp_ebb_new_i64();
 
     tcg_gen_andi_i64(t, b, 63);
     tcg_gen_shr_i64(d, a, t);
@@ -3486,7 +3486,7 @@ static void tcg_gen_sarv_mod_vec(unsigned vece, TCGv_vec d,
 
 static void tcg_gen_sar_mod_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
 {
-    TCGv_i32 t = tcg_temp_new_i32();
+    TCGv_i32 t = tcg_temp_ebb_new_i32();
 
     tcg_gen_andi_i32(t, b, 31);
     tcg_gen_sar_i32(d, a, t);
@@ -3495,7 +3495,7 @@ static void tcg_gen_sar_mod_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
 
 static void tcg_gen_sar_mod_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 t = tcg_temp_new_i64();
+    TCGv_i64 t = tcg_temp_ebb_new_i64();
 
     tcg_gen_andi_i64(t, b, 63);
     tcg_gen_sar_i64(d, a, t);
@@ -3549,7 +3549,7 @@ static void tcg_gen_rotlv_mod_vec(unsigned vece, TCGv_vec d,
 
 static void tcg_gen_rotl_mod_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
 {
-    TCGv_i32 t = tcg_temp_new_i32();
+    TCGv_i32 t = tcg_temp_ebb_new_i32();
 
     tcg_gen_andi_i32(t, b, 31);
     tcg_gen_rotl_i32(d, a, t);
@@ -3558,7 +3558,7 @@ static void tcg_gen_rotl_mod_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
 
 static void tcg_gen_rotl_mod_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 t = tcg_temp_new_i64();
+    TCGv_i64 t = tcg_temp_ebb_new_i64();
 
     tcg_gen_andi_i64(t, b, 63);
     tcg_gen_rotl_i64(d, a, t);
@@ -3608,7 +3608,7 @@ static void tcg_gen_rotrv_mod_vec(unsigned vece, TCGv_vec d,
 
 static void tcg_gen_rotr_mod_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
 {
-    TCGv_i32 t = tcg_temp_new_i32();
+    TCGv_i32 t = tcg_temp_ebb_new_i32();
 
     tcg_gen_andi_i32(t, b, 31);
     tcg_gen_rotr_i32(d, a, t);
@@ -3617,7 +3617,7 @@ static void tcg_gen_rotr_mod_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
 
 static void tcg_gen_rotr_mod_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 t = tcg_temp_new_i64();
+    TCGv_i64 t = tcg_temp_ebb_new_i64();
 
     tcg_gen_andi_i64(t, b, 63);
     tcg_gen_rotr_i64(d, a, t);
@@ -3658,8 +3658,8 @@ void tcg_gen_gvec_rotrv(unsigned vece, uint32_t dofs, uint32_t aofs,
 static void expand_cmp_i32(uint32_t dofs, uint32_t aofs, uint32_t bofs,
                            uint32_t oprsz, TCGCond cond)
 {
-    TCGv_i32 t0 = tcg_temp_new_i32();
-    TCGv_i32 t1 = tcg_temp_new_i32();
+    TCGv_i32 t0 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t1 = tcg_temp_ebb_new_i32();
     uint32_t i;
 
     for (i = 0; i < oprsz; i += 4) {
@@ -3676,8 +3676,8 @@ static void expand_cmp_i32(uint32_t dofs, uint32_t aofs, uint32_t bofs,
 static void expand_cmp_i64(uint32_t dofs, uint32_t aofs, uint32_t bofs,
                            uint32_t oprsz, TCGCond cond)
 {
-    TCGv_i64 t0 = tcg_temp_new_i64();
-    TCGv_i64 t1 = tcg_temp_new_i64();
+    TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t1 = tcg_temp_ebb_new_i64();
     uint32_t i;
 
     for (i = 0; i < oprsz; i += 8) {
@@ -3823,7 +3823,7 @@ void tcg_gen_gvec_cmp(TCGCond cond, unsigned vece, uint32_t dofs,
 
 static void tcg_gen_bitsel_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b, TCGv_i64 c)
 {
-    TCGv_i64 t = tcg_temp_new_i64();
+    TCGv_i64 t = tcg_temp_ebb_new_i64();
 
     tcg_gen_and_i64(t, b, a);
     tcg_gen_andc_i64(d, c, a);
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index c581ae77c4..f2269a1b91 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -264,7 +264,7 @@ void tcg_gen_div_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
     if (TCG_TARGET_HAS_div_i32) {
         tcg_gen_op3_i32(INDEX_op_div_i32, ret, arg1, arg2);
     } else if (TCG_TARGET_HAS_div2_i32) {
-        TCGv_i32 t0 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
         tcg_gen_sari_i32(t0, arg1, 31);
         tcg_gen_op5_i32(INDEX_op_div2_i32, ret, t0, arg1, t0, arg2);
         tcg_temp_free_i32(t0);
@@ -278,13 +278,13 @@ void tcg_gen_rem_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
     if (TCG_TARGET_HAS_rem_i32) {
         tcg_gen_op3_i32(INDEX_op_rem_i32, ret, arg1, arg2);
     } else if (TCG_TARGET_HAS_div_i32) {
-        TCGv_i32 t0 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
         tcg_gen_op3_i32(INDEX_op_div_i32, t0, arg1, arg2);
         tcg_gen_mul_i32(t0, t0, arg2);
         tcg_gen_sub_i32(ret, arg1, t0);
         tcg_temp_free_i32(t0);
     } else if (TCG_TARGET_HAS_div2_i32) {
-        TCGv_i32 t0 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
         tcg_gen_sari_i32(t0, arg1, 31);
         tcg_gen_op5_i32(INDEX_op_div2_i32, t0, ret, arg1, t0, arg2);
         tcg_temp_free_i32(t0);
@@ -298,7 +298,7 @@ void tcg_gen_divu_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
     if (TCG_TARGET_HAS_div_i32) {
         tcg_gen_op3_i32(INDEX_op_divu_i32, ret, arg1, arg2);
     } else if (TCG_TARGET_HAS_div2_i32) {
-        TCGv_i32 t0 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
         tcg_gen_movi_i32(t0, 0);
         tcg_gen_op5_i32(INDEX_op_divu2_i32, ret, t0, arg1, t0, arg2);
         tcg_temp_free_i32(t0);
@@ -312,13 +312,13 @@ void tcg_gen_remu_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
     if (TCG_TARGET_HAS_rem_i32) {
         tcg_gen_op3_i32(INDEX_op_remu_i32, ret, arg1, arg2);
     } else if (TCG_TARGET_HAS_div_i32) {
-        TCGv_i32 t0 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
         tcg_gen_op3_i32(INDEX_op_divu_i32, t0, arg1, arg2);
         tcg_gen_mul_i32(t0, t0, arg2);
         tcg_gen_sub_i32(ret, arg1, t0);
         tcg_temp_free_i32(t0);
     } else if (TCG_TARGET_HAS_div2_i32) {
-        TCGv_i32 t0 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
         tcg_gen_movi_i32(t0, 0);
         tcg_gen_op5_i32(INDEX_op_divu2_i32, t0, ret, arg1, t0, arg2);
         tcg_temp_free_i32(t0);
@@ -332,7 +332,7 @@ void tcg_gen_andc_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
     if (TCG_TARGET_HAS_andc_i32) {
         tcg_gen_op3_i32(INDEX_op_andc_i32, ret, arg1, arg2);
     } else {
-        TCGv_i32 t0 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
         tcg_gen_not_i32(t0, arg2);
         tcg_gen_and_i32(ret, arg1, t0);
         tcg_temp_free_i32(t0);
@@ -374,7 +374,7 @@ void tcg_gen_orc_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
     if (TCG_TARGET_HAS_orc_i32) {
         tcg_gen_op3_i32(INDEX_op_orc_i32, ret, arg1, arg2);
     } else {
-        TCGv_i32 t0 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
         tcg_gen_not_i32(t0, arg2);
         tcg_gen_or_i32(ret, arg1, t0);
         tcg_temp_free_i32(t0);
@@ -386,8 +386,8 @@ void tcg_gen_clz_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
     if (TCG_TARGET_HAS_clz_i32) {
         tcg_gen_op3_i32(INDEX_op_clz_i32, ret, arg1, arg2);
     } else if (TCG_TARGET_HAS_clz_i64) {
-        TCGv_i64 t1 = tcg_temp_new_i64();
-        TCGv_i64 t2 = tcg_temp_new_i64();
+        TCGv_i64 t1 = tcg_temp_ebb_new_i64();
+        TCGv_i64 t2 = tcg_temp_ebb_new_i64();
         tcg_gen_extu_i32_i64(t1, arg1);
         tcg_gen_extu_i32_i64(t2, arg2);
         tcg_gen_addi_i64(t2, t2, 32);
@@ -411,8 +411,8 @@ void tcg_gen_ctz_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
     if (TCG_TARGET_HAS_ctz_i32) {
         tcg_gen_op3_i32(INDEX_op_ctz_i32, ret, arg1, arg2);
     } else if (TCG_TARGET_HAS_ctz_i64) {
-        TCGv_i64 t1 = tcg_temp_new_i64();
-        TCGv_i64 t2 = tcg_temp_new_i64();
+        TCGv_i64 t1 = tcg_temp_ebb_new_i64();
+        TCGv_i64 t2 = tcg_temp_ebb_new_i64();
         tcg_gen_extu_i32_i64(t1, arg1);
         tcg_gen_extu_i32_i64(t2, arg2);
         tcg_gen_ctz_i64(t1, t1, t2);
@@ -423,7 +423,7 @@ void tcg_gen_ctz_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
                || TCG_TARGET_HAS_ctpop_i64
                || TCG_TARGET_HAS_clz_i32 || TCG_TARGET_HAS_clz_i64) {
-        TCGv_i32 z, t = tcg_temp_new_i32();
+        TCGv_i32 z, t = tcg_temp_ebb_new_i32();
 
         if (TCG_TARGET_HAS_ctpop_i32 || TCG_TARGET_HAS_ctpop_i64) {
             tcg_gen_subi_i32(t, arg1, 1);
@@ -448,7 +448,7 @@ void tcg_gen_ctzi_i32(TCGv_i32 ret, TCGv_i32 arg1, uint32_t arg2)
 {
     if (!TCG_TARGET_HAS_ctz_i32 && TCG_TARGET_HAS_ctpop_i32 && arg2 == 32) {
         /* This equivalence has the advantage of not requiring a fixup. */
-        TCGv_i32 t = tcg_temp_new_i32();
+        TCGv_i32 t = tcg_temp_ebb_new_i32();
         tcg_gen_subi_i32(t, arg1, 1);
         tcg_gen_andc_i32(t, t, arg1);
         tcg_gen_ctpop_i32(ret, t);
@@ -461,7 +461,7 @@ void tcg_gen_ctzi_i32(TCGv_i32 ret, TCGv_i32 arg1, uint32_t arg2)
 void tcg_gen_clrsb_i32(TCGv_i32 ret, TCGv_i32 arg)
 {
     if (TCG_TARGET_HAS_clz_i32) {
-        TCGv_i32 t = tcg_temp_new_i32();
+        TCGv_i32 t = tcg_temp_ebb_new_i32();
         tcg_gen_sari_i32(t, arg, 31);
         tcg_gen_xor_i32(t, t, arg);
         tcg_gen_clzi_i32(t, t, 32);
@@ -477,7 +477,7 @@ void tcg_gen_ctpop_i32(TCGv_i32 ret, TCGv_i32 arg1)
     if (TCG_TARGET_HAS_ctpop_i32) {
         tcg_gen_op2_i32(INDEX_op_ctpop_i32, ret, arg1);
     } else if (TCG_TARGET_HAS_ctpop_i64) {
-        TCGv_i64 t = tcg_temp_new_i64();
+        TCGv_i64 t = tcg_temp_ebb_new_i64();
         tcg_gen_extu_i32_i64(t, arg1);
         tcg_gen_ctpop_i64(t, t);
         tcg_gen_extrl_i64_i32(ret, t);
@@ -494,8 +494,8 @@ void tcg_gen_rotl_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
     } else {
         TCGv_i32 t0, t1;
 
-        t0 = tcg_temp_new_i32();
-        t1 = tcg_temp_new_i32();
+        t0 = tcg_temp_ebb_new_i32();
+        t1 = tcg_temp_ebb_new_i32();
         tcg_gen_shl_i32(t0, arg1, arg2);
         tcg_gen_subfi_i32(t1, 32, arg2);
         tcg_gen_shr_i32(t1, arg1, t1);
@@ -515,8 +515,8 @@ void tcg_gen_rotli_i32(TCGv_i32 ret, TCGv_i32 arg1, int32_t arg2)
         tcg_gen_rotl_i32(ret, arg1, tcg_constant_i32(arg2));
     } else {
         TCGv_i32 t0, t1;
-        t0 = tcg_temp_new_i32();
-        t1 = tcg_temp_new_i32();
+        t0 = tcg_temp_ebb_new_i32();
+        t1 = tcg_temp_ebb_new_i32();
         tcg_gen_shli_i32(t0, arg1, arg2);
         tcg_gen_shri_i32(t1, arg1, 32 - arg2);
         tcg_gen_or_i32(ret, t0, t1);
@@ -532,8 +532,8 @@ void tcg_gen_rotr_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
     } else {
         TCGv_i32 t0, t1;
 
-        t0 = tcg_temp_new_i32();
-        t1 = tcg_temp_new_i32();
+        t0 = tcg_temp_ebb_new_i32();
+        t1 = tcg_temp_ebb_new_i32();
         tcg_gen_shr_i32(t0, arg1, arg2);
         tcg_gen_subfi_i32(t1, 32, arg2);
         tcg_gen_shl_i32(t1, arg1, t1);
@@ -574,7 +574,7 @@ void tcg_gen_deposit_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2,
         return;
     }
 
-    t1 = tcg_temp_new_i32();
+    t1 = tcg_temp_ebb_new_i32();
 
     if (TCG_TARGET_HAS_extract2_i32) {
         if (ofs + len == 32) {
@@ -801,7 +801,7 @@ void tcg_gen_extract2_i32(TCGv_i32 ret, TCGv_i32 al, TCGv_i32 ah,
     } else if (TCG_TARGET_HAS_extract2_i32) {
         tcg_gen_op4i_i32(INDEX_op_extract2_i32, ret, al, ah, ofs);
     } else {
-        TCGv_i32 t0 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
         tcg_gen_shri_i32(t0, al, ofs);
         tcg_gen_deposit_i32(ret, t0, ah, 32 - ofs, ofs);
         tcg_temp_free_i32(t0);
@@ -818,8 +818,8 @@ void tcg_gen_movcond_i32(TCGCond cond, TCGv_i32 ret, TCGv_i32 c1,
     } else if (TCG_TARGET_HAS_movcond_i32) {
         tcg_gen_op6i_i32(INDEX_op_movcond_i32, ret, c1, c2, v1, v2, cond);
     } else {
-        TCGv_i32 t0 = tcg_temp_new_i32();
-        TCGv_i32 t1 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
+        TCGv_i32 t1 = tcg_temp_ebb_new_i32();
         tcg_gen_setcond_i32(cond, t0, c1, c2);
         tcg_gen_neg_i32(t0, t0);
         tcg_gen_and_i32(t1, v1, t0);
@@ -836,8 +836,8 @@ void tcg_gen_add2_i32(TCGv_i32 rl, TCGv_i32 rh, TCGv_i32 al,
     if (TCG_TARGET_HAS_add2_i32) {
         tcg_gen_op6_i32(INDEX_op_add2_i32, rl, rh, al, ah, bl, bh);
     } else {
-        TCGv_i64 t0 = tcg_temp_new_i64();
-        TCGv_i64 t1 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+        TCGv_i64 t1 = tcg_temp_ebb_new_i64();
         tcg_gen_concat_i32_i64(t0, al, ah);
         tcg_gen_concat_i32_i64(t1, bl, bh);
         tcg_gen_add_i64(t0, t0, t1);
@@ -853,8 +853,8 @@ void tcg_gen_sub2_i32(TCGv_i32 rl, TCGv_i32 rh, TCGv_i32 al,
     if (TCG_TARGET_HAS_sub2_i32) {
         tcg_gen_op6_i32(INDEX_op_sub2_i32, rl, rh, al, ah, bl, bh);
     } else {
-        TCGv_i64 t0 = tcg_temp_new_i64();
-        TCGv_i64 t1 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+        TCGv_i64 t1 = tcg_temp_ebb_new_i64();
         tcg_gen_concat_i32_i64(t0, al, ah);
         tcg_gen_concat_i32_i64(t1, bl, bh);
         tcg_gen_sub_i64(t0, t0, t1);
@@ -869,14 +869,14 @@ void tcg_gen_mulu2_i32(TCGv_i32 rl, TCGv_i32 rh, TCGv_i32 arg1, TCGv_i32 arg2)
     if (TCG_TARGET_HAS_mulu2_i32) {
         tcg_gen_op4_i32(INDEX_op_mulu2_i32, rl, rh, arg1, arg2);
     } else if (TCG_TARGET_HAS_muluh_i32) {
-        TCGv_i32 t = tcg_temp_new_i32();
+        TCGv_i32 t = tcg_temp_ebb_new_i32();
         tcg_gen_op3_i32(INDEX_op_mul_i32, t, arg1, arg2);
         tcg_gen_op3_i32(INDEX_op_muluh_i32, rh, arg1, arg2);
         tcg_gen_mov_i32(rl, t);
         tcg_temp_free_i32(t);
     } else if (TCG_TARGET_REG_BITS == 64) {
-        TCGv_i64 t0 = tcg_temp_new_i64();
-        TCGv_i64 t1 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+        TCGv_i64 t1 = tcg_temp_ebb_new_i64();
         tcg_gen_extu_i32_i64(t0, arg1);
         tcg_gen_extu_i32_i64(t1, arg2);
         tcg_gen_mul_i64(t0, t0, t1);
@@ -893,16 +893,16 @@ void tcg_gen_muls2_i32(TCGv_i32 rl, TCGv_i32 rh, TCGv_i32 arg1, TCGv_i32 arg2)
     if (TCG_TARGET_HAS_muls2_i32) {
         tcg_gen_op4_i32(INDEX_op_muls2_i32, rl, rh, arg1, arg2);
     } else if (TCG_TARGET_HAS_mulsh_i32) {
-        TCGv_i32 t = tcg_temp_new_i32();
+        TCGv_i32 t = tcg_temp_ebb_new_i32();
         tcg_gen_op3_i32(INDEX_op_mul_i32, t, arg1, arg2);
         tcg_gen_op3_i32(INDEX_op_mulsh_i32, rh, arg1, arg2);
         tcg_gen_mov_i32(rl, t);
         tcg_temp_free_i32(t);
     } else if (TCG_TARGET_REG_BITS == 32) {
-        TCGv_i32 t0 = tcg_temp_new_i32();
-        TCGv_i32 t1 = tcg_temp_new_i32();
-        TCGv_i32 t2 = tcg_temp_new_i32();
-        TCGv_i32 t3 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
+        TCGv_i32 t1 = tcg_temp_ebb_new_i32();
+        TCGv_i32 t2 = tcg_temp_ebb_new_i32();
+        TCGv_i32 t3 = tcg_temp_ebb_new_i32();
         tcg_gen_mulu2_i32(t0, t1, arg1, arg2);
         /* Adjust for negative inputs. */
         tcg_gen_sari_i32(t2, arg1, 31);
@@ -917,8 +917,8 @@ void tcg_gen_muls2_i32(TCGv_i32 rl, TCGv_i32 rh, TCGv_i32 arg1, TCGv_i32 arg2)
         tcg_temp_free_i32(t2);
         tcg_temp_free_i32(t3);
     } else {
-        TCGv_i64 t0 = tcg_temp_new_i64();
-        TCGv_i64 t1 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+        TCGv_i64 t1 = tcg_temp_ebb_new_i64();
         tcg_gen_ext_i32_i64(t0, arg1);
         tcg_gen_ext_i32_i64(t1, arg2);
         tcg_gen_mul_i64(t0, t0, t1);
@@ -931,9 +931,9 @@ void tcg_gen_muls2_i32(TCGv_i32 rl, TCGv_i32 rh, TCGv_i32 arg1, TCGv_i32 arg2)
 void tcg_gen_mulsu2_i32(TCGv_i32 rl, TCGv_i32 rh, TCGv_i32 arg1, TCGv_i32 arg2)
 {
     if (TCG_TARGET_REG_BITS == 32) {
-        TCGv_i32 t0 = tcg_temp_new_i32();
-        TCGv_i32 t1 = tcg_temp_new_i32();
-        TCGv_i32 t2 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
+        TCGv_i32 t1 = tcg_temp_ebb_new_i32();
+        TCGv_i32 t2 = tcg_temp_ebb_new_i32();
         tcg_gen_mulu2_i32(t0, t1, arg1, arg2);
         /* Adjust for negative input for the signed arg1. */
         tcg_gen_sari_i32(t2, arg1, 31);
@@ -944,8 +944,8 @@ void tcg_gen_mulsu2_i32(TCGv_i32 rl, TCGv_i32 rh, TCGv_i32 arg1, TCGv_i32 arg2)
         tcg_temp_free_i32(t1);
         tcg_temp_free_i32(t2);
     } else {
-        TCGv_i64 t0 = tcg_temp_new_i64();
-        TCGv_i64 t1 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+        TCGv_i64 t1 = tcg_temp_ebb_new_i64();
         tcg_gen_ext_i32_i64(t0, arg1);
         tcg_gen_extu_i32_i64(t1, arg2);
         tcg_gen_mul_i64(t0, t0, t1);
@@ -1001,8 +1001,8 @@ void tcg_gen_bswap16_i32(TCGv_i32 ret, TCGv_i32 arg, int flags)
     if (TCG_TARGET_HAS_bswap16_i32) {
         tcg_gen_op3i_i32(INDEX_op_bswap16_i32, ret, arg, flags);
     } else {
-        TCGv_i32 t0 = tcg_temp_new_i32();
-        TCGv_i32 t1 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
+        TCGv_i32 t1 = tcg_temp_ebb_new_i32();
 
         tcg_gen_shri_i32(t0, arg, 8);
         if (!(flags & TCG_BSWAP_IZ)) {
@@ -1030,8 +1030,8 @@ void tcg_gen_bswap32_i32(TCGv_i32 ret, TCGv_i32 arg)
     if (TCG_TARGET_HAS_bswap32_i32) {
         tcg_gen_op3i_i32(INDEX_op_bswap32_i32, ret, arg, 0);
     } else {
-        TCGv_i32 t0 = tcg_temp_new_i32();
-        TCGv_i32 t1 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
+        TCGv_i32 t1 = tcg_temp_ebb_new_i32();
         TCGv_i32 t2 = tcg_constant_i32(0x00ff00ff);
 
         /* arg = abcd */
@@ -1078,7 +1078,7 @@ void tcg_gen_umax_i32(TCGv_i32 ret, TCGv_i32 a, TCGv_i32 b)
 
 void tcg_gen_abs_i32(TCGv_i32 ret, TCGv_i32 a)
 {
-    TCGv_i32 t = tcg_temp_new_i32();
+    TCGv_i32 t = tcg_temp_ebb_new_i32();
 
     tcg_gen_sari_i32(t, a, 31);
     tcg_gen_xor_i32(ret, a, t);
@@ -1241,8 +1241,8 @@ void tcg_gen_mul_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
         TCGv_i64 t0;
         TCGv_i32 t1;
 
-        t0 = tcg_temp_new_i64();
-        t1 = tcg_temp_new_i32();
+        t0 = tcg_temp_ebb_new_i64();
+        t1 = tcg_temp_ebb_new_i32();
 
         tcg_gen_mulu2_i32(TCGV_LOW(t0), TCGV_HIGH(t0),
                           TCGV_LOW(arg1), TCGV_LOW(arg2));
@@ -1423,7 +1423,7 @@ static inline void tcg_gen_shifti_i64(TCGv_i64 ret, TCGv_i64 arg1,
             tcg_gen_extract2_i32(TCGV_HIGH(ret), TCGV_LOW(arg1),
                                  TCGV_HIGH(arg1), 32 - c);
         } else {
-            TCGv_i32 t0 = tcg_temp_new_i32();
+            TCGv_i32 t0 = tcg_temp_ebb_new_i32();
             tcg_gen_shri_i32(t0, TCGV_LOW(arg1), 32 - c);
             tcg_gen_deposit_i32(TCGV_HIGH(ret), t0, TCGV_HIGH(arg1),
                                 c, 32 - c);
@@ -1557,7 +1557,7 @@ void tcg_gen_div_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
     if (TCG_TARGET_HAS_div_i64) {
         tcg_gen_op3_i64(INDEX_op_div_i64, ret, arg1, arg2);
     } else if (TCG_TARGET_HAS_div2_i64) {
-        TCGv_i64 t0 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
         tcg_gen_sari_i64(t0, arg1, 63);
         tcg_gen_op5_i64(INDEX_op_div2_i64, ret, t0, arg1, t0, arg2);
         tcg_temp_free_i64(t0);
@@ -1571,13 +1571,13 @@ void tcg_gen_rem_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
     if (TCG_TARGET_HAS_rem_i64) {
         tcg_gen_op3_i64(INDEX_op_rem_i64, ret, arg1, arg2);
     } else if (TCG_TARGET_HAS_div_i64) {
-        TCGv_i64 t0 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
         tcg_gen_op3_i64(INDEX_op_div_i64, t0, arg1, arg2);
         tcg_gen_mul_i64(t0, t0, arg2);
         tcg_gen_sub_i64(ret, arg1, t0);
         tcg_temp_free_i64(t0);
     } else if (TCG_TARGET_HAS_div2_i64) {
-        TCGv_i64 t0 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
         tcg_gen_sari_i64(t0, arg1, 63);
         tcg_gen_op5_i64(INDEX_op_div2_i64, t0, ret, arg1, t0, arg2);
         tcg_temp_free_i64(t0);
@@ -1591,7 +1591,7 @@ void tcg_gen_divu_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
     if (TCG_TARGET_HAS_div_i64) {
         tcg_gen_op3_i64(INDEX_op_divu_i64, ret, arg1, arg2);
     } else if (TCG_TARGET_HAS_div2_i64) {
-        TCGv_i64 t0 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
         tcg_gen_movi_i64(t0, 0);
         tcg_gen_op5_i64(INDEX_op_divu2_i64, ret, t0, arg1, t0, arg2);
         tcg_temp_free_i64(t0);
@@ -1605,13 +1605,13 @@ void tcg_gen_remu_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
     if (TCG_TARGET_HAS_rem_i64) {
         tcg_gen_op3_i64(INDEX_op_remu_i64, ret, arg1, arg2);
     } else if (TCG_TARGET_HAS_div_i64) {
-        TCGv_i64 t0 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
         tcg_gen_op3_i64(INDEX_op_divu_i64, t0, arg1, arg2);
         tcg_gen_mul_i64(t0, t0, arg2);
         tcg_gen_sub_i64(ret, arg1, t0);
         tcg_temp_free_i64(t0);
     } else if (TCG_TARGET_HAS_div2_i64) {
-        TCGv_i64 t0 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
         tcg_gen_movi_i64(t0, 0);
         tcg_gen_op5_i64(INDEX_op_divu2_i64, t0, ret, arg1, t0, arg2);
         tcg_temp_free_i64(t0);
@@ -1710,8 +1710,8 @@ void tcg_gen_bswap16_i64(TCGv_i64 ret, TCGv_i64 arg, int flags)
     } else if (TCG_TARGET_HAS_bswap16_i64) {
         tcg_gen_op3i_i64(INDEX_op_bswap16_i64, ret, arg, flags);
     } else {
-        TCGv_i64 t0 = tcg_temp_new_i64();
-        TCGv_i64 t1 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+        TCGv_i64 t1 = tcg_temp_ebb_new_i64();
 
         tcg_gen_shri_i64(t0, arg, 8);
         if (!(flags & TCG_BSWAP_IZ)) {
@@ -1749,8 +1749,8 @@ void tcg_gen_bswap32_i64(TCGv_i64 ret, TCGv_i64 arg, int flags)
     } else if (TCG_TARGET_HAS_bswap32_i64) {
         tcg_gen_op3i_i64(INDEX_op_bswap32_i64, ret, arg, flags);
     } else {
-        TCGv_i64 t0 = tcg_temp_new_i64();
-        TCGv_i64 t1 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+        TCGv_i64 t1 = tcg_temp_ebb_new_i64();
         TCGv_i64 t2 = tcg_constant_i64(0x00ff00ff);
 
         /* arg = xxxxabcd */
@@ -1778,8 +1778,8 @@ void tcg_gen_bswap64_i64(TCGv_i64 ret, TCGv_i64 arg)
 {
     if (TCG_TARGET_REG_BITS == 32) {
         TCGv_i32 t0, t1;
-        t0 = tcg_temp_new_i32();
-        t1 = tcg_temp_new_i32();
+        t0 = tcg_temp_ebb_new_i32();
+        t1 = tcg_temp_ebb_new_i32();
 
         tcg_gen_bswap32_i32(t0, TCGV_LOW(arg));
         tcg_gen_bswap32_i32(t1, TCGV_HIGH(arg));
@@ -1790,9 +1790,9 @@ void tcg_gen_bswap64_i64(TCGv_i64 ret, TCGv_i64 arg)
     } else if (TCG_TARGET_HAS_bswap64_i64) {
         tcg_gen_op3i_i64(INDEX_op_bswap64_i64, ret, arg, 0);
     } else {
-        TCGv_i64 t0 = tcg_temp_new_i64();
-        TCGv_i64 t1 = tcg_temp_new_i64();
-        TCGv_i64 t2 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+        TCGv_i64 t1 = tcg_temp_ebb_new_i64();
+        TCGv_i64 t2 = tcg_temp_ebb_new_i64();
 
         /* arg = abcdefgh */
         tcg_gen_movi_i64(t2, 0x00ff00ff00ff00ffull);
@@ -1822,8 +1822,8 @@ void tcg_gen_bswap64_i64(TCGv_i64 ret, TCGv_i64 arg)
 void tcg_gen_hswap_i64(TCGv_i64 ret, TCGv_i64 arg)
 {
     uint64_t m = 0x0000ffff0000ffffull;
-    TCGv_i64 t0 = tcg_temp_new_i64();
-    TCGv_i64 t1 = tcg_temp_new_i64();
+    TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t1 = tcg_temp_ebb_new_i64();
 
     /* See include/qemu/bitops.h, hswap64. */
     tcg_gen_rotli_i64(t1, arg, 32);
@@ -1863,7 +1863,7 @@ void tcg_gen_andc_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
     } else if (TCG_TARGET_HAS_andc_i64) {
         tcg_gen_op3_i64(INDEX_op_andc_i64, ret, arg1, arg2);
     } else {
-        TCGv_i64 t0 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
         tcg_gen_not_i64(t0, arg2);
         tcg_gen_and_i64(ret, arg1, t0);
         tcg_temp_free_i64(t0);
@@ -1917,7 +1917,7 @@ void tcg_gen_orc_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
     } else if (TCG_TARGET_HAS_orc_i64) {
         tcg_gen_op3_i64(INDEX_op_orc_i64, ret, arg1, arg2);
     } else {
-        TCGv_i64 t0 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
         tcg_gen_not_i64(t0, arg2);
         tcg_gen_or_i64(ret, arg1, t0);
         tcg_temp_free_i64(t0);
@@ -1938,7 +1938,7 @@ void tcg_gen_clzi_i64(TCGv_i64 ret, TCGv_i64 arg1, uint64_t arg2)
     if (TCG_TARGET_REG_BITS == 32
         && TCG_TARGET_HAS_clz_i32
         && arg2 <= 0xffffffffu) {
-        TCGv_i32 t = tcg_temp_new_i32();
+        TCGv_i32 t = tcg_temp_ebb_new_i32();
         tcg_gen_clzi_i32(t, TCGV_LOW(arg1), arg2 - 32);
         tcg_gen_addi_i32(t, t, 32);
         tcg_gen_clz_i32(TCGV_LOW(ret), TCGV_HIGH(arg1), t);
@@ -1956,7 +1956,7 @@ void tcg_gen_ctz_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
     if (TCG_TARGET_HAS_ctz_i64) {
         tcg_gen_op3_i64(INDEX_op_ctz_i64, ret, arg1, arg2);
     } else if (TCG_TARGET_HAS_ctpop_i64 || TCG_TARGET_HAS_clz_i64) {
-        TCGv_i64 z, t = tcg_temp_new_i64();
+        TCGv_i64 z, t = tcg_temp_ebb_new_i64();
 
         if (TCG_TARGET_HAS_ctpop_i64) {
             tcg_gen_subi_i64(t, arg1, 1);
@@ -1983,7 +1983,7 @@ void tcg_gen_ctzi_i64(TCGv_i64 ret, TCGv_i64 arg1, uint64_t arg2)
     if (TCG_TARGET_REG_BITS == 32
         && TCG_TARGET_HAS_ctz_i32
        && arg2 <= 0xffffffffu) {
-        TCGv_i32 t32 = tcg_temp_new_i32();
+        TCGv_i32 t32 = tcg_temp_ebb_new_i32();
         tcg_gen_ctzi_i32(t32, TCGV_HIGH(arg1), arg2 - 32);
         tcg_gen_addi_i32(t32, t32, 32);
         tcg_gen_ctz_i32(TCGV_LOW(ret), TCGV_LOW(arg1), t32);
@@ -1993,7 +1993,7 @@ void tcg_gen_ctzi_i64(TCGv_i64 ret, TCGv_i64 arg1, uint64_t arg2)
                && TCG_TARGET_HAS_ctpop_i64
                && arg2 == 64) {
         /* This equivalence has the advantage of not requiring a fixup. */
-        TCGv_i64 t = tcg_temp_new_i64();
+        TCGv_i64 t = tcg_temp_ebb_new_i64();
         tcg_gen_subi_i64(t, arg1, 1);
         tcg_gen_andc_i64(t, t, arg1);
         tcg_gen_ctpop_i64(ret, t);
@@ -2008,7 +2008,7 @@ void tcg_gen_ctzi_i64(TCGv_i64 ret, TCGv_i64 arg1, uint64_t arg2)
 void tcg_gen_clrsb_i64(TCGv_i64 ret, TCGv_i64 arg)
 {
     if (TCG_TARGET_HAS_clz_i64 || TCG_TARGET_HAS_clz_i32) {
-        TCGv_i64 t = tcg_temp_new_i64();
+        TCGv_i64 t = tcg_temp_ebb_new_i64();
         tcg_gen_sari_i64(t, arg, 63);
         tcg_gen_xor_i64(t, t, arg);
         tcg_gen_clzi_i64(t, t, 64);
@@ -2039,8 +2039,8 @@ void tcg_gen_rotl_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
         tcg_gen_op3_i64(INDEX_op_rotl_i64, ret, arg1, arg2);
     } else {
         TCGv_i64 t0, t1;
-        t0 = tcg_temp_new_i64();
-        t1 = tcg_temp_new_i64();
+        t0 = tcg_temp_ebb_new_i64();
+        t1 = tcg_temp_ebb_new_i64();
         tcg_gen_shl_i64(t0, arg1, arg2);
         tcg_gen_subfi_i64(t1, 64, arg2);
         tcg_gen_shr_i64(t1, arg1, t1);
@@ -2060,8 +2060,8 @@ void tcg_gen_rotli_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
         tcg_gen_rotl_i64(ret, arg1, tcg_constant_i64(arg2));
     } else {
         TCGv_i64 t0, t1;
-        t0 = tcg_temp_new_i64();
-        t1 = tcg_temp_new_i64();
+        t0 = tcg_temp_ebb_new_i64();
+        t1 = tcg_temp_ebb_new_i64();
         tcg_gen_shli_i64(t0, arg1, arg2);
         tcg_gen_shri_i64(t1, arg1, 64 - arg2);
         tcg_gen_or_i64(ret, t0, t1);
@@ -2076,8 +2076,8 @@ void tcg_gen_rotr_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
         tcg_gen_op3_i64(INDEX_op_rotr_i64, ret, arg1, arg2);
     } else {
         TCGv_i64 t0, t1;
-        t0 = tcg_temp_new_i64();
-        t1 = tcg_temp_new_i64();
+        t0 = tcg_temp_ebb_new_i64();
+        t1 = tcg_temp_ebb_new_i64();
         tcg_gen_shr_i64(t0, arg1, arg2);
         tcg_gen_subfi_i64(t1, 64, arg2);
         tcg_gen_shl_i64(t1, arg1, t1);
@@ -2133,7 +2133,7 @@ void tcg_gen_deposit_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2,
         }
     }
 
-    t1 = tcg_temp_new_i64();
+    t1 = tcg_temp_ebb_new_i64();
 
     if (TCG_TARGET_HAS_extract2_i64) {
         if (ofs + len == 64) {
@@ -2365,7 +2365,7 @@ void tcg_gen_sextract_i64(TCGv_i64 ret, TCGv_i64 arg,
             tcg_gen_sextract_i32(TCGV_HIGH(ret), TCGV_HIGH(arg), 0, len - 32);
             return;
         } else if (len > 32) {
-            TCGv_i32 t = tcg_temp_new_i32();
+            TCGv_i32 t = tcg_temp_ebb_new_i32();
             /* Extract the bits for the high word normally. */
             tcg_gen_sextract_i32(t, TCGV_HIGH(arg), ofs + 32, len - 32);
             /* Shift the field down for the low part. */
@@ -2460,7 +2460,7 @@ void tcg_gen_extract2_i64(TCGv_i64 ret, TCGv_i64 al, TCGv_i64 ah,
     } else if (TCG_TARGET_HAS_extract2_i64) {
         tcg_gen_op4i_i64(INDEX_op_extract2_i64, ret, al, ah, ofs);
     } else {
-        TCGv_i64 t0 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
         tcg_gen_shri_i64(t0, al, ofs);
         tcg_gen_deposit_i64(ret, t0, ah, 64 - ofs, ofs);
         tcg_temp_free_i64(t0);
@@ -2475,8 +2475,8 @@ void tcg_gen_movcond_i64(TCGCond cond, TCGv_i64 ret, TCGv_i64 c1,
     } else if (cond == TCG_COND_NEVER) {
         tcg_gen_mov_i64(ret, v2);
     } else if (TCG_TARGET_REG_BITS == 32) {
-        TCGv_i32 t0 = tcg_temp_new_i32();
-        TCGv_i32 t1 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
+        TCGv_i32 t1 = tcg_temp_ebb_new_i32();
         tcg_gen_op6i_i32(INDEX_op_setcond2_i32, t0,
                          TCGV_LOW(c1), TCGV_HIGH(c1),
                          TCGV_LOW(c2), TCGV_HIGH(c2), cond);
@@ -2503,8 +2503,8 @@ void tcg_gen_movcond_i64(TCGCond cond, TCGv_i64 ret, TCGv_i64 c1,
     } else if (TCG_TARGET_HAS_movcond_i64) {
         tcg_gen_op6i_i64(INDEX_op_movcond_i64, ret, c1, c2, v1, v2, cond);
     } else {
-        TCGv_i64 t0 = tcg_temp_new_i64();
-        TCGv_i64 t1 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+        TCGv_i64 t1 = tcg_temp_ebb_new_i64();
         tcg_gen_setcond_i64(cond, t0, c1, c2);
         tcg_gen_neg_i64(t0, t0);
         tcg_gen_and_i64(t1, v1, t0);
@@ -2521,8 +2521,8 @@ void tcg_gen_add2_i64(TCGv_i64 rl, TCGv_i64 rh, TCGv_i64 al,
     if (TCG_TARGET_HAS_add2_i64) {
         tcg_gen_op6_i64(INDEX_op_add2_i64, rl, rh, al, ah, bl, bh);
     } else {
-        TCGv_i64 t0 = tcg_temp_new_i64();
-        TCGv_i64 t1 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+        TCGv_i64 t1 = tcg_temp_ebb_new_i64();
         tcg_gen_add_i64(t0, al, bl);
         tcg_gen_setcond_i64(TCG_COND_LTU, t1, t0, al);
         tcg_gen_add_i64(rh, ah, bh);
@@ -2539,8 +2539,8 @@ void tcg_gen_sub2_i64(TCGv_i64 rl, TCGv_i64 rh, TCGv_i64 al,
     if (TCG_TARGET_HAS_sub2_i64) {
         tcg_gen_op6_i64(INDEX_op_sub2_i64, rl, rh, al, ah, bl, bh);
     } else {
-        TCGv_i64 t0 = tcg_temp_new_i64();
-        TCGv_i64 t1 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+        TCGv_i64 t1 = tcg_temp_ebb_new_i64();
         tcg_gen_sub_i64(t0, al, bl);
         tcg_gen_setcond_i64(TCG_COND_LTU, t1, al, bl);
         tcg_gen_sub_i64(rh, ah, bh);
@@ -2556,13 +2556,13 @@ void tcg_gen_mulu2_i64(TCGv_i64 rl, TCGv_i64 rh, TCGv_i64 arg1, TCGv_i64 arg2)
     if (TCG_TARGET_HAS_mulu2_i64) {
         tcg_gen_op4_i64(INDEX_op_mulu2_i64, rl, rh, arg1, arg2);
     } else if (TCG_TARGET_HAS_muluh_i64) {
-        TCGv_i64 t = tcg_temp_new_i64();
+        TCGv_i64 t = tcg_temp_ebb_new_i64();
         tcg_gen_op3_i64(INDEX_op_mul_i64, t, arg1, arg2);
         tcg_gen_op3_i64(INDEX_op_muluh_i64, rh, arg1, arg2);
         tcg_gen_mov_i64(rl, t);
         tcg_temp_free_i64(t);
     } else {
-        TCGv_i64 t0 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
         tcg_gen_mul_i64(t0, arg1, arg2);
         gen_helper_muluh_i64(rh, arg1, arg2);
         tcg_gen_mov_i64(rl, t0);
@@ -2575,16 +2575,16 @@ void tcg_gen_muls2_i64(TCGv_i64 rl, TCGv_i64 rh, TCGv_i64 arg1, TCGv_i64 arg2)
     if (TCG_TARGET_HAS_muls2_i64) {
         tcg_gen_op4_i64(INDEX_op_muls2_i64, rl, rh, arg1, arg2);
     } else if (TCG_TARGET_HAS_mulsh_i64) {
-        TCGv_i64 t = tcg_temp_new_i64();
+        TCGv_i64 t = tcg_temp_ebb_new_i64();
         tcg_gen_op3_i64(INDEX_op_mul_i64, t, arg1, arg2);
         tcg_gen_op3_i64(INDEX_op_mulsh_i64, rh, arg1, arg2);
         tcg_gen_mov_i64(rl, t);
         tcg_temp_free_i64(t);
     } else if (TCG_TARGET_HAS_mulu2_i64 || TCG_TARGET_HAS_muluh_i64) {
-        TCGv_i64 t0 = tcg_temp_new_i64();
-        TCGv_i64 t1 = tcg_temp_new_i64();
-        TCGv_i64 t2 = tcg_temp_new_i64();
-        TCGv_i64 t3 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+        TCGv_i64 t1 = tcg_temp_ebb_new_i64();
+        TCGv_i64 t2 = tcg_temp_ebb_new_i64();
+        TCGv_i64 t3 = tcg_temp_ebb_new_i64();
         tcg_gen_mulu2_i64(t0, t1, arg1, arg2);
         /* Adjust for negative inputs. */
         tcg_gen_sari_i64(t2, arg1, 63);
@@ -2599,7 +2599,7 @@ void tcg_gen_muls2_i64(TCGv_i64 rl, TCGv_i64 rh, TCGv_i64 arg1, TCGv_i64 arg2)
         tcg_temp_free_i64(t2);
         tcg_temp_free_i64(t3);
     } else {
-        TCGv_i64 t0 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
         tcg_gen_mul_i64(t0, arg1, arg2);
         gen_helper_mulsh_i64(rh, arg1, arg2);
         tcg_gen_mov_i64(rl, t0);
@@ -2609,9 +2609,9 @@ void tcg_gen_muls2_i64(TCGv_i64 rl, TCGv_i64 rh, TCGv_i64 arg1, TCGv_i64 arg2)
 
 void tcg_gen_mulsu2_i64(TCGv_i64 rl, TCGv_i64 rh, TCGv_i64 arg1, TCGv_i64 arg2)
 {
-    TCGv_i64 t0 = tcg_temp_new_i64();
-    TCGv_i64 t1 = tcg_temp_new_i64();
-    TCGv_i64 t2 = tcg_temp_new_i64();
+    TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t1 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t2 = tcg_temp_ebb_new_i64();
     tcg_gen_mulu2_i64(t0, t1, arg1, arg2);
     /* Adjust for negative input for the signed arg1. */
     tcg_gen_sari_i64(t2, arg1, 63);
@@ -2645,7 +2645,7 @@ void tcg_gen_umax_i64(TCGv_i64 ret, TCGv_i64 a, TCGv_i64 b)
 
 void tcg_gen_abs_i64(TCGv_i64 ret, TCGv_i64 a)
 {
-    TCGv_i64 t = tcg_temp_new_i64();
+    TCGv_i64 t = tcg_temp_ebb_new_i64();
 
     tcg_gen_sari_i64(t, a, 63);
     tcg_gen_xor_i64(ret, a, t);
@@ -2675,7 +2675,7 @@ void tcg_gen_extrh_i64_i32(TCGv_i32 ret, TCGv_i64 arg)
         tcg_gen_op2(INDEX_op_extrh_i64_i32,
                     tcgv_i32_arg(ret), tcgv_i64_arg(arg));
     } else {
-        TCGv_i64 t = tcg_temp_new_i64();
+        TCGv_i64 t = tcg_temp_ebb_new_i64();
         tcg_gen_shri_i64(t, arg, 32);
         tcg_gen_mov_i32(ret, (TCGv_i32)t);
         tcg_temp_free_i64(t);
@@ -2714,7 +2714,7 @@ void tcg_gen_concat_i32_i64(TCGv_i64 dest, TCGv_i32 low, TCGv_i32 high)
         return;
     }
 
-    tmp = tcg_temp_new_i64();
+    tmp = tcg_temp_ebb_new_i64();
     /* These extensions are only needed for type correctness.
        We may be able to do better given target specific information. */
     tcg_gen_extu_i32_i64(tmp, high);
@@ -2826,7 +2826,7 @@ void tcg_gen_lookup_and_goto_ptr(void)
     }
 
     plugin_gen_disable_mem_helpers();
-    ptr = tcg_temp_new_ptr();
+    ptr = tcg_temp_ebb_new_ptr();
     gen_helper_lookup_tb_ptr(ptr, cpu_env);
     tcg_gen_op1i(INDEX_op_goto_ptr, tcgv_ptr_arg(ptr));
     tcg_temp_free_ptr(ptr);
@@ -2987,7 +2987,7 @@ void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, MemOp memop)
     oi = make_memop_idx(memop, idx);
 
     if (!TCG_TARGET_HAS_MEMORY_BSWAP && (memop & MO_BSWAP)) {
-        swap = tcg_temp_new_i32();
+        swap = tcg_temp_ebb_new_i32();
         switch (memop & MO_SIZE) {
         case MO_16:
             tcg_gen_bswap16_i32(swap, val, 0);
@@ -3082,7 +3082,7 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, MemOp memop)
     oi = make_memop_idx(memop, idx);
 
     if (!TCG_TARGET_HAS_MEMORY_BSWAP && (memop & MO_BSWAP)) {
-        swap = tcg_temp_new_i64();
+        swap = tcg_temp_ebb_new_i64();
         switch (memop & MO_SIZE) {
        case MO_16:
             tcg_gen_bswap16_i64(swap, val, 0);
@@ -3224,7 +3224,7 @@ void tcg_gen_qemu_st_i128(TCGv_i128 val, TCGv addr, TCGArg idx, MemOp memop)
 
         addr_p8 = tcg_temp_new();
         if ((mop[0] ^ memop) & MO_BSWAP) {
-            TCGv_i64 t = tcg_temp_new_i64();
+            TCGv_i64 t = tcg_temp_ebb_new_i64();
 
             tcg_gen_bswap64_i64(t, x);
             gen_ldst_i64(INDEX_op_qemu_st_i64, t, addr, mop[0], idx);
@@ -3328,8 +3328,8 @@ static void * const table_cmpxchg[(MO_SIZE | MO_BSWAP) + 1] = {
 void tcg_gen_nonatomic_cmpxchg_i32(TCGv_i32 retv, TCGv addr, TCGv_i32 cmpv,
                                    TCGv_i32 newv, TCGArg idx, MemOp memop)
 {
-    TCGv_i32 t1 = tcg_temp_new_i32();
-    TCGv_i32 t2 = tcg_temp_new_i32();
+    TCGv_i32 t1 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t2 = tcg_temp_ebb_new_i32();
 
     tcg_gen_ext_i32(t2, cmpv, memop & MO_SIZE);
 
@@ -3385,8 +3385,8 @@ void tcg_gen_nonatomic_cmpxchg_i64(TCGv_i64 retv, TCGv addr, TCGv_i64 cmpv,
         return;
     }
 
-    t1 = tcg_temp_new_i64();
-    t2 = tcg_temp_new_i64();
+    t1 = tcg_temp_ebb_new_i64();
+    t2 = tcg_temp_ebb_new_i64();
 
     tcg_gen_ext_i64(t2, cmpv, memop & MO_SIZE);
 
@@ -3442,9 +3442,9 @@ void tcg_gen_atomic_cmpxchg_i64(TCGv_i64 retv, TCGv addr, TCGv_i64 cmpv,
             tcg_gen_movi_i32(TCGV_HIGH(retv), 0);
         }
     } else {
-        TCGv_i32 c32 = tcg_temp_new_i32();
-        TCGv_i32 n32 = tcg_temp_new_i32();
-        TCGv_i32 r32 = tcg_temp_new_i32();
+        TCGv_i32 c32 = tcg_temp_ebb_new_i32();
+        TCGv_i32 n32 = tcg_temp_ebb_new_i32();
+        TCGv_i32 r32 = tcg_temp_ebb_new_i32();
 
         tcg_gen_extrl_i64_i32(c32, cmpv);
         tcg_gen_extrl_i64_i32(n32, newv);
@@ -3476,10 +3476,10 @@ void tcg_gen_nonatomic_cmpxchg_i128(TCGv_i128 retv, TCGv addr, TCGv_i128 cmpv,
 
         gen(retv, cpu_env, addr, cmpv, newv, tcg_constant_i32(oi));
     } else {
-        TCGv_i128 oldv = tcg_temp_new_i128();
-        TCGv_i128 tmpv = tcg_temp_new_i128();
-        TCGv_i64 t0 = tcg_temp_new_i64();
-        TCGv_i64 t1 = tcg_temp_new_i64();
+        TCGv_i128 oldv = tcg_temp_ebb_new_i128();
+        TCGv_i128 tmpv = tcg_temp_ebb_new_i128();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+        TCGv_i64 t1 = tcg_temp_ebb_new_i64();
         TCGv_i64 z = tcg_constant_i64(0);
 
         tcg_gen_qemu_ld_i128(oldv, addr, idx, memop);
@@ -3541,8 +3541,8 @@ static void do_nonatomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
                                 TCGArg idx, MemOp memop, bool new_val,
                                 void (*gen)(TCGv_i32, TCGv_i32, TCGv_i32))
 {
-    TCGv_i32 t1 = tcg_temp_new_i32();
-    TCGv_i32 t2 = tcg_temp_new_i32();
+    TCGv_i32 t1 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t2 = tcg_temp_ebb_new_i32();
 
     memop = tcg_canonicalize_memop(memop, 0, 0);
 
@@ -3579,8 +3579,8 @@ static void do_nonatomic_op_i64(TCGv_i64
ret, TCGv ad= dr, TCGv_i64 val, TCGArg idx, MemOp memop, bool new_val, void (*gen)(TCGv_i64, TCGv_i64, TCGv_i64)) { - TCGv_i64 t1 =3D tcg_temp_new_i64(); - TCGv_i64 t2 =3D tcg_temp_new_i64(); + TCGv_i64 t1 =3D tcg_temp_ebb_new_i64(); + TCGv_i64 t2 =3D tcg_temp_ebb_new_i64(); =20 memop =3D tcg_canonicalize_memop(memop, 1, 0); =20 @@ -3616,8 +3616,8 @@ static void do_atomic_op_i64(TCGv_i64 ret, TCGv addr,= TCGv_i64 val, tcg_gen_movi_i64(ret, 0); #endif /* CONFIG_ATOMIC64 */ } else { - TCGv_i32 v32 =3D tcg_temp_new_i32(); - TCGv_i32 r32 =3D tcg_temp_new_i32(); + TCGv_i32 v32 =3D tcg_temp_ebb_new_i32(); + TCGv_i32 r32 =3D tcg_temp_ebb_new_i32(); =20 tcg_gen_extrl_i64_i32(v32, val); do_atomic_op_i32(r32, addr, v32, idx, memop & ~MO_SIGN, table); diff --git a/tcg/tcg.c b/tcg/tcg.c index dceb120be9..73215741d0 100644 --- a/tcg/tcg.c +++ b/tcg/tcg.c @@ -1863,7 +1863,7 @@ void tcg_gen_callN(void *func, TCGTemp *ret, int narg= s, TCGTemp **args) case TCG_CALL_ARG_EXTEND_U: case TCG_CALL_ARG_EXTEND_S: { - TCGv_i64 temp =3D tcg_temp_new_i64(); + TCGv_i64 temp =3D tcg_temp_ebb_new_i64(); TCGv_i32 orig =3D temp_tcgv_i32(ts); =20 if (loc->kind =3D=3D TCG_CALL_ARG_EXTEND_S) { --=20 2.34.1
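
For context on the pattern every hunk above converts to: the temporary's
entire live range sits inside a single extended basic block -- it is
allocated, used, and freed with no branch or label emitted in between,
which is what makes the EBB-scoped allocator sufficient. A minimal sketch
of that shape follows; the expander name gen_xor_shifted_i64 is
hypothetical and used purely for illustration, assuming the usual
tcg/tcg-op.h environment:

/*
 * Hypothetical expander, illustration only: the scratch temp is born
 * and dies within one extended basic block, so the EBB-scoped
 * allocator applies.
 */
static void gen_xor_shifted_i64(TCGv_i64 ret, TCGv_i64 arg, unsigned ofs)
{
    TCGv_i64 t = tcg_temp_ebb_new_i64();   /* EBB-local scratch */

    tcg_gen_shri_i64(t, arg, ofs);         /* t = arg >> ofs */
    tcg_gen_xor_i64(ret, arg, t);          /* ret = arg ^ t */
    tcg_temp_free_i64(t);                  /* freed before any branch */
}

A temp whose value had to survive past a brcond/label pair could not use
the EBB variant and would keep plain tcg_temp_new_i64().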