From nobody Fri Apr 26 07:59:23 2024
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Cc: Stefan Berger, Kevin Wolf, Hanna Reitz, Stefan Weil, Aarushi Mehta,
    Julia Suvorova, Stefan Hajnoczi, Stefano Garzarella, Greg Kurz,
    Christian Schoenebeck, Daniel Henrique Barboza, Cédric Le Goater,
    David Gibson, "Michael S. Tsirkin", Fam Zheng, Paolo Bonzini,
    qemu-devel@nongnu.org, qemu-ppc@nongnu.org
Subject: [PATCH v5 1/4] linux-aio: use LinuxAioState from the running thread
Date: Fri, 3 Feb 2023 08:17:28 -0500
Message-Id: <20230203131731.851116-2-eesposit@redhat.com>
In-Reply-To: <20230203131731.851116-1-eesposit@redhat.com>
References: <20230203131731.851116-1-eesposit@redhat.com>

Remove usage of aio_context_acquire by always submitting asynchronous
AIO to the current thread's LinuxAioState.

In order to prevent mistakes from the caller side, avoid passing
LinuxAioState in laio_io_{plug/unplug} and laio_co_submit, and document
the functions to make clear that they work in the current thread's
AioContext.
Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Stefan Hajnoczi
---
 include/block/aio.h               |  4 ----
 include/block/raw-aio.h           | 18 ++++++++++++------
 include/sysemu/block-backend-io.h |  6 ++++++
 block/file-posix.c                | 10 +++-------
 block/linux-aio.c                 | 29 +++++++++++++++++------------
 5 files changed, 38 insertions(+), 29 deletions(-)

diff --git a/include/block/aio.h b/include/block/aio.h
index 8fba6a3584..b6b396cfcb 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -208,10 +208,6 @@ struct AioContext {
     struct ThreadPool *thread_pool;
 
 #ifdef CONFIG_LINUX_AIO
-    /*
-     * State for native Linux AIO. Uses aio_context_acquire/release for
-     * locking.
-     */
     struct LinuxAioState *linux_aio;
 #endif
 #ifdef CONFIG_LINUX_IO_URING
diff --git a/include/block/raw-aio.h b/include/block/raw-aio.h
index f8cda9df91..db614472e6 100644
--- a/include/block/raw-aio.h
+++ b/include/block/raw-aio.h
@@ -49,14 +49,20 @@ typedef struct LinuxAioState LinuxAioState;
 LinuxAioState *laio_init(Error **errp);
 void laio_cleanup(LinuxAioState *s);
-int coroutine_fn laio_co_submit(BlockDriverState *bs, LinuxAioState *s, int fd,
-                                uint64_t offset, QEMUIOVector *qiov, int type,
-                                uint64_t dev_max_batch);
+
+/* laio_co_submit: submit I/O requests in the thread's current AioContext. */
+int coroutine_fn laio_co_submit(int fd, uint64_t offset, QEMUIOVector *qiov,
+                                int type, uint64_t dev_max_batch);
+
 void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context);
 void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context);
-void laio_io_plug(BlockDriverState *bs, LinuxAioState *s);
-void laio_io_unplug(BlockDriverState *bs, LinuxAioState *s,
-                    uint64_t dev_max_batch);
+
+/*
+ * laio_io_plug/unplug work in the thread's current AioContext, therefore the
+ * caller must ensure that they are paired in the same IOThread.
+ */
+void laio_io_plug(void);
+void laio_io_unplug(uint64_t dev_max_batch);
 #endif
 /* io_uring.c - Linux io_uring implementation */
 #ifdef CONFIG_LINUX_IO_URING
diff --git a/include/sysemu/block-backend-io.h b/include/sysemu/block-backend-io.h
index 031a27ba10..d41698ccc5 100644
--- a/include/sysemu/block-backend-io.h
+++ b/include/sysemu/block-backend-io.h
@@ -74,8 +74,14 @@ void blk_iostatus_set_err(BlockBackend *blk, int error);
 int blk_get_max_iov(BlockBackend *blk);
 int blk_get_max_hw_iov(BlockBackend *blk);
 
+/*
+ * blk_io_plug/unplug are thread-local operations. This means that multiple
+ * IOThreads can simultaneously call plug/unplug, but the caller must ensure
+ * that each unplug() is called in the same IOThread of the matching plug().
+ */
 void blk_io_plug(BlockBackend *blk);
 void blk_io_unplug(BlockBackend *blk);
+
 AioContext *blk_get_aio_context(BlockBackend *blk);
 BlockAcctStats *blk_get_stats(BlockBackend *blk);
 void *blk_aio_get(const AIOCBInfo *aiocb_info, BlockBackend *blk,
diff --git a/block/file-posix.c b/block/file-posix.c
index fa227d9d14..fa99d1c25a 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -2095,10 +2095,8 @@ static int coroutine_fn raw_co_prw(BlockDriverState *bs, uint64_t offset,
 #endif
 #ifdef CONFIG_LINUX_AIO
     } else if (s->use_linux_aio) {
-        LinuxAioState *aio = aio_get_linux_aio(bdrv_get_aio_context(bs));
         assert(qiov->size == bytes);
-        return laio_co_submit(bs, aio, s->fd, offset, qiov, type,
-                              s->aio_max_batch);
+        return laio_co_submit(s->fd, offset, qiov, type, s->aio_max_batch);
 #endif
     }
 
@@ -2137,8 +2135,7 @@ static void raw_aio_plug(BlockDriverState *bs)
     BDRVRawState __attribute__((unused)) *s = bs->opaque;
 #ifdef CONFIG_LINUX_AIO
     if (s->use_linux_aio) {
-        LinuxAioState *aio = aio_get_linux_aio(bdrv_get_aio_context(bs));
-        laio_io_plug(bs, aio);
+        laio_io_plug();
     }
 #endif
 #ifdef CONFIG_LINUX_IO_URING
@@ -2154,8 +2151,7 @@ static void raw_aio_unplug(BlockDriverState *bs)
     BDRVRawState __attribute__((unused)) *s = bs->opaque;
 #ifdef CONFIG_LINUX_AIO
     if (s->use_linux_aio) {
-        LinuxAioState *aio = aio_get_linux_aio(bdrv_get_aio_context(bs));
-        laio_io_unplug(bs, aio, s->aio_max_batch);
+        laio_io_unplug(s->aio_max_batch);
     }
 #endif
 #ifdef CONFIG_LINUX_IO_URING
diff --git a/block/linux-aio.c b/block/linux-aio.c
index d2cfb7f523..fc50cdd1bf 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -16,6 +16,9 @@
 #include "qemu/coroutine.h"
 #include "qapi/error.h"
 
+/* Only used for assertions. */
+#include "qemu/coroutine_int.h"
+
 #include <libaio.h>
 
 /*
@@ -56,10 +59,8 @@ struct LinuxAioState {
     io_context_t ctx;
     EventNotifier e;
 
-    /* io queue for submit at batch. Protected by AioContext lock. */
+    /* No locking required, only accessed from AioContext home thread */
     LaioQueue io_q;
-
-    /* I/O completion processing. Only runs in I/O thread. */
     QEMUBH *completion_bh;
     int event_idx;
     int event_max;
@@ -102,6 +103,7 @@ static void qemu_laio_process_completion(struct qemu_laiocb *laiocb)
      * later. Coroutines cannot be entered recursively so avoid doing
      * that!
      */
+    assert(laiocb->co->ctx == laiocb->ctx->aio_context);
     if (!qemu_coroutine_entered(laiocb->co)) {
         aio_co_wake(laiocb->co);
     }
@@ -232,13 +234,11 @@ static void qemu_laio_process_completions(LinuxAioState *s)
 
 static void qemu_laio_process_completions_and_submit(LinuxAioState *s)
 {
-    aio_context_acquire(s->aio_context);
     qemu_laio_process_completions(s);
 
     if (!s->io_q.plugged && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
         ioq_submit(s);
     }
-    aio_context_release(s->aio_context);
 }
 
 static void qemu_laio_completion_bh(void *opaque)
@@ -354,14 +354,19 @@ static uint64_t laio_max_batch(LinuxAioState *s, uint64_t dev_max_batch)
     return max_batch;
 }
 
-void laio_io_plug(BlockDriverState *bs, LinuxAioState *s)
+void laio_io_plug(void)
 {
+    AioContext *ctx = qemu_get_current_aio_context();
+    LinuxAioState *s = aio_get_linux_aio(ctx);
+
     s->io_q.plugged++;
 }
 
-void laio_io_unplug(BlockDriverState *bs, LinuxAioState *s,
-                    uint64_t dev_max_batch)
+void laio_io_unplug(uint64_t dev_max_batch)
 {
+    AioContext *ctx = qemu_get_current_aio_context();
+    LinuxAioState *s = aio_get_linux_aio(ctx);
+
     assert(s->io_q.plugged);
     s->io_q.plugged--;
 
@@ -411,15 +416,15 @@ static int laio_do_submit(int fd, struct qemu_laiocb *laiocb, off_t offset,
     return 0;
 }
 
-int coroutine_fn laio_co_submit(BlockDriverState *bs, LinuxAioState *s, int fd,
-                                uint64_t offset, QEMUIOVector *qiov, int type,
-                                uint64_t dev_max_batch)
+int coroutine_fn laio_co_submit(int fd, uint64_t offset, QEMUIOVector *qiov,
+                                int type, uint64_t dev_max_batch)
 {
     int ret;
+    AioContext *ctx = qemu_get_current_aio_context();
     struct qemu_laiocb laiocb = {
         .co         = qemu_coroutine_self(),
         .nbytes     = qiov->size,
-        .ctx        = s,
+        .ctx        = aio_get_linux_aio(ctx),
         .ret        = -EINPROGRESS,
         .is_read    = (type == QEMU_AIO_READ),
         .qiov       = qiov,
-- 
2.39.1

From nobody Fri Apr 26 07:59:23 2024
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Cc: Stefan Berger, Kevin Wolf, Hanna Reitz, Stefan Weil, Aarushi Mehta,
    Julia Suvorova, Stefan Hajnoczi, Stefano Garzarella, Greg Kurz,
    Christian Schoenebeck, Daniel Henrique Barboza, Cédric Le Goater,
    David Gibson, "Michael S. Tsirkin", Fam Zheng, Paolo Bonzini,
    qemu-devel@nongnu.org, qemu-ppc@nongnu.org
Subject: [PATCH v5 2/4] io_uring: use LuringState from the running thread
Date: Fri, 3 Feb 2023 08:17:29 -0500
Message-Id: <20230203131731.851116-3-eesposit@redhat.com>
In-Reply-To: <20230203131731.851116-1-eesposit@redhat.com>
References: <20230203131731.851116-1-eesposit@redhat.com>

Remove usage of aio_context_acquire by always submitting asynchronous
AIO to the current thread's LuringState.

In order to prevent mistakes from the caller side, avoid passing
LuringState in luring_io_{plug/unplug} and luring_co_submit, and
document the functions to make clear that they work in the current
thread's AioContext.
Signed-off-by: Emanuele Giuseppe Esposito
---
 include/block/aio.h     |  4 ----
 include/block/raw-aio.h | 15 +++++++++++----
 block/file-posix.c      | 12 ++++--------
 block/io_uring.c        | 23 +++++++++++++++--------
 4 files changed, 30 insertions(+), 24 deletions(-)

diff --git a/include/block/aio.h b/include/block/aio.h
index b6b396cfcb..3b7634bef4 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -211,10 +211,6 @@ struct AioContext {
     struct LinuxAioState *linux_aio;
 #endif
 #ifdef CONFIG_LINUX_IO_URING
-    /*
-     * State for Linux io_uring. Uses aio_context_acquire/release for
-     * locking.
-     */
     struct LuringState *linux_io_uring;
 
     /* State for file descriptor monitoring using Linux io_uring */
diff --git a/include/block/raw-aio.h b/include/block/raw-aio.h
index db614472e6..e46a29c3f0 100644
--- a/include/block/raw-aio.h
+++ b/include/block/raw-aio.h
@@ -69,12 +69,19 @@ void laio_io_unplug(uint64_t dev_max_batch);
 typedef struct LuringState LuringState;
 LuringState *luring_init(Error **errp);
 void luring_cleanup(LuringState *s);
-int coroutine_fn luring_co_submit(BlockDriverState *bs, LuringState *s, int fd,
-                                  uint64_t offset, QEMUIOVector *qiov, int type);
+
+/* luring_co_submit: submit I/O requests in the thread's current AioContext. */
+int coroutine_fn luring_co_submit(BlockDriverState *bs, int fd, uint64_t offset,
+                                  QEMUIOVector *qiov, int type);
 void luring_detach_aio_context(LuringState *s, AioContext *old_context);
 void luring_attach_aio_context(LuringState *s, AioContext *new_context);
-void luring_io_plug(BlockDriverState *bs, LuringState *s);
-void luring_io_unplug(BlockDriverState *bs, LuringState *s);
+
+/*
+ * luring_io_plug/unplug work in the thread's current AioContext, therefore the
+ * caller must ensure that they are paired in the same IOThread.
+ */
+void luring_io_plug(void);
+void luring_io_unplug(void);
 #endif
 
 #ifdef _WIN32
diff --git a/block/file-posix.c b/block/file-posix.c
index fa99d1c25a..b8ee58201c 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -2089,9 +2089,8 @@ static int coroutine_fn raw_co_prw(BlockDriverState *bs, uint64_t offset,
         type |= QEMU_AIO_MISALIGNED;
 #ifdef CONFIG_LINUX_IO_URING
     } else if (s->use_linux_io_uring) {
-        LuringState *aio = aio_get_linux_io_uring(bdrv_get_aio_context(bs));
         assert(qiov->size == bytes);
-        return luring_co_submit(bs, aio, s->fd, offset, qiov, type);
+        return luring_co_submit(bs, s->fd, offset, qiov, type);
 #endif
 #ifdef CONFIG_LINUX_AIO
     } else if (s->use_linux_aio) {
@@ -2140,8 +2139,7 @@ static void raw_aio_plug(BlockDriverState *bs)
 #endif
 #ifdef CONFIG_LINUX_IO_URING
     if (s->use_linux_io_uring) {
-        LuringState *aio = aio_get_linux_io_uring(bdrv_get_aio_context(bs));
-        luring_io_plug(bs, aio);
+        luring_io_plug();
     }
 #endif
 }
@@ -2156,8 +2154,7 @@ static void raw_aio_unplug(BlockDriverState *bs)
 #endif
 #ifdef CONFIG_LINUX_IO_URING
     if (s->use_linux_io_uring) {
-        LuringState *aio = aio_get_linux_io_uring(bdrv_get_aio_context(bs));
-        luring_io_unplug(bs, aio);
+        luring_io_unplug();
     }
 #endif
 }
@@ -2181,8 +2178,7 @@ static int coroutine_fn raw_co_flush_to_disk(BlockDriverState *bs)
 
 #ifdef CONFIG_LINUX_IO_URING
     if (s->use_linux_io_uring) {
-        LuringState *aio = aio_get_linux_io_uring(bdrv_get_aio_context(bs));
-        return luring_co_submit(bs, aio, s->fd, 0, NULL, QEMU_AIO_FLUSH);
+        return luring_co_submit(bs, s->fd, 0, NULL, QEMU_AIO_FLUSH);
     }
 #endif
     return raw_thread_pool_submit(bs, handle_aiocb_flush, &acb);
diff --git a/block/io_uring.c b/block/io_uring.c
index 973e15d876..220fb72ae6 100644
--- a/block/io_uring.c
+++ b/block/io_uring.c
@@ -18,6 +18,9 @@
 #include "qapi/error.h"
 #include "trace.h"
 
+/* Only used for assertions. */
+#include "qemu/coroutine_int.h"
+
 /* io_uring ring size */
 #define MAX_ENTRIES 128
 
@@ -50,10 +53,9 @@ typedef struct LuringState {
 
     struct io_uring ring;
 
-    /* io queue for submit at batch. Protected by AioContext lock. */
+    /* No locking required, only accessed from AioContext home thread */
     LuringQueue io_q;
 
-    /* I/O completion processing. Only runs in I/O thread. */
     QEMUBH *completion_bh;
 } LuringState;
 
@@ -209,6 +211,7 @@ end:
      * eventually runs later. Coroutines cannot be entered recursively
      * so avoid doing that!
      */
+    assert(luringcb->co->ctx == luringcb->aio_context);
     if (!qemu_coroutine_entered(luringcb->co)) {
         aio_co_wake(luringcb->co);
     }
@@ -262,13 +265,11 @@ static int ioq_submit(LuringState *s)
 
 static void luring_process_completions_and_submit(LuringState *s)
 {
-    aio_context_acquire(s->aio_context);
     luring_process_completions(s);
 
     if (!s->io_q.plugged && s->io_q.in_queue > 0) {
         ioq_submit(s);
     }
-    aio_context_release(s->aio_context);
 }
 
 static void qemu_luring_completion_bh(void *opaque)
@@ -306,14 +307,18 @@ static void ioq_init(LuringQueue *io_q)
     io_q->blocked = false;
 }
 
-void luring_io_plug(BlockDriverState *bs, LuringState *s)
+void luring_io_plug(void)
 {
+    AioContext *ctx = qemu_get_current_aio_context();
+    LuringState *s = aio_get_linux_io_uring(ctx);
+
     trace_luring_io_plug(s);
     s->io_q.plugged++;
 }
 
-void luring_io_unplug(BlockDriverState *bs, LuringState *s)
+void luring_io_unplug(void)
 {
+    AioContext *ctx = qemu_get_current_aio_context();
+    LuringState *s = aio_get_linux_io_uring(ctx);
+
     assert(s->io_q.plugged);
     trace_luring_io_unplug(s, s->io_q.blocked, s->io_q.plugged,
                            s->io_q.in_queue, s->io_q.in_flight);
@@ -373,10 +378,12 @@ static int luring_do_submit(int fd, LuringAIOCB *luringcb, LuringState *s,
     return 0;
 }
 
-int coroutine_fn luring_co_submit(BlockDriverState *bs, LuringState *s, int fd,
-                                  uint64_t offset, QEMUIOVector *qiov, int type)
+int coroutine_fn luring_co_submit(BlockDriverState *bs, int fd, uint64_t offset,
+                                  QEMUIOVector *qiov, int type)
 {
     int ret;
+    AioContext *ctx = qemu_get_current_aio_context();
+    LuringState *s = aio_get_linux_io_uring(ctx);
     LuringAIOCB luringcb = {
         .co         = qemu_coroutine_self(),
         .ret        = -EINPROGRESS,
-- 
2.39.1

From nobody Fri Apr 26 07:59:23 2024
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Cc: Stefan Berger, Kevin Wolf, Hanna Reitz, Stefan Weil, Aarushi Mehta,
    Julia Suvorova, Stefan Hajnoczi, Stefano Garzarella, Greg Kurz,
    Christian Schoenebeck, Daniel Henrique Barboza, Cédric Le Goater,
    David Gibson, "Michael S. Tsirkin", Fam Zheng, Paolo Bonzini,
    qemu-devel@nongnu.org, qemu-ppc@nongnu.org
Subject: [PATCH v5 3/4] thread-pool: use ThreadPool from the running thread
Date: Fri, 3 Feb 2023 08:17:30 -0500
Message-Id: <20230203131731.851116-4-eesposit@redhat.com>
In-Reply-To: <20230203131731.851116-1-eesposit@redhat.com>
References: <20230203131731.851116-1-eesposit@redhat.com>

Use qemu_get_current_aio_context() where possible, since we always
submit work to the current thread anyway.

We want to also be sure that the thread submitting the work is the
same as the one processing the pool, to avoid adding synchronization
to the pool list.
Signed-off-by: Emanuele Giuseppe Esposito --- include/block/thread-pool.h | 5 +++++ block/file-posix.c | 21 ++++++++++----------- block/file-win32.c | 2 +- block/qcow2-threads.c | 2 +- util/thread-pool.c | 9 ++++----- 5 files changed, 21 insertions(+), 18 deletions(-) diff --git a/include/block/thread-pool.h b/include/block/thread-pool.h index 95ff2b0bdb..c408bde74c 100644 --- a/include/block/thread-pool.h +++ b/include/block/thread-pool.h @@ -29,12 +29,17 @@ typedef struct ThreadPool ThreadPool; ThreadPool *thread_pool_new(struct AioContext *ctx); void thread_pool_free(ThreadPool *pool); =20 +/* + * thread_pool_submit* API: submit I/O requests in the thread's + * current AioContext. + */ BlockAIOCB *thread_pool_submit_aio(ThreadPool *pool, ThreadPoolFunc *func, void *arg, BlockCompletionFunc *cb, void *opaque); int coroutine_fn thread_pool_submit_co(ThreadPool *pool, ThreadPoolFunc *func, void *arg); void thread_pool_submit(ThreadPool *pool, ThreadPoolFunc *func, void *arg); + void thread_pool_update_params(ThreadPool *pool, struct AioContext *ctx); =20 #endif diff --git a/block/file-posix.c b/block/file-posix.c index b8ee58201c..f7d88fa857 100644 --- a/block/file-posix.c +++ b/block/file-posix.c @@ -2040,11 +2040,10 @@ out: return result; } =20 -static int coroutine_fn raw_thread_pool_submit(BlockDriverState *bs, - ThreadPoolFunc func, void *= arg) +static int coroutine_fn raw_thread_pool_submit(ThreadPoolFunc func, void *= arg) { /* @bs can be NULL, bdrv_get_aio_context() returns the main context th= en */ - ThreadPool *pool =3D aio_get_thread_pool(bdrv_get_aio_context(bs)); + ThreadPool *pool =3D aio_get_thread_pool(qemu_get_current_aio_context(= )); return thread_pool_submit_co(pool, func, arg); } =20 @@ -2112,7 +2111,7 @@ static int coroutine_fn raw_co_prw(BlockDriverState *= bs, uint64_t offset, }; =20 assert(qiov->size =3D=3D bytes); - return raw_thread_pool_submit(bs, handle_aiocb_rw, &acb); + return raw_thread_pool_submit(handle_aiocb_rw, &acb); } =20 
static int coroutine_fn raw_co_preadv(BlockDriverState *bs, int64_t offset, @@ -2181,7 +2180,7 @@ static int coroutine_fn raw_co_flush_to_disk(BlockDri= verState *bs) return luring_co_submit(bs, s->fd, 0, NULL, QEMU_AIO_FLUSH); } #endif - return raw_thread_pool_submit(bs, handle_aiocb_flush, &acb); + return raw_thread_pool_submit(handle_aiocb_flush, &acb); } =20 static void raw_aio_attach_aio_context(BlockDriverState *bs, @@ -2243,7 +2242,7 @@ raw_regular_truncate(BlockDriverState *bs, int fd, in= t64_t offset, }, }; =20 - return raw_thread_pool_submit(bs, handle_aiocb_truncate, &acb); + return raw_thread_pool_submit(handle_aiocb_truncate, &acb); } =20 static int coroutine_fn raw_co_truncate(BlockDriverState *bs, int64_t offs= et, @@ -2993,7 +2992,7 @@ raw_do_pdiscard(BlockDriverState *bs, int64_t offset,= int64_t bytes, acb.aio_type |=3D QEMU_AIO_BLKDEV; } =20 - ret =3D raw_thread_pool_submit(bs, handle_aiocb_discard, &acb); + ret =3D raw_thread_pool_submit(handle_aiocb_discard, &acb); raw_account_discard(s, bytes, ret); return ret; } @@ -3068,7 +3067,7 @@ raw_do_pwrite_zeroes(BlockDriverState *bs, int64_t of= fset, int64_t bytes, handler =3D handle_aiocb_write_zeroes; } =20 - return raw_thread_pool_submit(bs, handler, &acb); + return raw_thread_pool_submit(handler, &acb); } =20 static int coroutine_fn raw_co_pwrite_zeroes( @@ -3279,7 +3278,7 @@ static int coroutine_fn raw_co_copy_range_to(BlockDri= verState *bs, }, }; =20 - return raw_thread_pool_submit(bs, handle_aiocb_copy_range, &acb); + return raw_thread_pool_submit(handle_aiocb_copy_range, &acb); } =20 BlockDriver bdrv_file =3D { @@ -3609,7 +3608,7 @@ hdev_co_ioctl(BlockDriverState *bs, unsigned long int= req, void *buf) struct sg_io_hdr *io_hdr =3D buf; if (io_hdr->cmdp[0] =3D=3D PERSISTENT_RESERVE_OUT || io_hdr->cmdp[0] =3D=3D PERSISTENT_RESERVE_IN) { - return pr_manager_execute(s->pr_mgr, bdrv_get_aio_context(bs), + return pr_manager_execute(s->pr_mgr, qemu_get_current_aio_cont= ext(), s->fd, io_hdr); } } 
@@ -3625,7 +3624,7 @@ hdev_co_ioctl(BlockDriverState *bs, unsigned long int req, void *buf)
         },
     };
 
-    return raw_thread_pool_submit(bs, handle_aiocb_ioctl, &acb);
+    return raw_thread_pool_submit(handle_aiocb_ioctl, &acb);
 }
 #endif /* linux */
 
diff --git a/block/file-win32.c b/block/file-win32.c
index 12be9c3d0f..1af6d3c810 100644
--- a/block/file-win32.c
+++ b/block/file-win32.c
@@ -168,7 +168,7 @@ static BlockAIOCB *paio_submit(BlockDriverState *bs, HANDLE hfile,
     acb->aio_offset = offset;
 
     trace_file_paio_submit(acb, opaque, offset, count, type);
-    pool = aio_get_thread_pool(bdrv_get_aio_context(bs));
+    pool = aio_get_thread_pool(qemu_get_current_aio_context());
     return thread_pool_submit_aio(pool, aio_worker, acb, cb, opaque);
 }
 
diff --git a/block/qcow2-threads.c b/block/qcow2-threads.c
index 953bbe6df8..6d2e6b7bf4 100644
--- a/block/qcow2-threads.c
+++ b/block/qcow2-threads.c
@@ -43,7 +43,7 @@ qcow2_co_process(BlockDriverState *bs, ThreadPoolFunc *func, void *arg)
 {
     int ret;
     BDRVQcow2State *s = bs->opaque;
-    ThreadPool *pool = aio_get_thread_pool(bdrv_get_aio_context(bs));
+    ThreadPool *pool = aio_get_thread_pool(qemu_get_current_aio_context());
 
     qemu_co_mutex_lock(&s->lock);
     while (s->nb_threads >= QCOW2_MAX_THREADS) {
diff --git a/util/thread-pool.c b/util/thread-pool.c
index 31113b5860..a70abb8a59 100644
--- a/util/thread-pool.c
+++ b/util/thread-pool.c
@@ -48,7 +48,7 @@ struct ThreadPoolElement {
     /* Access to this list is protected by lock. */
     QTAILQ_ENTRY(ThreadPoolElement) reqs;
 
-    /* Access to this list is protected by the global mutex. */
+    /* This list is only written by the thread pool's mother thread.
+     */
     QLIST_ENTRY(ThreadPoolElement) all;
 };
 
@@ -175,7 +175,6 @@ static void thread_pool_completion_bh(void *opaque)
     ThreadPool *pool = opaque;
     ThreadPoolElement *elem, *next;
 
-    aio_context_acquire(pool->ctx);
 restart:
     QLIST_FOREACH_SAFE(elem, &pool->head, all, next) {
         if (elem->state != THREAD_DONE) {
@@ -195,9 +194,7 @@ restart:
              */
             qemu_bh_schedule(pool->completion_bh);
 
-            aio_context_release(pool->ctx);
             elem->common.cb(elem->common.opaque, elem->ret);
-            aio_context_acquire(pool->ctx);
 
             /* We can safely cancel the completion_bh here regardless of someone
              * else having scheduled it meanwhile because we reenter the
@@ -211,7 +208,6 @@ restart:
             qemu_aio_unref(elem);
         }
     }
-    aio_context_release(pool->ctx);
 }
 
 static void thread_pool_cancel(BlockAIOCB *acb)
@@ -251,6 +247,9 @@ BlockAIOCB *thread_pool_submit_aio(ThreadPool *pool,
 {
     ThreadPoolElement *req;
 
+    /* Assert that the thread submitting work is the same running the pool */
+    assert(pool->ctx == qemu_get_current_aio_context());
+
     req = qemu_aio_get(&thread_pool_aiocb_info, NULL, cb, opaque);
     req->func = func;
     req->arg = arg;
-- 
2.39.1

From nobody Fri Apr 26 07:59:23 2024
From: Emanuele Giuseppe Esposito
To: qemu-block@nongnu.org
Subject: [PATCH v5 4/4] thread-pool: avoid passing the pool parameter every time
Date: Fri, 3 Feb 2023 08:17:31 -0500
Message-Id: <20230203131731.851116-5-eesposit@redhat.com>
In-Reply-To: <20230203131731.851116-1-eesposit@redhat.com>
References: <20230203131731.851116-1-eesposit@redhat.com>

thread_pool_submit_aio() is always called on a pool taken from
qemu_get_current_aio_context(), and that is the only intended use:
each pool runs only in the same thread that submits work to it; it
cannot run anywhere else.

Therefore simplify the thread_pool_submit* API and remove the
ThreadPool function parameter.
Signed-off-by: Emanuele Giuseppe Esposito
---
 include/block/thread-pool.h   | 10 ++++------
 backends/tpm/tpm_backend.c    |  4 +---
 block/file-posix.c            |  4 +---
 block/file-win32.c            |  4 +---
 block/qcow2-threads.c         |  3 +--
 hw/9pfs/coth.c                |  3 +--
 hw/ppc/spapr_nvdimm.c         |  6 ++----
 hw/virtio/virtio-pmem.c       |  3 +--
 scsi/pr-manager.c             |  3 +--
 scsi/qemu-pr-helper.c         |  3 +--
 tests/unit/test-thread-pool.c | 12 +++++-------
 util/thread-pool.c            | 16 ++++++++--------
 12 files changed, 27 insertions(+), 44 deletions(-)

diff --git a/include/block/thread-pool.h b/include/block/thread-pool.h
index c408bde74c..948ff5f30c 100644
--- a/include/block/thread-pool.h
+++ b/include/block/thread-pool.h
@@ -33,12 +33,10 @@ void thread_pool_free(ThreadPool *pool);
  * thread_pool_submit* API: submit I/O requests in the thread's
  * current AioContext.
  */
-BlockAIOCB *thread_pool_submit_aio(ThreadPool *pool,
-                                   ThreadPoolFunc *func, void *arg,
-                                   BlockCompletionFunc *cb, void *opaque);
-int coroutine_fn thread_pool_submit_co(ThreadPool *pool,
-                                       ThreadPoolFunc *func, void *arg);
-void thread_pool_submit(ThreadPool *pool, ThreadPoolFunc *func, void *arg);
+BlockAIOCB *thread_pool_submit_aio(ThreadPoolFunc *func, void *arg,
+                                   BlockCompletionFunc *cb, void *opaque);
+int coroutine_fn thread_pool_submit_co(ThreadPoolFunc *func, void *arg);
+void thread_pool_submit(ThreadPoolFunc *func, void *arg);
 
 void thread_pool_update_params(ThreadPool *pool, struct AioContext *ctx);
 
diff --git a/backends/tpm/tpm_backend.c b/backends/tpm/tpm_backend.c
index 375587e743..485a20b9e0 100644
--- a/backends/tpm/tpm_backend.c
+++ b/backends/tpm/tpm_backend.c
@@ -100,8 +100,6 @@ bool tpm_backend_had_startup_error(TPMBackend *s)
 
 void tpm_backend_deliver_request(TPMBackend *s, TPMBackendCmd *cmd)
 {
-    ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
-
     if (s->cmd != NULL) {
         error_report("There is a TPM request pending");
         return;
@@ -109,7 +107,7 @@ void tpm_backend_deliver_request(TPMBackend *s, TPMBackendCmd *cmd)
     s->cmd = cmd;
     object_ref(OBJECT(s));
-    thread_pool_submit_aio(pool, tpm_backend_worker_thread, s,
+    thread_pool_submit_aio(tpm_backend_worker_thread, s,
                            tpm_backend_request_completed, s);
 }
 
diff --git a/block/file-posix.c b/block/file-posix.c
index f7d88fa857..e4c433d071 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -2042,9 +2042,7 @@ out:
 
 static int coroutine_fn raw_thread_pool_submit(ThreadPoolFunc func, void *arg)
 {
-    /* @bs can be NULL, bdrv_get_aio_context() returns the main context then */
-    ThreadPool *pool = aio_get_thread_pool(qemu_get_current_aio_context());
-    return thread_pool_submit_co(pool, func, arg);
+    return thread_pool_submit_co(func, arg);
 }
 
 /*
diff --git a/block/file-win32.c b/block/file-win32.c
index 1af6d3c810..c4c9c985c8 100644
--- a/block/file-win32.c
+++ b/block/file-win32.c
@@ -153,7 +153,6 @@ static BlockAIOCB *paio_submit(BlockDriverState *bs, HANDLE hfile,
     BlockCompletionFunc *cb, void *opaque, int type)
 {
     RawWin32AIOData *acb = g_new(RawWin32AIOData, 1);
-    ThreadPool *pool;
 
     acb->bs = bs;
     acb->hfile = hfile;
@@ -168,8 +167,7 @@ static BlockAIOCB *paio_submit(BlockDriverState *bs, HANDLE hfile,
     acb->aio_offset = offset;
 
     trace_file_paio_submit(acb, opaque, offset, count, type);
-    pool = aio_get_thread_pool(qemu_get_current_aio_context());
-    return thread_pool_submit_aio(pool, aio_worker, acb, cb, opaque);
+    return thread_pool_submit_aio(aio_worker, acb, cb, opaque);
 }
 
 int qemu_ftruncate64(int fd, int64_t length)
diff --git a/block/qcow2-threads.c b/block/qcow2-threads.c
index 6d2e6b7bf4..d6071a1eae 100644
--- a/block/qcow2-threads.c
+++ b/block/qcow2-threads.c
@@ -43,7 +43,6 @@ qcow2_co_process(BlockDriverState *bs, ThreadPoolFunc *func, void *arg)
 {
     int ret;
     BDRVQcow2State *s = bs->opaque;
-    ThreadPool *pool = aio_get_thread_pool(qemu_get_current_aio_context());
 
     qemu_co_mutex_lock(&s->lock);
     while (s->nb_threads >= QCOW2_MAX_THREADS) {
@@ -52,7 +51,7 @@ qcow2_co_process(BlockDriverState *bs, ThreadPoolFunc *func, void *arg)
     s->nb_threads++;
     qemu_co_mutex_unlock(&s->lock);
 
-    ret = thread_pool_submit_co(pool, func, arg);
+    ret = thread_pool_submit_co(func, arg);
 
     qemu_co_mutex_lock(&s->lock);
     s->nb_threads--;
diff --git a/hw/9pfs/coth.c b/hw/9pfs/coth.c
index 2802d41cce..598f46add9 100644
--- a/hw/9pfs/coth.c
+++ b/hw/9pfs/coth.c
@@ -41,6 +41,5 @@ static int coroutine_enter_func(void *arg)
 void co_run_in_worker_bh(void *opaque)
 {
     Coroutine *co = opaque;
-    thread_pool_submit_aio(aio_get_thread_pool(qemu_get_aio_context()),
-                           coroutine_enter_func, co, coroutine_enter_cb, co);
+    thread_pool_submit_aio(coroutine_enter_func, co, coroutine_enter_cb, co);
 }
diff --git a/hw/ppc/spapr_nvdimm.c b/hw/ppc/spapr_nvdimm.c
index 04a64cada3..a8688243a6 100644
--- a/hw/ppc/spapr_nvdimm.c
+++ b/hw/ppc/spapr_nvdimm.c
@@ -496,7 +496,6 @@ static int spapr_nvdimm_flush_post_load(void *opaque, int version_id)
 {
     SpaprNVDIMMDevice *s_nvdimm = (SpaprNVDIMMDevice *)opaque;
     SpaprNVDIMMDeviceFlushState *state;
-    ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
     HostMemoryBackend *backend = MEMORY_BACKEND(PC_DIMM(s_nvdimm)->hostmem);
     bool is_pmem = object_property_get_bool(OBJECT(backend), "pmem", NULL);
     bool pmem_override = object_property_get_bool(OBJECT(s_nvdimm),
@@ -517,7 +516,7 @@ static int spapr_nvdimm_flush_post_load(void *opaque, int version_id)
     }
 
     QLIST_FOREACH(state, &s_nvdimm->pending_nvdimm_flush_states, node) {
-        thread_pool_submit_aio(pool, flush_worker_cb, state,
+        thread_pool_submit_aio(flush_worker_cb, state,
                                spapr_nvdimm_flush_completion_cb, state);
     }
 
@@ -664,7 +663,6 @@ static target_ulong h_scm_flush(PowerPCCPU *cpu, SpaprMachineState *spapr,
     PCDIMMDevice *dimm;
     HostMemoryBackend *backend = NULL;
     SpaprNVDIMMDeviceFlushState *state;
-    ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
     int fd;
 
     if (!drc || !drc->dev ||
@@ -699,7 +697,7 @@ static target_ulong h_scm_flush(PowerPCCPU *cpu, SpaprMachineState *spapr,
 
     state->drcidx = drc_index;
 
-    thread_pool_submit_aio(pool, flush_worker_cb, state,
+    thread_pool_submit_aio(flush_worker_cb, state,
                            spapr_nvdimm_flush_completion_cb, state);
 
     continue_token = state->continue_token;
diff --git a/hw/virtio/virtio-pmem.c b/hw/virtio/virtio-pmem.c
index dff402f08f..c3512c2dae 100644
--- a/hw/virtio/virtio-pmem.c
+++ b/hw/virtio/virtio-pmem.c
@@ -70,7 +70,6 @@ static void virtio_pmem_flush(VirtIODevice *vdev, VirtQueue *vq)
     VirtIODeviceRequest *req_data;
     VirtIOPMEM *pmem = VIRTIO_PMEM(vdev);
     HostMemoryBackend *backend = MEMORY_BACKEND(pmem->memdev);
-    ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
 
     trace_virtio_pmem_flush_request();
     req_data = virtqueue_pop(vq, sizeof(VirtIODeviceRequest));
@@ -88,7 +87,7 @@ static void virtio_pmem_flush(VirtIODevice *vdev, VirtQueue *vq)
     req_data->fd = memory_region_get_fd(&backend->mr);
     req_data->pmem = pmem;
     req_data->vdev = vdev;
-    thread_pool_submit_aio(pool, worker_cb, req_data, done_cb, req_data);
+    thread_pool_submit_aio(worker_cb, req_data, done_cb, req_data);
 }
 
 static void virtio_pmem_get_config(VirtIODevice *vdev, uint8_t *config)
diff --git a/scsi/pr-manager.c b/scsi/pr-manager.c
index 2098d7e759..fb5fc29730 100644
--- a/scsi/pr-manager.c
+++ b/scsi/pr-manager.c
@@ -51,7 +51,6 @@ static int pr_manager_worker(void *opaque)
 int coroutine_fn pr_manager_execute(PRManager *pr_mgr, AioContext *ctx, int fd,
                                     struct sg_io_hdr *hdr)
 {
-    ThreadPool *pool = aio_get_thread_pool(ctx);
     PRManagerData data = {
         .pr_mgr = pr_mgr,
         .fd = fd,
@@ -62,7 +61,7 @@ int coroutine_fn pr_manager_execute(PRManager *pr_mgr, AioContext *ctx, int fd,
 
     /* The matching object_unref is in pr_manager_worker.  */
     object_ref(OBJECT(pr_mgr));
-    return thread_pool_submit_co(pool, pr_manager_worker, &data);
+    return thread_pool_submit_co(pr_manager_worker, &data);
 }
 
 bool pr_manager_is_connected(PRManager *pr_mgr)
diff --git a/scsi/qemu-pr-helper.c b/scsi/qemu-pr-helper.c
index 196b78c00d..55888cd9ac 100644
--- a/scsi/qemu-pr-helper.c
+++ b/scsi/qemu-pr-helper.c
@@ -180,7 +180,6 @@ static int do_sgio_worker(void *opaque)
 static int do_sgio(int fd, const uint8_t *cdb, uint8_t *sense,
                    uint8_t *buf, int *sz, int dir)
 {
-    ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
     int r;
 
     PRHelperSGIOData data = {
@@ -192,7 +191,7 @@ static int do_sgio(int fd, const uint8_t *cdb, uint8_t *sense,
         .dir = dir,
     };
 
-    r = thread_pool_submit_co(pool, do_sgio_worker, &data);
+    r = thread_pool_submit_co(do_sgio_worker, &data);
     *sz = data.sz;
     return r;
 }
diff --git a/tests/unit/test-thread-pool.c b/tests/unit/test-thread-pool.c
index 6020e65d69..448fbf7e5f 100644
--- a/tests/unit/test-thread-pool.c
+++ b/tests/unit/test-thread-pool.c
@@ -8,7 +8,6 @@
 #include "qemu/main-loop.h"
 
 static AioContext *ctx;
-static ThreadPool *pool;
 static int active;
 
 typedef struct {
@@ -47,7 +46,7 @@ static void done_cb(void *opaque, int ret)
 static void test_submit(void)
 {
     WorkerTestData data = { .n = 0 };
-    thread_pool_submit(pool, worker_cb, &data);
+    thread_pool_submit(worker_cb, &data);
     while (data.n == 0) {
         aio_poll(ctx, true);
     }
@@ -57,7 +56,7 @@ static void test_submit(void)
 static void test_submit_aio(void)
 {
     WorkerTestData data = { .n = 0, .ret = -EINPROGRESS };
-    data.aiocb = thread_pool_submit_aio(pool, worker_cb, &data,
+    data.aiocb = thread_pool_submit_aio(worker_cb, &data,
                                         done_cb, &data);
 
     /* The callbacks are not called until after the first wait.  */
@@ -78,7 +77,7 @@ static void co_test_cb(void *opaque)
     active = 1;
     data->n = 0;
     data->ret = -EINPROGRESS;
-    thread_pool_submit_co(pool, worker_cb, data);
+    thread_pool_submit_co(worker_cb, data);
 
     /* The test continues in test_submit_co, after qemu_coroutine_enter... */
 
@@ -122,7 +121,7 @@ static void test_submit_many(void)
     for (i = 0; i < 100; i++) {
         data[i].n = 0;
         data[i].ret = -EINPROGRESS;
-        thread_pool_submit_aio(pool, worker_cb, &data[i], done_cb, &data[i]);
+        thread_pool_submit_aio(worker_cb, &data[i], done_cb, &data[i]);
     }
 
     active = 100;
@@ -150,7 +149,7 @@ static void do_test_cancel(bool sync)
     for (i = 0; i < 100; i++) {
         data[i].n = 0;
         data[i].ret = -EINPROGRESS;
-        data[i].aiocb = thread_pool_submit_aio(pool, long_cb, &data[i],
+        data[i].aiocb = thread_pool_submit_aio(long_cb, &data[i],
                                                done_cb, &data[i]);
     }
 
@@ -235,7 +234,6 @@ int main(int argc, char **argv)
 {
     qemu_init_main_loop(&error_abort);
     ctx = qemu_get_current_aio_context();
-    pool = aio_get_thread_pool(ctx);
 
     g_test_init(&argc, &argv, NULL);
     g_test_add_func("/thread-pool/submit", test_submit);
diff --git a/util/thread-pool.c b/util/thread-pool.c
index a70abb8a59..0d97888df0 100644
--- a/util/thread-pool.c
+++ b/util/thread-pool.c
@@ -241,11 +241,12 @@ static const AIOCBInfo thread_pool_aiocb_info = {
     .get_aio_context = thread_pool_get_aio_context,
 };
 
-BlockAIOCB *thread_pool_submit_aio(ThreadPool *pool,
-                                   ThreadPoolFunc *func, void *arg,
-                                   BlockCompletionFunc *cb, void *opaque)
+BlockAIOCB *thread_pool_submit_aio(ThreadPoolFunc *func, void *arg,
+                                   BlockCompletionFunc *cb, void *opaque)
 {
     ThreadPoolElement *req;
+    AioContext *ctx = qemu_get_current_aio_context();
+    ThreadPool *pool = aio_get_thread_pool(ctx);
 
     /* Assert that the thread submitting work is the same running the pool */
     assert(pool->ctx == qemu_get_current_aio_context());
@@ -283,19 +284,18 @@ static void thread_pool_co_cb(void *opaque, int ret)
     aio_co_wake(co->co);
 }
 
-int coroutine_fn thread_pool_submit_co(ThreadPool *pool, ThreadPoolFunc *func,
-                                       void *arg)
+int coroutine_fn thread_pool_submit_co(ThreadPoolFunc *func, void *arg)
 {
     ThreadPoolCo tpc = { .co = qemu_coroutine_self(), .ret = -EINPROGRESS };
     assert(qemu_in_coroutine());
-    thread_pool_submit_aio(pool, func, arg, thread_pool_co_cb, &tpc);
+    thread_pool_submit_aio(func, arg, thread_pool_co_cb, &tpc);
     qemu_coroutine_yield();
     return tpc.ret;
 }
 
-void thread_pool_submit(ThreadPool *pool, ThreadPoolFunc *func, void *arg)
+void thread_pool_submit(ThreadPoolFunc *func, void *arg)
 {
-    thread_pool_submit_aio(pool, func, arg, NULL, NULL);
+    thread_pool_submit_aio(func, arg, NULL, NULL);
 }
 
 void thread_pool_update_params(ThreadPool *pool, AioContext *ctx)
-- 
2.39.1