From nobody Tue May 7 16:10:53 2024
From: Ioannis Angelakopoulos <iangelak@redhat.com>
To: qemu-devel@nongnu.org, virtio-fs@redhat.com
Cc: iangelak@redhat.com, stefanha@redhat.com, dgilbert@redhat.com, vgoyal@redhat.com
Subject: [PATCH 1/6] virtiofsd: Release file locks using F_UNLCK
Date: Wed, 16 Jun 2021 15:39:16 -0400
Message-Id: <20210616193921.608720-2-iangelak@redhat.com>
In-Reply-To: <20210616193921.608720-1-iangelak@redhat.com>
References: <20210616193921.608720-1-iangelak@redhat.com>
From: Vivek Goyal <vgoyal@redhat.com>

We emulate POSIX locks for the guest using open file description (OFD)
locks in virtiofsd. When an fd is closed in the guest, we find the
associated OFD lock fd (if there is one) and close it to release all the
locks. The assumption is that no other thread is using the
lo_inode_plock structure or plock->fd, so it is safe to do so.

But we are about to introduce a blocking variant of locks (SETLKW),
which means we might be waiting for a lock to become available while
still using plock->fd; the plock structure would still have users. So
release the locks with fcntl(F_OFD_SETLK, F_UNLCK) instead, and free
plock later.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Ioannis Angelakopoulos <iangelak@redhat.com>
---
 tools/virtiofsd/passthrough_ll.c | 32 ++++++++++++++++++++++++++++----
 1 file changed, 28 insertions(+), 4 deletions(-)

diff --git a/tools/virtiofsd/passthrough_ll.c b/tools/virtiofsd/passthrough_ll.c
index 49c21fd855..f2fa9d95bb 100644
--- a/tools/virtiofsd/passthrough_ll.c
+++ b/tools/virtiofsd/passthrough_ll.c
@@ -968,6 +968,14 @@ static int do_statx(struct lo_data *lo, int dirfd, const char *pathname,
     return 0;
 }
 
+static void posix_locks_value_destroy(gpointer data)
+{
+    struct lo_inode_plock *plock = data;
+
+    close(plock->fd);
+    free(plock);
+}
+
 /*
  * Increments nlookup on the inode on success. unref_inode_lolocked() must be
  * called eventually to decrement nlookup again. If inodep is non-NULL, the
@@ -1473,9 +1481,6 @@ static void unref_inode(struct lo_data *lo, struct lo_inode *inode, uint64_t n)
         lo_map_remove(&lo->ino_map, inode->fuse_ino);
         g_hash_table_remove(lo->inodes, &inode->key);
         if (lo->posix_lock) {
-            if (g_hash_table_size(inode->posix_locks)) {
-                fuse_log(FUSE_LOG_WARNING, "Hash table is not empty\n");
-            }
             g_hash_table_destroy(inode->posix_locks);
             pthread_mutex_destroy(&inode->plock_mutex);
         }
@@ -1974,6 +1979,9 @@ static struct lo_inode_plock *lookup_create_plock_ctx(struct lo_data *lo,
     plock = g_hash_table_lookup(inode->posix_locks,
                                 GUINT_TO_POINTER(lock_owner));
 
+    fuse_log(FUSE_LOG_DEBUG, "lookup_create_plock_ctx():"
+             " Inserted element in posix_locks hash table"
+             " with value pointer %p\n", plock);
     if (plock) {
         return plock;
     }
@@ -2182,6 +2190,8 @@ static void lo_flush(fuse_req_t req, fuse_ino_t ino, struct fuse_file_info *fi)
     (void)ino;
     struct lo_inode *inode;
     struct lo_data *lo = lo_data(req);
+    struct lo_inode_plock *plock;
+    struct flock flock;
 
     inode = lo_inode(req, ino);
     if (!inode) {
@@ -2198,8 +2208,22 @@ static void lo_flush(fuse_req_t req, fuse_ino_t ino, struct fuse_file_info *fi)
     /* An fd is going away. Cleanup associated posix locks */
     if (lo->posix_lock) {
         pthread_mutex_lock(&inode->plock_mutex);
-        g_hash_table_remove(inode->posix_locks,
+        plock = g_hash_table_lookup(inode->posix_locks,
                             GUINT_TO_POINTER(fi->lock_owner));
+
+        if (plock) {
+            /*
+             * An fd is being closed. For posix locks, this means
+             * drop all the associated locks.
+             */
+            memset(&flock, 0, sizeof(struct flock));
+            flock.l_type = F_UNLCK;
+            flock.l_whence = SEEK_SET;
+            /* Unlock whole file */
+            flock.l_start = flock.l_len = 0;
+            fcntl(plock->fd, F_OFD_SETLK, &flock);
+        }
+
         pthread_mutex_unlock(&inode->plock_mutex);
     }
     res = close(dup(lo_fi_fd(req, fi)));
-- 
2.27.0
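For readers who haven't worked with OFD locks: the lo_flush() hunk above
replaces "close the lock fd" with an explicit whole-file unlock. A minimal
standalone sketch of that unlock, assuming Linux open file description
locks (drop_all_ofd_locks() is a hypothetical helper name, not part of the
patch):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    /* Drop every OFD lock held through fd, without closing fd. */
    static int drop_all_ofd_locks(int fd)
    {
        struct flock fl;

        memset(&fl, 0, sizeof(fl));
        fl.l_type = F_UNLCK;    /* release rather than acquire */
        fl.l_whence = SEEK_SET;
        fl.l_start = 0;
        fl.l_len = 0;           /* l_len == 0 means "to end of file" */

        return fcntl(fd, F_OFD_SETLK, &fl);
    }

Because OFD locks belong to the open file description rather than to a
process, this single F_UNLCK drops every lock taken through that
description, which is why the patch can defer closing plock->fd safely.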
From nobody Tue May 7 16:10:53 2024
From: Ioannis Angelakopoulos <iangelak@redhat.com>
To: qemu-devel@nongnu.org, virtio-fs@redhat.com
Cc: iangelak@redhat.com, stefanha@redhat.com, dgilbert@redhat.com, vgoyal@redhat.com
Subject: [PATCH 2/6] virtiofsd: Create a notification queue
Date: Wed, 16 Jun 2021 15:39:17 -0400
Message-Id: <20210616193921.608720-3-iangelak@redhat.com>
In-Reply-To: <20210616193921.608720-1-iangelak@redhat.com>
References: <20210616193921.608720-1-iangelak@redhat.com>
From: Vivek Goyal <vgoyal@redhat.com>

Add a notification queue which will be used to send async notifications
for file lock availability.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Ioannis Angelakopoulos <iangelak@redhat.com>
---
 hw/virtio/vhost-user-fs.c                  | 30 ++++++--
 include/hw/virtio/vhost-user-fs.h          |  2 +-
 include/standard-headers/linux/virtio_fs.h |  3 +
 tools/virtiofsd/fuse_i.h                   |  1 +
 tools/virtiofsd/fuse_virtio.c              | 79 +++++++++++++++-------
 5 files changed, 85 insertions(+), 30 deletions(-)

diff --git a/hw/virtio/vhost-user-fs.c b/hw/virtio/vhost-user-fs.c
index 6f7f91533d..c7fd5f3123 100644
--- a/hw/virtio/vhost-user-fs.c
+++ b/hw/virtio/vhost-user-fs.c
@@ -31,6 +31,7 @@ static const int user_feature_bits[] = {
     VIRTIO_F_NOTIFY_ON_EMPTY,
     VIRTIO_F_RING_PACKED,
     VIRTIO_F_IOMMU_PLATFORM,
+    VIRTIO_FS_F_NOTIFICATION,
 
     VHOST_INVALID_FEATURE_BIT
 };
@@ -145,9 +146,20 @@ static uint64_t vuf_get_features(VirtIODevice *vdev,
 {
     VHostUserFS *fs = VHOST_USER_FS(vdev);
 
+    virtio_add_feature(&features, VIRTIO_FS_F_NOTIFICATION);
+
     return vhost_get_features(&fs->vhost_dev, user_feature_bits, features);
 }
 
+static void vuf_set_features(VirtIODevice *vdev, uint64_t features)
+{
+    VHostUserFS *fs = VHOST_USER_FS(vdev);
+
+    if (virtio_has_feature(features, VIRTIO_FS_F_NOTIFICATION)) {
+        fs->notify_enabled = true;
+    }
+}
+
 static void vuf_handle_output(VirtIODevice *vdev, VirtQueue *vq)
 {
     /*
@@ -223,16 +235,25 @@ static void vuf_device_realize(DeviceState *dev, Error **errp)
                 sizeof(struct virtio_fs_config));
 
     /* Hiprio queue */
-    fs->hiprio_vq = virtio_add_queue(vdev, fs->conf.queue_size, vuf_handle_output);
+    fs->hiprio_vq = virtio_add_queue(vdev, fs->conf.queue_size,
+                                     vuf_handle_output);
+
+    /*
+     * Notification queue. Feature negotiation happens later, so at this
+     * point we don't know whether the driver will use the notification
+     * queue or not.
+     */
+    virtio_add_queue(vdev, fs->conf.queue_size, vuf_handle_output);
 
     /* Request queues */
     fs->req_vqs = g_new(VirtQueue *, fs->conf.num_request_queues);
     for (i = 0; i < fs->conf.num_request_queues; i++) {
-        fs->req_vqs[i] = virtio_add_queue(vdev, fs->conf.queue_size, vuf_handle_output);
+        fs->req_vqs[i] = virtio_add_queue(vdev, fs->conf.queue_size,
+                                          vuf_handle_output);
     }
 
-    /* 1 high prio queue, plus the number configured */
-    fs->vhost_dev.nvqs = 1 + fs->conf.num_request_queues;
+    /* 1 high prio queue, 1 notification queue plus the number configured */
+    fs->vhost_dev.nvqs = 2 + fs->conf.num_request_queues;
     fs->vhost_dev.vqs = g_new0(struct vhost_virtqueue, fs->vhost_dev.nvqs);
     ret = vhost_dev_init(&fs->vhost_dev, &fs->vhost_user,
                          VHOST_BACKEND_TYPE_USER, 0);
@@ -311,6 +332,7 @@ static void vuf_class_init(ObjectClass *klass, void *data)
     vdc->realize = vuf_device_realize;
    vdc->unrealize = vuf_device_unrealize;
     vdc->get_features = vuf_get_features;
+    vdc->set_features = vuf_set_features;
     vdc->get_config = vuf_get_config;
     vdc->set_status = vuf_set_status;
     vdc->guest_notifier_mask = vuf_guest_notifier_mask;
diff --git a/include/hw/virtio/vhost-user-fs.h b/include/hw/virtio/vhost-user-fs.h
index 0d62834c25..13e2cbc48e 100644
--- a/include/hw/virtio/vhost-user-fs.h
+++ b/include/hw/virtio/vhost-user-fs.h
@@ -40,7 +40,7 @@ struct VHostUserFS {
     VirtQueue **req_vqs;
     VirtQueue *hiprio_vq;
     int32_t bootindex;
-
+    bool notify_enabled;
     /*< public >*/
 };
 
diff --git a/include/standard-headers/linux/virtio_fs.h b/include/standard-headers/linux/virtio_fs.h
index a32fe8a64c..6383d723a3 100644
--- a/include/standard-headers/linux/virtio_fs.h
+++ b/include/standard-headers/linux/virtio_fs.h
@@ -8,6 +8,9 @@
 #include "standard-headers/linux/virtio_config.h"
 #include "standard-headers/linux/virtio_types.h"
 
+/* Feature bits */
+#define VIRTIO_FS_F_NOTIFICATION 0 /* Notification queue supported */
+
 struct virtio_fs_config {
     /* Filesystem name (UTF-8, not NUL-terminated, padded with NULs) */
     uint8_t tag[36];
 
diff --git a/tools/virtiofsd/fuse_i.h b/tools/virtiofsd/fuse_i.h
index 492e002181..4942d080da 100644
--- a/tools/virtiofsd/fuse_i.h
+++ b/tools/virtiofsd/fuse_i.h
@@ -73,6 +73,7 @@ struct fuse_session {
     int vu_socketfd;
     struct fv_VuDev *virtio_dev;
     int thread_pool_size;
+    bool notify_enabled;
 };
 
 struct fuse_chan {
diff --git a/tools/virtiofsd/fuse_virtio.c b/tools/virtiofsd/fuse_virtio.c
index fa4aff9b0e..3ff4cc1430 100644
--- a/tools/virtiofsd/fuse_virtio.c
+++ b/tools/virtiofsd/fuse_virtio.c
@@ -14,6 +14,7 @@
 #include "qemu/osdep.h"
 #include "qemu/iov.h"
 #include "qapi/error.h"
+#include "standard-headers/linux/virtio_fs.h"
 #include "fuse_i.h"
 #include "standard-headers/linux/fuse.h"
 #include "fuse_misc.h"
@@ -80,23 +81,31 @@ struct fv_VuDev {
      */
     size_t nqueues;
     struct fv_QueueInfo **qi;
-};
-
-/* From spec */
-struct virtio_fs_config {
-    char tag[36];
-    uint32_t num_queues;
+    /* True if notification queue is being used */
+    bool notify_enabled;
 };
 
 /* Callback from libvhost-user */
 static uint64_t fv_get_features(VuDev *dev)
 {
-    return 1ULL << VIRTIO_F_VERSION_1;
+    uint64_t features;
+
+    features = 1ull << VIRTIO_F_VERSION_1 |
+               1ull << VIRTIO_FS_F_NOTIFICATION;
+
+    return features;
 }
 
 /* Callback from libvhost-user */
 static void fv_set_features(VuDev *dev, uint64_t features)
 {
+    struct fv_VuDev *vud = container_of(dev, struct fv_VuDev, dev);
+    struct fuse_session *se = vud->se;
+
+    if ((1ull << VIRTIO_FS_F_NOTIFICATION) & features) {
+        vud->notify_enabled = true;
+        se->notify_enabled = true;
+    }
 }
 
 /*
@@ -736,19 +745,20 @@ static void fv_queue_cleanup_thread(struct fv_VuDev *vud, int qidx)
 
     assert(qidx < vud->nqueues);
     ourqi = vud->qi[qidx];
-
-    /* Kill the thread */
-    if (eventfd_write(ourqi->kill_fd, 1)) {
-        fuse_log(FUSE_LOG_ERR, "Eventfd_write for queue %d: %s\n",
-                 qidx, strerror(errno));
-    }
-    ret = pthread_join(ourqi->thread, NULL);
-    if (ret) {
-        fuse_log(FUSE_LOG_ERR, "%s: Failed to join thread idx %d err %d\n",
-                 __func__, qidx, ret);
+    /* qidx == 1 is the notification queue */
+    if (qidx != 1) {
+        /* Kill the thread */
+        if (eventfd_write(ourqi->kill_fd, 1)) {
+            fuse_log(FUSE_LOG_ERR, "Eventfd_read for queue: %m\n");
+        }
+        ret = pthread_join(ourqi->thread, NULL);
+        if (ret) {
+            fuse_log(FUSE_LOG_ERR, "%s: Failed to join thread idx %d err"
+                     " %d\n", __func__, qidx, ret);
+        }
+        close(ourqi->kill_fd);
     }
     pthread_mutex_destroy(&ourqi->vq_lock);
-    close(ourqi->kill_fd);
     ourqi->kick_fd = -1;
     g_free(vud->qi[qidx]);
     vud->qi[qidx] = NULL;
@@ -759,6 +769,9 @@ static void fv_queue_set_started(VuDev *dev, int qidx, bool started)
 {
     struct fv_VuDev *vud = container_of(dev, struct fv_VuDev, dev);
     struct fv_QueueInfo *ourqi;
+    void *(*thread_func)(void *) = fv_queue_thread;
+    int valid_queues = 2; /* One hiprio queue and one request queue */
+    bool notification_q = false;
 
     fuse_log(FUSE_LOG_INFO, "%s: qidx=%d started=%d\n", __func__, qidx,
              started);
@@ -770,10 +783,19 @@ static void fv_queue_set_started(VuDev *dev, int qidx, bool started)
      * well-behaved client in mind and may not protect against all types of
      * races yet.
      */
-    if (qidx > 1) {
-        fuse_log(FUSE_LOG_ERR,
-                 "%s: multiple request queues not yet implemented, please only "
-                 "configure 1 request queue\n",
+    if (vud->notify_enabled) {
+        valid_queues++;
+        /*
+         * If the notification queue is enabled, then qidx 1 is the
+         * notification queue.
+         */
+        if (qidx == 1) {
+            notification_q = true;
+        }
+    }
+
+    if (qidx >= valid_queues) {
+        fuse_log(FUSE_LOG_ERR, "%s: multiple request queues not yet"
+                 " implemented, please only configure 1 request queue\n",
                  __func__);
         exit(EXIT_FAILURE);
     }
@@ -795,13 +817,20 @@ static void fv_queue_set_started(VuDev *dev, int qidx, bool started)
             assert(vud->qi[qidx]->kick_fd == -1);
         }
         ourqi = vud->qi[qidx];
+        pthread_mutex_init(&ourqi->vq_lock, NULL);
+        /*
+         * For the notification queue we don't have to start a thread yet.
+         */
+        if (notification_q) {
+            return;
+        }
+
         ourqi->kick_fd = dev->vq[qidx].kick_fd;
 
         ourqi->kill_fd = eventfd(0, EFD_CLOEXEC | EFD_SEMAPHORE);
         assert(ourqi->kill_fd != -1);
-        pthread_mutex_init(&ourqi->vq_lock, NULL);
 
-        if (pthread_create(&ourqi->thread, NULL, fv_queue_thread, ourqi)) {
+        if (pthread_create(&ourqi->thread, NULL, thread_func, ourqi)) {
             fuse_log(FUSE_LOG_ERR, "%s: Failed to create thread for queue %d\n",
                      __func__, qidx);
             assert(0);
@@ -1058,7 +1087,7 @@ int virtio_session_mount(struct fuse_session *se)
     se->vu_socketfd = data_sock;
     se->virtio_dev->se = se;
     pthread_rwlock_init(&se->virtio_dev->vu_dispatch_rwlock, NULL);
-    if (!vu_init(&se->virtio_dev->dev, 2, se->vu_socketfd, fv_panic, NULL,
+    if (!vu_init(&se->virtio_dev->dev, 3, se->vu_socketfd, fv_panic, NULL,
                  fv_set_watch, fv_remove_watch, &fv_iface)) {
         fuse_log(FUSE_LOG_ERR, "%s: vu_init failed\n", __func__);
         return -1;
-- 
2.27.0
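For orientation, the feature negotiation this patch relies on is plain bit
arithmetic on a 64-bit mask: the device offers a set of feature bits, the
driver acknowledges a subset, and both sides then test the acknowledged
mask. A minimal sketch independent of QEMU (offered_features and
notification_negotiated are hypothetical names, not real API):

    #include <stdbool.h>
    #include <stdint.h>

    #define VIRTIO_FS_F_NOTIFICATION 0  /* bit index, as defined in the patch */

    /* Device side: advertise the feature in the offered mask. */
    static uint64_t offered_features(void)
    {
        return 1ull << VIRTIO_FS_F_NOTIFICATION;
    }

    /* Either side: test the mask the driver actually acknowledged. */
    static bool notification_negotiated(uint64_t acked)
    {
        return (acked & (1ull << VIRTIO_FS_F_NOTIFICATION)) != 0;
    }

This is why both vuf_set_features() (QEMU side) and fv_set_features()
(virtiofsd side) record notify_enabled only after negotiation completes:
the queue exists unconditionally, but it may never be used.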
From nobody Tue May 7 16:10:53 2024
From: Ioannis Angelakopoulos <iangelak@redhat.com>
To: qemu-devel@nongnu.org, virtio-fs@redhat.com
Cc: iangelak@redhat.com, stefanha@redhat.com, dgilbert@redhat.com, vgoyal@redhat.com
Subject: [PATCH 3/6] virtiofsd: Specify size of notification buffer using
 config space
Date: Wed, 16 Jun 2021 15:39:18 -0400
Message-Id: <20210616193921.608720-4-iangelak@redhat.com>
In-Reply-To: <20210616193921.608720-1-iangelak@redhat.com>
References: <20210616193921.608720-1-iangelak@redhat.com>

From: Vivek Goyal <vgoyal@redhat.com>

The daemon specifies the size of the notification buffer it needs, and
that should be communicated through the config space. Only the
->notify_buf_size value of the config space comes from the daemon; the
rest of it is filled in by the QEMU device emulation code.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Ioannis Angelakopoulos <iangelak@redhat.com>
---
 hw/virtio/vhost-user-fs.c                  | 27 +++++++++++++++++++
 include/hw/virtio/vhost-user-fs.h          |  4 ++-
 include/standard-headers/linux/virtio_fs.h |  2 ++
 tools/virtiofsd/fuse_virtio.c              | 31 ++++++++++++++++++++++
 4 files changed, 63 insertions(+), 1 deletion(-)

diff --git a/hw/virtio/vhost-user-fs.c b/hw/virtio/vhost-user-fs.c
index c7fd5f3123..f510bd8029 100644
--- a/hw/virtio/vhost-user-fs.c
+++ b/hw/virtio/vhost-user-fs.c
@@ -36,15 +36,40 @@ static const int user_feature_bits[] = {
     VHOST_INVALID_FEATURE_BIT
 };
 
+static int vhost_user_fs_handle_config_change(struct vhost_dev *dev)
+{
+    return 0;
+}
+
+const VhostDevConfigOps fs_ops = {
+    .vhost_dev_config_notifier = vhost_user_fs_handle_config_change,
+};
+
 static void vuf_get_config(VirtIODevice *vdev, uint8_t *config)
 {
     VHostUserFS *fs = VHOST_USER_FS(vdev);
     struct virtio_fs_config fscfg = {};
+    int ret;
+
+    /*
+     * As of now we only get the notification buffer size from the device,
+     * and that's needed only if the notification queue is enabled.
+     */
+    if (fs->notify_enabled) {
+        ret = vhost_dev_get_config(&fs->vhost_dev, (uint8_t *)&fs->fscfg,
+                                   sizeof(struct virtio_fs_config));
+        if (ret < 0) {
+            error_report("vhost-user-fs: get device config space failed."
+ " ret=3D%d", ret); + return; + } + } =20 memcpy((char *)fscfg.tag, fs->conf.tag, MIN(strlen(fs->conf.tag) + 1, sizeof(fscfg.tag))); =20 virtio_stl_p(vdev, &fscfg.num_request_queues, fs->conf.num_request_que= ues); + virtio_stl_p(vdev, &fscfg.notify_buf_size, fs->fscfg.notify_buf_size); =20 memcpy(config, &fscfg, sizeof(fscfg)); } @@ -255,6 +280,8 @@ static void vuf_device_realize(DeviceState *dev, Error = **errp) /* 1 high prio queue, 1 notification queue plus the number configured = */ fs->vhost_dev.nvqs =3D 2 + fs->conf.num_request_queues; fs->vhost_dev.vqs =3D g_new0(struct vhost_virtqueue, fs->vhost_dev.nvq= s); + + vhost_dev_set_config_notifier(&fs->vhost_dev, &fs_ops); ret =3D vhost_dev_init(&fs->vhost_dev, &fs->vhost_user, VHOST_BACKEND_TYPE_USER, 0); if (ret < 0) { diff --git a/include/hw/virtio/vhost-user-fs.h b/include/hw/virtio/vhost-us= er-fs.h index 13e2cbc48e..03780322ee 100644 --- a/include/hw/virtio/vhost-user-fs.h +++ b/include/hw/virtio/vhost-user-fs.h @@ -14,6 +14,7 @@ #ifndef _QEMU_VHOST_USER_FS_H #define _QEMU_VHOST_USER_FS_H =20 +#include "standard-headers/linux/virtio_fs.h" #include "hw/virtio/virtio.h" #include "hw/virtio/vhost.h" #include "hw/virtio/vhost-user.h" @@ -37,11 +38,12 @@ struct VHostUserFS { struct vhost_virtqueue *vhost_vqs; struct vhost_dev vhost_dev; VhostUserState vhost_user; + struct virtio_fs_config fscfg; VirtQueue **req_vqs; VirtQueue *hiprio_vq; int32_t bootindex; - bool notify_enabled; /*< public >*/ + bool notify_enabled; }; =20 #endif /* _QEMU_VHOST_USER_FS_H */ diff --git a/include/standard-headers/linux/virtio_fs.h b/include/standard-= headers/linux/virtio_fs.h index 6383d723a3..8f0075269a 100644 --- a/include/standard-headers/linux/virtio_fs.h +++ b/include/standard-headers/linux/virtio_fs.h @@ -17,6 +17,8 @@ struct virtio_fs_config { =20 /* Number of request queues */ uint32_t num_request_queues; + /* Size of notification buffer */ + uint32_t notify_buf_size; } QEMU_PACKED; =20 /* For the id field in virtio_pci_shm_cap */ diff --git a/tools/virtiofsd/fuse_virtio.c b/tools/virtiofsd/fuse_virtio.c index 3ff4cc1430..f16801bbee 100644 --- a/tools/virtiofsd/fuse_virtio.c +++ b/tools/virtiofsd/fuse_virtio.c @@ -851,6 +851,35 @@ static bool fv_queue_order(VuDev *dev, int qidx) return false; } =20 +static uint64_t fv_get_protocol_features(VuDev *dev) +{ + return 1ull << VHOST_USER_PROTOCOL_F_CONFIG; +} + +static int fv_get_config(VuDev *dev, uint8_t *config, uint32_t len) +{ + struct virtio_fs_config fscfg =3D {}; + unsigned notify_size, roundto =3D 64; + union fuse_notify_union { + struct fuse_notify_poll_wakeup_out wakeup_out; + struct fuse_notify_inval_inode_out inode_out; + struct fuse_notify_inval_entry_out entry_out; + struct fuse_notify_delete_out delete_out; + struct fuse_notify_store_out store_out; + struct fuse_notify_retrieve_out retrieve_out; + }; + + notify_size =3D sizeof(struct fuse_out_header) + + sizeof(union fuse_notify_union); + notify_size =3D ((notify_size + roundto) / roundto) * roundto; + + fscfg.notify_buf_size =3D notify_size; + memcpy(config, &fscfg, len); + fuse_log(FUSE_LOG_DEBUG, "%s:Setting notify_buf_size=3D%d\n", __func__, + fscfg.notify_buf_size); + return 0; +} + static const VuDevIface fv_iface =3D { .get_features =3D fv_get_features, .set_features =3D fv_set_features, @@ -859,6 +888,8 @@ static const VuDevIface fv_iface =3D { .queue_set_started =3D fv_queue_set_started, =20 .queue_is_processed_in_order =3D fv_queue_order, + .get_protocol_features =3D fv_get_protocol_features, + .get_config =3D fv_get_config, 
From nobody Tue May 7 16:10:53 2024
From: Ioannis Angelakopoulos <iangelak@redhat.com>
To: qemu-devel@nongnu.org, virtio-fs@redhat.com
Cc: iangelak@redhat.com, stefanha@redhat.com, dgilbert@redhat.com, vgoyal@redhat.com
Subject: [PATCH 4/6] virtiofsd: Implement blocking posix locks
Date: Wed, 16 Jun 2021 15:39:19 -0400
Message-Id: <20210616193921.608720-5-iangelak@redhat.com>
In-Reply-To: <20210616193921.608720-1-iangelak@redhat.com>
References: <20210616193921.608720-1-iangelak@redhat.com>
From: Vivek Goyal <vgoyal@redhat.com>

As of now we don't support fcntl(F_SETLKW); if we see one, we return
-EOPNOTSUPP. Change that by accepting these requests and replying
immediately, asking the caller to wait. Once the lock becomes available,
send a notification to the waiter indicating the lock is available.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Ioannis Angelakopoulos <iangelak@redhat.com>
---
 include/standard-headers/linux/fuse.h |   8 ++
 tools/virtiofsd/fuse_lowlevel.c       |  38 +++++++++-
 tools/virtiofsd/fuse_lowlevel.h       |  26 +++++++
 tools/virtiofsd/fuse_virtio.c         | 101 ++++++++++++++++++++++++--
 tools/virtiofsd/passthrough_ll.c      |  59 ++++++++++-----
 5 files changed, 207 insertions(+), 25 deletions(-)

diff --git a/include/standard-headers/linux/fuse.h b/include/standard-headers/linux/fuse.h
index 950d7edb7e..4680efc531 100644
--- a/include/standard-headers/linux/fuse.h
+++ b/include/standard-headers/linux/fuse.h
@@ -511,6 +511,7 @@ enum fuse_notify_code {
     FUSE_NOTIFY_STORE = 4,
     FUSE_NOTIFY_RETRIEVE = 5,
     FUSE_NOTIFY_DELETE = 6,
+    FUSE_NOTIFY_LOCK = 7,
     FUSE_NOTIFY_CODE_MAX,
 };
 
@@ -898,6 +899,13 @@ struct fuse_notify_retrieve_in {
     uint64_t dummy4;
 };
 
+struct fuse_notify_lock_out {
+    uint64_t unique;
+    int32_t error;
+    int32_t padding;
+};
+
+
 /* Device ioctls: */
 #define FUSE_DEV_IOC_CLONE _IOR(229, 0, uint32_t)
 
diff --git a/tools/virtiofsd/fuse_lowlevel.c b/tools/virtiofsd/fuse_lowlevel.c
index 7fe2cef1eb..4b03ec2f9f 100644
--- a/tools/virtiofsd/fuse_lowlevel.c
+++ b/tools/virtiofsd/fuse_lowlevel.c
@@ -179,8 +179,8 @@ int fuse_send_reply_iov_nofree(fuse_req_t req, int error, struct iovec *iov,
         .unique = req->unique,
         .error = error,
     };
-
-    if (error <= -1000 || error > 0) {
+    /* error = 1 has been used to signal the client to wait for a notification */
+    if (error <= -1000 || error > 1) {
         fuse_log(FUSE_LOG_ERR, "fuse: bad error value: %i\n", error);
         out.error = -ERANGE;
     }
@@ -290,6 +290,12 @@ int fuse_reply_err(fuse_req_t req, int err)
     return send_reply(req, -err, NULL, 0);
 }
 
+int fuse_reply_wait(fuse_req_t req)
+{
+    /* TODO: This is a hack. Fix it */
+    return send_reply(req, 1, NULL, 0);
+}
+
 void fuse_reply_none(fuse_req_t req)
 {
     fuse_free_req(req);
@@ -2145,6 +2151,34 @@ static void do_destroy(fuse_req_t req, fuse_ino_t nodeid,
     send_reply_ok(req, NULL, 0);
 }
 
+static int send_notify_iov(struct fuse_session *se, int notify_code,
+                           struct iovec *iov, int count)
+{
+    struct fuse_out_header out;
+    if (!se->got_init) {
+        return -ENOTCONN;
+    }
+    out.unique = 0;
+    out.error = notify_code;
+    iov[0].iov_base = &out;
+    iov[0].iov_len = sizeof(struct fuse_out_header);
+    return fuse_send_msg(se, NULL, iov, count);
+}
+
+int fuse_lowlevel_notify_lock(struct fuse_session *se, uint64_t unique,
+                              int32_t error)
+{
+    struct fuse_notify_lock_out outarg = {0};
+    struct iovec iov[2];
+
+    outarg.unique = unique;
+    outarg.error = -error;
+
+    iov[1].iov_base = &outarg;
+    iov[1].iov_len = sizeof(outarg);
+    return send_notify_iov(se, FUSE_NOTIFY_LOCK, iov, 2);
+}
+
 int fuse_lowlevel_notify_store(struct fuse_session *se, fuse_ino_t ino,
                                off_t offset, struct fuse_bufvec *bufv)
 {
diff --git a/tools/virtiofsd/fuse_lowlevel.h b/tools/virtiofsd/fuse_lowlevel.h
index 3bf786b034..1e8b3d2c35 100644
--- a/tools/virtiofsd/fuse_lowlevel.h
+++ b/tools/virtiofsd/fuse_lowlevel.h
@@ -1250,6 +1250,22 @@ struct fuse_lowlevel_ops {
  */
 int fuse_reply_err(fuse_req_t req, int err);
 
+/**
+ * Ask the caller to wait for a lock.
+ *
+ * Possible requests:
+ *   setlkw
+ *
+ * If the caller sends a blocking lock request (setlkw), reply that it
+ * should wait for the lock to become available. Once the lock is
+ * available, the caller will receive a notification with the request's
+ * unique id. The notification will carry info on whether the lock was
+ * successfully obtained or not.
+ *
+ * @param req request handle
+ * @return zero for success, -errno for failure to send reply
+ */
+int fuse_reply_wait(fuse_req_t req);
+
 /**
  * Don't send reply
  *
@@ -1684,6 +1700,16 @@ int fuse_lowlevel_notify_delete(struct fuse_session *se, fuse_ino_t parent,
 int fuse_lowlevel_notify_store(struct fuse_session *se, fuse_ino_t ino,
                                off_t offset, struct fuse_bufvec *bufv);
 
+/**
+ * Notify event related to a previous lock request
+ *
+ * @param se the session object
+ * @param unique the unique id of the request which requested setlkw
+ * @param error zero for success, -errno for failure
+ */
+int fuse_lowlevel_notify_lock(struct fuse_session *se, uint64_t unique,
+                              int32_t error);
+
 /*
  * Utility functions
  */
diff --git a/tools/virtiofsd/fuse_virtio.c b/tools/virtiofsd/fuse_virtio.c
index f16801bbee..cb4dbafd91 100644
--- a/tools/virtiofsd/fuse_virtio.c
+++ b/tools/virtiofsd/fuse_virtio.c
@@ -233,6 +233,86 @@ static void copy_iov(struct iovec *src_iov, int src_count,
     }
 }
 
+static int virtio_send_notify_msg(struct fuse_session *se, struct iovec *iov,
+                                  int count)
+{
+    struct fv_QueueInfo *qi;
+    VuDev *dev = &se->virtio_dev->dev;
+    VuVirtq *q;
+    FVRequest *req;
+    VuVirtqElement *elem;
+    unsigned int in_num;
+    struct fuse_out_header *out = iov[0].iov_base;
+    size_t in_len, tosend_len = iov_size(iov, count);
+    struct iovec *in_sg;
+    int ret = 0;
+
+    /* Notifications have unique == 0 */
+    assert(!out->unique);
+
+    if (!se->notify_enabled) {
+        return -EOPNOTSUPP;
+    }
+
+    /* If notifications are enabled, queue index 1 is the notification queue */
+    qi = se->virtio_dev->qi[1];
+    q = vu_get_queue(dev, qi->qidx);
+
+    pthread_rwlock_rdlock(&qi->virtio_dev->vu_dispatch_rwlock);
+    pthread_mutex_lock(&qi->vq_lock);
+    /* Pop an element from queue */
+    req = vu_queue_pop(dev, q, sizeof(FVRequest));
+    if (!req) {
+        /*
+         * TODO: Implement some sort of ring buffer, queue notifications
+         * on it and send them later when the notification queue has
+         * space available.
+         */
+        ret = -ENOSPC;
+    }
+    pthread_mutex_unlock(&qi->vq_lock);
+    pthread_rwlock_unlock(&qi->virtio_dev->vu_dispatch_rwlock);
+
+    if (ret) {
+        return ret;
+    }
+
+    out->len = tosend_len;
+    elem = &req->elem;
+    in_num = elem->in_num;
+    in_sg = elem->in_sg;
+    in_len = iov_size(in_sg, in_num);
+    fuse_log(FUSE_LOG_DEBUG, "%s: elem %d: with %d in desc of length %zd\n",
+             __func__, elem->index, in_num, in_len);
+
+    if (in_len < sizeof(struct fuse_out_header)) {
+        fuse_log(FUSE_LOG_ERR, "%s: elem %d too short for out_header\n",
+                 __func__, elem->index);
+        ret = -E2BIG;
+        goto out;
+    }
+
+    if (in_len < tosend_len) {
+        fuse_log(FUSE_LOG_ERR, "%s: elem %d too small for data len"
+                 " %zd\n", __func__, elem->index, tosend_len);
+        ret = -E2BIG;
+        goto out;
+    }
+
+    /* First copy the header data from iov->in_sg */
+    copy_iov(iov, count, in_sg, in_num, tosend_len);
+
+    pthread_rwlock_rdlock(&qi->virtio_dev->vu_dispatch_rwlock);
+    pthread_mutex_lock(&qi->vq_lock);
+    vu_queue_push(dev, q, elem, tosend_len);
+    vu_queue_notify(dev, q);
+    pthread_mutex_unlock(&qi->vq_lock);
+    pthread_rwlock_unlock(&qi->virtio_dev->vu_dispatch_rwlock);
+out:
+    free(req);
+    return ret;
+}
+
 /*
  * pthread_rwlock_rdlock() and pthread_rwlock_wrlock can fail if
  * a deadlock condition is detected or the current thread already
@@ -266,11 +346,11 @@ static void vu_dispatch_unlock(struct fv_VuDev *vud)
 int virtio_send_msg(struct fuse_session *se, struct fuse_chan *ch,
                     struct iovec *iov, int count)
 {
-    FVRequest *req = container_of(ch, FVRequest, ch);
-    struct fv_QueueInfo *qi = ch->qi;
+    FVRequest *req;
+    struct fv_QueueInfo *qi;
     VuDev *dev = &se->virtio_dev->dev;
-    VuVirtq *q = vu_get_queue(dev, qi->qidx);
-    VuVirtqElement *elem = &req->elem;
+    VuVirtq *q;
+    VuVirtqElement *elem;
     int ret = 0;
 
     assert(count >= 1);
@@ -281,8 +361,16 @@ int virtio_send_msg(struct fuse_session *se, struct fuse_chan *ch,
 
     size_t tosend_len = iov_size(iov, count);
 
-    /* unique == 0 is notification, which we don't support */
-    assert(out->unique);
+    /* unique == 0 is notification */
+    if (!out->unique) {
+        return virtio_send_notify_msg(se, iov, count);
+    }
+
+    assert(ch);
+    req = container_of(ch, FVRequest, ch);
+    elem = &req->elem;
+    qi = ch->qi;
+    q = vu_get_queue(dev, qi->qidx);
     assert(!req->reply_sent);
 
     /* The 'in' part of the elem is to qemu */
@@ -867,6 +955,7 @@ static int fv_get_config(VuDev *dev, uint8_t *config, uint32_t len)
         struct fuse_notify_delete_out delete_out;
         struct fuse_notify_store_out store_out;
         struct fuse_notify_retrieve_out retrieve_out;
+        struct fuse_notify_lock_out lock_out;
     };
 
     notify_size = sizeof(struct fuse_out_header) +
diff --git a/tools/virtiofsd/passthrough_ll.c b/tools/virtiofsd/passthrough_ll.c
index f2fa9d95bb..8f24954a00 100644
--- a/tools/virtiofsd/passthrough_ll.c
+++ b/tools/virtiofsd/passthrough_ll.c
@@ -968,14 +968,6 @@ static int do_statx(struct lo_data *lo, int dirfd, const char *pathname,
     return 0;
 }
 
-static void posix_locks_value_destroy(gpointer data)
-{
-    struct lo_inode_plock *plock = data;
-
-    close(plock->fd);
-    free(plock);
-}
-
 /*
  * Increments nlookup on the inode on success. unref_inode_lolocked() must be
  * called eventually to decrement nlookup again. If inodep is non-NULL, the
@@ -2064,7 +2056,10 @@ static void lo_setlk(fuse_req_t req, fuse_ino_t ino, struct fuse_file_info *fi,
     struct lo_data *lo = lo_data(req);
     struct lo_inode *inode;
     struct lo_inode_plock *plock;
-    int ret, saverr = 0;
+    int ret, saverr = 0, ofd;
+    uint64_t unique;
+    struct fuse_session *se = req->se;
+    bool async_lock = false;
 
     fuse_log(FUSE_LOG_DEBUG,
              "lo_setlk(ino=%" PRIu64 ", flags=%d)"
@@ -2078,11 +2073,6 @@ static void lo_setlk(fuse_req_t req, fuse_ino_t ino, struct fuse_file_info *fi,
         return;
     }
 
-    if (sleep) {
-        fuse_reply_err(req, EOPNOTSUPP);
-        return;
-    }
-
     inode = lo_inode(req, ino);
     if (!inode) {
         fuse_reply_err(req, EBADF);
@@ -2095,21 +2085,56 @@ static void lo_setlk(fuse_req_t req, fuse_ino_t ino, struct fuse_file_info *fi,
 
     if (!plock) {
         saverr = ret;
+        pthread_mutex_unlock(&inode->plock_mutex);
         goto out;
     }
 
+    /*
+     * plock is now released when the inode goes away. We already hold
+     * a reference on the inode, so it is guaranteed that plock->fd is
+     * still around even after dropping the inode->plock_mutex lock.
+     */
+    ofd = plock->fd;
+    pthread_mutex_unlock(&inode->plock_mutex);
+
+    /*
+     * If this lock request can block, ask the caller to wait for a
+     * notification. Do not access req after this. Once the lock is
+     * available, send a notification instead.
+     */
+    if (sleep && lock->l_type != F_UNLCK) {
+        /*
+         * If the notification queue is not enabled, we can't support
+         * async locks.
+         */
+        if (!se->notify_enabled) {
+            saverr = EOPNOTSUPP;
+            goto out;
+        }
+        async_lock = true;
+        unique = req->unique;
+        fuse_reply_wait(req);
+    }
+
     /* TODO: Is it alright to modify flock? */
     lock->l_pid = 0;
-    ret = fcntl(plock->fd, F_OFD_SETLK, lock);
+    if (async_lock) {
+        ret = fcntl(ofd, F_OFD_SETLKW, lock);
+    } else {
+        ret = fcntl(ofd, F_OFD_SETLK, lock);
+    }
     if (ret == -1) {
         saverr = errno;
     }
 
 out:
-    pthread_mutex_unlock(&inode->plock_mutex);
     lo_inode_put(lo, &inode);
 
-    fuse_reply_err(req, saverr);
+    if (!async_lock) {
+        fuse_reply_err(req, saverr);
+    } else {
+        fuse_lowlevel_notify_lock(se, unique, saverr);
+    }
 }
 
 static void lo_fsyncdir(fuse_req_t req, fuse_ino_t ino, int datasync,
-- 
2.27.0
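The control flow lo_setlk() now implements for blocking requests can be
summarized in a standalone sketch. reply_wait()/notify_lock() are
hypothetical stand-ins for fuse_reply_wait() and
fuse_lowlevel_notify_lock(), and handle_setlkw() for the real handler;
only the shape of the handshake is meant to match the patch:

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>

    static void reply_wait(uint64_t unique)
    {
        printf("request %llu: told caller to wait\n", (unsigned long long)unique);
    }

    static void notify_lock(uint64_t unique, int error)
    {
        printf("request %llu: lock result %d\n", (unsigned long long)unique, error);
    }

    static void handle_setlkw(int ofd, struct flock *lock, uint64_t unique)
    {
        int err = 0;

        reply_wait(unique);                          /* 1. free the request queue now */
        if (fcntl(ofd, F_OFD_SETLKW, lock) == -1) {  /* 2. may block here */
            err = errno;
        }
        notify_lock(unique, err);                    /* 3. wake the guest waiter */
    }

    int main(void)
    {
        struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
        int fd = open("/tmp/lock-demo", O_RDWR | O_CREAT, 0600);

        if (fd < 0) {
            return 1;
        }
        handle_setlkw(fd, &fl, 42);
        return 0;
    }

The key design point is step 1: replying before blocking keeps the request
virtqueue free, so the daemon can keep servicing other requests while the
fcntl() sleeps.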
From nobody Tue May 7 16:10:53 2024
From: Ioannis Angelakopoulos <iangelak@redhat.com>
To: qemu-devel@nongnu.org, virtio-fs@redhat.com
Cc: iangelak@redhat.com, stefanha@redhat.com, dgilbert@redhat.com, vgoyal@redhat.com
Subject: [PATCH 5/6] virtiofsd: Thread state cleanup when blocking posix
 locks are used
Date: Wed, 16 Jun 2021 15:39:20 -0400
Message-Id: <20210616193921.608720-6-iangelak@redhat.com>
In-Reply-To: <20210616193921.608720-1-iangelak@redhat.com>
References: <20210616193921.608720-1-iangelak@redhat.com>

Stop the virtiofsd thread from sending any notifications/messages
through the virtqueue while the guest hard-reboots.

If a guest attempts to hard reboot while a virtiofsd thread blocks
waiting for a lock held by another guest's virtiofsd process, QEMU will
block the guest from rebooting until the lock is released. When the
virtiofsd thread finally acquires the lock, it must not attempt to send
a message to the notification virtqueue, since that queue has been
destroyed by the guest's hard-reboot attempt. The thread only releases
the lock, without sending any notifications through the virtqueue. Then
the cleanup process can proceed normally.

Signed-off-by: Ioannis Angelakopoulos <iangelak@redhat.com>
---
 tools/virtiofsd/fuse_i.h         |  1 +
 tools/virtiofsd/fuse_lowlevel.c  |  2 ++
 tools/virtiofsd/fuse_virtio.c    | 10 ++++++++++
 tools/virtiofsd/passthrough_ll.c | 23 +++++++++++++++++++----
 4 files changed, 32 insertions(+), 4 deletions(-)
diff --git a/tools/virtiofsd/fuse_i.h b/tools/virtiofsd/fuse_i.h
index 4942d080da..269fd5e77b 100644
--- a/tools/virtiofsd/fuse_i.h
+++ b/tools/virtiofsd/fuse_i.h
@@ -62,6 +62,7 @@ struct fuse_session {
     pthread_mutex_t lock;
     pthread_rwlock_t init_rwlock;
     int got_destroy;
+    int in_cleanup;
     int broken_splice_nonblock;
     uint64_t notify_ctr;
     struct fuse_notify_req notify_list;
diff --git a/tools/virtiofsd/fuse_lowlevel.c b/tools/virtiofsd/fuse_lowlevel.c
index 4b03ec2f9f..a9f6ea61dc 100644
--- a/tools/virtiofsd/fuse_lowlevel.c
+++ b/tools/virtiofsd/fuse_lowlevel.c
@@ -1905,6 +1905,7 @@ static void do_init(fuse_req_t req, fuse_ino_t nodeid,
     se->conn.proto_minor = arg->minor;
     se->conn.capable = 0;
     se->conn.want = 0;
+    se->in_cleanup = 0;
 
     memset(&outarg, 0, sizeof(outarg));
     outarg.major = FUSE_KERNEL_VERSION;
@@ -2397,6 +2398,7 @@ void fuse_session_process_buf_int(struct fuse_session *se,
             fuse_log(FUSE_LOG_DEBUG, "%s: reinit\n", __func__);
             se->got_destroy = 1;
             se->got_init = 0;
+            se->in_cleanup = 0;
             if (se->op.destroy) {
                 se->op.destroy(se->userdata);
             }
diff --git a/tools/virtiofsd/fuse_virtio.c b/tools/virtiofsd/fuse_virtio.c
index cb4dbafd91..7efaf9ae68 100644
--- a/tools/virtiofsd/fuse_virtio.c
+++ b/tools/virtiofsd/fuse_virtio.c
@@ -839,11 +839,14 @@ static void fv_queue_cleanup_thread(struct fv_VuDev *vud, int qidx)
         if (eventfd_write(ourqi->kill_fd, 1)) {
             fuse_log(FUSE_LOG_ERR, "Eventfd_read for queue: %m\n");
         }
+
         ret = pthread_join(ourqi->thread, NULL);
+
         if (ret) {
             fuse_log(FUSE_LOG_ERR, "%s: Failed to join thread idx %d err"
                      " %d\n", __func__, qidx, ret);
         }
+
         close(ourqi->kill_fd);
     }
     pthread_mutex_destroy(&ourqi->vq_lock);
@@ -929,6 +932,13 @@ static void fv_queue_set_started(VuDev *dev, int qidx, bool started)
              * the queue thread doesn't block in virtio_send_msg().
              */
             vu_dispatch_unlock(vud);
+            /*
+             * Indicate to any thread that was blocked and wakes up
+             * that we are in the thread cleanup process
+             */
+            if (!vud->se->in_cleanup) {
+                vud->se->in_cleanup = 1;
+            }
             fv_queue_cleanup_thread(vud, qidx);
             vu_dispatch_wrlock(vud);
         }
diff --git a/tools/virtiofsd/passthrough_ll.c b/tools/virtiofsd/passthrough_ll.c
index 8f24954a00..8a2aa10b9c 100644
--- a/tools/virtiofsd/passthrough_ll.c
+++ b/tools/virtiofsd/passthrough_ll.c
@@ -1971,9 +1971,6 @@ static struct lo_inode_plock *lookup_create_plock_ctx(struct lo_data *lo,
     plock = g_hash_table_lookup(inode->posix_locks,
                                 GUINT_TO_POINTER(lock_owner));
 
-    fuse_log(FUSE_LOG_DEBUG, "lookup_create_plock_ctx():"
-             " Inserted element in posix_locks hash table"
-             " with value pointer %p\n", plock);
     if (plock) {
         return plock;
     }
@@ -1997,6 +1994,10 @@ static struct lo_inode_plock *lookup_create_plock_ctx(struct lo_data *lo,
     plock->fd = fd;
     g_hash_table_insert(inode->posix_locks, GUINT_TO_POINTER(plock->lock_owner),
                         plock);
+    fuse_log(FUSE_LOG_DEBUG, "lookup_create_plock_ctx():"
+             " Inserted element in posix_locks hash table"
+             " with value pointer %p\n", plock);
+
     return plock;
 }
 
@@ -2133,7 +2134,21 @@ out:
     if (!async_lock) {
         fuse_reply_err(req, saverr);
     } else {
-        fuse_lowlevel_notify_lock(se, unique, saverr);
+        /*
+         * Before attempting to send any message through the virtqueue,
+         * the thread should check if the queue actually exists
+         */
+        if (!se->in_cleanup) {
+            fuse_lowlevel_notify_lock(se, unique, saverr);
+        } else {
+            /* Release the locks */
+            lock->l_type = F_UNLCK;
+            lock->l_whence = SEEK_SET;
+            /* Unlock whole file */
+            lock->l_start = lock->l_len = 0;
+            fcntl(ofd, F_OFD_SETLKW, lock);
+        }
     }
 }
 
-- 
2.27.0
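Note that in_cleanup is read and written from different threads as a plain
int. A sketch of the same guard expressed with C11 atomics, as one
alternative way to make the ordering explicit (this is not what the patch
does):

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_bool in_cleanup;   /* set by the cleanup path */

    /* Cleanup path: forbid further virtqueue traffic, then tear down. */
    static void begin_cleanup(void)
    {
        atomic_store(&in_cleanup, true);
    }

    /* Woken lock waiter: only notify if the queue still exists. */
    static bool may_send_notification(void)
    {
        return !atomic_load(&in_cleanup);
    }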
From nobody Tue May 7 16:10:53 2024
From: Ioannis Angelakopoulos <iangelak@redhat.com>
To: qemu-devel@nongnu.org, virtio-fs@redhat.com
Cc: iangelak@redhat.com, stefanha@redhat.com, dgilbert@redhat.com, vgoyal@redhat.com
Subject: [PATCH 6/6] virtiofsd: Custom threadpool for remote blocking posix
 locks requests
Date: Wed, 16 Jun 2021 15:39:21 -0400
Message-Id: <20210616193921.608720-7-iangelak@redhat.com>
In-Reply-To: <20210616193921.608720-1-iangelak@redhat.com>
References: <20210616193921.608720-1-iangelak@redhat.com>

Add a new custom threadpool using POSIX threads that specifically
services locking requests.

In the case of a fcntl(SETLKW) request, if the guest is waiting for a
lock (or locks) and issues a hard-reboot through SYSRQ, virtiofsd
unblocks the waiting threads by sending them a signal and waking them
up. Then virtiofsd proceeds to clean up the threads' state, release them
back to the system, and re-initialize.

The current threadpool (GThreadPool) is not adequate for servicing
locking requests that cause a thread to block, because GLib provides no
API to cancel a request while a thread is servicing it. In addition, a
user might be running virtiofsd without a threadpool
(--thread-pool-size=0), in which case a blocking lock request would keep
the main virtqueue thread from servicing any other requests.
Signed-off-by: Ioannis Angelakopoulos
---
 tools/virtiofsd/fuse_virtio.c | 407 +++++++++++++++++++++++++++++++++-
 1 file changed, 403 insertions(+), 4 deletions(-)

diff --git a/tools/virtiofsd/fuse_virtio.c b/tools/virtiofsd/fuse_virtio.c
index 7efaf9ae68..b23aff5a50 100644
--- a/tools/virtiofsd/fuse_virtio.c
+++ b/tools/virtiofsd/fuse_virtio.c
@@ -29,6 +29,45 @@
 #include "libvhost-user.h"

 struct fv_VuDev;
+
+/*
+ * Create a separate thread pool for handling locking requests. This way we
+ * can safely monitor, wake up and clean up the threads during a hard reboot.
+ */
+
+struct fv_LockReq {
+    struct fv_LockReq *next;                     /* pointer to next task */
+    void (*worker_func)(void *arg1, void *arg2); /* worker function */
+    void *arg1;                                  /* 1st arg: request */
+    void *arg2;                                  /* 2nd arg: virtqueue */
+};
+
+struct fv_LockReqQueue {
+    pthread_mutex_t lock;
+    struct fv_LockReq *head;                     /* front of the queue */
+    struct fv_LockReq *tail;                     /* back of the queue */
+    pthread_cond_t notify;                       /* condition variable */
+    int size;                                    /* size of the queue */
+};
+
+struct fv_LockThread {
+    pthread_t pthread;
+    int alive;
+    int id;
+    struct fv_LockThreadPool *lock_t_pool;
+};
+
+struct fv_LockThreadPool {
+    struct fv_LockThread **threads;
+    struct fv_LockReqQueue *lreq_queue;          /* locking request queue */
+    pthread_mutex_t tp_lock;
+
+    int num_threads;                             /* total threads */
+    int created;                                 /* threads created */
+    int destroy_pool;                            /* destroy pool flag */
+};
+
 struct fv_QueueInfo {
     pthread_t thread;
     /*
@@ -710,6 +749,325 @@ out:
     free(req);
 }

+/* Reuses code from fv_queue_worker(); needs to be cleaned up */
+static int fv_get_request_opcode(gpointer data, gpointer user_data)
+{
+    struct fv_QueueInfo *qi = user_data;
+    struct fuse_session *se = qi->virtio_dev->se;
+    FVRequest *req = data;
+    VuVirtqElement *elem = &req->elem;
+    struct fuse_buf fbuf = {};
+    struct fuse_in_header inh;
+
+    assert(se->bufsize > sizeof(struct fuse_in_header));
+
+    /*
+     * An element contains one request and the space to send our response.
+     * They're spread over multiple descriptors in a scatter/gather set
+     * and we can't trust the guest to keep them still; so copy in/out.
+     */
+    fbuf.mem = g_malloc(se->bufsize);
+
+    /* The 'out' part of the elem is from qemu */
+    unsigned int out_num = elem->out_num;
+    struct iovec *out_sg = elem->out_sg;
+    size_t out_len = iov_size(out_sg, out_num);
+    fuse_log(FUSE_LOG_DEBUG,
+             "%s: elem %d: with %d out desc of length %zd\n",
+             __func__, elem->index, out_num, out_len);
+
+    /*
+     * The elem should contain a 'fuse_in_header' (in to fuse)
+     * plus the data based on the len in the header.
+     */
+    if (out_len < sizeof(struct fuse_in_header)) {
+        fuse_log(FUSE_LOG_ERR, "%s: elem %d too short for in_header\n",
+                 __func__, elem->index);
+        assert(0); /* TODO */
+    }
+    if (out_len > se->bufsize) {
+        fuse_log(FUSE_LOG_ERR, "%s: elem %d too large for buffer\n", __func__,
+                 elem->index);
+        assert(0); /* TODO */
+    }
+    /* Copy just the fuse_in_header and get the request opcode */
+    copy_from_iov(&fbuf, out_num, out_sg,
+                  sizeof(struct fuse_in_header));
+    memcpy(&inh, fbuf.mem, sizeof(struct fuse_in_header));
+
+    g_free(fbuf.mem);
+    /* Return the request opcode */
+    return inh.opcode;
+}
+
+/* Initialize the locking request queue */
+static struct fv_LockReqQueue *fv_lock_request_queue_init(void)
+{
+    struct fv_LockReqQueue *lock_req_queue;
+
+    lock_req_queue = g_new(struct fv_LockReqQueue, 1);
+    lock_req_queue->size = 0;
+    lock_req_queue->head = NULL;
+    lock_req_queue->tail = NULL;
+
+    pthread_mutex_init(&(lock_req_queue->lock), NULL);
+    pthread_cond_init(&(lock_req_queue->notify), NULL);
+
+    return lock_req_queue;
+}
+
+/* Push a new locking request to the queue */
+static void fv_lock_tpool_push(struct fv_LockThreadPool *tpool,
+                               void (*worker_func)(void *, void *),
+                               void *arg1, void *arg2)
+{
+    struct fv_LockReq *newreq;
+
+    newreq = g_new(struct fv_LockReq, 1);
+    newreq->worker_func = worker_func;
+    newreq->arg1 = arg1;
+    newreq->arg2 = arg2;
+    newreq->next = NULL;
+
+    /* Now add the request to the queue */
+    pthread_mutex_lock(&tpool->lreq_queue->lock);
+
+    if (tpool->lreq_queue->size == 0) {
+        tpool->lreq_queue->head = newreq;
+        tpool->lreq_queue->tail = newreq;
+    } else {
+        tpool->lreq_queue->tail->next = newreq;
+        tpool->lreq_queue->tail = tpool->lreq_queue->tail->next;
+    }
+
+    tpool->lreq_queue->size++;
+
+    /* Notify the threads that a request is available */
+    pthread_cond_signal(&tpool->lreq_queue->notify);
+
+    pthread_mutex_unlock(&tpool->lreq_queue->lock);
+}
+
+/* Pop a locking request from the queue */
+static struct fv_LockReq *fv_lock_tpool_pop(struct fv_LockThreadPool *tpool)
+{
+    struct fv_LockReq *lock_req;
+
+    pthread_mutex_lock(&tpool->lreq_queue->lock);
+
+    lock_req = tpool->lreq_queue->head;
+
+    /* Must remove the element from the queue */
+    if (!tpool->lreq_queue->size) {
+        ; /* Queue is empty; nothing to remove */
+    } else if (tpool->lreq_queue->size == 1) {
+        tpool->lreq_queue->head = NULL;
+        tpool->lreq_queue->tail = NULL;
+        tpool->lreq_queue->size--;
+    } else {
+        tpool->lreq_queue->head = tpool->lreq_queue->head->next;
+        tpool->lreq_queue->size--;
+        /*
+         * Notify the rest of the threads
+         * that a request is available
+         */
+        pthread_cond_signal(&tpool->lreq_queue->notify);
+    }
+
+    pthread_mutex_unlock(&tpool->lreq_queue->lock);
+
+    return lock_req;
+}
+
+static void fv_lock_request_queue_destroy(struct fv_LockThreadPool *tpool)
+{
+    while (tpool->lreq_queue->size) {
+        g_free(fv_lock_tpool_pop(tpool));
+    }
+
+    /* Now free the actual queue itself */
+    g_free(tpool->lreq_queue);
+}
+
+/*
+ * Signal handler for blocking threads that wait on a remote lock to be
+ * released. Called when virtiofsd does cleanup and wants to wake up these
+ * threads.
+ */
+static void fv_thread_unblock_handler(int signal)
+{
+    fuse_log(FUSE_LOG_INFO, "Thread received a wake up signal...unblocking\n");
+    return;
+}
+
+static void *fv_lock_thread_do_work(void *thread)
+{
+    struct fv_LockThread *lk_thread = (struct fv_LockThread *)thread;
+    struct fv_LockThreadPool *tpool = lk_thread->lock_t_pool;
+    struct fv_LockReq *lock_request;
+    /*
+     * Actual worker function and arguments.
+     * Same as for non-locking requests.
+     */
+    void (*worker_func)(void *, void *);
+    void *arg1;
+    void *arg2;
+
+    /*
+     * Register a signal handler to wake up the thread when it is blocked
+     * waiting for a lock
+     */
+    struct sigaction sa;
+    sigemptyset(&sa.sa_mask);
+    sa.sa_flags = 0;
+    sa.sa_handler = fv_thread_unblock_handler;
+    if (sigaction(SIGUSR1, &sa, NULL) == -1) {
+        fuse_log(FUSE_LOG_ERR, "Cannot register the signal handler for"
+                 " thread %d\n", lk_thread->id);
+    }
+
+    while (!tpool->destroy_pool) {
+        /*
+         * Get the queue lock first so that we can wait on the condition
+         * variable afterwards
+         */
+        pthread_mutex_lock(&tpool->lreq_queue->lock);
+
+        /* Wait on the condition variable until a request is available */
+        while (tpool->lreq_queue->size == 0 && !tpool->destroy_pool) {
+            pthread_cond_wait(&tpool->lreq_queue->notify,
+                              &tpool->lreq_queue->lock);
+        }
+
+        /* Unlock the queue for other threads */
+        pthread_mutex_unlock(&tpool->lreq_queue->lock);
+
+        if (tpool->destroy_pool) {
+            break;
+        }
+
+        /* Now the request must be serviced */
+        lock_request = fv_lock_tpool_pop(tpool);
+
+        if (lock_request && !tpool->destroy_pool) {
+            fuse_log(FUSE_LOG_DEBUG, "%s: Locking thread %d handling"
+                     " a request\n", __func__, lk_thread->id);
+            worker_func = lock_request->worker_func;
+            arg1 = lock_request->arg1;
+            arg2 = lock_request->arg2;
+            worker_func(arg1, arg2);
+            g_free(lock_request);
+        }
+    }
+
+    /* Mark the thread as inactive */
+    pthread_mutex_lock(&tpool->tp_lock);
+    tpool->threads[lk_thread->id]->alive = 0;
+    tpool->created--;
+    pthread_mutex_unlock(&tpool->tp_lock);
+
+    return NULL;
+}
+
+/* Create a single thread that handles locking requests */
+static void fv_lock_thread_init(struct fv_LockThreadPool *tpool,
+                                struct fv_LockThread **l_thread, int id)
+{
+    *l_thread = g_new(struct fv_LockThread, 1);
+    (*l_thread)->lock_t_pool = tpool;
+    (*l_thread)->id = id;
+    (*l_thread)->alive = 1;
+
+    pthread_create(&(*l_thread)->pthread, NULL,
+                   fv_lock_thread_do_work, (*l_thread));
+    pthread_detach((*l_thread)->pthread);
+}
+
+/* Initialize the thread pool for the locking POSIX threads */
+static struct fv_LockThreadPool *fv_lock_thread_pool_init(int thread_num)
+{
+    struct fv_LockThreadPool *tpool = NULL;
+    int i;
+
+    if (thread_num < 0) {
+        thread_num = 0;
+    }
+
+    tpool = g_new(struct fv_LockThreadPool, 1);
+    tpool->num_threads = 0;
+    tpool->destroy_pool = 0;
+    tpool->created = 0;
+    pthread_mutex_init(&(tpool->tp_lock), NULL);
+
+    /* Initialize the locking request queue */
+    tpool->lreq_queue = fv_lock_request_queue_init();
+
+    /* Create the threads in the pool */
+    tpool->threads = g_new(struct fv_LockThread *, thread_num);
+
+    for (i = 0; i < thread_num; i++) {
+        fv_lock_thread_init(tpool, &tpool->threads[i], i);
+        tpool->num_threads++;
+        tpool->created++;
+    }
+
+    return tpool;
+}
+
+static void fv_lock_thread_pool_destroy(struct fv_LockThreadPool *tpool)
+{
+    int i, tmp;
+
+    if (!tpool) {
+        return;
+    }
+
+    /* Get the lock to the queue */
+    pthread_mutex_lock(&tpool->lreq_queue->lock);
+
+    /* We want to destroy the pool */
+    pthread_mutex_lock(&tpool->tp_lock);
+    tpool->destroy_pool = 1;
+    pthread_mutex_unlock(&tpool->tp_lock);
+
+    /* Wake up threads waiting for requests */
+    pthread_cond_broadcast(&tpool->lreq_queue->notify);
+    pthread_mutex_unlock(&tpool->lreq_queue->lock);
+
+    for (i = 0; i < tpool->num_threads; i++) {
+        /*
+         * Even though the threads are notified through the condition
+         * variable, some threads might still be blocked on a request.
+         * Signal them to wake up.
+         */
+        if (tpool->threads[i]->alive) {
+            pthread_kill(tpool->threads[i]->pthread, SIGUSR1);
+        }
+    }
+
+    /*
+     * Now wait for the threads to exit before releasing the pool resources
+     * back to the system
+     */
+    while (1) {
+        pthread_mutex_lock(&tpool->tp_lock);
+        tmp = tpool->created;
+        pthread_mutex_unlock(&tpool->tp_lock);
+        if (tmp == 0) {
+            break;
+        }
+    }
+
+    /* Destroy the locking request queue */
+    fv_lock_request_queue_destroy(tpool);
+    for (i = 0; i < tpool->num_threads; i++) {
+        g_free(tpool->threads[i]);
+    }
+
+    /* Now free the thread pool itself */
+    g_free(tpool->threads);
+    g_free(tpool);
+}
+
 /* Thread function for individual queues, created when a queue is 'started' */
 static void *fv_queue_thread(void *opaque)
 {
@@ -717,18 +1075,36 @@ static void *fv_queue_thread(void *opaque)
     struct VuDev *dev = &qi->virtio_dev->dev;
     struct VuVirtq *q = vu_get_queue(dev, qi->qidx);
     struct fuse_session *se = qi->virtio_dev->se;
+    struct fv_LockThreadPool *lk_tpool = NULL;
+    int request_opcode;
     GThreadPool *pool = NULL;
     GList *req_list = NULL;

     if (se->thread_pool_size) {
+        /* Create the GThreadPool to handle normal requests */
         fuse_log(FUSE_LOG_DEBUG, "%s: Creating thread pool for Queue %d\n",
-                 __func__, qi->qidx);
+                 __func__, qi->qidx);
         pool = g_thread_pool_new(fv_queue_worker, qi, se->thread_pool_size,
-                                 FALSE, NULL);
+                                 FALSE, NULL);
         if (!pool) {
             fuse_log(FUSE_LOG_ERR, "%s: g_thread_pool_new failed\n", __func__);
             return NULL;
         }
+
+    }
+
+    fuse_log(FUSE_LOG_DEBUG, "%s: Creating a locking thread pool for"
+             " Queue %d with size %d\n", __func__, qi->qidx, 4);
+    /*
+     * Create the custom thread pool to handle locking requests.
+     * TODO: Add or remove threads dynamically from the pool depending on
+     * the number of locking requests that are pending
+     */
+    lk_tpool = fv_lock_thread_pool_init(4);
+    if (!lk_tpool) {
+        fuse_log(FUSE_LOG_ERR, "%s: fv_lock_thread_pool_init"
+                 " failed\n", __func__);
+        return NULL;
     }

     fuse_log(FUSE_LOG_INFO, "%s: Start for queue %d kick_fd %d\n", __func__,
@@ -801,10 +1177,28 @@

             req->reply_sent = false;

+            /*
+             * In every case we get the opcode of the request and check if it
+             * is a locking request. If so, we assign the request to the
+             * custom thread pool.
+             */
+            request_opcode = fv_get_request_opcode(req, qi);
             if (!se->thread_pool_size) {
-                req_list = g_list_prepend(req_list, req);
+                if (request_opcode == FUSE_GETLK ||
+                    request_opcode == FUSE_SETLK ||
+                    request_opcode == FUSE_SETLKW) {
+                    fv_lock_tpool_push(lk_tpool, fv_queue_worker, req, qi);
+                } else {
+                    req_list = g_list_prepend(req_list, req);
+                }
             } else {
-                g_thread_pool_push(pool, req, NULL);
+                if (request_opcode == FUSE_GETLK ||
+                    request_opcode == FUSE_SETLK ||
+                    request_opcode == FUSE_SETLKW) {
+                    fv_lock_tpool_push(lk_tpool, fv_queue_worker, req, qi);
+                } else {
+                    g_thread_pool_push(pool, req, NULL);
+                }
             }
         }

@@ -819,10 +1213,15 @@
         }
     }

+    /* Free the pools */
     if (pool) {
         g_thread_pool_free(pool, FALSE, TRUE);
     }

+    if (lk_tpool) {
+        fv_lock_thread_pool_destroy(lk_tpool);
+    }
+
     return NULL;
 }

-- 
2.27.0
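For reference, the condition-variable handshake used by fv_lock_thread_do_work() and fv_lock_thread_pool_destroy() above follows the standard predicate-guarded wait pattern: workers sleep on the condition variable only while the predicate holds, and shutdown sets its flag and broadcasts while holding the same mutex, so no worker can re-check the predicate and go back to sleep between the flag being set and the wakeup being sent. A minimal standalone sketch of that pattern (names here are illustrative, not from the patch):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t notify = PTHREAD_COND_INITIALIZER;
static int pending;          /* number of queued requests */
static int shutting_down;    /* pool destroy flag */

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (pending == 0 && !shutting_down) {
            /* releases the mutex while asleep, reacquires on wakeup */
            pthread_cond_wait(&notify, &lock);
        }
        if (shutting_down) {
            pthread_mutex_unlock(&lock);
            break;
        }
        pending--;
        pthread_mutex_unlock(&lock);
        puts("handled one request");
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);

    /* producer: enqueue one request and wake one worker */
    pthread_mutex_lock(&lock);
    pending++;
    pthread_cond_signal(&notify);
    pthread_mutex_unlock(&lock);

    /* shutdown: flag and broadcast under the same lock */
    pthread_mutex_lock(&lock);
    shutting_down = 1;
    pthread_cond_broadcast(&notify);
    pthread_mutex_unlock(&lock);

    pthread_join(t, NULL);
    return 0;
}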