Just set l2-cache-size to INT64_MAX for all qcow2 format nodes in the
block node graph.

AFAIK this is sane because the *actual* cache size depends on the
amount of data referenced in an image, so the total of the cache sizes
for all images in a disk's backing chain will not exceed the cache
size that covers a single full image, as in the case of no backing
chain.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
---
src/qemu/qemu_block.c | 5 ++++-
src/qemu/qemu_domain.c | 1 +
src/util/virstoragefile.c | 1 +
src/util/virstoragefile.h | 1 +
4 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/src/qemu/qemu_block.c b/src/qemu/qemu_block.c
index 5321dda..8771cc1 100644
--- a/src/qemu/qemu_block.c
+++ b/src/qemu/qemu_block.c
@@ -1322,7 +1322,6 @@ qemuBlockStorageSourceGetFormatQcow2Props(virStorageSourcePtr src,
* 'pass-discard-snapshot'
* 'pass-discard-other'
* 'overlap-check'
- * 'l2-cache-size'
* 'l2-cache-entry-size'
* 'refcount-cache-size'
* 'cache-clean-interval'
@@ -1331,6 +1330,10 @@ qemuBlockStorageSourceGetFormatQcow2Props(virStorageSourcePtr src,
if (qemuBlockStorageSourceGetFormatQcowGenericProps(src, "qcow2", props) < 0)
return -1;
+ if (src->metadata_cache_size == VIR_DOMAIN_DISK_METADATA_CACHE_SIZE_MAXIMUM &&
+ virJSONValueObjectAdd(props, "I:l2-cache-size", INT64_MAX, NULL) < 0)
+ return -1;
+
return 0;
}
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 896adf3..f87cfd2 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -13245,6 +13245,7 @@ qemuDomainPrepareDiskSourceData(virDomainDiskDefPtr disk,
src->iomode = disk->iomode;
src->cachemode = disk->cachemode;
src->discard = disk->discard;
+ src->metadata_cache_size = disk->metadata_cache_size;
if (disk->device == VIR_DOMAIN_DISK_DEVICE_FLOPPY)
src->floppyimg = true;
diff --git a/src/util/virstoragefile.c b/src/util/virstoragefile.c
index 94c32d8..9089e2f 100644
--- a/src/util/virstoragefile.c
+++ b/src/util/virstoragefile.c
@@ -2210,6 +2210,7 @@ virStorageSourceCopy(const virStorageSource *src,
ret->cachemode = src->cachemode;
ret->discard = src->discard;
ret->detect_zeroes = src->detect_zeroes;
+ ret->metadata_cache_size = src->metadata_cache_size;
/* storage driver metadata are not copied */
ret->drv = NULL;
diff --git a/src/util/virstoragefile.h b/src/util/virstoragefile.h
index 3ff6c4f..8b57399 100644
--- a/src/util/virstoragefile.h
+++ b/src/util/virstoragefile.h
@@ -331,6 +331,7 @@ struct _virStorageSource {
int cachemode; /* enum virDomainDiskCache */
int discard; /* enum virDomainDiskDiscard */
int detect_zeroes; /* enum virDomainDiskDetectZeroes */
+ int metadata_cache_size; /* enum virDomainDiskImageMetadataCacheSize */
bool floppyimg; /* set to true if the storage source is going to be used
as a source for floppy drive */
--
1.8.3.1
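For illustration only, a standalone sketch of the decision the
qemu_block.c hunk above encodes. The struct, enum, and function names
below are mocked stand-ins for the libvirt internals, not real libvirt
code:

/* Mocked sketch: emit l2-cache-size=INT64_MAX only when the disk asks
 * for the "maximum" metadata cache size; otherwise omit the property
 * so that QEMU picks its default. */
#include <stdint.h>
#include <stdio.h>

enum {
    METADATA_CACHE_SIZE_DEFAULT = 0,  /* stand-in for the libvirt enum */
    METADATA_CACHE_SIZE_MAXIMUM
};

struct source {
    int metadata_cache_size;
};

static void
formatQcow2Props(const struct source *src)
{
    if (src->metadata_cache_size == METADATA_CACHE_SIZE_MAXIMUM)
        printf("\"l2-cache-size\": %lld\n", (long long)INT64_MAX);
}

int main(void)
{
    struct source s = { METADATA_CACHE_SIZE_MAXIMUM };
    formatQcow2Props(&s);
    return 0;
}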
On 01.11.2018 at 12:32, Nikolay Shirokovskiy wrote:
> Just set l2-cache-size to INT64_MAX for all qcow2 format nodes in
> the block node graph.
>
> AFAIK this is sane because the *actual* cache size depends on the
> amount of data referenced in an image, so the total of the cache
> sizes for all images in a disk's backing chain will not exceed the
> cache size that covers a single full image, as in the case of no
> backing chain.

This is not quite correct.

Starting from qemu 3.1, INT64_MAX will add a cache that covers the
whole image. Memory is only used if a cache entry is actually used,
so if you never access the backing file, it doesn't really use any
memory. However, the granularity isn't a single cluster here, but L2
tables. So if some L2 table contains one cluster in the overlay and
another cluster in the backing file, and both are accessed, the L2
table will be cached in both the overlay and the backing file.

More importantly, before 3.1, I think QEMU will actually try to
allocate 2^63-1 bytes for the cache and fail to open the image. So we
can't do this unconditionally.

(Is this patch tested with a real QEMU?)

Kevin
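To put numbers on the L2 table granularity point above: each L2 entry
is 8 bytes and maps one cluster, so with the default 64 KiB clusters a
single cached L2 table (64 KiB of cache) maps 512 MiB of guest data,
and covering a whole image costs disk_size / cluster_size * 8 bytes of
cache per image in the chain. A small worked example (my own
illustration, not from the patch or the thread):

/* Worked example: L2 cache needed to cover a whole qcow2 image. */
#include <stdint.h>
#include <stdio.h>

static uint64_t
l2CacheFullCoverage(uint64_t diskSize, uint64_t clusterSize)
{
    return diskSize / clusterSize * 8; /* 8 bytes per L2 entry */
}

int main(void)
{
    uint64_t cluster = 64 * 1024;                 /* default 64 KiB */
    uint64_t disk = 1024ULL * 1024 * 1024 * 1024; /* 1 TiB image */

    /* One L2 table holds cluster / 8 entries, each mapping a cluster. */
    printf("guest data mapped per L2 table: %llu MiB\n",
           (unsigned long long)(((cluster / 8) * cluster) >> 20));
    printf("cache to cover a 1 TiB image: %llu MiB\n",
           (unsigned long long)(l2CacheFullCoverage(disk, cluster) >> 20));
    return 0;
}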
On 02.11.2018 13:23, Kevin Wolf wrote:
> On 01.11.2018 at 12:32, Nikolay Shirokovskiy wrote:
>> Just set l2-cache-size to INT64_MAX for all qcow2 format nodes in
>> the block node graph.
>>
>> AFAIK this is sane because the *actual* cache size depends on the
>> amount of data referenced in an image, so the total of the cache
>> sizes for all images in a disk's backing chain will not exceed the
>> cache size that covers a single full image, as in the case of no
>> backing chain.
>
> This is not quite correct.
>
> Starting from qemu 3.1, INT64_MAX will add a cache that covers the
> whole image. Memory is only used if a cache entry is actually used,
> so if you never access the backing file, it doesn't really use any
> memory. However, the granularity isn't a single cluster here, but L2
> tables. So if some L2 table contains one cluster in the overlay and
> another cluster in the backing file, and both are accessed, the L2
> table will be cached in both the overlay and the backing file.

So we can end up with N times bigger memory usage for the L2 cache in
the case of a backing chain, compared to a single image?

> More importantly, before 3.1, I think QEMU will actually try to
> allocate 2^63-1 bytes for the cache and fail to open the image. So we
> can't do this unconditionally.
>
> (Is this patch tested with a real QEMU?)

Testing was quite minimal: upstream, plus 2.10 with the recent patches
that make setting INT64_MAX possible, I guess. OK, then we have to use
a version check in the capabilities instead of a feature test.

Nikolay
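A minimal sketch of the version gate suggested above, assuming
libvirt's convention of encoding QEMU versions as
major * 1,000,000 + minor * 1,000 + micro (the helper name is
hypothetical):

/* Hypothetical helper: emit l2-cache-size=INT64_MAX only for
 * QEMU >= 3.1.0, which sizes the cache lazily; older QEMU would try
 * to allocate 2^63-1 bytes up front and fail to open the image. */
#include <stdbool.h>
#include <stdio.h>

static bool
qemuSupportsUnlimitedL2CacheSize(unsigned long long qemuVersion)
{
    return qemuVersion >= 3001000; /* 3 * 1000000 + 1 * 1000 + 0 */
}

int main(void)
{
    printf("QEMU 2.10.0 -> %s\n",
           qemuSupportsUnlimitedL2CacheSize(2010000) ? "emit" : "skip");
    printf("QEMU 3.1.0  -> %s\n",
           qemuSupportsUnlimitedL2CacheSize(3001000) ? "emit" : "skip");
    return 0;
}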
On 02.11.2018 at 12:37, Nikolay Shirokovskiy wrote:
> On 02.11.2018 13:23, Kevin Wolf wrote:
>> On 01.11.2018 at 12:32, Nikolay Shirokovskiy wrote:
>>> Just set l2-cache-size to INT64_MAX for all qcow2 format nodes in
>>> the block node graph.
>>>
>>> AFAIK this is sane because the *actual* cache size depends on the
>>> amount of data referenced in an image, so the total of the cache
>>> sizes for all images in a disk's backing chain will not exceed the
>>> cache size that covers a single full image, as in the case of no
>>> backing chain.
>>
>> This is not quite correct.
>>
>> Starting from qemu 3.1, INT64_MAX will add a cache that covers the
>> whole image. Memory is only used if a cache entry is actually used,
>> so if you never access the backing file, it doesn't really use any
>> memory. However, the granularity isn't a single cluster here, but L2
>> tables. So if some L2 table contains one cluster in the overlay and
>> another cluster in the backing file, and both are accessed, the L2
>> table will be cached in both the overlay and the backing file.
>
> So we can end up with N times bigger memory usage for the L2 cache in
> the case of a backing chain, compared to a single image?

In the worst case, yes.

Kevin