Random write IOPS will drop dramatically if the qcow2 l2 cache cannot
cover the whole disk. This patch gives the libvirt user a chance to
adjust the qcow2 cache configuration.
Three new qcow2 driver parameters are added. They are l2-cache-size,
refcount-cache-size and cache-clean-interval.
The following are from qcow2-cache.txt.
The amount of virtual disk that can be mapped by the L2 and refcount
caches (in bytes) is:
disk_size = l2_cache_size * cluster_size / 8
disk_size = refcount_cache_size * cluster_size * 8 / refcount_bits
The parameter "cache-clean-interval" defines an interval (in seconds).
All cache entries that haven't been accessed during that interval are
removed from memory.
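
For illustration, plugging the defaults this patch documents (2 MiB l2
cache, 256 KiB refcount cache) into the two formulas — with qemu's usual
64 KiB cluster size and 16 refcount bits assumed, since neither is set by
this patch — gives the following coverage:

```python
# Virtual disk size covered by the qcow2 metadata caches, per the
# formulas from qcow2-cache.txt. The 64 KiB cluster size and 16
# refcount bits are assumed defaults, not values set by this patch.
def l2_coverage(l2_cache_size, cluster_size=65536):
    # disk_size = l2_cache_size * cluster_size / 8
    return l2_cache_size * cluster_size // 8

def refcount_coverage(refcount_cache_size, cluster_size=65536, refcount_bits=16):
    # disk_size = refcount_cache_size * cluster_size * 8 / refcount_bits
    return refcount_cache_size * cluster_size * 8 // refcount_bits

print(l2_coverage(2097152) // 2**30)       # 16 -> the 2 MiB l2 cache covers 16 GiB
print(refcount_coverage(262144) // 2**30)  # 8  -> the 256 KiB refcount cache covers 8 GiB
```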
Signed-off-by: Liu Qing <liuqing@huayun.com>
---
Changes since v3: a) copy the qcow2 cache configuration from the source
to the backing file source.
docs/formatdomain.html.in | 43 ++++++++++
docs/schemas/domaincommon.rng | 35 ++++++++
src/conf/domain_conf.c | 95 +++++++++++++++++++++-
src/qemu/qemu_command.c | 6 ++
src/qemu/qemu_driver.c | 5 ++
src/util/virstoragefile.c | 3 +
src/util/virstoragefile.h | 6 ++
.../qemuxml2argv-disk-drive-qcow2-cache.xml | 43 ++++++++++
.../qemuxml2xmlout-disk-drive-qcow2-cache.xml | 43 ++++++++++
tests/qemuxml2xmltest.c | 1 +
10 files changed, 279 insertions(+), 1 deletion(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-qcow2-cache.xml
create mode 100644 tests/qemuxml2xmloutdata/qemuxml2xmlout-disk-drive-qcow2-cache.xml
diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index 8ca7637..245d5c4 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -3036,6 +3036,49 @@
set. (<span class="since">Since 3.5.0</span>)
</li>
</ul>
+ The <code>driver</code> element may contain a <code>qcow2</code>
+ sub-element, which allows specifying further details related to the
+ qcow2 driver type.
+ <span class="since">Since 3.8.0</span>
+ <ul>
+ <li>
+ The optional <code>l2_cache_size</code> attribute controls how much
+ memory (in bytes) will be consumed by the qcow2 L2 table cache. This
+ option is only valid when the driver type is qcow2. The default
+ size is 2097152. The amount of virtual disk that can be mapped by
+ the L2 cache (in bytes) is:
+ disk_size = l2_cache_size * cluster_size / 8
+ <span class='since'>Since 3.8.0</span>
+
+ <b>In general you should leave this option alone, unless you
+ are very certain you know what you are doing.</b>
+ </li>
+ <li>
+ The optional <code>refcount_cache_size</code> attribute controls
+ how much memory (in bytes) will be consumed by the qcow2 reference
+ count table cache. This option is only valid when the driver type
+ is qcow2. The default size is 262144. The amount of virtual disk
+ that can be mapped by the refcount cache (in bytes) is:
+ disk_size = refcount_cache_size * cluster_size * 8 / refcount_bits
+ <span class='since'>Since 3.8.0</span>
+
+ <b>In general you should leave this option alone, unless you
+ are very certain you know what you are doing.</b>
+ </li>
+ <li>
+ The optional <code>cache_clean_interval</code> attribute defines
+ an interval (in seconds). All cache entries that haven't been
+ accessed during that interval are removed from memory. This option
+ is only valid when the driver type is qcow2. The default
+ value is 0, which disables this feature. If the interval is too
+ short, it will cause frequent cache write-back and reload, which
+ impacts performance. If the interval is too long, unused cache
+ entries will keep consuming memory until they are written back to disk.
+ <span class='since'>Since 3.8.0</span>
+
+ <b>In general you should leave this option alone, unless you
+ are very certain you know what you are doing.</b>
+ </li>
+ </ul>
</dd>
<dt><code>backenddomain</code></dt>
<dd>The optional <code>backenddomain</code> element allows specifying a
diff --git a/docs/schemas/domaincommon.rng b/docs/schemas/domaincommon.rng
index c9a4f7a..e4200fe 100644
--- a/docs/schemas/domaincommon.rng
+++ b/docs/schemas/domaincommon.rng
@@ -1756,6 +1756,23 @@
</element>
</define>
<!--
+ Parameters for qcow2 driver
+ -->
+ <define name="qcow2Driver">
+ <element name="qcow2">
+ <optional>
+ <ref name="qcow2_l2_cache_size"/>
+ </optional>
+ <optional>
+ <ref name="qcow2_refcount_cache_size"/>
+ </optional>
+ <optional>
+ <ref name="qcow2_cache_clean_interval"/>
+ </optional>
+ </element>
+ </define>
+
+ <!--
Disk may use a special driver for access.
-->
<define name="diskDriver">
@@ -1794,6 +1811,9 @@
<ref name="detect_zeroes"/>
</optional>
<ref name="virtioOptions"/>
+ <zeroOrMore>
+ <ref name="qcow2Driver"/>
+ </zeroOrMore>
<empty/>
</element>
</define>
@@ -1889,6 +1909,21 @@
</choice>
</attribute>
</define>
+ <define name="qcow2_l2_cache_size">
+ <attribute name='l2_cache_size'>
+ <ref name="unsignedInt"/>
+ </attribute>
+ </define>
+ <define name="qcow2_refcount_cache_size">
+ <attribute name='refcount_cache_size'>
+ <ref name="unsignedInt"/>
+ </attribute>
+ </define>
+ <define name="qcow2_cache_clean_interval">
+ <attribute name='cache_clean_interval'>
+ <ref name="unsignedInt"/>
+ </attribute>
+ </define>
<define name="controller">
<element name="controller">
<optional>
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index a43b25c..4949e8b 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -5734,6 +5734,28 @@ virDomainDeviceLoadparmIsValid(const char *loadparm)
static void
+virDoaminQcow2CacheOptionsFormat(virBufferPtr buf,
+ virDomainDiskDefPtr def)
+{
+ virBuffer qcow2Buff = VIR_BUFFER_INITIALIZER;
+ if (def->src->l2_cache_size > 0)
+ virBufferAsprintf(&qcow2Buff, " l2_cache_size='%llu'",
+ def->src->l2_cache_size);
+ if (def->src->refcount_cache_size > 0)
+ virBufferAsprintf(&qcow2Buff, " refcount_cache_size='%llu'",
+ def->src->refcount_cache_size);
+ if (def->src->cache_clean_interval > 0)
+ virBufferAsprintf(&qcow2Buff, " cache_clean_interval='%llu'",
+ def->src->cache_clean_interval);
+
+ if (virBufferUse(&qcow2Buff)) {
+ virBufferAddLit(buf, "<qcow2");
+ virBufferAddBuffer(buf, &qcow2Buff);
+ virBufferAddLit(buf, "/>\n");
+ }
+}
+
+static void
virDomainVirtioOptionsFormat(virBufferPtr buf,
virDomainVirtioOptionsPtr virtio)
{
@@ -8572,15 +8594,69 @@ virDomainDiskDefParseValidate(const virDomainDiskDef *def)
}
}
+ if (def->src->format != VIR_STORAGE_FILE_QCOW2 &&
+ (def->src->l2_cache_size > 0 || def->src->refcount_cache_size > 0 ||
+ def->src->cache_clean_interval > 0)) {
+ virReportError(VIR_ERR_XML_ERROR,
+ _("Setting l2_cache_size, refcount_cache_size, "
+ "cache_clean_interval is not allowed for types "
+ "other than QCOW2"));
+ return -1;
+ }
+
return 0;
}
static int
+virDomainDiskDefQcow2ParseXML(virDomainDiskDefPtr def,
+ xmlNodePtr cur)
+{
+ char *tmp = NULL;
+ int ret = -1;
+
+ if ((tmp = virXMLPropString(cur, "l2_cache_size")) &&
+ (virStrToLong_ullp(tmp, NULL, 10, &def->src->l2_cache_size) < 0)) {
+ virReportError(VIR_ERR_XML_ERROR,
+ _("Invalid l2_cache_size attribute in disk "
+ "driver element: %s"), tmp);
+ goto cleanup;
+ }
+ VIR_FREE(tmp);
+
+ if ((tmp = virXMLPropString(cur, "refcount_cache_size")) &&
+ (virStrToLong_ullp(tmp, NULL, 10, &def->src->refcount_cache_size) < 0)) {
+ virReportError(VIR_ERR_XML_ERROR,
+ _("Invalid refcount_cache_size attribute in disk "
+ "driver element: %s"), tmp);
+ goto cleanup;
+ }
+ VIR_FREE(tmp);
+
+ if ((tmp = virXMLPropString(cur, "cache_clean_interval")) &&
+ (virStrToLong_ullp(tmp, NULL, 10, &def->src->cache_clean_interval) < 0)) {
+ virReportError(VIR_ERR_XML_ERROR,
+ _("Invalid cache_clean_interval attribute in "
+ "disk driver element: %s"), tmp);
+ goto cleanup;
+ }
+ VIR_FREE(tmp);
+
+ ret = 0;
+
+ cleanup:
+ VIR_FREE(tmp);
+
+ return ret;
+}
+
+
+static int
virDomainDiskDefDriverParseXML(virDomainDiskDefPtr def,
xmlNodePtr cur)
{
char *tmp = NULL;
+ xmlNodePtr child;
int ret = -1;
def->src->driverName = virXMLPropString(cur, "name");
@@ -8683,6 +8759,12 @@ virDomainDiskDefDriverParseXML(virDomainDiskDefPtr def,
}
VIR_FREE(tmp);
+ for (child = cur->children; child != NULL; child = child->next) {
+ if (virXMLNodeNameEqual(child, "qcow2") &&
+ virDomainDiskDefQcow2ParseXML(def, child) < 0) {
+ goto cleanup;
+ }
+ }
ret = 0;
cleanup:
@@ -21969,7 +22051,18 @@ virDomainDiskDefFormat(virBufferPtr buf,
if (virBufferUse(&driverBuf)) {
virBufferAddLit(buf, "<driver");
virBufferAddBuffer(buf, &driverBuf);
- virBufferAddLit(buf, "/>\n");
+
+ if (def->src->l2_cache_size == 0 &&
+ def->src->refcount_cache_size == 0 &&
+ def->src->cache_clean_interval == 0) {
+ virBufferAddLit(buf, "/>\n");
+ } else {
+ virBufferAddLit(buf, ">\n");
+ virBufferAdjustIndent(buf, 2);
+ virDoaminQcow2CacheOptionsFormat(buf, def);
+ virBufferAdjustIndent(buf, -2);
+ virBufferAddLit(buf, "</driver>\n");
+ }
}
if (def->src->auth) {
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index d553df5..ba81525 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -1433,6 +1433,12 @@ qemuBuildDriveSourceStr(virDomainDiskDefPtr disk,
qemuformat = "luks";
virBufferAsprintf(buf, "format=%s,", qemuformat);
}
+ if (disk->src->l2_cache_size > 0)
+ virBufferAsprintf(buf, "l2-cache-size=%llu,", disk->src->l2_cache_size);
+ if (disk->src->refcount_cache_size > 0)
+ virBufferAsprintf(buf, "refcount-cache-size=%llu,", disk->src->refcount_cache_size);
+ if (disk->src->cache_clean_interval > 0)
+ virBufferAsprintf(buf, "cache-clean-interval=%llu,", disk->src->cache_clean_interval);
ret = 0;
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index e956839..c3b81e1 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -14318,6 +14318,11 @@ qemuDomainSnapshotDiskDataCollect(virQEMUDriverPtr driver,
if (!(dd->src = virStorageSourceCopy(snap->def->disks[i].src, false)))
goto error;
+ /* keep the qcow2 cache configuration */
+ dd->src->l2_cache_size = vm->def->disks[i]->src->l2_cache_size;
+ dd->src->refcount_cache_size = vm->def->disks[i]->src->refcount_cache_size;
+ dd->src->cache_clean_interval = vm->def->disks[i]->src->cache_clean_interval;
+
if (virStorageSourceInitChainElement(dd->src, dd->disk->src, false) < 0)
goto error;
diff --git a/src/util/virstoragefile.c b/src/util/virstoragefile.c
index e94ad32..f23390f 100644
--- a/src/util/virstoragefile.c
+++ b/src/util/virstoragefile.c
@@ -2038,6 +2038,9 @@ virStorageSourceCopy(const virStorageSource *src,
ret->physical = src->physical;
ret->readonly = src->readonly;
ret->shared = src->shared;
+ ret->l2_cache_size = src->l2_cache_size;
+ ret->refcount_cache_size = src->refcount_cache_size;
+ ret->cache_clean_interval = src->cache_clean_interval;
/* storage driver metadata are not copied */
ret->drv = NULL;
diff --git a/src/util/virstoragefile.h b/src/util/virstoragefile.h
index 6c388b1..9b5a5f3 100644
--- a/src/util/virstoragefile.h
+++ b/src/util/virstoragefile.h
@@ -280,6 +280,12 @@ struct _virStorageSource {
/* metadata that allows identifying given storage source */
char *nodeformat; /* name of the format handler object */
char *nodestorage; /* name of the storage object */
+
+ unsigned long long l2_cache_size; /* qcow2 l2 cache size */
+ /* qcow2 reference count table cache size */
+ unsigned long long refcount_cache_size;
+ /* clean unused cache entries interval */
+ unsigned long long cache_clean_interval;
};
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-qcow2-cache.xml b/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-qcow2-cache.xml
new file mode 100644
index 0000000..3f464db
--- /dev/null
+++ b/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-qcow2-cache.xml
@@ -0,0 +1,43 @@
+<domain type='qemu'>
+ <name>QEMUGuest1</name>
+ <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid>
+ <memory unit='KiB'>219136</memory>
+ <currentMemory unit='KiB'>219136</currentMemory>
+ <vcpu placement='static'>1</vcpu>
+ <os>
+ <type arch='i686' machine='pc'>hvm</type>
+ <boot dev='hd'/>
+ </os>
+ <clock offset='utc'/>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>destroy</on_crash>
+ <devices>
+ <emulator>/usr/bin/qemu-system-i686</emulator>
+ <disk type='block' device='disk'>
+ <driver name='qemu' type='qcow2' cache='none'>
+ <qcow2 l2_cache_size='2097152' refcount_cache_size='524288' cache_clean_interval='900'/>
+ </driver>
+ <source dev='/dev/HostVG/QEMUGuest1'/>
+ <target dev='hda' bus='ide'/>
+ <address type='drive' controller='0' bus='0' target='0' unit='0'/>
+ </disk>
+ <disk type='block' device='cdrom'>
+ <driver name='qemu' type='raw'/>
+ <source dev='/dev/HostVG/QEMUGuest2'/>
+ <target dev='hdc' bus='ide'/>
+ <readonly/>
+ <address type='drive' controller='0' bus='1' target='0' unit='0'/>
+ </disk>
+ <controller type='usb' index='0'>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
+ </controller>
+ <controller type='ide' index='0'>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
+ </controller>
+ <controller type='pci' index='0' model='pci-root'/>
+ <input type='mouse' bus='ps2'/>
+ <input type='keyboard' bus='ps2'/>
+ <memballoon model='none'/>
+ </devices>
+</domain>
diff --git a/tests/qemuxml2xmloutdata/qemuxml2xmlout-disk-drive-qcow2-cache.xml b/tests/qemuxml2xmloutdata/qemuxml2xmlout-disk-drive-qcow2-cache.xml
new file mode 100644
index 0000000..3f464db
--- /dev/null
+++ b/tests/qemuxml2xmloutdata/qemuxml2xmlout-disk-drive-qcow2-cache.xml
@@ -0,0 +1,43 @@
+<domain type='qemu'>
+ <name>QEMUGuest1</name>
+ <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid>
+ <memory unit='KiB'>219136</memory>
+ <currentMemory unit='KiB'>219136</currentMemory>
+ <vcpu placement='static'>1</vcpu>
+ <os>
+ <type arch='i686' machine='pc'>hvm</type>
+ <boot dev='hd'/>
+ </os>
+ <clock offset='utc'/>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>destroy</on_crash>
+ <devices>
+ <emulator>/usr/bin/qemu-system-i686</emulator>
+ <disk type='block' device='disk'>
+ <driver name='qemu' type='qcow2' cache='none'>
+ <qcow2 l2_cache_size='2097152' refcount_cache_size='524288' cache_clean_interval='900'/>
+ </driver>
+ <source dev='/dev/HostVG/QEMUGuest1'/>
+ <target dev='hda' bus='ide'/>
+ <address type='drive' controller='0' bus='0' target='0' unit='0'/>
+ </disk>
+ <disk type='block' device='cdrom'>
+ <driver name='qemu' type='raw'/>
+ <source dev='/dev/HostVG/QEMUGuest2'/>
+ <target dev='hdc' bus='ide'/>
+ <readonly/>
+ <address type='drive' controller='0' bus='1' target='0' unit='0'/>
+ </disk>
+ <controller type='usb' index='0'>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
+ </controller>
+ <controller type='ide' index='0'>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
+ </controller>
+ <controller type='pci' index='0' model='pci-root'/>
+ <input type='mouse' bus='ps2'/>
+ <input type='keyboard' bus='ps2'/>
+ <memballoon model='none'/>
+ </devices>
+</domain>
diff --git a/tests/qemuxml2xmltest.c b/tests/qemuxml2xmltest.c
index 0a87ced..fab1e19 100644
--- a/tests/qemuxml2xmltest.c
+++ b/tests/qemuxml2xmltest.c
@@ -461,6 +461,7 @@ mymain(void)
DO_TEST("disk-drive-cache-v2-none", NONE);
DO_TEST("disk-drive-cache-directsync", NONE);
DO_TEST("disk-drive-cache-unsafe", NONE);
+ DO_TEST("disk-drive-qcow2-cache", NONE);
DO_TEST("disk-drive-network-nbd", NONE);
DO_TEST("disk-drive-network-nbd-export", NONE);
DO_TEST("disk-drive-network-nbd-ipv6", NONE);
--
1.8.3.1
--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
On Wed, Sep 13, 2017 at 17:21:23 +0800, Liu Qing wrote:
> Random write IOPS will drop dramatically if the qcow2 l2 cache cannot
> cover the whole disk. This patch gives the libvirt user a chance to
> adjust the qcow2 cache configuration.
>
> Three new qcow2 driver parameters are added. They are l2-cache-size,
> refcount-cache-size and cache-clean-interval.
>
> The following are from qcow2-cache.txt.
> The amount of virtual disk that can be mapped by the L2 and refcount
> caches (in bytes) is:
>     disk_size = l2_cache_size * cluster_size / 8
>     disk_size = refcount_cache_size * cluster_size * 8 / refcount_bits
>
> The parameter "cache-clean-interval" defines an interval (in seconds).
> All cache entries that haven't been accessed during that interval are
> removed from memory.
>
> Signed-off-by: Liu Qing <liuqing@huayun.com>
> ---
> Changes since v3: a) copy the qcow2 cache configuration from the source
> to the backing file source.

This looks like one of the tuning parameters which really is hard for
users to set, and thus it should be justified properly if we need it. [1]

From the commit message above it looks like there are guidelines on how
to set them. Can't we just make them implicit and not expose anything to
tune the settings? Does it make sense to do so? Are there any drawbacks?

If any of them need to be configured by the user, please describe in
detail why that is necessary.

Peter

[1] There's a discussion I can link to for other tuning parameters. The
gist is that allowing users to set something without giving them
guidance is pointless, since they might not use it. Also, if the
guidance is strict (e.g. a formula), libvirt or qemu should set the
defaults properly and not force users to do the calculation.
On Wed, Sep 13, 2017 at 01:20:03PM +0200, Peter Krempa wrote:
> On Wed, Sep 13, 2017 at 17:21:23 +0800, Liu Qing wrote:
> > Random write IOPS will drop dramatically if the qcow2 l2 cache cannot
> > cover the whole disk. This patch gives the libvirt user a chance to
> > adjust the qcow2 cache configuration.

[...]

> This looks like one of the tuning parameters which really is hard for
> users to set, and thus it should be justified properly if we need it. [1]
>
> From the commit message above it looks like there are guidelines on how
> to set them. Can't we just make them implicit and not expose anything
> to tune the settings? Does it make sense to do so? Are there any
> drawbacks?
>
> If any of them need to be configured by the user, please describe in
> detail why that is necessary.
>
> Peter
>
> [1] There's a discussion I can link to for other tuning parameters. The
> gist is that allowing users to set something without giving them
> guidance is pointless, since they might not use it. Also, if the
> guidance is strict (e.g. a formula), libvirt or qemu should set the
> defaults properly and not force users to do the calculation.

The guidance can be found in docs/qcow2-cache.txt in the qemu source
code. As John Ferlan suggested, I have added the file location in
formatdomain.html.in.
On Thu, Sep 14, 2017 at 13:02:46 +0800, Liu Qing wrote:
> On Wed, Sep 13, 2017 at 01:20:03PM +0200, Peter Krempa wrote:
> > On Wed, Sep 13, 2017 at 17:21:23 +0800, Liu Qing wrote:

[...]

> > [1] There's a discussion I can link to for other tuning parameters.
> > The gist is that allowing users to set something without giving them
> > guidance is pointless, since they might not use it. Also, if the
> > guidance is strict (e.g. a formula), libvirt or qemu should set the
> > defaults properly and not force users to do the calculation.
> The guidance can be found in docs/qcow2-cache.txt in the qemu source
> code. As John Ferlan suggested, I have added the file location in
> formatdomain.html.in.

Well, if the guidance is absolute (use this cache size and it's okay),
then we should implement it automatically (don't allow users to set it).

I'm asking whether there are some catches in doing it automatically,
e.g. whether there's a possibility that users would do something *else*
than described by the document, and what the reasons for that would be.

One of the reasons might be the memory consumption of the cache, as it
looks like you need 1 MiB of memory to fully cover an 8 GiB image.

Also, the problem is that those parameters are qemu-isms, so we should
be careful when modelling them in the XML, since they can change and may
not map well to any other technology.

Also, how does the cache impact read performance from the backing
layers? We might need to set the cache for the backing layers as well.
On Thu, Sep 14, 2017 at 09:29:28 +0200, Peter Krempa wrote:
> On Thu, Sep 14, 2017 at 13:02:46 +0800, Liu Qing wrote:
> > On Wed, Sep 13, 2017 at 01:20:03PM +0200, Peter Krempa wrote:
> > > On Wed, Sep 13, 2017 at 17:21:23 +0800, Liu Qing wrote:

[...]

> Well, if the guidance is absolute (use this cache size and it's okay),
> then we should implement it automatically (don't allow users to set
> it).

One more thing. The design you've proposed is really not user friendly.
The user has to read a lengthy document, then inquire the parameters of
every single qcow2 image, do the calculations, and set the result in the
XML. This might be okay for higher-level management tools, but not for
users who use libvirt directly.
On Thu, Sep 14, 2017 at 10:02:40AM +0200, Peter Krempa wrote:
> On Thu, Sep 14, 2017 at 09:29:28 +0200, Peter Krempa wrote:
> > On Thu, Sep 14, 2017 at 13:02:46 +0800, Liu Qing wrote:

[...]

> One more thing. The design you've proposed is really not user friendly.
> The user has to read a lengthy document, then inquire the parameters of
> every single qcow2 image, do the calculations, and set the result in
> the XML. This might be okay for higher-level management tools, but not
> for users who use libvirt directly.

I agree this is not user friendly. I am only giving the user a choice,
maybe not the best one, right now. Any suggestion will be greatly
appreciated. For our company, this feature will be an add-on to
OpenStack.
On Thu, Sep 14, 2017 at 16:37:03 +0800, Liu Qing wrote:
> On Thu, Sep 14, 2017 at 10:02:40AM +0200, Peter Krempa wrote:
> > On Thu, Sep 14, 2017 at 09:29:28 +0200, Peter Krempa wrote:

[...]

> I agree this is not user friendly. I am only giving the user a choice,
> maybe not the best one, right now. Any suggestion will be greatly
> appreciated. For our company, this feature will be an add-on to
> OpenStack.

This means that you will have to deal with everything that I've pointed
out. How are you planning to expose this to the user? And which
calculations are you going to use?
On Thu, Sep 14, 2017 at 12:56:30PM +0200, Peter Krempa wrote:
> On Thu, Sep 14, 2017 at 16:37:03 +0800, Liu Qing wrote:
> > On Thu, Sep 14, 2017 at 10:02:40AM +0200, Peter Krempa wrote:

[...]

> This means that you will have to deal with everything that I've pointed
> out. How are you planning to expose this to the user? And which
> calculations are you going to use?

We will not expose this to the user on the OpenStack level. We have
different volume types, like performance and capacity. For performance
volumes we will cache as much space as possible; free memory left in the
host will be taken into account. For capacity volumes we will set a
maximum cache size.

Implementing the cache setting strategy in a higher-level management
tool seems more reasonable to me, as it will be much more flexible.
On Thu, Sep 14, 2017 at 09:29:28AM +0200, Peter Krempa wrote:
> On Thu, Sep 14, 2017 at 13:02:46 +0800, Liu Qing wrote:

[...]

> Well, if the guidance is absolute (use this cache size and it's okay),
> then we should implement it automatically (don't allow users to set
> it).
>
> I'm asking whether there are some catches in doing it automatically,
> e.g. whether there's a possibility that users would do something *else*
> than described by the document, and what the reasons for that would be.

I think there is some trade-off between the cache size and performance.
Otherwise qemu would not need to expose these parameters to the user; it
could do the automation in qemu itself. The guidance only lets the user
know how much disk space is covered by the caches, and how much memory
is needed to cover the whole disk. Users have to make their own decision
and choose the right size. But with the current version of libvirt, they
cannot set the value even if they know the proper one.

For example, if the user needs a LUN with high IOPS, he could set the
cache to cover the whole disk. In another situation, where he needs a
large-capacity (for example 4 TB) but low-performance LUN, he could set
the l2 cache size to a value much smaller than 512 MiB.

> One of the reasons might be the memory consumption of the cache, as it
> looks like you need 1 MiB of memory to fully cover an 8 GiB image.

Yes, as I said above.

> Also, the problem is that those parameters are qemu-isms, so we should
> be careful when modelling them in the XML, since they can change and
> may not map well to any other technology.

Currently the new element is added as a child element of <driver>, and
the driver type needs to be qcow2. I also added this kind of information
in the doc.

> Also, how does the cache impact read performance from the backing
> layers? We might need to set the cache for the backing layers as well.

I had a peek at the latest qemu code, and the backing layers will have
the same cache size value as the top volume. This will reduce the
metadata read count if the table is already in memory. It will also
consume more memory.
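
Both figures in this exchange — 1 MiB of cache per 8 GiB of image, and
512 MiB per 4 TiB — follow from inverting the l2 formula from the commit
message; the 64 KiB cluster size used here is an assumed qemu default:

```python
# l2 cache size needed to fully cover a disk, obtained by inverting
# disk_size = l2_cache_size * cluster_size / 8.
# A 64 KiB cluster size is assumed (qemu's usual default).
def l2_cache_for(disk_size, cluster_size=65536):
    return disk_size * 8 // cluster_size

print(l2_cache_for(8 * 2**30))  # 1048576   -> 1 MiB for an 8 GiB image
print(l2_cache_for(4 * 2**40))  # 536870912 -> 512 MiB for a 4 TiB image
```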