From nobody Wed May 14 18:21:59 2025
From: Daniel P. Berrangé
To: libvir-list@redhat.com
Date: Fri, 16 Feb 2018 11:22:15 +0000
Message-Id:
<20180216112222.26572-4-berrange@redhat.com>
In-Reply-To: <20180216112222.26572-1-berrange@redhat.com>
References: <20180216112222.26572-1-berrange@redhat.com>
Subject: [libvirt] [PATCH v3 03/10] qemu: stop passing virConnectPtr into qemuMonitorStartCPUs

There is a long-standing hack to pass a virConnectPtr into the
qemuMonitorStartCPUs method so that, when the text monitor prompts for
a disk password, we can look up virSecretPtr objects. This forces us to
pass a virConnectPtr through countless methods up the call chain...
except that some callers have no virConnectPtr available and have
always just passed NULL.

We can finally fix this disastrous design by using virGetConnectSecret()
to open a connection to the secret driver at time of use.

Reviewed-by: John Ferlan
Signed-off-by: Daniel P. Berrangé
---
 src/qemu/qemu_driver.c       | 32 +++++++++++++++-----------------
 src/qemu/qemu_migration.c    |  6 +++---
 src/qemu/qemu_monitor.c      | 10 ++++------
 src/qemu/qemu_monitor.h      | 11 +----------
 src/qemu/qemu_monitor_json.c |  3 +--
 src/qemu/qemu_monitor_json.h |  3 +--
 src/qemu/qemu_monitor_text.c |  9 +++------
 src/qemu/qemu_monitor_text.h |  3 +--
 src/qemu/qemu_process.c      | 32 +++++++++++++++-----------------
 src/qemu/qemu_process.h      |  4 +---
 tests/qemumonitorjsontest.c  |  2 +-
 11 files changed, 46 insertions(+), 69 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index bbce5bd81b..134deb05a0 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1954,7 +1954,7 @@ static int qemuDomainResume(virDomainPtr dom)
     } else if ((state == VIR_DOMAIN_CRASHED &&
                 reason == VIR_DOMAIN_CRASHED_PANICKED) ||
                state == VIR_DOMAIN_PAUSED) {
-        if (qemuProcessStartCPUs(driver, vm, dom->conn,
+        if (qemuProcessStartCPUs(driver, vm,
                                  VIR_DOMAIN_RUNNING_UNPAUSED,
                                  QEMU_ASYNC_JOB_NONE) < 0) {
             if (virGetLastError() == NULL)
@@ -3346,7 +3346,7 @@ qemuDomainSaveMemory(virQEMUDriverPtr driver,
  * this returns (whether returning success or failure).
  */
 static int
-qemuDomainSaveInternal(virQEMUDriverPtr driver, virDomainPtr dom,
+qemuDomainSaveInternal(virQEMUDriverPtr driver,
                        virDomainObjPtr vm, const char *path, int compressed,
                        const char *compressedpath,
                        const char *xmlin, unsigned int flags)
@@ -3447,7 +3447,7 @@ qemuDomainSaveInternal(virQEMUDriverPtr driver, virDomainPtr dom,
     if (ret < 0) {
         if (was_running && virDomainObjIsActive(vm)) {
             virErrorPtr save_err = virSaveLastError();
-            if (qemuProcessStartCPUs(driver, vm, dom->conn,
+            if (qemuProcessStartCPUs(driver, vm,
                                      VIR_DOMAIN_RUNNING_SAVE_CANCELED,
                                      QEMU_ASYNC_JOB_SAVE) < 0) {
                 VIR_WARN("Unable to resume guest CPUs after save failure");
@@ -3582,7 +3582,7 @@ qemuDomainSaveFlags(virDomainPtr dom, const char *path, const char *dxml,
         goto cleanup;
     }
 
-    ret = qemuDomainSaveInternal(driver, dom, vm, path, compressed,
+    ret = qemuDomainSaveInternal(driver, vm, path, compressed,
                                  compressedpath, dxml, flags);
 
  cleanup:
@@ -3656,7 +3656,7 @@ qemuDomainManagedSave(virDomainPtr dom, unsigned int flags)
 
     VIR_INFO("Saving state of domain '%s' to '%s'", vm->def->name, name);
 
-    ret = qemuDomainSaveInternal(driver, dom, vm, name, compressed,
+    ret = qemuDomainSaveInternal(driver, vm, name, compressed,
                                  compressedpath, NULL, flags);
     if (ret == 0)
         vm->hasManagedSave = true;
@@ -4029,7 +4029,7 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom,
     }
 
     if (resume && virDomainObjIsActive(vm)) {
-        if (qemuProcessStartCPUs(driver, vm, dom->conn,
+        if (qemuProcessStartCPUs(driver, vm,
                                  VIR_DOMAIN_RUNNING_UNPAUSED,
                                  QEMU_ASYNC_JOB_DUMP) < 0) {
             event = virDomainEventLifecycleNewFromObj(vm,
@@ -4216,7 +4216,7 @@ processWatchdogEvent(virQEMUDriverPtr driver,
         virReportError(VIR_ERR_OPERATION_FAILED,
                        "%s", _("Dump failed"));
 
-    ret = qemuProcessStartCPUs(driver, vm, NULL,
+    ret = qemuProcessStartCPUs(driver, vm,
                                VIR_DOMAIN_RUNNING_UNPAUSED,
                                QEMU_ASYNC_JOB_DUMP);
 
@@ -6677,7 +6677,7 @@ qemuDomainSaveImageStartVM(virConnectPtr conn,
 
     /* If it was running before, resume it now unless caller requested pause. */
     if (header->was_running && !start_paused) {
-        if (qemuProcessStartCPUs(driver, vm, conn,
+        if (qemuProcessStartCPUs(driver, vm,
                                  VIR_DOMAIN_RUNNING_RESTORED,
                                  asyncJob) < 0) {
             if (virGetLastError() == NULL)
@@ -14005,8 +14005,7 @@ qemuDomainSnapshotCreateInactiveExternal(virQEMUDriverPtr driver,
 
 /* The domain is expected to be locked and active. */
 static int
-qemuDomainSnapshotCreateActiveInternal(virConnectPtr conn,
-                                       virQEMUDriverPtr driver,
+qemuDomainSnapshotCreateActiveInternal(virQEMUDriverPtr driver,
                                        virDomainObjPtr vm,
                                        virDomainSnapshotObjPtr snap,
                                        unsigned int flags)
@@ -14062,7 +14061,7 @@ qemuDomainSnapshotCreateActiveInternal(virConnectPtr conn,
 
  cleanup:
     if (resume && virDomainObjIsActive(vm) &&
-        qemuProcessStartCPUs(driver, vm, conn,
+        qemuProcessStartCPUs(driver, vm,
                              VIR_DOMAIN_RUNNING_UNPAUSED,
                              QEMU_ASYNC_JOB_SNAPSHOT) < 0) {
         event = virDomainEventLifecycleNewFromObj(vm,
@@ -14878,8 +14877,7 @@ qemuDomainSnapshotCreateDiskActive(virQEMUDriverPtr driver,
 
 
 static int
-qemuDomainSnapshotCreateActiveExternal(virConnectPtr conn,
-                                       virQEMUDriverPtr driver,
+qemuDomainSnapshotCreateActiveExternal(virQEMUDriverPtr driver,
                                        virDomainObjPtr vm,
                                        virDomainSnapshotObjPtr snap,
                                        unsigned int flags)
@@ -15026,7 +15024,7 @@ qemuDomainSnapshotCreateActiveExternal(virConnectPtr conn,
 
  cleanup:
     if (resume && virDomainObjIsActive(vm) &&
-        qemuProcessStartCPUs(driver, vm, conn,
+        qemuProcessStartCPUs(driver, vm,
                              VIR_DOMAIN_RUNNING_UNPAUSED,
                              QEMU_ASYNC_JOB_SNAPSHOT) < 0) {
         event = virDomainEventLifecycleNewFromObj(vm,
@@ -15279,12 +15277,12 @@ qemuDomainSnapshotCreateXML(virDomainPtr domain,
         if (flags & VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY ||
             snap->def->memory == VIR_DOMAIN_SNAPSHOT_LOCATION_EXTERNAL) {
             /* external checkpoint or disk snapshot */
-            if (qemuDomainSnapshotCreateActiveExternal(domain->conn, driver,
+            if (qemuDomainSnapshotCreateActiveExternal(driver,
                                                        vm, snap, flags) < 0)
                 goto endjob;
         } else {
             /* internal checkpoint */
-            if (qemuDomainSnapshotCreateActiveInternal(domain->conn, driver,
+            if (qemuDomainSnapshotCreateActiveInternal(driver,
                                                        vm, snap, flags) < 0)
                 goto endjob;
         }
@@ -16003,7 +16001,7 @@ qemuDomainRevertToSnapshot(virDomainSnapshotPtr snapshot,
                            _("guest unexpectedly quit"));
             goto endjob;
         }
-        rc = qemuProcessStartCPUs(driver, vm, snapshot->domain->conn,
+        rc = qemuProcessStartCPUs(driver, vm,
                                   VIR_DOMAIN_RUNNING_FROM_SNAPSHOT,
                                   QEMU_ASYNC_JOB_START);
         if (rc < 0)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 3641b801f6..88639c71fc 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -273,7 +273,7 @@ qemuMigrationRestoreDomainState(virConnectPtr conn, virDomainObjPtr vm)
         VIR_DEBUG("Restoring pre-migration state due to migration error");
 
         /* we got here through some sort of failure; start the domain again */
-        if (qemuProcessStartCPUs(driver, vm, conn,
+        if (qemuProcessStartCPUs(driver, vm,
                                  VIR_DOMAIN_RUNNING_MIGRATION_CANCELED,
                                  QEMU_ASYNC_JOB_MIGRATION_OUT) < 0) {
             /* Hm, we already know we are in error here. We don't want to
@@ -2853,7 +2853,7 @@ qemuMigrationPrepareAny(virQEMUDriverPtr driver,
                             QEMU_ASYNC_JOB_MIGRATION_IN) < 0)
         goto stopjob;
 
-    if (qemuProcessFinishStartup(dconn, driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN,
+    if (qemuProcessFinishStartup(driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN,
                                  false, VIR_DOMAIN_PAUSED_MIGRATION) < 0)
         goto stopjob;
 
@@ -5389,7 +5389,7 @@ qemuMigrationFinish(virQEMUDriverPtr driver,
              * >= 0.10.6 to work properly. This isn't strictly necessary on
              * older qemu's, but it also doesn't hurt anything there
              */
-            if (qemuProcessStartCPUs(driver, vm, dconn,
+            if (qemuProcessStartCPUs(driver, vm,
                                      inPostCopy ? VIR_DOMAIN_RUNNING_POSTCOPY
                                                 : VIR_DOMAIN_RUNNING_MIGRATED,
                                      QEMU_ASYNC_JOB_MIGRATION_IN) < 0) {
diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index 9b5ad72cf9..ad5c572aee 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -1319,7 +1319,6 @@ qemuMonitorHMPCommandWithFd(qemuMonitorPtr mon,
 
 int
 qemuMonitorGetDiskSecret(qemuMonitorPtr mon,
-                         virConnectPtr conn,
                          const char *path,
                          char **secret,
                          size_t *secretLen)
@@ -1328,7 +1327,7 @@ qemuMonitorGetDiskSecret(qemuMonitorPtr mon,
     *secret = NULL;
     *secretLen = 0;
 
-    QEMU_MONITOR_CALLBACK(mon, ret, diskSecretLookup, conn, mon->vm,
+    QEMU_MONITOR_CALLBACK(mon, ret, diskSecretLookup, mon->vm,
                           path, secret, secretLen);
     return ret;
 }
@@ -1700,15 +1699,14 @@ qemuMonitorSetCapabilities(qemuMonitorPtr mon)
 
 
 int
-qemuMonitorStartCPUs(qemuMonitorPtr mon,
-                     virConnectPtr conn)
+qemuMonitorStartCPUs(qemuMonitorPtr mon)
 {
     QEMU_CHECK_MONITOR(mon);
 
     if (mon->json)
-        return qemuMonitorJSONStartCPUs(mon, conn);
+        return qemuMonitorJSONStartCPUs(mon);
     else
-        return qemuMonitorTextStartCPUs(mon, conn);
+        return qemuMonitorTextStartCPUs(mon);
 }
 
 
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index ea0c01ae7f..954ae88e4f 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -109,13 +109,7 @@ typedef void (*qemuMonitorEofNotifyCallback)(qemuMonitorPtr mon,
 typedef void (*qemuMonitorErrorNotifyCallback)(qemuMonitorPtr mon,
                                                virDomainObjPtr vm,
                                                void *opaque);
-/* XXX we'd really like to avoid virConnectPtr here
- * It is required so the callback can find the active
- * secret driver. Need to change this to work like the
- * security drivers do, to avoid this
- */
 typedef int (*qemuMonitorDiskSecretLookupCallback)(qemuMonitorPtr mon,
-                                                   virConnectPtr conn,
                                                    virDomainObjPtr vm,
                                                    const char *path,
                                                    char **secret,
@@ -363,9 +357,7 @@ int qemuMonitorHMPCommandWithFd(qemuMonitorPtr mon,
 # define qemuMonitorHMPCommand(mon, cmd, reply) \
     qemuMonitorHMPCommandWithFd(mon, cmd, -1, reply)
 
-/* XXX same comment about virConnectPtr as above */
 int qemuMonitorGetDiskSecret(qemuMonitorPtr mon,
-                             virConnectPtr conn,
                              const char *path,
                              char **secret,
                              size_t *secretLen);
@@ -440,8 +432,7 @@ int qemuMonitorEmitDumpCompleted(qemuMonitorPtr mon,
                                  qemuMonitorDumpStatsPtr stats,
                                  const char *error);
 
-int qemuMonitorStartCPUs(qemuMonitorPtr mon,
-                         virConnectPtr conn);
+int qemuMonitorStartCPUs(qemuMonitorPtr mon);
 int qemuMonitorStopCPUs(qemuMonitorPtr mon);
 
 typedef enum {
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 242b92ea3f..a09e93e464 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -1274,8 +1274,7 @@ qemuMonitorJSONSetCapabilities(qemuMonitorPtr mon)
 
 
 int
-qemuMonitorJSONStartCPUs(qemuMonitorPtr mon,
-                         virConnectPtr conn ATTRIBUTE_UNUSED)
+qemuMonitorJSONStartCPUs(qemuMonitorPtr mon)
 {
     int ret;
     virJSONValuePtr cmd = qemuMonitorJSONMakeCommand("cont", NULL);
diff --git a/src/qemu/qemu_monitor_json.h b/src/qemu/qemu_monitor_json.h
index a62e2418dc..ec243becc4 100644
--- a/src/qemu/qemu_monitor_json.h
+++ b/src/qemu/qemu_monitor_json.h
@@ -48,8 +48,7 @@ int qemuMonitorJSONHumanCommandWithFd(qemuMonitorPtr mon,
 
 int qemuMonitorJSONSetCapabilities(qemuMonitorPtr mon);
 
-int qemuMonitorJSONStartCPUs(qemuMonitorPtr mon,
-                             virConnectPtr conn);
+int qemuMonitorJSONStartCPUs(qemuMonitorPtr mon);
 int qemuMonitorJSONStopCPUs(qemuMonitorPtr mon);
 int qemuMonitorJSONGetStatus(qemuMonitorPtr mon,
                              bool *running,
diff --git a/src/qemu/qemu_monitor_text.c b/src/qemu/qemu_monitor_text.c
index 2db71548cb..7c34ca5b07 100644
--- a/src/qemu/qemu_monitor_text.c
+++ b/src/qemu/qemu_monitor_text.c
@@ -293,9 +293,8 @@ qemuMonitorSendDiskPassphrase(qemuMonitorPtr mon,
                               qemuMonitorMessagePtr msg,
                               const char *data,
                               size_t len ATTRIBUTE_UNUSED,
-                              void *opaque)
+                              void *opaque ATTRIBUTE_UNUSED)
 {
-    virConnectPtr conn = opaque;
     char *path;
     char *passphrase = NULL;
     size_t passphrase_len = 0;
@@ -326,7 +325,6 @@ qemuMonitorSendDiskPassphrase(qemuMonitorPtr mon,
 
     /* Fetch the disk password if possible */
     res = qemuMonitorGetDiskSecret(mon,
-                                   conn,
                                    path,
                                    &passphrase,
                                    &passphrase_len);
@@ -358,14 +356,13 @@ qemuMonitorSendDiskPassphrase(qemuMonitorPtr mon,
 }
 
 int
-qemuMonitorTextStartCPUs(qemuMonitorPtr mon,
-                         virConnectPtr conn)
+qemuMonitorTextStartCPUs(qemuMonitorPtr mon)
 {
     char *reply;
 
     if (qemuMonitorTextCommandWithHandler(mon, "cont",
                                           qemuMonitorSendDiskPassphrase,
-                                          conn,
+                                          NULL,
                                           -1, &reply) < 0)
         return -1;
 
diff --git a/src/qemu/qemu_monitor_text.h b/src/qemu/qemu_monitor_text.h
index 86f43e7c55..d57bdbc55f 100644
--- a/src/qemu/qemu_monitor_text.h
+++ b/src/qemu/qemu_monitor_text.h
@@ -39,8 +39,7 @@ int qemuMonitorTextCommandWithFd(qemuMonitorPtr mon,
                                  int scm_fd,
                                  char **reply);
 
-int qemuMonitorTextStartCPUs(qemuMonitorPtr mon,
-                             virConnectPtr conn);
+int qemuMonitorTextStartCPUs(qemuMonitorPtr mon);
 int qemuMonitorTextStopCPUs(qemuMonitorPtr mon);
 int qemuMonitorTextGetStatus(qemuMonitorPtr mon,
                              bool *running,
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 10211de871..d0a25cecb9 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -447,13 +447,13 @@ qemuProcessGetVolumeQcowPassphrase(virConnectPtr conn,
 
 static int
 qemuProcessFindVolumeQcowPassphrase(qemuMonitorPtr mon ATTRIBUTE_UNUSED,
-                                    virConnectPtr conn,
                                     virDomainObjPtr vm,
                                     const char *path,
                                     char **secretRet,
                                     size_t *secretLen,
                                     void *opaque ATTRIBUTE_UNUSED)
 {
+    virConnectPtr conn = NULL;
     virDomainDiskDefPtr disk;
     int ret = -1;
 
@@ -465,9 +465,11 @@ qemuProcessFindVolumeQcowPassphrase(qemuMonitorPtr mon ATTRIBUTE_UNUSED,
         goto cleanup;
     }
 
+    conn = virGetConnectSecret();
     ret = qemuProcessGetVolumeQcowPassphrase(conn, disk, secretRet, secretLen);
 
  cleanup:
+    virObjectUnref(conn);
     virObjectUnlock(vm);
     return ret;
 }
@@ -565,7 +567,7 @@ qemuProcessFakeReboot(void *opaque)
     if (virDomainObjGetState(vm, NULL) == VIR_DOMAIN_CRASHED)
         reason = VIR_DOMAIN_RUNNING_CRASHED;
 
-    if (qemuProcessStartCPUs(driver, vm, NULL,
+    if (qemuProcessStartCPUs(driver, vm,
                              reason,
                              QEMU_ASYNC_JOB_NONE) < 0) {
         if (virGetLastError() == NULL)
@@ -2854,7 +2856,7 @@ qemuProcessPrepareMonitorChr(virDomainChrSourceDefPtr monConfig,
  */
 int
 qemuProcessStartCPUs(virQEMUDriverPtr driver, virDomainObjPtr vm,
-                     virConnectPtr conn, virDomainRunningReason reason,
+                     virDomainRunningReason reason,
                      qemuDomainAsyncJob asyncJob)
 {
     int ret = -1;
@@ -2879,7 +2881,7 @@ qemuProcessStartCPUs(virQEMUDriverPtr driver, virDomainObjPtr vm,
     if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
         goto release;
 
-    ret = qemuMonitorStartCPUs(priv->mon, conn);
+    ret = qemuMonitorStartCPUs(priv->mon);
     if (qemuDomainObjExitMonitor(driver, vm) < 0)
         ret = -1;
 
@@ -3040,7 +3042,6 @@ qemuProcessUpdateState(virQEMUDriverPtr driver, virDomainObjPtr vm)
 static int
 qemuProcessRecoverMigrationIn(virQEMUDriverPtr driver,
                               virDomainObjPtr vm,
-                              virConnectPtr conn,
                               qemuMigrationJobPhase phase,
                               virDomainState state,
                               int reason)
@@ -3072,7 +3073,7 @@ qemuProcessRecoverMigrationIn(virQEMUDriverPtr driver,
          * and hope we are all set */
         VIR_DEBUG("Incoming migration finished, resuming domain %s",
                   vm->def->name);
-        if (qemuProcessStartCPUs(driver, vm, conn,
+        if (qemuProcessStartCPUs(driver, vm,
                                  VIR_DOMAIN_RUNNING_UNPAUSED,
                                  QEMU_ASYNC_JOB_NONE) < 0) {
             VIR_WARN("Could not resume domain %s", vm->def->name);
@@ -3099,7 +3100,6 @@ qemuProcessRecoverMigrationIn(virQEMUDriverPtr driver,
 static int
 qemuProcessRecoverMigrationOut(virQEMUDriverPtr driver,
                                virDomainObjPtr vm,
-                               virConnectPtr conn,
                                qemuMigrationJobPhase phase,
                                virDomainState state,
                                int reason,
@@ -3179,7 +3179,7 @@ qemuProcessRecoverMigrationOut(virQEMUDriverPtr driver,
     if (state == VIR_DOMAIN_PAUSED &&
         (reason == VIR_DOMAIN_PAUSED_MIGRATION ||
          reason == VIR_DOMAIN_PAUSED_UNKNOWN)) {
-        if (qemuProcessStartCPUs(driver, vm, conn,
+        if (qemuProcessStartCPUs(driver, vm,
                                  VIR_DOMAIN_RUNNING_UNPAUSED,
                                  QEMU_ASYNC_JOB_NONE) < 0) {
             VIR_WARN("Could not resume domain %s", vm->def->name);
@@ -3194,7 +3194,6 @@ qemuProcessRecoverMigrationOut(virQEMUDriverPtr driver,
 static int
 qemuProcessRecoverJob(virQEMUDriverPtr driver,
                       virDomainObjPtr vm,
-                      virConnectPtr conn,
                       const struct qemuDomainJobObj *job,
                       unsigned int *stopFlags)
 {
@@ -3206,13 +3205,13 @@ qemuProcessRecoverJob(virQEMUDriverPtr driver,
 
     switch (job->asyncJob) {
     case QEMU_ASYNC_JOB_MIGRATION_OUT:
-        if (qemuProcessRecoverMigrationOut(driver, vm, conn, job->phase,
+        if (qemuProcessRecoverMigrationOut(driver, vm, job->phase,
                                            state, reason, stopFlags) < 0)
             return -1;
         break;
 
     case QEMU_ASYNC_JOB_MIGRATION_IN:
-        if (qemuProcessRecoverMigrationIn(driver, vm, conn, job->phase,
+        if (qemuProcessRecoverMigrationIn(driver, vm, job->phase,
                                           state, reason) < 0)
             return -1;
         break;
@@ -3237,7 +3236,7 @@ qemuProcessRecoverJob(virQEMUDriverPtr driver,
              (reason == VIR_DOMAIN_PAUSED_SNAPSHOT ||
               reason == VIR_DOMAIN_PAUSED_MIGRATION)) ||
             reason == VIR_DOMAIN_PAUSED_UNKNOWN)) {
-            if (qemuProcessStartCPUs(driver, vm, conn,
+            if (qemuProcessStartCPUs(driver, vm,
                                      VIR_DOMAIN_RUNNING_UNPAUSED,
                                      QEMU_ASYNC_JOB_NONE) < 0) {
                 VIR_WARN("Could not resume domain '%s' after migration to file",
@@ -6260,8 +6259,7 @@ qemuProcessRefreshState(virQEMUDriverPtr driver,
  * Finish starting a new domain.
  */
 int
-qemuProcessFinishStartup(virConnectPtr conn,
-                         virQEMUDriverPtr driver,
+qemuProcessFinishStartup(virQEMUDriverPtr driver,
                          virDomainObjPtr vm,
                          qemuDomainAsyncJob asyncJob,
                          bool startCPUs,
@@ -6272,7 +6270,7 @@ qemuProcessFinishStartup(virConnectPtr conn,
 
     if (startCPUs) {
         VIR_DEBUG("Starting domain CPUs");
-        if (qemuProcessStartCPUs(driver, vm, conn,
+        if (qemuProcessStartCPUs(driver, vm,
                                  VIR_DOMAIN_RUNNING_BOOTED,
                                  asyncJob) < 0) {
             if (!virGetLastError())
@@ -6366,7 +6364,7 @@ qemuProcessStart(virConnectPtr conn,
         qemuMigrationRunIncoming(driver, vm, incoming->deferredURI, asyncJob) < 0)
         goto stop;
 
-    if (qemuProcessFinishStartup(conn, driver, vm, asyncJob,
+    if (qemuProcessFinishStartup(driver, vm, asyncJob,
                                  !(flags & VIR_QEMU_PROCESS_START_PAUSED),
                                  incoming ?
                                  VIR_DOMAIN_PAUSED_MIGRATION :
@@ -7470,7 +7468,7 @@ qemuProcessReconnect(void *opaque)
     if (qemuProcessRefreshBalloonState(driver, obj, QEMU_ASYNC_JOB_NONE) < 0)
         goto error;
 
-    if (qemuProcessRecoverJob(driver, obj, conn, &oldjob, &stopFlags) < 0)
+    if (qemuProcessRecoverJob(driver, obj, &oldjob, &stopFlags) < 0)
         goto error;
 
     if (qemuProcessUpdateDevices(driver, obj) < 0)
diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h
index 8d210282f8..42f92eb458 100644
--- a/src/qemu/qemu_process.h
+++ b/src/qemu/qemu_process.h
@@ -30,7 +30,6 @@ int qemuProcessPrepareMonitorChr(virDomainChrSourceDefPtr monConfig,
 
 int qemuProcessStartCPUs(virQEMUDriverPtr driver,
                          virDomainObjPtr vm,
-                         virConnectPtr conn,
                          virDomainRunningReason reason,
                          qemuDomainAsyncJob asyncJob);
 int qemuProcessStopCPUs(virQEMUDriverPtr driver,
@@ -126,8 +125,7 @@ int qemuProcessLaunch(virConnectPtr conn,
                       virNetDevVPortProfileOp vmop,
                       unsigned int flags);
 
-int qemuProcessFinishStartup(virConnectPtr conn,
-                             virQEMUDriverPtr driver,
+int qemuProcessFinishStartup(virQEMUDriverPtr driver,
                              virDomainObjPtr vm,
                              qemuDomainAsyncJob asyncJob,
                              bool startCPUs,
diff --git a/tests/qemumonitorjsontest.c b/tests/qemumonitorjsontest.c
index 1eeefbce9b..908ec3a3c8 100644
--- a/tests/qemumonitorjsontest.c
+++ b/tests/qemumonitorjsontest.c
@@ -1238,7 +1238,7 @@ testQemuMonitorJSONCPU(const void *data)
         goto cleanup;
     }
 
-    if (qemuMonitorJSONStartCPUs(qemuMonitorTestGetMonitor(test), NULL) < 0)
+    if (qemuMonitorJSONStartCPUs(qemuMonitorTestGetMonitor(test)) < 0)
         goto cleanup;
 
     if (qemuMonitorGetStatus(qemuMonitorTestGetMonitor(test),
-- 
2.14.3

-- 
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list