Instead of starting from the minimal Ubuntu 18.04 base
image and installing all requirements at build time,
use a Docker image that has been specifically tailored
to building libvirt and thus already includes all
required packages.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
---
The pre-built images have been hand-crafted using the
build dependencies recorded in the libvirt-jenkins-ci
repository: of course that's not something that we want
to keep doing manually going forward, so figuring out a
sensible way to generate Dockerfiles and potentially
even Docker images automatically is pretty high on the
priority list.
.travis.yml | 75 ++---------------------------------------------------
1 file changed, 2 insertions(+), 73 deletions(-)
diff --git a/.travis.yml b/.travis.yml
index f62e8c6437..1b0d1e824b 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -10,7 +10,7 @@ matrix:
- services:
- docker
env:
- - IMAGE=ubuntu:18.04
+ - IMAGE="ubuntu-18"
- DISTCHECK_CONFIGURE_FLAGS="--with-init-script=systemd"
- compiler: clang
language: c
@@ -22,13 +22,11 @@ matrix:
script:
- docker run
- --privileged
-v $(pwd):/build
-w /build
-e VIR_TEST_DEBUG="$VIR_TEST_DEBUG"
- -e PACKAGES="$PACKAGES"
-e DISTCHECK_CONFIGURE_FLAGS="$DISTCHECK_CONFIGURE_FLAGS"
- "$IMAGE"
+ "libvirt/build:$IMAGE"
/bin/sh -xc "$LINUX_CMD"
git:
@@ -38,8 +36,6 @@ env:
global:
- VIR_TEST_DEBUG=1
- LINUX_CMD="
- apt-get update &&
- apt-get install -y \$PACKAGES &&
./autogen.sh &&
make -j3 &&
make -j3 syntax-check &&
@@ -67,73 +63,6 @@ env:
exit 1
)
"
- # Please keep this list sorted alphabetically
- - PACKAGES="
- augeas-tools
- autoconf
- automake
- autopoint
- bash-completion
- ccache
- dnsmasq-base
- dwarves
- ebtables
- gcc
- gettext
- git
- glusterfs-client
- libacl1-dev
- libapparmor-dev
- libattr1-dev
- libaudit-dev
- libavahi-client-dev
- libblkid-dev
- libc6-dev
- libcap-ng-dev
- libc-dev-bin
- libdbus-1-dev
- libdevmapper-dev
- libfuse-dev
- libgnutls28-dev
- libnetcf-dev
- libnl-3-dev
- libnl-route-3-dev
- libnuma-dev
- libopenwsman-dev
- libparted-dev
- libpcap-dev
- libpciaccess-dev
- librbd-dev
- libreadline-dev
- libsanlock-dev
- libsasl2-dev
- libselinux1-dev
- libssh2-1-dev
- libssh-dev
- libtirpc-dev
- libtool
- libudev-dev
- libxen-dev
- libxml2-dev
- libxml2-utils
- libyajl-dev
- lvm2
- make
- nfs-common
- open-iscsi
- parted
- patch
- perl
- pkgconf
- policykit-1
- qemu-utils
- radvd
- scrub
- sheepdog
- systemtap-sdt-dev
- xsltproc
- zfs-fuse
- "
notifications:
irc:
--
2.17.1
--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
On Tue, Jun 12, 2018 at 12:12:12PM +0200, Andrea Bolognani wrote:
> Instead of starting from the minimal Ubuntu 18.04 base
> image and installing all requirements at build time,
> use a Docker image that has been specifically tailored
> to building libvirt and thus already includes all
> required packages.
>
> Signed-off-by: Andrea Bolognani <abologna@redhat.com>
> ---
> The pre-built images have been hand-crafted using the
> build dependencies recorded in the libvirt-jenkins-ci
> repository: of course that's not something that we want
> to keep doing manually going forward, so figuring out a
> sensible way to generate Dockerfiles and potentially
> even Docker images automatically is pretty high on the
> priority list.

When first testing I produced a custom Ubuntu docker image
with not much effort. I was just creating a file in the
"libvirt-jenkins-ci" repo called "images/ubuntu-18.04.docker"
that contains

  FROM ubuntu:18.04
  RUN apt-get update
  ENV PACKAGES \
      ::PACKAGE-LIST:: \
  RUN apt-get -y install $PACKAGES
  RUN mkdir /build
  WORKDIR /build

::PACKAGE-LIST:: can be built by reading the
guests/vars/projects/libvirt.yml file, and then
expanding it based on guests/vars/mappings.yml

I hadn't written code for that bit, but it just
needs a short python script to read the two
yaml files and map the data sets.

I was only going to do packages for the libvirt.yml,
but we can expand to cover the other modules too
quite easily, as it's just taking the union of all
the project files.

Other distros are just the same but change the
name of the pkg manager command.
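The "short python script" mentioned above could look roughly like this. This is only a sketch: the two file names come from the thread, but the YAML structure, the `expand_packages` helper, and the sample data are assumptions for illustration (the data is inlined here rather than `yaml.safe_load()`ed from the actual repo files so the sketch stays self-contained).

```python
# Sketch of expanding the abstract package list from
# guests/vars/projects/libvirt.yml using the per-distro mappings in
# guests/vars/mappings.yml. Structure and data are hypothetical; the
# real files in libvirt-jenkins-ci may be shaped differently.

# Abstract package names, as libvirt.yml might list them.
libvirt_packages = ["autoconf", "gnutls", "libxml2", "yajl"]

# Abstract name -> per-distro package name, with "default" as fallback.
mappings = {
    "autoconf": {"default": "autoconf"},
    "gnutls": {"default": "gnutls-devel", "Ubuntu": "libgnutls28-dev"},
    "libxml2": {"default": "libxml2-devel", "Ubuntu": "libxml2-dev"},
    "yajl": {"default": "yajl-devel", "Ubuntu": "libyajl-dev"},
}

def expand_packages(packages, mappings, distro):
    """Map each abstract package to its distro-specific name,
    preferring the distro-specific entry over the default."""
    result = []
    for name in packages:
        entry = mappings.get(name, {})
        pkg = entry.get(distro, entry.get("default"))
        if pkg is not None:
            result.append(pkg)
    return sorted(result)

if __name__ == "__main__":
    # Emit the space-separated list suitable for ::PACKAGE-LIST::
    print(" ".join(expand_packages(libvirt_packages, mappings, "Ubuntu")))
```

Covering other projects would then just mean taking the union of several project files before expanding, as suggested above.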
> diff --git a/.travis.yml b/.travis.yml
> index f62e8c6437..1b0d1e824b 100644
> --- a/.travis.yml
> +++ b/.travis.yml
> @@ -10,7 +10,7 @@ matrix:
> - services:
> - docker
> env:
> - - IMAGE=ubuntu:18.04
> + - IMAGE="ubuntu-18"
> - DISTCHECK_CONFIGURE_FLAGS="--with-init-script=systemd"
> - compiler: clang
> language: c
> @@ -22,13 +22,11 @@ matrix:
>
> script:
> - docker run
> - --privileged
> -v $(pwd):/build
> -w /build
> -e VIR_TEST_DEBUG="$VIR_TEST_DEBUG"
> - -e PACKAGES="$PACKAGES"
> -e DISTCHECK_CONFIGURE_FLAGS="$DISTCHECK_CONFIGURE_FLAGS"
> - "$IMAGE"
> + "libvirt/build:$IMAGE"

This is a pretty alien approach to take for docker images.
This defines an image called 'libvirt/build' and then uses a
tag to identify different distros. The normal practice is
for tags to identify different versions of the same distro.
Using tags for completely different distros, fedora vs ubuntu,
on the same image name is not something I'd expect.

ie rather than

  Name: libvirt/build
  Tag: ubuntu-18.04

we should have

  Name: libvirt/ubuntu
  Tag: 18.04

Though perhaps make clear it is for CI, so

  Name: libvirt/ci-ubuntu
  Tag: 18.04

Annoyingly you can't use '/' in an image name to create a
multilevel namespace, so we have to include 'ci' in either
the image name or organization name.

Regards,
Daniel

-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
On Wed, 2018-06-13 at 11:32 +0100, Daniel P. Berrangé wrote:
> On Tue, Jun 12, 2018 at 12:12:12PM +0200, Andrea Bolognani wrote:
> > The pre-built images have been hand-crafted using the
> > build dependencies recorded in the libvirt-jenkins-ci
> > repository: of course that's not something that we want
> > to keep doing manually going forward, so figuring out a
> > sensible way to generate Dockerfiles and potentially
> > even Docker images automatically is pretty high on the
> > priority list.
>
> When first testing I produced a custom Ubuntu docker image
> with not much effort. I was just creating a file in the
> "libvirt-jenkins-ci" repo called "images/ubuntu-18.04.docker"
> that contains
>
>   FROM ubuntu:18.04
>   RUN apt-get update
>   ENV PACKAGES \
>       ::PACKAGE-LIST:: \
>   RUN apt-get -y install $PACKAGES

Pretty much exactly how I've created the images you can find on
Docker Hub, except for

>   RUN mkdir /build
>   WORKDIR /build

this bit, which AFAICT is entirely unnecessary.

> ::PACKAGE-LIST:: can be built by reading the
> guests/vars/projects/libvirt.yml file, and then
> expanding it based on guests/vars/mappings.yml
>
> I hadn't written code for that bit, but it just
> needs a short python script to read the two
> yaml files and map the data sets.

Yeah, same here: I just extracted the package list from the output
of a 'lcitool update' run this time around, but I was planning on
writing a tool to do that for me just like you mention.

The one thing I haven't quite figured out yet is where to store
the resulting Dockerfiles. If we committed them to some repository
we could take advantage of Docker Hub's autobuild support, which
would be pretty neat; on the other hand, being generated content,
they have no business being committed, plus it would be tricky to
ensure the generated files are always in sync with the source
mappings without introducing a bunch of scaffolding to the
libvirt-jenkins-ci repository.
Similarly, once we have a tool that can process the mappings and
spit out a flat list of packages, it would make sense to drop the
Ansible code that we already have for that, and load generated
files instead, but that would have the same drawbacks as above.

So right now I'm leaning towards leaving the Ansible part alone
and using the new tool once a month or so to manually generate
fresh images and upload them to Docker Hub, but I'd like to hear
what you think about the issue.

> I was only going to do packages for the libvirt.yml,
> but we can expand to cover the other modules too
> quite easily, as it's just taking the union of all
> the project files.

I don't think we can/want to do that.

The way build dependencies for projects are set up right now, we
expect to build eg. libvirt-glib against our own copy of libvirt
rather than the distro-provided one, so libvirt is *not* included
among the packages required by the libvirt-glib project.

So if we included build dependencies for all projects in the Docker
images, that would make them way bigger and you would still be
unable to build libvirt-glib or whatever on top of them. Perhaps we
can find some way out of this later, but I'd rather move one step
at a time instead of trying to solve all the things in one fell
swoop :)

> Other distros are just the same but change the
> name of the pkg manager command.

Yup.

[...]
> > env:
[...]
> > + - IMAGE="ubuntu-18"
> > script:
> > - docker run
[...]
> > + "libvirt/build:$IMAGE"
>
> This is a pretty alien approach to take for docker images.
> This defines an image called 'libvirt/build' and then uses a
> tag to identify different distros. The normal practice is
> for tags to identify different versions of the same distro.
> Using tags for completely different distros, fedora vs ubuntu,
> on the same image name is not something I'd expect.
> ie rather than
>
>   Name: libvirt/build
>   Tag: ubuntu-18.04
>
> we should have
>
>   Name: libvirt/ubuntu
>   Tag: 18.04

I don't mind having several images and using the tag only for the
version number, if that's something that will make the result look
less alien to Docker users; however, I think we should keep the
names consistent with what we use on our CentOS CI, so it would be
ubuntu:18 instead of ubuntu:18.04.

> Though perhaps make clear it is for CI, so
>
>   Name: libvirt/ci-ubuntu
>   Tag: 18.04

Just like for the guests you can create and manage with lcitool,
while these images will be primarily used for Travis CI they
should be usable by developers as well, which is why I picked the
name "build" instead of "ci" or "travis-ci" in the first place.

At the same time, I didn't use just the distribution name because
I didn't want to give the impression that by pulling them you
would get an OS with libvirt already installed.

I'm thinking something like libvirt/build-on-ubuntu:18 would do,
but perhaps other people can come up with something better.

> Annoyingly you can't use '/' in an image name to create a
> multilevel namespace, so we have to include 'ci' in either
> the image name or organization name.

We have already gotten hold of the libvirt organization name, and
we should definitely maintain control of it to avoid squatting.

-- 
Andrea Bolognani / Red Hat / Virtualization
On Wed, Jun 13, 2018 at 02:58:55PM +0200, Andrea Bolognani wrote:
> On Wed, 2018-06-13 at 11:32 +0100, Daniel P. Berrangé wrote:
> > On Tue, Jun 12, 2018 at 12:12:12PM +0200, Andrea Bolognani wrote:
> > > The pre-built images have been hand-crafted using the
> > > build dependencies recorded in the libvirt-jenkins-ci
> > > repository: of course that's not something that we want
> > > to keep doing manually going forward, so figuring out a
> > > sensible way to generate Dockerfiles and potentially
> > > even Docker images automatically is pretty high on the
> > > priority list.
> >
> > When first testing I produced a custom Ubuntu docker image
> > with not much effort. I was just creating a file in the
> > "libvirt-jenkins-ci" repo called "images/ubuntu-18.04.docker"
> > that contains
> >
> >   FROM ubuntu:18.04
> >   RUN apt-get update
> >   ENV PACKAGES \
> >       ::PACKAGE-LIST:: \
> >   RUN apt-get -y install $PACKAGES
>
> Pretty much exactly how I've created the images you can find on
> Docker Hub, except for
>
> >   RUN mkdir /build
> >   WORKDIR /build
>
> this bit, which AFAICT is entirely unnecessary.

WORKDIR /build avoids the need for the '-w /build' arg in the
.travis.yml docker run command. I think there's a slight plus to
having the workdir set automatically, avoiding the need for a -w
arg.

> > ::PACKAGE-LIST:: can be built by reading the
> > guests/vars/projects/libvirt.yml file, and then
> > expanding it based on guests/vars/mappings.yml
> >
> > I hadn't written code for that bit, but it just
> > needs a short python script to read the two
> > yaml files and map the data sets.
>
> Yeah, same here: I just extracted the package list from the output
> of a 'lcitool update' run this time around, but I was planning on
> writing a tool to do that for me just like you mention.
>
> The one thing I haven't quite figured out yet is where to store
> the resulting Dockerfiles.
> If we committed them to some repository
> we could take advantage of Docker Hub's autobuild support, which
> would be pretty neat; on the other hand, being generated content,
> they have no business being committed, plus it would be tricky to
> ensure the generated files are always in sync with the source
> mappings without introducing a bunch of scaffolding to the
> libvirt-jenkins-ci repository.

I think we should just have the dockerfile templates (ie with
the ::PACKAGE:: placeholder) in the libvirt-jenkins-ci repo.
We don't need to store the expanded dockerfile. Then we can have
a CI job somewhere that automatically rebuilds & uploads new
docker images whenever a change is pushed to libvirt-jenkins-ci.

> > I was only going to do packages for the libvirt.yml,
> > but we can expand to cover the other modules too
> > quite easily, as it's just taking the union of all
> > the project files.
>
> I don't think we can/want to do that.
>
> The way build dependencies for projects are set up right now, we
> expect to build eg. libvirt-glib against our own copy of libvirt
> rather than the distro-provided one, so libvirt is *not* included
> among the packages required by the libvirt-glib project.
>
> So if we included build dependencies for all projects in the Docker
> images, that would make them way bigger and you would still be
> unable to build libvirt-glib or whatever on top of them. Perhaps we
> can find some way out of this later, but I'd rather move one step
> at a time instead of trying to solve all the things in one fell
> swoop :)

Yeah, let's only focus on core libvirt right now until we have an
immediate need for more.
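The template-expansion step proposed above (a Dockerfile template with a placeholder, expanded when the image is built) could be sketched as follows. The placeholder name comes from the thread; the template text is simplified from the one quoted earlier, and the `render_dockerfile` helper is a made-up name for illustration.

```python
# Sketch of expanding a Dockerfile template by substituting the
# package-list placeholder, as proposed above. The template below is a
# simplified version of the one quoted in the thread (the package list
# is quoted on one line instead of using backslash continuations).

TEMPLATE = """\
FROM ubuntu:18.04
RUN apt-get update
ENV PACKAGES "::PACKAGE-LIST::"
RUN apt-get -y install $PACKAGES
"""

def render_dockerfile(template, packages):
    """Substitute the placeholder with a sorted, space-separated
    package list."""
    return template.replace("::PACKAGE-LIST::", " ".join(sorted(packages)))

if __name__ == "__main__":
    print(render_dockerfile(TEMPLATE, ["gcc", "make", "libxml2-dev"]))
```

A CI job along those lines would then feed the rendered Dockerfile to `docker build` and push the result whenever libvirt-jenkins-ci changes.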
> > ie rather than
> >
> >   Name: libvirt/build
> >   Tag: ubuntu-18.04
> >
> > we should have
> >
> >   Name: libvirt/ubuntu
> >   Tag: 18.04
>
> I don't mind having several images and using the tag only for the
> version number, if that's something that will make the result look
> less alien to Docker users; however, I think we should keep the
> names consistent with what we use on our CentOS CI, so it would be
> ubuntu:18 instead of ubuntu:18.04.

NB that is ambiguous, as Ubuntu does two releases a year, 18.04 and
18.10.

> > Though perhaps make clear it is for CI, so
> >
> >   Name: libvirt/ci-ubuntu
> >   Tag: 18.04
>
> Just like for the guests you can create and manage with lcitool,
> while these images will be primarily used for Travis CI they
> should be usable by developers as well, which is why I picked the
> name "build" instead of "ci" or "travis-ci" in the first place.

Sure, 'libvirt/build-' is fine.

> At the same time, I didn't use just the distribution name because
> I didn't want to give the impression that by pulling them you
> would get an OS with libvirt already installed.

Agreed.

Regards,
Daniel

-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
On Wed, 2018-06-13 at 14:28 +0100, Daniel P. Berrangé wrote:
> On Wed, Jun 13, 2018 at 02:58:55PM +0200, Andrea Bolognani wrote:
> > Pretty much exactly how I've created the images you can find on
> > Docker Hub, except for
> >
> > >   RUN mkdir /build
> > >   WORKDIR /build
> >
> > this bit, which AFAICT is entirely unnecessary.
>
> WORKDIR /build avoids the need for the '-w /build' arg in
> the .travis.yml docker run command. I think there's a slight
> plus to having the workdir set automatically, avoiding the
> need for a -w arg.

The images we're creating are basically generic OS images with a
few extra packages baked in, so tying them to *one* of the things
we're going to use them for (although arguably the main one) this
way is wrong IMHO: providing the -w argument at run time is much
cleaner.

Put it another way: if for whatever reason we decided to change
the working directory at some point in the future, with your
approach we would have to post patches to two different projects
rather than a single one.

> > The one thing I haven't quite figured out yet is where to store
> > the resulting Dockerfiles. If we committed them to some repository
> > we could take advantage of Docker Hub's autobuild support, which
> > would be pretty neat; on the other hand, being generated content,
> > they have no business being committed, plus it would be tricky to
> > ensure the generated files are always in sync with the source
> > mappings without introducing a bunch of scaffolding to the
> > libvirt-jenkins-ci repository.
>
> I think we should just have the dockerfile templates (ie with
> the ::PACKAGE:: placeholder) in the libvirt-jenkins-ci repo.
> We don't need to store the expanded dockerfile. Then we can have
> a CI job somewhere that automatically rebuilds & uploads new
> docker images whenever a change is pushed to libvirt-jenkins-ci.
That means rolling our own autobuild pipeline instead of taking
advantage of Docker Hub's: we'd have to make sure we don't kick
off builds unless the list of packages has actually changed, have
a separate Docker Hub account with write permissions to the
organization, actually run those builds somewhere... Not saying
it's totally out of the question, just pointing out the hurdles
and wondering if that's really the best way forward.

What about generating the Dockerfiles manually and committing
them to libvirt.git every now and then as the need arises? That's
basically what we're doing at the moment to keep the list of
packages in .travis.yml synced with libvirt-jenkins-ci.git, and
while not perfect it's been serving us reasonably well so far...
We could then hook up Docker Hub to perform container builds when
the Dockerfiles in libvirt.git change.

> > I don't mind having several images and using the tag only for the
> > version number, if that's something that will make the result look
> > less alien to Docker users; however, I think we should keep the
> > names consistent with what we use on our CentOS CI, so it would be
> > ubuntu:18 instead of ubuntu:18.04.
>
> NB that is ambiguous as Ubuntu does two releases a year, 18.04 and
> 18.10

It's okay for us, because we only care about LTS Ubuntu releases
anyway, so there's no ambiguity. We already use that naming scheme
in the libvirt-jenkins-ci repository and I'd really rather remain
consistent.

-- 
Andrea Bolognani / Red Hat / Virtualization