Commit 6d1ef008 authored by Scott Wittenburg's avatar Scott Wittenburg Committed by Todd Gamblin

WIP: Automate the building of the tutorial container

Similar to how it was originally done here:

    https://github.com/scottwittenburg/sc2019-tutorial-pipeline
parent 1a705a90
pipeline-job:
  image:
    name: spack/ubuntu-bionic
    entrypoint: ['']
  tags:
    - spack-kube
  before_script:
    - git clone ${SPACK_REPO} --branch ${SPACK_REF}
    - . "./spack/share/spack/setup-env.sh"
  script:
    - spack ci start
        --spack-repo ${SPACK_REPO}
        --spack-ref ${SPACK_REF}
        --downstream-repo "${DOWNSTREAM_CI_REPO}"
        --branch-name "${CI_COMMIT_REF_NAME}"
        --commit-sha "${CI_COMMIT_SHA}"
  after_script:
    - rm -rf "./spack"
# Overview
At a high level, this repository illustrates a custom workflow designed to assist in iteratively building a set of spack packages, pushing package binaries to a mirror, and then automatically building a Docker image containing the resulting binary packages. The workflow proceeds as follows:
1. Developer makes changes to the spack stack (environment) in this repository, and pushes a branch against which a PR is created.
2. Gitlab CI/CD notices the change and triggers a pipeline that builds the entire stack of packages and reports a status check back to Github.
3. Developer can make more changes and push to the PR branch as many times as necessary, triggering new pipelines each time.
4. When the developer is satisfied with the stack/environment and all the packages are successfully built, the developer merges the PR into the master branch of this repo.
5. The merge to master triggers an automated DockerHub build which copies the binary packages (along with ancillary resources) into the container and publishes the resulting container.
## Moving parts
This document describes the pieces involved in this workflow and how those pieces fit together, including how the new spack automated pipeline workflow is used in the process.
This custom workflow involves this Github repository, a Gitlab CI/CD repository (premium license needed for "CI/CD only" repository) with runners configured, an external spack mirror to hold the binaries (in our case hosted as an AWS S3 bucket), and a DockerHub repository for building the final container.
## Github repo (this repository)
This repository contains a spack environment with a set of packages to be built into a Docker container. It also contains a `.gitlab-ci.yml` file to be used by a Gitlab CI/CD repository (for automated pipeline testing), as well as the Docker resources needed to build both the container used by the pipeline to build packages and the final output container with all the binaries from the spack environment.
The simple `.gitlab-ci.yml` file in this repo describes a single job which is used to generate the full workload of jobs for the pipeline. Because the runner we have targeted with the `spack-kube` tag does not have the version of spack we need, we first clone and activate the version we do need. Note that, as described below, the presence of the `SPACK_REPO` and `SPACK_REF` environment variables at the time the jobs are generated (`spack ci generate`) will cause the generated jobs to be run with the same custom version of spack.
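To sanity-check changes before pushing a branch, the same custom spack can be cloned and the environment activated locally. A minimal sketch follows; the repo URL and branch are placeholders for whatever `SPACK_REPO` and `SPACK_REF` point to in CI:

    # clone and activate the same spack the pipeline will use (placeholder URL/branch)
    $ git clone https://github.com/spack/spack.git --branch develop
    $ . ./spack/share/spack/setup-env.sh

    # activate the environment in this repository and concretize it to check spack.yaml
    $ spack env activate .
    $ spack concretize -f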
## Gitlab CI/CD repo
Create a CI/CD only repository in Gitlab and link it to the Github repository. Some things that may need to be configured include conditions under which pipelines should be run (PRs, push to PR branch, etc...) and other pipeline settings (clone strategy, job timeouts, and job variables).
When creating the CI/CD only repository, you can choose what kinds of events on the Github repository should run pipelines.
See the spack pipeline [documentation](https://github.com/scottwittenburg/spack/blob/add-spack-ci-command/lib/spack/docs/pipelines.rst#environment-variables-affecting-pipeline-operation) for more details on the job environment variables that may need to be set, but a brief summary of some common environment variables follows.
1. `SPACK_REPO` (optional) URL of a custom spack to clone; needed if a custom spack should be used to generate and run the pipeline jobs
2. `SPACK_REF` (optional) Branch of the custom spack to check out; needed together with `SPACK_REPO`
3. `SPACK_SIGNING_KEY` Required to sign buildcache entries (package binaries) after they are built
4. `AWS_ACCESS_KEY_ID` (optional) Needed to authenticate to S3 mirror
5. `AWS_SECRET_ACCESS_KEY` (optional) Needed to authenticate to S3 mirror
6. `S3_ENDPOINT_URL` (optional) Needed if targeting an S3 mirror which is NOT hosted on AWS
7. `CDASH_AUTH_TOKEN` (optional) Needed if reporting build results to a CDash instance
8. `DOWNSTREAM_CI_REPO` Needed until Gitlab child pipelines are implemented. This is the repo URL where the generated workload should be pushed; in many cases, pushing to the same repo is a workable approach.
Because we want to use a custom spack (one other than whatever the runners may already have available), we need to add the environment variables listed above. The presence of those variables at job generation time (`spack ci generate`) results in that custom spack being cloned as part of the `before_script` of each generated pipeline job.
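For illustration only, the job variables might be entered in the Gitlab CI/CD settings along these lines (all values below are placeholders, not the real project settings):

    SPACK_REPO=https://github.com/<fork>/spack.git      # custom spack to clone
    SPACK_REF=<branch-with-spack-ci-support>            # branch of that spack to check out
    SPACK_SIGNING_KEY=<base64 output of the keyhelp export script>
    AWS_ACCESS_KEY_ID=<access key id for the S3 mirror>
    AWS_SECRET_ACCESS_KEY=<secret key for the S3 mirror>
    DOWNSTREAM_CI_REPO=https://<user>:<token>@<gitlab-host>/<group>/<project>.git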
## Spack mirror
This workflow makes use of a public mirror hosted on AWS S3. The automated pipeline pushes binaries to the mirror as packages are successfully built, and then the auto-build on DockerHub pulls those and copies them into the image for publishing. To authenticate with the S3 bucket, authentication credentials should be stored in CI variables as described in the section above.
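For reference, a local spack can be pointed at the same mirror to confirm that binaries landed there. A rough sketch, assuming the mirror URL from `spack.yaml` and credentials in the environment:

    # register the S3 mirror used by the pipeline (URL taken from spack.yaml)
    $ spack mirror add cloud_mirror s3://spack-public/mirror

    # credentials are read from the environment if the bucket is not public
    $ export AWS_ACCESS_KEY_ID=<access key> AWS_SECRET_ACCESS_KEY=<secret key>

    # list the binary packages currently available in the build cache
    $ spack buildcache list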
## DockerHub repository
The DockerHub repository is configured to automatically build a container from the `Dockerfile` and build context in the `docker/tutorial-image/` directory of this repository. The build context also contains a custom build hook used by DockerHub to create the container in such a way that a few environment variables are made available as `--build-args` when the container is built.
The variables are `REMOTE_BUILDCACHE_URL` (the publicly accessible mirror URL where the binaries were pushed by the Gitlab CI/CD pipeline), as well as `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` (needed to authenticate to the binary mirror, which is an S3 bucket).
FROM ubuntu:16.04
RUN mkdir /mirror
COPY build_cache /mirror/build_cache
COPY public.key /mirror/public.key
#COPY sc-tutorial-sysconfig/packages.yaml /etc/spack/packages.yaml
#RUN chmod -R go+r /etc/spack
RUN chmod -R go+r /mirror
RUN apt-get -yqq update && apt-get -yqq install ca-certificates
RUN apt-get -yqq update \
&& apt-get -yqq install \
ca-certificates curl g++ \
gcc-4.7 g++-4.7 gfortran-4.7 \
gcc gfortran git gnupg2 \
iproute2 make \
openssh-server python python-pip tcl \
clang clang-3.7 emacs unzip \
autoconf
# copy in scripts to test container
COPY tutorial-test.sh /tutorial/.test/tutorial-test.sh
RUN chmod -R go+rx /tutorial/.test/tutorial-test.sh
RUN useradd -ms /bin/bash spack
USER spack
WORKDIR /home/spack
CMD ["bash"]
# Introduction
This document describes the container versioning scheme and the process for
updating the container.
# Steps
1. (optional) Use the shell script in the `keyhelp` directory to take a key from a
keychain and export both the public and private parts to files used in the
pipeline. The `public.key` will be stored in the container so users can
verify the packages in the build cache. The `private.key` output file will
contain the text that should be copy/pasted into the CI environment variable
`SPACK_SIGNING_KEY` so pipeline jobs will be able to both sign and verify
packages (see the sketch after this list). Given an export path and a key
name, run the script from the root of this repository:

    $ ./keyhelp/export_pipeline_key.sh ./docker "name-of-key-to-export"

The `public.key` can be committed to the repository and kept around for as
long as the signing key (private) will continue to be used to sign the
tutorial packages.
1. Create a new mirror (this is required until the sync process between the
mirror and the container build cache is controlled by/limited to the spack
environment in which it runs)
1. Create a new buildgroup in CDash to track the builds for the new version
(done by editing the `spack.yaml` and changing the `build-group` property)
1. Update the set of packages, bootstrapped compilers, etc as desired
1. Create a PR on this repository and iterate on the `spack.yaml` until all
the packages build.
1. Merge the PR into master and tag the result; the tag name will later be
picked up by DockerHub and used to tag the container it builds from the tagged
repository.
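For reference, a sketch of how a pipeline job with `SPACK_SIGNING_KEY` set might import the exported keypair. The generated jobs handle this themselves; this is shown only to illustrate what the variable contains:

    # the variable holds the base64-encoded public+private keypair produced by
    # keyhelp/export_pipeline_key.sh; decode and import it so the job can sign
    # and verify buildcache entries
    $ echo "${SPACK_SIGNING_KEY}" | base64 -d | gpg2 --import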
FROM ubuntu:18.04
ARG REMOTE_BUILDCACHE_URL="s3://spack-tutorial/mirror/build_cache"
ARG SPACK_PUBLIC_GPG_KEY=""
ARG AWS_ACCESS_KEY_ID="None"
ARG AWS_SECRET_ACCESS_KEY="None"
RUN apt-get update -y && \
apt-get upgrade -y && \
apt-get install -y --no-install-recommends \
autoconf \
ca-certificates \
curl \
clang \
clang-3.7 \
emacs \
g++ \
g++-4.7 \
gcc \
gcc-4.7 \
gfortran \
gfortran-4.7 \
git \
gnupg2 \
iproute2 \
make \
openssh-server \
python3 \
python3-pip \
tcl \
unzip \
vim \
wget && \
python3 -m pip install --upgrade pip setuptools wheel && \
python3 -m pip install --upgrade gnureadline && \
python3 -m pip install --upgrade awscli && \
apt-get autoremove --purge && \
apt-get clean && \
ln -s /usr/bin/gpg /usr/bin/gpg2
RUN mkdir -p /mirror/build_cache && \
aws s3 sync ${REMOTE_BUILDCACHE_URL} /mirror/build_cache
COPY /public.key /mirror/public.key
COPY /packages.yaml /etc/spack/packages.yaml
COPY /tutorial-test.sh /tutorial/.test/tutorial-test.sh
### TODO: Find another way to set perms without increasing the image download size
RUN useradd -ms /bin/bash spack && \
chmod -R go+r /mirror && \
chmod -R go+r /etc/spack && \
chmod go+rx /tutorial/.test/tutorial-test.sh
USER spack
WORKDIR /home/spack
CMD ["bash"]
#!/bin/bash
docker build \
--build-arg REMOTE_BUILDCACHE_URL=${REMOTE_BUILDCACHE_URL} \
--build-arg AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} \
--build-arg AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} \
-f ${DOCKERFILE_PATH} \
-t ${IMAGE_NAME} .
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQENBF1IgqcBCADqSIBM0TT4+6Acv6SUpQ2l1Ql+UVRtJ74VGFOw+8I8aBWcBryB
wNsS/Drxn9M9rX8il2aGtAmwc1dhTh0JvdZO7KqG8Q4vvWOytdLnGSE61LV4147q
S/dJiYH2DCvhMKpOByIsEiuoTrUHzd1EQBnEPSwAQV8oWPrc1++f3iYmRemsOBCT
BldAu7Y5RwjI3qQ6GazoCF5rd1uyiMYrpT4amEKFE91VRe+IG8XfEaSTapOc/hO3
Sw4fzPelA2qD12I+JMj56vM0fQy3TXD5qngIb+leb2jGI+0bTz8RGS0xSMYVvftA
upzQPaQIfzijVBt3tFSayx/NXKR0p+EuCqGBABEBAAG0MFNwYWNrIEJ1aWxkIFBp
cGVsaW5lIChEZW1vIEtleSkgPGtleUBzcGFjay5kZW1vPokBTgQTAQgAOBYhBDHI
4nh6FErErdiO0pX4aBGV4jnYBQJdSIKnAhsvBQsJCAcCBhUKCQgLAgQWAgMBAh4B
AheAAAoJEJX4aBGV4jnYpf0IAJDYEjpm0h1pNswTvmnEhgNVbojCGRfAts7F5uf8
IFXGafKQsekMWZh0Ig0YXVn72jsOuNK/+keErMfXM3DFNTq0Ki7mcFedR9r5EfLf
4YW2n6mphsfMgsg8NwKVLFYWyhQQ4OzhdydPxkGVhEebHwfHNQ3aIcqbFmzkhxnX
CIYh2Flf3T306tKX4lXbhsXKG1L/bLtDiFRaMCBp66HGZ8u9Dbyy/W8aDwyx4duD
MG+y2OrhOf+zEu3ZPFyc/jsjmfnUtIfQVyRajh/8vh+i9fkvFlLaOQittNElt3z1
8+ybGjE9qWY/mvR2ZqnP8SVkGvxSpBVfVXiFFdepvuPAcLu5AQ0EXUiCpwEIAJ2s
npNBAVocDUSdOF/Z/eCRvy3epuYm5f1Ge1ao9K2qWYno2FatnsYxK4qqB5yGRkfj
sEzAGP8JtJvqDSuB5Xk7CIjRNOwoSB3hqvmxWh2h+HsITUhMl11FZ0Cllz+etXcK
APz2ZHSKnA3R8uf4JzIr1cHLS+gDBoj8NgBCZhcyva2b5UC///FLm1+/Lpvekd0U
n7B524hbXhFUG+UMfHO/U1c4TvCMt7RGMoWUtRzfO6XB1VQCwWJBVcVGl8Yy59Zk
3K76VbFWQWOq6fRBE0xHBAga7pOgCc9qrb+FGl1IHUT8aV8CzkxckHlNb3PlntmE
lXZLPcGFWaPtGtuIJVsAEQEAAYkCbAQYAQgAIBYhBDHI4nh6FErErdiO0pX4aBGV
4jnYBQJdSIKnAhsuAUAJEJX4aBGV4jnYwHQgBBkBCAAdFiEEneR3pKqi9Rnivv07
CYCNVr37XP0FAl1IgqcACgkQCYCNVr37XP13RQf/Ttxidgo9upF8jxrWnT5YhM6D
ozzGWzqE+/KDBX+o4f33o6uzozjESRXQUKdclC9ftDJQ84lFTMs3Z+/12ZDqCV2k
2qf0VfXg4e5xMq4tt6hojXUeYSfeGZXNU9LzjURCcMD+amIKjVztFg4kl3KHW3Pi
/aPTr4xWWgy2tZ1FDEuA5J6AZiKKJSVeoSPOGANouPqm4fNj273XFXQepIhQ5wve
4No0abxfXcLt5Yp3y06rNCBC9QdC++19N5+ajn2z9Qd2ZwztPb0mNuqHAok4vrlE
1c4WBWk93Nfy9fKImalGENpPDz0td2H9pNC9IafOWltGSWSINRrU1GeaNXS/uAOT
CADjcDN+emLbDTTReW4FLoQ0mPJ0tACgszGW50PtncTMPSj4uxSktQPWWk41oD9q
gpXm1Vgto4GvPWYs/ewR6Kyd8K0YkBxbRFyYOmycu3/zzYJnry+EHdvtQspwUDPg
QlI/avDrncERzICsbd86Jz0CMY4kzpg5v9dt/N6WnHlSk/S+vv4pPUDSz26Q4Ehh
iDvDavLGyzKSlVzWQ4bzzlQxXbDL6TZyVAQ4DBI4sI+WGtLbfD51EI5G9BfmDsbw
XJ0Dt2yEwRfDUx/lYbAMvhUnWEu2DSpYdJb8GG0GKTGqU4YpvO1JgTCsLSLIAHfT
tQMw04Gs+kORRNbggsdTD4sR
=N5Wp
-----END PGP PUBLIC KEY BLOCK-----
#!/bin/bash
# Default, no-argument behavior is to export all keys and write
# into the current directory. Otherwise, 1) export path and 2) the
# name of the key can be specified, in that order.
#
# $ ./export_pipeline_key.sh </path/to/output_files> <name of key to export>
#
# Key name can be omitted, or both arguments can be omitted, but don't provide
# only key name, as it will be treated as the output path.
#
EXPORT_PATH="$1"
EXPORT_KEY_NAME="$2"
if [ -z "${EXPORT_PATH}" ]; then
EXPORT_PATH="./"
fi
# To put in container for users to verify signed packages
gpg2 --export \
--armor "${EXPORT_KEY_NAME}" \
> "${EXPORT_PATH}/public.key"
# Contents of output file "private.key" should be put in CI environment
# variable SPACK_SIGNING_KEY, so pipeline jobs can import and use it
# for both signing and verifying packages.
(
gpg2 --export --armor "${EXPORT_KEY_NAME}" \
&& gpg2 --export-secret-keys --armor "${EXPORT_KEY_NAME}"
) | base64 | tr -d '\n' > "${EXPORT_PATH}/private.key"
chmod 600 "${EXPORT_PATH}/private.key"
spack:
  definitions:
    - bootstrapped_compilers:
        - "llvm@6.0.0 os=ubuntu18.04"
        - "gcc@6.5.0 os=ubuntu18.04"
        - "gcc@8.3.0 os=ubuntu18.04"
    - gcc_system_packages:
        - matrix:
            - - zlib
              - zlib@1.2.8
              - zlib@1.2.8 cppflags=-O3
              - tcl
              - tcl ^zlib@1.2.8 cppflags=-O3
              - hdf5
              - hdf5~mpi
              - hdf5+hl+mpi ^mpich
              - trilinos
              - trilinos +hdf5 ^hdf5+hl+mpi ^mpich
              - gcc@8.3.0
              - mpileaks
              - lmod
            - ['%gcc@7.4.0']
    - gcc_old_packages:
        - zlib%gcc@6.5.0
    - clang_packages:
        - matrix:
            - [zlib, tcl ^zlib@1.2.8]
            - ['%clang@6.0.0']
    - gcc_spack_built_packages:
        - matrix:
            - [netlib-scalapack]
            - [^mpich, ^openmpi]
            - [^openblas, ^netlib-lapack]
            - ['%gcc@8.3.0']
        - matrix:
            - [py-scipy^openblas, armadillo^openblas, netlib-lapack, openmpi, mpich, elpa^mpich]
            - ['%gcc@8.3.0']
  specs:
    - $gcc_system_packages
    - $gcc_old_packages
    - $clang_packages
    - $gcc_spack_built_packages
  mirrors:
    cloud_mirror: 's3://spack-public/mirror'
  gitlab-ci:
    bootstrap:
      - name: bootstrapped_compilers
        compiler-agnostic: true
    mappings:
      - match: [trilinos os=ubuntu18.04]
        runner-attributes:
          image:
            name: spack/ubuntu-bionic
            entrypoint: ['']
          tags: [spack-kube, r5.2xlarge]
          variables: {}
      - match: [os=ubuntu18.04]
        runner-attributes:
          image:
            name: spack/ubuntu-bionic
            entrypoint: ['']
          tags: [spack-kube]
          variables: {}
    final-stage-rebuild-index:
      tags: [spack-kube]
      image:
        name: spack/ubuntu-bionic
        entrypoint: ['']
    enable-debug-messages: True
  cdash:
    build-group: Spack Tutorial Container (Test)
    url: https://cdash.spack.io
    project: Spack Testing
    site: Spack Gitlab Cloud Infrastructure
  view: false
  config: {}
  modules:
    enable: []
  packages:
    all:
      target: [x86_64]
  repos: []
  upstreams: {}
  concretization: separately