Commit 952c43f3 authored by Scott Wittenburg's avatar Scott Wittenburg Committed by Todd Gamblin

More doc fixes

parent bf53c943
This repository is used as a component in the automated building of the Spack tutorial containers. However, it also serves to illustrate a custom workflow (based on spack ci/pipeline infrastructure) designed to assist in iteratively building a set of spack packages, pushing package binaries to a mirror, and then automatically building a Docker image containing the resulting binary packages. The workflow proceeds as follows:
1. Developer makes changes to the spack stack (environment) in this repository, and pushes a branch against which a PR is created. The changes could include changing the CDash build group so that builds will be tracked together.
2. Gitlab CI/CD notices the change and triggers a pipeline to build the entire stack of packages, and reports a status check back to Github.
3. Developer can make more changes and push to the PR branch as many times as necessary, and each time a pipeline will be triggered on Gitlab.
4. When the developer is satisfied with the stack/environment and all the packages are successfully built, the developer merges the PR into the master branch of this repo, and then tags the repository following a predefined tag format.
5. The creation of the tag triggers an automated DockerHub build that copies the binary packages (along with ancillary resources) into the container and publishes the resulting container.
## Moving parts
This document describes the pieces involved in this workflow and how those pieces fit together.
This custom workflow involves this Github repository, a Gitlab "CI/CD only" repository (premium license needed for "CI/CD only" repository) with runners configured, an external spack mirror to hold the binaries (in our case hosted as an AWS S3 bucket), and a DockerHub repository for building the final container.
Note that a Gitlab "CI/CD Only" repo is a convenience, rather than a fundamental requirement of this type of workflow. You could, for example, set up a Gitlab trigger to run a pipeline, invoking that trigger in a GitHub Action. Then your pre-ci job (defined in the `.gitlab-ci.yml` in this repo) could use `curl` to access the GitHub API and post a status check back on your GitHub PR before running `spack ci start`.
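For example, the pre-ci job could post a pending status using the GitHub commit statuses REST endpoint. A sketch (the `OWNER/REPO` placeholders, token variable name, and status context are illustrative assumptions, not values from this repo):

```shell
# Post a "pending" commit status back to the PR's head commit
curl -X POST \
  -H "Authorization: token ${GITHUB_TOKEN}" \
  -d '{"state":"pending","context":"ci/gitlab-pipeline"}' \
  "https://api.github.com/repos/OWNER/REPO/statuses/${CI_COMMIT_SHA}"
```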
## Github repo (this repository)
This repository contains a spack environment with a set of packages to be built into a Docker container. It also contains a `.gitlab-ci.yml` file to be used by a Gitlab CI/CD repository (for automated pipeline testing), as well as the Docker resources needed to build both the pipeline package-building container and the final output container with all the binaries from the spack environment.
The simple `.gitlab-ci.yml` file in this repo describes a single job that is used to generate the full workload of jobs for the pipeline (often referred to as the pre-ci phase). Because the runner we have targeted with the `spack-kube` tag does not have the version of spack we need already installed, we first clone and activate the version we do need. Also note how the command line arguments `--spack-repo` and `--spack-ref` are used to propagate that information to `spack ci generate ...`, so that build jobs are generated to use the same version of spack as used during the pre-ci phase.
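A minimal sketch of what such a pre-ci job might look like (the job name, clone location, and variable values here are illustrative assumptions, not this repo's actual contents):

```yaml
# Hypothetical pre-ci job: clone and activate the desired version of spack,
# then generate the pipeline, forwarding the same repo/ref to the build jobs.
generate-pipeline:
  tags:
    - spack-kube
  script:
    - git clone ${SPACK_REPO} spack
    - pushd spack && git checkout ${SPACK_REF} && popd
    - . spack/share/spack/setup-env.sh
    - spack ci generate --spack-repo ${SPACK_REPO} --spack-ref ${SPACK_REF}
```

Forwarding `--spack-repo`/`--spack-ref` is what keeps the generated build jobs on the same version of spack used in the pre-ci phase.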
## Gitlab CI/CD repo
See the spack pipeline documentation (in the `scottwittenburg/spack` repository) for more information.
3. `SPACK_SIGNING_KEY` Required to sign buildcache entries (package binaries) after they are built
4. `AWS_ACCESS_KEY_ID` (optional) Needed to authenticate to S3 mirror
5. `AWS_SECRET_ACCESS_KEY` (optional) Needed to authenticate to S3 mirror
6. `S3_ENDPOINT_URL` (optional) Needed if targeting an S3 mirror that is NOT hosted on AWS
7. `CDASH_AUTH_TOKEN` (optional) Needed if reporting build results to a CDash instance
8. `DOWNSTREAM_CI_REPO` Needed until Gitlab child pipelines are implemented. This is the repo url where the generated workload should be pushed, and for many cases, pushing to the same repo is a workable approach.
This workflow makes use of a public mirror hosted on AWS S3. The automated pipeline pushes the package binaries it builds to this mirror.
## DockerHub repository
The DockerHub repository is configured to automatically build a container from the `Dockerfile` and build context in the `docker/` directory of this repository. The build context also contains a custom build hook used by DockerHub to create the container in such a way that a few environment variables are made available as `--build-args` when the container is built.
The environment variables we need to set in the DockerHub UI and pass to the `docker build...` command are: `REMOTE_BUILDCACHE_URL` (publicly accessible mirror url where the binaries were pushed by the Gitlab CI/CD pipeline), as well as `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` (needed to authenticate to the binary mirror, because the mirror is an AWS S3 bucket and we use `awscli` to sync its contents with the container filesystem).
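DockerHub looks for custom hooks under `hooks/` in the build context; a `hooks/build` that forwards these variables might look like the following sketch (DockerHub supplies `IMAGE_NAME` and `DOCKERFILE_PATH`; the exact contents of this repo's hook may differ):

```shell
#!/bin/bash
# hooks/build -- custom DockerHub build hook (illustrative sketch).
# The three REMOTE_/AWS_ variables are set in the DockerHub UI.
docker build \
  --build-arg REMOTE_BUILDCACHE_URL="${REMOTE_BUILDCACHE_URL}" \
  --build-arg AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID}" \
  --build-arg AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY}" \
  -f "${DOCKERFILE_PATH}" -t "${IMAGE_NAME}" .
```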
# DockerHub Automated Build Setup
This document describes how to set up automated DockerHub container builds
to build, tag, and push a Docker container with the contents of the binary
mirror populated by the Gitlab pipeline.
DockerHub provides information on autobuilds [here](https://docs.docker.com/docker-hub/builds/).
## Overview
DockerHub autobuilds can be set up to build either when you push to a specific
branch, or when you push a tag matching some criterion. For the current case,
we are setting up one build to run every time the `master` branch is pushed
(in which case the resulting container will be tagged simply with `latest`), as
well as a build to detect when tags of a certain format are pushed.
Because we're accessing a spack mirror that lives in an AWS S3 bucket via a
specific `s3` url, we need to:
1. install the `boto3` module in the container
2. ensure AWS access credentials are available during container build
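The two requirements above correspond to a Dockerfile fragment roughly like this sketch (the sync destination path is a hypothetical example):

```dockerfile
# Accept the values DockerHub passes as --build-args via the custom hook
ARG REMOTE_BUILDCACHE_URL
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY

# boto3/awscli are needed to talk to the S3-hosted mirror
RUN pip install boto3 awscli

# Pull the binary mirror contents into the image
RUN aws s3 sync "${REMOTE_BUILDCACHE_URL}" /mirror
```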
In order to set up the automated builds, you just need to link your DockerHub
repository to your source code repository on GitHub, then configure some build
rules telling DockerHub what source code repo actions should trigger builds
and how the resulting images should be tagged. These steps are described in
more detail below.
## Link your DockerHub repo to your GitHub repo
In the screenshot above, you can see we have set up two build rules. The first
will trigger a build whenever we push to the `master` branch on our linked
GitHub repository. The resulting Docker image will simply be tagged `latest`.
We have configured another build to be triggered by certain tags. Any tag
that starts with `rev-` will be noticed and autobuilt. Anything in the tag
string after the `rev-` will be captured and used as the image tag. For example,
if you wish your resulting container to be tagged `sc19`, you would tag the repo
`rev-sc19`.
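The tag-to-image-tag mapping can be illustrated with plain shell (a sketch for clarity, not part of the actual build):

```shell
# Everything after the `rev-` prefix becomes the Docker image tag
git_tag="rev-sc19"
image_tag="${git_tag#rev-}"
echo "${image_tag}"   # prints: sc19
```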
This document describes the container versioning scheme and the process for
updating the container.
## Repo/Container versioning scheme
This section describes how and when DockerHub decides to automatically
rebuild the container, and how the built images are tagged.
There are two build rules (see this [section](./DOCKERHUB_SETUP.md#configure-builds)
for more info) governing when DockerHub will rebuild the container. The
first specifies that any time you push to the `master` branch on this repo,
DockerHub will build an image and tag it with `latest`. The second build rule
specifies that any time you push a tag matching the regular expression
`^rev-(.+)$`, DockerHub will build an image and use the matched group
(everything in the tag following the `rev-`) to tag the image. So if, for
example, you want a container to be tagged `sc20`, you would tag this
repository `rev-sc20` and push it.
## Steps to build an updated container
1. (optional) Use the shell script in the `keyhelp` directory to take a key
from a keychain and export both the public and private parts to files used in
the container build.
1. Create a new mirror (this is required until the sync process between the
mirror and the container build cache is controlled by/limited to the spack
environment in which it runs). The mirror url currently needs to be updated
in two locations: the `spack.yaml` and the DockerHub autobuild environment
variables (see [here](./DOCKERHUB_SETUP.md#configure-builds) for more
information).
1. Create a new build group in CDash to track the builds for the new version
(done by editing the `spack.yaml` and changing the `build-group` property)
1. Update the `spack.yaml` file to adjust the set of packages, bootstrapped
compilers, etc., as desired.
1. Create a PR on this repository and iterate on the `spack.yaml` until all
the packages build.
1. Merge the PR into master and/or tag the result using the tagging scheme
described in the section above.
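The two `spack.yaml` edits mentioned above (new mirror url and new CDash build group) might look like the following sketch; the mirror name, bucket url, build group, and CDash values are all hypothetical:

```yaml
# Illustrative spack.yaml fragment -- values are assumptions, not this repo's
spack:
  mirrors:
    tutorial: s3://spack-tutorial-mirror/sc20   # new mirror url
  cdash:
    build-group: tutorial-sc20                  # new CDash build group
    url: https://cdash.example.com
    project: Spack
    site: cloud
```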