Commit 648d1e30 authored by Scott Wittenburg, committed by Todd Gamblin

Docs fixes, assign big pkgs to beefier instance

parent 6d1ef008
# Overview
This repository is used as a component in the automated building of the Spack tutorial containers. However, it also serves to illustrate a custom workflow (based on spack ci/pipeline infrastructure) designed to assist in iteratively building a set of spack packages, pushing package binaries to a mirror, and then automatically building a Docker image containing the resulting binary packages. The workflow proceeds as follows:
1. Developer makes changes to the spack stack (environment) in this repository, and pushes a branch against which a PR is created. The changes could include changing the CDash build group so that builds will be tracked together.
2. Gitlab CI/CD notices the change and triggers a pipeline which builds the entire stack of packages, and reports a status check back to Github.
3. Developer can make more changes and push to the PR branch as many times as necessary, and each time a pipeline will be triggered on Gitlab.
4. When the developer is satisfied with the stack/environment and all the packages are successfully built, the developer merges the PR into the master branch of this repo, and then tags the repository following a predefined tag format.
5. The creation of the tag triggers an automated DockerHub build which copies the binary packages (along with ancillary resources) into the container and publishes the resulting container.
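As a rough illustration of steps 4 and 5, tagging the merged master branch might look like the sketch below; the actual tag format expected by the DockerHub automated build is not specified here, so the tag name used is only a placeholder.

```bash
# Hypothetical illustration only: the real tag format is whatever the
# DockerHub automated build rules are configured to match; the tag name
# below is just a placeholder.
git checkout master
git pull origin master
git tag tutorial-example-tag
git push origin tutorial-example-tag
```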
## Moving parts
This document describes the pieces involved in this workflow and how those pieces fit together, including how the new spack automated pipeline workflow is used in the process.
This custom workflow involves this Github repository, a Gitlab "CI/CD only" repository (premium license needed for "CI/CD only" repository) with runners configured, an external spack mirror to hold the binaries (in our case hosted as an AWS S3 bucket), and a DockerHub repository for building the final container.
## Github repo (this repository)
This repository contains a spack environment with a set of packages to be built into a Docker container. It also contains a `.gitlab-ci.yml` file to be used by a Gitlab CI/CD repository (for automated pipeline testing), as well as the Docker resources needed to build both the pipeline package-building container and a final output container with all the binaries from the spack environment.
The simple `.gitlab-ci.yml` file in this repo describes a single job which is used to generate the full workload of jobs for the pipeline (often referred to as the pre-ci phase). Because the runner we have targeted with the `spack-kube` tag does not have the version of spack we need already installed, we first clone and activate the version we do need. Also note how the command line arguments `--spack-repo` and `--spack-ref` are used to propagate that information to `spack ci generate ...`, so that build jobs are generated to use the same version of spack as used during the pre-ci phase.
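As a hedged sketch (not the literal contents of the `.gitlab-ci.yml` in this repo), the generate job's script might do something along these lines, assuming `SPACK_REPO` and `SPACK_REF` are set as CI/CD variables and that the output file name is arbitrary:

```bash
# Sketch of the pre-ci (generate) phase, assuming SPACK_REPO and SPACK_REF
# point at the custom spack to use; exact flags should be checked against
# the spack version in question.
git clone "${SPACK_REPO}" spack
(cd spack && git checkout "${SPACK_REF}")
. spack/share/spack/setup-env.sh

# Activate the environment committed in this repository, then generate the
# pipeline of build jobs, propagating the custom spack info so generated
# jobs clone and activate the same spack in their before_script.
spack env activate .
spack ci generate \
  --spack-repo "${SPACK_REPO}" \
  --spack-ref "${SPACK_REF}" \
  --output-file generated-pipeline.yml
```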
## Gitlab CI/CD repo
@@ -28,8 +28,8 @@ When creating the CI/CD only repository, you can choose what kinds of events on
See the spack pipeline [documentation](https://github.com/scottwittenburg/spack/blob/add-spack-ci-command/lib/spack/docs/pipelines.rst#environment-variables-affecting-pipeline-operation) for more details on the job environment variables that may need to be set, but a brief summary of some common environment variables follows.
1. `SPACK_REPO` (optional) Useful if a custom spack should be cloned to generate and run the pipeline jobs
2. `SPACK_REF` (optional) Useful if a custom spack should be cloned to generate and run the pipeline jobs
3. `SPACK_SIGNING_KEY` Required to sign buildcache entries (package binaries) after they are built
4. `AWS_ACCESS_KEY_ID` (optional) Needed to authenticate to S3 mirror
5. `AWS_SECRET_ACCESS_KEY` (optional) Needed to authenticate to S3 mirror
@@ -37,7 +37,7 @@ See the spack pipeline [documentation](https://github.com/scottwittenburg/spack/
7. `CDASH_AUTH_TOKEN` (optional) Needed if reporting build results to a CDash instance
8. `DOWNSTREAM_CI_REPO` Needed until Gitlab child pipelines are implemented. This is the repo url where the generated workload should be pushed, and for many cases, pushing to the same repo is a workable approach.
Because we want to use a custom spack (one other than the runners may already have available), we add the `SPACK_REPO` and `SPACK_REF` environment variables listed above. We then use those variables to clone spack in the pre-ci phase, as well as pass them along to the `spack ci generate` command so that custom spack will be cloned and activated as a part of the `before_script` of each generated pipeline job.
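For illustration, the `before_script` of a generated build job might then do something roughly like the following (a sketch, not the exact script spack generates):

```bash
# Assumed sketch: when SPACK_REPO and SPACK_REF were present at generation
# time, each generated job first clones and activates that spack.
git clone "${SPACK_REPO}" spack
(cd spack && git checkout "${SPACK_REF}")
. spack/share/spack/setup-env.sh
spack --version   # sanity check that the custom spack is the one in use
```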
## Spack mirror
...
@@ -21,6 +21,10 @@
The `public.key` can be committed to the repository and kept around for as long as the signing key (private) will continue to be used to sign the tutorial packages.

NOTE: the "name-of-key-to-export" should identify a key from the keychain that does not require a passphrase, or it will not work in the pipeline build jobs.
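For example, a passphrase-less signing key might be created and its public part exported with spack's gpg commands; the key name and email below are placeholders:

```bash
# Hypothetical example: create a signing key without a passphrase and export
# only its public part to public.key (the private key stays out of the repo).
spack gpg create "Tutorial Key" key-owner@example.com
spack gpg export public.key "Tutorial Key"
spack gpg list   # confirm the key is present in spack's keychain
```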
1. Create a new mirror, as sketched below (this is required until the sync process between the mirror and the container build cache is controlled by/limited to the spack environment in which it runs)
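For instance, assuming the S3 bucket already exists, the new mirror might be registered like this (the mirror name and bucket URL are placeholders):

```bash
# Hypothetical example: add an S3 mirror for this environment and list the
# configured mirrors to confirm it was picked up.
spack mirror add tutorial-mirror s3://my-tutorial-bucket/mirror
spack mirror list
```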
...
@@ -49,7 +49,7 @@ spack:
      - name: bootstrapped_compilers
        compiler-agnostic: true
    mappings:
      - match: [trilinos, llvm, gcc]
        runner-attributes:
          image:
            name: spack/ubuntu-bionic
...