Dependencies

How and where do you define, update, and import your dependencies?

Dependencies
Explicitly declare and isolate dependencies

When you build your system/library/whatever, you should be able to take a brand new machine, install your SCM tool (e.g. git) if it’s not built in, download your repository, and run a single command to create your output. If you can’t, you have a dependency problem.
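
As a concrete illustration, here is a minimal sketch assuming a JVM project built with Gradle (the plugin, library, and version are examples, not prescriptions). Because the Gradle wrapper script is committed to the repository, a fresh machine needs only git and a JDK; cloning the repository and running ./gradlew build is the single command.

    // build.gradle.kts - a minimal sketch of a fully declared build
    plugins {
        `java-library`
    }

    repositories {
        mavenCentral()  // every third-party library comes from a declared repository, not the machine
    }

    dependencies {
        // pinned to an exact version so every machine resolves exactly the same artifact
        implementation("com.fasterxml.jackson.core:jackson-databind:2.15.2")
    }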

What’s allowed

We’ve already identified your SCM as an allowable external dependency. You may also need compilers, InstallShield, or other third-party products. Whatever they are, you should have a document describing them - and none of them should assume any configuration beyond a basic installation. Stay focused on the goal: take a known, “pure” machine, pull your repository, run a single command, and produce your output.

Statelessness

A running theme throughout the 12F recommendations is that Statelessness is Good. Stateful systems inevitably become hard to maintain and harder to scale, whereas stateless systems are simple - they may not be the most efficient solution to any given problem, but the scalability problems they produce are well understood and have simple (if potentially inefficient) solutions. Since people are by far your most expensive resource, unless you’re running at Netflix scale, simplicity should trump efficiency 9 times out of 10.

Advantages

An amazingly large number of organizations would struggle to build their products if their build server went down. No matter how infrequent your release cycle is, that is an unacceptable business risk in 2019. A poorly defined dependency tree not only leaves you vulnerable to this risk, it makes running any kind of repeatable, automated test problematic at best.

Well-defined dependencies also make onboarding a breeze. If any random CI tool can be shown how to build your application in a couple of lines of code, so can any new hire. Hopefully you can already see how this will help reduce the time spent bringing new people - or systems - up to speed.

The Traditional Way

Dependencies are installed onto the build server - sometimes, especially on Windows projects, by running actual installers. No single source of truth exists beyond poorly maintained documentation of the products and versions required. If the build server ever goes down, every project it is responsible for building is at risk; recreating it exactly may be impossible if backups fail.

The Stupid Way

You can always just add all of your external dependencies to your repository. This is by far the worst way to solve the problem, yet easily 10X better than relying on undocumented external dependencies. If you follow this approach then you and only you are responsible for staying up to date, and you can’t lean on tooling to perform minor updates - on the other hand, once you have a build that works for you, you can replicate it at will no matter what the Internet is up to. That’s not terrible.
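
For example, here is a sketch of what that looks like in a Gradle build (the libs/ directory name is arbitrary): third-party jars are committed straight into the repository and resolved from there, so nothing is fetched from the network, and updates are entirely manual.

    // build.gradle.kts - a sketch of the "everything in the repository" approach
    plugins {
        `java-library`
    }

    dependencies {
        // every jar checked into libs/ becomes a compile-time dependency;
        // no repository is consulted, and version bumps mean replacing files by hand
        implementation(fileTree("libs") { include("*.jar") })
    }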

The Modern Way

Most if not all modern build systems have some form of internet-accessible central repository for public packages, and additionally allow for one or more private repositories (sometimes as simple as a file share). All dependencies are declared in a configuration file (Gemfile, pom.xml, etc.).
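
Sticking with the Gradle sketch from earlier, such a declaration might look like the following; the internal repository URL, the environment-variable credentials, and the com.example artifact are hypothetical, and the public library is just an example.

    // build.gradle.kts - a sketch of declaring every dependency in one place
    plugins {
        `java-library`
    }

    repositories {
        mavenCentral()  // public, internet-accessible repository
        maven {
            url = uri("https://repo.example.internal/releases")  // hypothetical private repository
            credentials {
                username = providers.environmentVariable("REPO_USER").orNull
                password = providers.environmentVariable("REPO_PASS").orNull
            }
        }
    }

    dependencies {
        implementation("org.apache.commons:commons-lang3:3.14.0")  // public library
        implementation("com.example:billing-client:1.4.2")         // hypothetical internal artifact
    }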

Systems are built from the bottom up, starting with one or more projects that have no dependencies beyond publicly accessible libraries and whatever is checked into their own repositories. These projects produce artifacts that are published to an internal repository - a convenient, stable cache that can be re-created at will and is never the ultimate source of truth. Projects depending on those artifacts may then be built.
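
Continuing the same hypothetical, a low-level project might publish its artifact to the internal repository with Gradle’s maven-publish plugin; running ./gradlew publish pushes the jar, and downstream projects simply declare com.example:billing-client:1.4.2 as in the previous sketch. The group, version, and repository URL here are assumptions for illustration only.

    // build.gradle.kts - a sketch of publishing a low-level project's artifact
    plugins {
        `java-library`
        `maven-publish`
    }

    group = "com.example"
    version = "1.4.2"

    publishing {
        publications {
            create<MavenPublication>("library") {
                artifactId = "billing-client"
                from(components["java"])  // publish the compiled jar plus its dependency metadata
            }
        }
        repositories {
            maven {
                name = "internal"
                url = uri("https://repo.example.internal/releases")  // hypothetical internal repository
            }
        }
    }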

The Real Benefits

Because artifacts can be reliably and repeatedly regenerated, they are no longer “important” in themselves, and the source repositories become the only “sources of truth” for the software - a role they are well suited for.

As an added bonus, complex system builds can be defined in a way that a surprisingly large number of tools, both self-hosted and cloud-hosted, can understand, allowing builds to be automated and parallelized as much as possible with very little devops input.

Build servers are now effectively stateless, which allows convenient ephemeral build agents to be used and, in turn, enables efficiently triggered build/test cycles on a per-check-in or per-pull-request basis.