Conducto for CI/CD
Node Parameters
Exec, Serial, and Parallel Nodes support several parameters that make pipeline specification in Conducto extremely powerful. You have already learned about image and env. You can also specify:

- cpu and mem to constrain resources
- requires_docker to run docker commands
- stop_on_error to implement the finally pattern
- same_container to control container sharing
- doc to show pretty documentation in the web app
- skip to skip a node by default
Explore our live demo, view the source code for this tutorial, or clone the demo and run it for yourself.
git clone https://github.com/conducto/demo.git
cd demo/cicd
python node_params.py --local
Alternatively, download the zip archive here.
You can view most of these parameters for any node in the Execution Parameters section of the node pane.
You can also modify most of these parameters for any node in a live pipeline from the Modify modal, then Reset the node to re-run it in place.
cpu and mem
The cpu and mem parameters limit the CPU and memory assigned to an Exec node. The default values are cpu=1 CPU and mem=2 GB. Allocate less if your commands require very little CPU or memory, which allows your local pipeline to launch more nodes in parallel; allocate more if necessary.
co.Exec("echo not doing much", cpu=0.25, mem=0.25)
requires_docker
To enable running docker commands like docker build, docker run, etc. in a node, you must set requires_docker=True. This is because your commands already run within a docker container, and running docker within docker requires non-trivial setup that Conducto will not do by default. Also note that your image must have docker installed.
image = co.Image("docker:19.03")
co.Exec("docker run hello-world", requires_docker=True, image=image)
stop_on_error
A Serial node defaults to stop_on_error=True, which means that it stops and reports itself as errored as soon as any child node encounters an error. With stop_on_error=False, it runs all child nodes, but still reports itself as errored if any child encountered an error. This is useful for implementing a finally pattern to guarantee that your pipeline always runs a cleanup step.
with co.Serial(name="stop_on_error_false", stop_on_error=False):
co.Exec("echo doing some setup", name="setup")
co.Exec("this_command_will_fail", name="bad_command")
co.Exec("echo doing some cleanup", name="finally_cleanup")
same_container
Exec nodes are not guaranteed to run in the same container, although Conducto will reuse containers when possible for efficiency. You can force commands to run in the same container with the argument same_container=co.SameContainer.NEW. All child nodes will have the default same_container=co.SameContainer.INHERIT and will share the container with the parent. This is useful if you want greater visibility into a command that chains together multiple subcommands: an error in a single subcommand is easier to identify than an error in one long command.
long_command = """set -ex
echo This is a long command.
echo First I do this.
echo Then I do that.
oops_this_is_not_a_valid_command
echo Then I go home.
"""
co.Exec(long_command)
versus
with co.Serial(name="example", same_container=co.SameContainer.NEW):
co.Exec("echo This is a long command.", name="intro")
co.Exec("echo First I do this.", name="do_this")
co.Exec("echo Then I do that.", name="do_that")
co.Exec("oops_this_is_not_a_valid_command", name="oops")
co.Exec("echo Then I go home.", name="go_home")
Another reason to use same_container=co.SameContainer.NEW to force container sharing is when you want your commands to share a filesystem. This makes a build-and-test pipeline very easy, for example: you simply write a binary to the filesystem in the build node, and the test node can automatically see it. There is no need to put the binary in a separate data store.
with co.Serial(name="shared", same_container=co.SameContainer.NEW):
co.Exec("go build -o bin/app ./app.go", name="build")
co.Exec("bin/app --test", name="test")
However, there is a downside to this same_container mode: when sharing a container, Exec nodes will always run in serial, even if the parent is a Parallel node. So, you lose the ability to parallelize.
with co.Parallel(
name="always_serial", same_container=co.SameContainer.NEW
):
co.Exec("echo I cannot run in parallel", name="parallel_exec_1")
co.Exec("echo even if I want to", name="parallel_exec_2")
doc
Nodes can be documented with the doc parameter. It supports Markdown and is rendered at the top of the node pane. Nodes with docs are marked with a doc icon in the pipeline pane. We make extensive use of this feature in all of our demos.
markdown_doc = "### I _can_ **use** `markdown`"
more_markdown_doc = """
Markdown even supports [links](https://www.conducto.com)
and images ![alt text](
http://cdn.loc.gov/service/pnp/highsm/21700/21778r.jpg "a pretty picture")
"""
co.Exec("echo doc example 1", doc=markdown_doc)
co.Exec("echo doc example 2", doc=more_markdown_doc)
skip
Nodes can be skipped in the web app or with skip=True. This is useful, for example, if you have a pipeline that has a reasonable default way to run, but you want the ability to manually enable (unskip) additional steps from the web app. A specific example might be deploying a production environment. You could skip the deployment node by default, and require that someone manually reviews the output of the pipeline before unskipping and running the node to complete the deployment.
image = co.Image("bash:5.0")
with co.Serial(image=image) as skip_example:
co.Exec("echo build some stuff", name="build")
co.Exec("echo test some stuff", name="test")
co.Exec("echo deploy staging", name="deploy staging")
co.Exec("echo deploy prod", name="deploy prod", skip=True)
co.Exec("echo send status email", name="send email")
Now, with the information you learned in Your First Pipeline, Execution Environment, Environment Variables and Secrets, Data Stores, CI/CD Extras, and here, you can create arbitrarily complex pipelines.