Published in The Prefect Blog
Deploy Prefect Pipelines with Python: Perfect! 🐍

How to quickly make a Prefect Python deployment file

Code available in this GitHub repo


With deployments, you can:

  • schedule your flows
  • collaborate with other users for GUI-based orchestration
  • filter flows to be run by different agents
  • create flow runs with custom parameters from the GUI
  • use remote flow storage code from locations such as AWS or GitHub
  • turn your flow into an API

Doing it

Deployments from a Python file

There are two ways to create deployments: from a Python file (an option that arrived with Prefect 2.1) or from the command line.

Code available in this GitHub repo

Scheduling flow runs



More deployment options

S3 Storage





Putting it all together

Code available in this GitHub repo



Deployment.build_from_flow takes two required arguments:

  • flow: The flow object this deployment encapsulates.
  • name: A name for the deployment.

Optional arguments:

  • version: An optional version for the deployment. Defaults to the flow’s version.
  • output: if provided, the full deployment specification will be written as a YAML file in the location specified by output. You don’t need to output a YAML file, but you can.
  • skip_upload: if True, deployment files are not automatically uploaded to remote storage. If you don’t want to re-upload files, this is a handy setting.
  • apply: if True, the deployment is automatically registered with the API. Personally, I’d rather apply in the if __name__ == "__main__" block.

Optional keyword-only arguments:

  • description: An optional description of the deployment. Defaults to the flow’s description.
  • tags: An optional list of tags to associate with this deployment; note that tags are used only for organizational purposes. For delegating work to agents, see work_queue_name.
  • schedule: A schedule to run this deployment on. Prefect offers several scheduling formats.
  • work_queue_name: The work queue that will handle this deployment’s runs.
  • parameters: A dictionary of parameter values to pass to runs created from this deployment. If you didn’t specify default arguments for your flow, this is a good place to do so.
  • infrastructure: DockerContainer, KubernetesJob, or Process. An optional infrastructure block used to configure infrastructure for runs. If not provided, will default to running this deployment in Agent subprocesses.
  • infra_overrides: A dictionary of dot delimited infrastructure overrides that will be applied at runtime; for example env.CONFIG_KEY=config_value or namespace='prefect'. Often useful when working with K8s.
  • storage: An optional remote storage block used to store and retrieve this workflow. If not provided, will default to referencing this flow by its local path.
  • path: The path to the working directory for the workflow, relative to remote storage or, if stored on a local filesystem, an absolute path. This allows you to use the same infrastructure block across multiple deployments.
  • entrypoint: The path to the entrypoint for the workflow, always relative to the path. You might find this option helpful if your flow code is in a subfolder in your remote storage.

Deployments from the command line
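The CLI route uses `prefect deployment build`; a sketch in which the flow file, function name, deployment name, and queue are placeholders:

```shell
# Build a deployment YAML from the etl_flow function in flows/etl.py
prefect deployment build flows/etl.py:etl_flow \
    --name prod-etl \
    --work-queue prod-queue \
    --output prod-etl.yaml

# Register the deployment described by the generated YAML with the API
prefect deployment apply prod-etl.yaml

# Start an agent that picks up runs from the queue
prefect agent start --work-queue prod-queue
```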

Which approach should I use?

Help and niceties


Wrap 🌯



Jeff Hale

I write about data science. Join my Data Awesome mailing list to stay on top of the latest data tools and tips.