
Airflow is a platform that lets you build and run workflows. A workflow is represented as a DAG (a Directed Acyclic Graph), and contains individual pieces of work called Tasks, arranged with dependencies and data flows taken into account.

A DAG specifies the dependencies between Tasks, and the order in which to execute them and run retries; the Tasks themselves describe what to do, be it fetching data, running analysis, triggering other systems, or more.

An Airflow installation generally consists of the following components:

- A scheduler, which handles both triggering scheduled workflows and submitting Tasks to the executor to run.
- An executor, which handles running tasks. In the default Airflow installation, this runs everything inside the scheduler, but most production-suitable executors actually push task execution out to workers.
- A webserver, which presents a handy user interface to inspect, trigger and debug the behaviour of DAGs and tasks.
- A folder of DAG files, read by the scheduler and executor (and any workers the executor has).
- A metadata database, used by the scheduler, executor and webserver to store state.

The Airflow community provides a single Docker Compose file that installs all of these components on a single machine.

Most executors will generally also introduce other components to let them talk to their workers - like a task queue - but you can still think of the executor and its workers as a single logical component in Airflow overall, handling the actual task execution.

Airflow itself is agnostic to what you’re running - it will happily orchestrate and run anything, either with high-level support from one of our providers, or directly as a command using the shell or Python Operators.

Tasks declare dependencies on each other, using either the >> and << operators or the set_upstream and set_downstream methods:

```python
first_task.set_downstream([second_task, third_task])
```

These dependencies are what make up the “edges” of the graph, and how Airflow works out which order to run your tasks in.

By default, a task will wait for all of its upstream tasks to succeed before it runs, but this can be customized using features like Branching, LatestOnly, and Trigger Rules.

To pass data between tasks you have three options:

- XComs (“Cross-communications”), a system where you can have tasks push and pull small bits of metadata.
- Uploading and downloading large files from a storage service (either one you run, or part of a public cloud).
- The TaskFlow API, which automatically passes data between tasks via implicit XComs.

Airflow sends out Tasks to run on Workers as space becomes available, so there’s no guarantee all the tasks in your DAG will run on the same worker or the same machine.

As you build out your DAGs, they are likely to get very complex, so Airflow provides several mechanisms for making this more sustainable - SubDAGs let you make “reusable” DAGs you can embed into other ones, and TaskGroups let you visually group tasks in the UI.

There are also features for letting you easily pre-configure access to a central resource, like a datastore, in the form of Connections & Hooks, and for limiting concurrency, via Pools. The sketches below illustrate these concepts in code.
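To make this concrete, here is a minimal sketch of what a DAG file in the DAG folder might look like, assuming Airflow 2.4+ (the dag_id, task names, and echo commands are illustrative, not part of any real pipeline):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# A minimal DAG: the scheduler reads this file from the DAG folder and
# hands the tasks to the executor when their dependencies are satisfied.
with DAG(
    dag_id="example_pipeline",     # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",             # "schedule" is the Airflow 2.4+ spelling
    catchup=False,
) as dag:
    first_task = BashOperator(task_id="first_task", bash_command="echo extract")
    second_task = BashOperator(task_id="second_task", bash_command="echo transform")
    third_task = BashOperator(task_id="third_task", bash_command="echo load")

    # first_task must succeed before second_task and third_task run
    first_task >> [second_task, third_task]
```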
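Trigger Rules, mentioned above, replace the default wait-for-all-upstreams-to-succeed behaviour on a per-task basis. A sketch, with the same version assumptions and illustrative names as above:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.utils.trigger_rule import TriggerRule

with DAG(dag_id="trigger_rule_example", start_date=datetime(2024, 1, 1), schedule=None) as dag:
    work = BashOperator(task_id="work", bash_command="exit 1")  # a task that may fail

    # ALL_DONE fires once every upstream task has finished, whether it
    # succeeded or failed, instead of the default all-success behaviour.
    cleanup = BashOperator(
        task_id="cleanup",
        bash_command="echo cleaning up",
        trigger_rule=TriggerRule.ALL_DONE,
    )

    work >> cleanup
```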
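XComs, the first of the data-passing options above, let tasks push and pull small values through the metadata database. A sketch (the task IDs and key are illustrative):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def push_metadata(ti):
    # XComs are stored in the metadata database, so keep them small
    # rather than using them for bulk data.
    ti.xcom_push(key="row_count", value=42)

def pull_metadata(ti):
    row_count = ti.xcom_pull(task_ids="push_task", key="row_count")
    print(f"upstream reported {row_count} rows")

with DAG(dag_id="xcom_example", start_date=datetime(2024, 1, 1), schedule=None) as dag:
    push_task = PythonOperator(task_id="push_task", python_callable=push_metadata)
    pull_task = PythonOperator(task_id="pull_task", python_callable=pull_metadata)
    push_task >> pull_task
```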
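The TaskFlow API, the third option, turns plain Python functions into tasks: a function's return value becomes an implicit XCom, and passing it into another task creates both the data flow and the dependency edge. A sketch, again with illustrative names:

```python
from datetime import datetime

from airflow.decorators import dag, task

@dag(dag_id="taskflow_example", start_date=datetime(2024, 1, 1), schedule=None, catchup=False)
def taskflow_example():
    @task
    def extract():
        return {"rows": 42}    # the return value becomes an implicit XCom

    @task
    def load(payload: dict):
        print(f"loading {payload['rows']} rows")

    load(extract())            # passing the value also creates the edge

taskflow_example()
```

Because the payload travels as an XCom, it should stay small, as noted above.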
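A TaskGroup collapses a set of tasks into one node in the UI's Graph view without changing how they execute. A sketch, assuming Airflow 2.3+ (where EmptyOperator is available):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.empty import EmptyOperator
from airflow.utils.task_group import TaskGroup

with DAG(dag_id="grouped_example", start_date=datetime(2024, 1, 1), schedule=None) as dag:
    start = EmptyOperator(task_id="start")

    # Rendered as a single collapsible box named "transform" in the Graph view
    with TaskGroup(group_id="transform") as transform:
        EmptyOperator(task_id="clean") >> EmptyOperator(task_id="aggregate")

    end = EmptyOperator(task_id="end")
    start >> transform >> end
```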
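Finally, Connections store credentials centrally, Hooks use them from task code, and Pools cap how many task instances run at once. A sketch assuming the Postgres provider package is installed and that a Connection named analytics_db and a Pool named heavy_jobs have already been created (both names are hypothetical):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.postgres.hooks.postgres import PostgresHook

def count_events():
    # The Hook looks up the "analytics_db" Connection configured in the UI
    # or CLI, so no credentials need to appear in the DAG file itself.
    hook = PostgresHook(postgres_conn_id="analytics_db")
    print(hook.get_first("SELECT count(*) FROM events"))

with DAG(dag_id="pool_example", start_date=datetime(2024, 1, 1), schedule=None) as dag:
    # pool="heavy_jobs" means at most that pool's configured number of slots
    # worth of tasks (across all DAGs) run at the same time.
    count_task = PythonOperator(
        task_id="count_events",
        python_callable=count_events,
        pool="heavy_jobs",
    )
```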
