How is this different from Airflow?
I just read their documentation. From what I can tell, Airflow models operations and the explicit dependencies you declare between them, while Dagster derives a “solid”’s dependencies (their name for an operation) from its inputs and outputs.
In this way, the same operations can be driven with different data, so a pipeline runs in a local development environment the same way it runs when deployed in your ETL infrastructure, which makes it much easier to develop and debug. And since Dagster only cares about data dependencies, it can manage the input and output artifacts too, so it is easier to retry a step without worrying about side effects.
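To make the distinction concrete, here is a toy sketch (plain Python, not Dagster's actual API or implementation) of the idea: each operation records which upstream outputs were passed into it, so the dependency graph falls out of the data flow itself, with no explicit "A runs before B" wiring:

```python
# Toy illustration of deriving a dependency graph from data flow.
# Each decorated "op" returns a Node that remembers its upstream
# inputs, so composing function calls implicitly wires the graph.

class Node:
    def __init__(self, name, inputs):
        self.name = name
        self.inputs = inputs  # upstream Nodes this op consumed

def op(fn):
    # Instead of running fn, build a graph node from its arguments.
    def wrapper(*args):
        return Node(fn.__name__, [a for a in args if isinstance(a, Node)])
    return wrapper

@op
def extract(): ...

@op
def transform(data): ...

@op
def load(data): ...

# Passing outputs as inputs is the only "dependency declaration":
# load depends on transform, which depends on extract.
graph = load(transform(extract()))

def upstream(node):
    # Walk the input edges to list everything that must run first.
    deps = []
    for parent in node.inputs:
        deps.extend(upstream(parent) + [parent.name])
    return deps

print(upstream(graph))  # ['extract', 'transform']
```

In Airflow, by contrast, you would declare the ordering yourself (e.g. `extract >> transform >> load` between task objects); the scheduler knows the order but nothing about what data flows along those edges, which is why Dagster can swap in different inputs or cached artifacts for local runs and retries.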