Local execution
With Fusion, you can run Nextflow pipelines using the local executor and a cloud storage bucket as the pipeline scratch directory. This lets you scale your pipeline execution vertically with a large compute instance, without allocating a large storage volume for temporary pipeline data.
This configuration requires the use of Docker (or a similar container engine) for the execution of your pipeline tasks.
The steps below cover AWS S3, Azure Blob Storage, and Google Cloud Storage.

AWS S3
- Set the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables to grant Nextflow and Fusion access to your storage credentials.
- Add the following to your `nextflow.config` file:

  ```groovy
  wave.enabled = true
  docker.enabled = true
  fusion.enabled = true
  fusion.exportStorageCredentials = true
  ```
- Run the pipeline with the usual run command:

  ```bash
  nextflow run <YOUR PIPELINE SCRIPT> -w s3://<YOUR-BUCKET>/work
  ```

  Replace `<YOUR PIPELINE SCRIPT>` with your pipeline Git repository URI and `<YOUR-BUCKET>` with an S3 bucket of your choice.

To achieve optimal performance, set up an SSD volume as the temporary directory.
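For example, the following shell session sketches a complete run against AWS S3. The `nextflow-io/hello` pipeline and the bucket name `my-bucket` are hypothetical placeholders; substitute your own pipeline and bucket:

```bash
# Grant Nextflow and Fusion access to the bucket (placeholder values)
export AWS_ACCESS_KEY_ID=<YOUR-ACCESS-KEY>
export AWS_SECRET_ACCESS_KEY=<YOUR-SECRET-KEY>

# Launch the pipeline, using the S3 bucket as the scratch/work directory
nextflow run nextflow-io/hello -w s3://my-bucket/work
```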
Azure Blob Storage

- Set the `AZURE_STORAGE_ACCOUNT_NAME` and either the `AZURE_STORAGE_ACCOUNT_KEY` or `AZURE_STORAGE_SAS_TOKEN` environment variables to grant Nextflow and Fusion access to your storage credentials.
- Add the following to your `nextflow.config` file:

  ```groovy
  wave.enabled = true
  docker.enabled = true
  fusion.enabled = true
  fusion.exportStorageCredentials = true
  ```
- Run the pipeline with the usual run command:

  ```bash
  nextflow run <YOUR PIPELINE SCRIPT> -w az://<YOUR-BLOB-CONTAINER>/work
  ```

  Replace `<YOUR PIPELINE SCRIPT>` with your pipeline Git repository URI and `<YOUR-BLOB-CONTAINER>` with an Azure Blob Storage container of your choice.

To achieve optimal performance, set up an SSD volume as the temporary directory.
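A comparable sketch for Azure Blob Storage, assuming a hypothetical storage account `mystorageacct` and container `my-container`:

```bash
# Grant Nextflow and Fusion access to the storage account (placeholder values)
export AZURE_STORAGE_ACCOUNT_NAME=mystorageacct
export AZURE_STORAGE_ACCOUNT_KEY=<YOUR-ACCOUNT-KEY>
# Alternatively, use a SAS token instead of the account key:
# export AZURE_STORAGE_SAS_TOKEN=<YOUR-SAS-TOKEN>

# Launch the pipeline, using the Blob container as the scratch/work directory
nextflow run nextflow-io/hello -w az://my-container/work
```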
Google Cloud Storage

- Set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the path of your service account JSON key to grant Nextflow and Fusion access to your storage credentials.
- Add the following to your `nextflow.config` file:

  ```groovy
  wave.enabled = true
  docker.enabled = true
  fusion.enabled = true
  fusion.exportStorageCredentials = true
  ```
- Run the pipeline with the usual run command:

  ```bash
  nextflow run <YOUR PIPELINE SCRIPT> -w gs://<YOUR-BUCKET>/work
  ```

  Replace `<YOUR PIPELINE SCRIPT>` with your pipeline Git repository URI and `<YOUR-BUCKET>` with a Google Cloud Storage bucket of your choice.

To achieve optimal performance, set up an SSD volume as the temporary directory.
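And a comparable sketch for Google Cloud Storage, assuming a hypothetical bucket `my-bucket` and a service account key stored at `/path/to/key.json`:

```bash
# Point Nextflow and Fusion at the service account JSON key (placeholder path)
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json

# Launch the pipeline, using the GCS bucket as the scratch/work directory
nextflow run nextflow-io/hello -w gs://my-bucket/work
```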