
Containerization

Recall that there is a third way we handle package management on Luria: containerization. Like Environment Modules and Conda environments, containerization packages an environment into a container that holds the absolute bare minimum needed to run a specific set of software. A container can be thought of as a very stripped-down operating system packaged into a little box, although that is not literally what it is.

Containers have their own filesystem, which is a snapshot of exactly the filesystem a piece of software needs to run. Therefore, if a container can run a piece of software now, it should be able to do so always.

Since a containerized application carries everything it needs in order to run, containers are easy to share with other people, who can then run the software immediately.

Container pros:

  • Portable

  • Shareable

  • Consistent

Popular container engines:

  • Docker

  • Singularity

Singularity

  • Started in 2015

    • Linux only, primarily used in HPC

    • Integrates with Slurm

    • Does not require root privileges

Docker isn't available on Luria, but Singularity is, and it supports running Docker images. In addition, there are multiple Singularity registries online, similar to Dockerhub, which host many useful images built for use with Singularity. Two of these registries are linked below. It's best to look for images in these registries before falling back to a Docker image, since Docker images don't always work exactly as intended in Singularity.

Running Docker Images

To create a basic Docker container from the Debian image, we run the following:

docker run debian echo 'Hello, World!'

# If this is your first time running the debian image, this will pull a lot of data from Dockerhub, then run:

Hello, World!

What's happening here? We invoke the docker command and tell it to run a command in a container created from the debian image, which it pulls by default from Dockerhub. The rest of the line tells Docker what command to run in the container, in this case echo 'Hello, World!'.

Important note: Docker images are built for specific CPU architectures (e.g. amd64 vs. arm64), and you can only run an image if it matches your computer's architecture. Many popular Docker images have versions for both amd64 and arm64, but it's up to you to check whether a version compatible with your CPU's architecture exists before trying to run an image.

We can explore this container a bit more by creating an interactive session inside of it. This allows us to see the filesystem present in the container.

To start an interactive session in a container created using the debian image:

docker run -it debian bash
root@dsliajldkajs:/#

Inside the container, we can't do very much, but we can see that it has its own filesystem with the usual FHS layout. We can make changes to this filesystem just as we would on a normal Unix system. However, those changes live only in that container: once the container is gone, a fresh container started from the same image begins from a clean slate.

If data inside the container does not survive container reboots, how does any data persist?

There are two ways that Docker allows us to persist data: bind mounting the host's filesystem or creating a Docker volume.

We'll focus on bind mounting first. Bind mounting essentially pokes a hole in the container's filesystem that points to somewhere on your host computer's filesystem.

To bind mount a directory, we pass the -v flag when we run our container, providing <source>:<destination>, where <source> is the host directory you want accessible in the container and <destination> is the path inside the container where that directory will appear.

For example, to bind mount a local directory in the Debian container:

docker run -v "/home/asoberan:/mnt" -it debian bash
root@mlfmlkma:/# ls /mnt

# Your files should be present in the /mnt directory inside the container

Now, if you create something in /mnt in the container, you'll see those changes made on your local directory as well.

This Debian image is pretty barebones, as you've seen. Images like this aren't meant to be used outright, but rather to be built upon to create other, more useful images.

We'll use one of these more useful images to set up an R development environment.

The image we'll be using is rocker/rstudio, an image made by the R community for setting up a barebones R environment or for building a more robust R environment on top of.

Let's start up an interactive session using the rocker/rstudio image available on Dockerhub.

docker run --rm -it rocker/rstudio bash
root@damldkmsla:/# R
> library("tidyverse")
Error in library("tidyverse") : there is no package called ‘tidyverse’
> install.packages(c("tidyverse"))
# tidyverse installation output
> library(tidyverse)
── Attaching core tidyverse packages ─────────────────── tidyverse 2.0.0 ──
✔ dplyr     1.1.4     ✔ readr     2.1.5
✔ forcats   1.0.0     ✔ stringr   1.5.1
✔ ggplot2   3.5.0     ✔ tibble    3.2.1
✔ lubridate 1.9.3     ✔ tidyr     1.3.1
✔ purrr     1.0.2
── Conflicts ───────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter() masks stats::filter()
✖ dplyr::lag()    masks stats::lag()
ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all
conflicts to become errors

As you can see, rocker/rstudio does not come with tidyverse built in. However, the R environment it provides is just like any other, so installing tidyverse is simple.

Working with R on the command line can be fairly cumbersome. The real power of rocker/rstudio is that it comes with RStudio Server built in.

By default, RStudio Server binds to port 8787. However, that port lives in the container's own network, so as before we'll need to forward it to our local machine. Thankfully, Docker has a built-in way of doing this: the -p flag, which takes <host port>:<container port>, where the host port is the port on your own computer and the container port is the port inside the container. To keep things simple, we'll use the same number for both.

docker run --rm -it -p 8787:8787 rocker/rstudio

This should start an RStudio server which you can access on your computer's web browser at http://localhost:8787.

Remember, any files you create in this RStudio Server are created in the Docker container. When the Docker container stops, those files will be gone. If you want to save your files or use R files you have from previous work, it's best to bind mount the directory with your files and make sure to only make changes to the bind-mounted directory in the container.
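As a sketch, this combines the -p and -v flags from earlier (the host path here is a placeholder you would replace with your own directory; /home/rstudio is the default user's home inside the rocker/rstudio image):

```shell
# Forward RStudio's port and bind mount a local project directory
# into the rstudio user's home so files created there persist on the host
# (replace /path/to/projects with your own directory)
docker run --rm -it -p 8787:8787 \
  -v "/path/to/projects:/home/rstudio/projects" \
  rocker/rstudio
```

Any work saved under /home/rstudio/projects inside the container then lands in your local directory.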

Building Docker Images

Docker is a container engine, but it's also an image build tool. You can build Docker images yourself by creating a Dockerfile, essentially a file that outlines each step in creating your image.

Below are the common commands used in a Dockerfile to outline these steps:

  • FROM - Declares the base image you're building on top of.

  • LABEL - A simple label attached to your image as metadata. A common label would be description for writing a description of the image.

  • RUN - Runs the command you specify in the image. For example, if the base image is Ubuntu, you can run any Ubuntu command here. A common use is apt-get install <package> to install an Ubuntu package into your image.

  • CMD - The command that should run when the container is started. This tends to be the major software that is being packaged.
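To make these concrete, here is a minimal illustrative Dockerfile; the base image, package, and command are arbitrary examples, not part of the Seurat image we're about to build:

```dockerfile
# Build on top of the official Ubuntu base image
FROM ubuntu:22.04

# Attach a description as metadata
LABEL description="Toy image demonstrating the basic Dockerfile commands"

# Install a package into the image at build time
RUN apt-get update && apt-get install -y fortune-mod

# Command to run when a container starts from this image
CMD ["/usr/games/fortune"]
```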

Knowing these is enough to build a simple Docker image. We'll be using this knowledge to build our own Docker image for Seurat.

Seurat is an R package designed for QC, analysis, and exploration of single-cell RNA-seq data. Seurat aims to enable users to identify and interpret sources of heterogeneity from single-cell transcriptomic measurements, and to integrate diverse types of single-cell data.

We'll use rocker/rstudio as a base so that we can have RStudio available to us automatically.

Create a file named "Dockerfile".

First, we must select the base image. We'll use rocker/rstudio version 4.3.2, which comes with R 4.3.2. We'll make sure to label the image with a simple description.

Then, we must outline the steps needed to install Seurat4. rocker/rstudio is built on top of Ubuntu, so any packages we need to install should use Ubuntu's apt-get utility. The following packages are needed for the installation of Seurat and other tools:
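In the Dockerfile, that step is the following RUN instruction (this fragment comes from the full Dockerfile listed at the end of this section):

```dockerfile
RUN apt-get update && apt-get install -y \
    libhdf5-dev build-essential libxml2-dev \
    libssl-dev libv8-dev libsodium-dev libglpk40 \
    libgdal-dev libboost-dev libomp-dev \
    libbamtools-dev libboost-iostreams-dev \
    libboost-log-dev libboost-system-dev \
    libboost-test-dev libcurl4-openssl-dev libz-dev \
    libarmadillo-dev libhdf5-cpp-103
```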

Now, we can run R to install Seurat and other useful R tools, including BiocManager, which we'll use in the next step to install useful bioinformatics R libraries.
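The corresponding Dockerfile step runs R non-interactively with R -e (again, taken from the full Dockerfile at the end of this section):

```dockerfile
RUN R -e "install.packages(c('Seurat', 'hdf5r', 'dplyr', 'tidyverse', 'cowplot', 'knitr', 'slingshot', 'msigdbr', 'remotes', 'metap', 'devtools', 'R.utils', 'ggalt', 'ggpubr', 'BiocManager'), repos='http://cran.rstudio.com/')"
```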

Installing R libraries using BiocManager:
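This step, from the full Dockerfile at the end of this section, looks like:

```dockerfile
RUN R -e "BiocManager::install(c('SingleR', 'slingshot', 'scRNAseq', 'celldex', 'fgsea', 'multtest', 'scuttle', 'BiocGenerics', 'DelayedArray', 'DelayedMatrixStats', 'limma', 'S4Vectors', 'SingleCellExperiment', 'SummarizedExperiment', 'batchelor', 'org.Mm.eg.db', 'AnnotationHub', 'scater', 'edgeR', 'apeglm', 'DESeq2', 'pcaMethods', 'clusterProfiler'))"
```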

Installing other tools from GitHub:
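This final installation step, also from the full Dockerfile at the end of this section, uses the remotes package to install libraries straight from GitHub:

```dockerfile
RUN R -e "remotes::install_github(c('satijalab/seurat-wrappers', 'kevinblighe/PCAtools', 'chris-mcginnis-ucsf/DoubletFinder', 'velocyto-team/velocyto.R'))"
```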

All together, the Dockerfile should look like this:

Now that we have the Dockerfile, we can invoke the Docker build command. We'll want to tag our Docker image with our name and a descriptive image name; I'll choose asoberan/abrfseurat for my build.

Of course, each of you could build this yourselves and keep a custom local copy of the image. However, one benefit of containerization is that it makes programs and environments portable. I've already created the image and uploaded it to Dockerhub, so instead of everyone building their own copy, you can just pull my existing image and use it immediately.

I've created images for both amd64 and arm64. If you're running a PC or an Intel-based Mac, you'll want to use the tag latest-x86_64. If you're running Apple Silicon or another ARM processor, you'll want to use the tag latest-arm64.

Once the Docker image is pulled and running, you can navigate to the RStudio instance and log in with the user rstudio and the password given in the container's startup output. All the libraries needed for Seurat should be available out of the box.

However, we've fallen into the same problem as previously: we are running this instance of RStudio locally on our computers. How can we take advantage of this image on the Luria cluster?

FROM rocker/rstudio:4.3.2
LABEL description="Docker image for Seurat4"

RUN apt-get update && apt-get install -y \
    libhdf5-dev build-essential libxml2-dev \
    libssl-dev libv8-dev libsodium-dev libglpk40 \
    libgdal-dev libboost-dev libomp-dev \
    libbamtools-dev libboost-iostreams-dev \
    libboost-log-dev libboost-system-dev \
    libboost-test-dev libcurl4-openssl-dev libz-dev \
    libarmadillo-dev libhdf5-cpp-103

RUN R -e "install.packages(c('Seurat', 'hdf5r', 'dplyr', 'tidyverse', 'cowplot', 'knitr', 'slingshot', 'msigdbr', 'remotes', 'metap', 'devtools', 'R.utils', 'ggalt', 'ggpubr', 'BiocManager'), repos='http://cran.rstudio.com/')"

RUN R -e "BiocManager::install(c('SingleR', 'slingshot', 'scRNAseq', 'celldex', 'fgsea', 'multtest', 'scuttle', 'BiocGenerics', 'DelayedArray', 'DelayedMatrixStats', 'limma', 'S4Vectors', 'SingleCellExperiment', 'SummarizedExperiment', 'batchelor', 'org.Mm.eg.db', 'AnnotationHub', 'scater', 'edgeR', 'apeglm', 'DESeq2', 'pcaMethods', 'clusterProfiler'))"

RUN R -e "remotes::install_github(c('satijalab/seurat-wrappers', 'kevinblighe/PCAtools', 'chris-mcginnis-ucsf/DoubletFinder', 'velocyto-team/velocyto.R'))"
cd /path/to/directory/where/Dockerfile/is/located

docker buildx build -t asoberan/abrfseurat .
docker run --rm -it -p 8787:8787 asoberan/abrfseurat:<tag>

# Then visit http://localhost:8787 in your browser

Running Images in Singularity

Singularity Commands

Singularity is packaged on Luria as an environment module, so you'll need to load the module before invoking any Singularity commands. We'll also run these commands in an interactive session on a compute node so we don't consume the head node's resources.

Now, we can either have Singularity manage the image itself, or create the SIF file in our current directory. We'll do both in this exercise.

Let's run the same basic 'Hello, World!' command we did in Docker, again using the Debian Docker image:

srun --pty bash

module load singularity/3.10.4

singularity exec docker://debian echo 'Hello, World!'
INFO:    Converting OCI blobs to SIF format
INFO:    Starting build...
Getting image source signatures
Copying blob 1468e7ff95fc done
Copying config d5269ef9ec done
Writing manifest to image destination
Storing signatures
2024/05/01 11:02:47  info unpack layer: sha256:1468e7ff95fcb865fbc4dee7094f8b99c4dcddd6eb2180cf044c7396baf6fc2f
INFO:    Creating SIF file...
Hello, World!

Instead of run, Singularity uses exec to execute programs inside a container. The image we provide is a Docker image, so we tell Singularity by prepending the image name with docker://. Singularity looks up the image on Dockerhub, downloads it, converts it to the SIF file format, places the SIF file under ~/.singularity, then executes the given command in the container. Subsequent uses of the image reuse this cached copy instead of downloading it again.

To run an interactive session in Singularity, you could do something similar to what we did in Docker, where we simply execute bash in the container.

singularity exec docker://debian bash
Singularity>

However, Singularity has a built-in command to do this that makes the syntax much nicer.

singularity shell docker://debian
Singularity>

Singularity automatically bind-mounts your user's home directory to the container, so you'll have access to your files like normal. However, your user's ~/data folder is a symbolic link to /net/<storage server>, which is not a directory inside of the Singularity container, so this symbolic link will be broken.

Like Docker, Singularity allows you to mount directories from your computer to inside the container. To get around the symbolic link issue when running the image from your home directory, you could simply mount the /net directory on Luria to the /net directory in your container. Since the /net directory contains your lab's storage server and you're keeping the same name on the container, the symbolic link should work as normal.

singularity shell --bind /net:/net docker://debian
Singularity> ls data
# You should see the files from your storage server

SIF Files

So far, we've been letting Singularity manage images itself. However, we can also instruct Singularity to download the Docker image, create a SIF file from it, and let us handle this SIF file. We do this by using the pull command:

singularity pull docker://debian

ls

# You should see a file named debian_latest.sif

Now, instead of telling Singularity to use the docker://debian image, we can simply point it at the debian_latest.sif file. This means a lab can keep a directory of common Singularity images that members run directly, instead of each member pulling their own copy. This way is also faster than the previous method.

singularity shell debian_latest.sif
Singularity>

singularity exec debian_latest.sif echo 'Hello, World!'
Hello, World!

Running RStudio with Seurat Tools

Let's run the image we created earlier. There's a pre-built version of this image available on Dockerhub at asoberan/abrfseurat.

Singularity has trouble interpreting symbolic links, and since the ~/data/ directory in a user's home directory is a symbolic link, we'll run into issues when running Singularity from there. To remedy this, run the following command once you're in the ~/data/ folder:

cd $(pwd -P)

This will change our directory to the full physical path of our current working directory. So we'll be in the same directory, but without following the symbolic link in our home directories.

Now, we can begin to run Singularity images. The Singularity program is packaged as an environment module on Luria, so you'll have to load it in first. To start an interactive session in a Singularity container, we use singularity shell <image>. In this case, the image we'll be using is the asoberan/abrfseurat:latest-x86_64 image on Dockerhub, so we'll run the following:

module load singularity/3.10.4

singularity shell docker://asoberan/abrfseurat:latest-x86_64

This will pull the Docker image, convert it to a SIF file, store it in ~/.singularity (which is a symbolic link to <your lab storage server user directory>/singularity), then start an interactive session on that image. Once it's done, you'll have a shell session inside the image and will be able to use the tools it contains. For example, you'll be able to use R:

[asoberan@luria test]$ singularity shell docker://asoberan/abrfseurat:latest-x86_64
Singularity> R

R version 4.2.2 (2022-10-31) -- "Innocent and Trusting"
Copyright (C) 2022 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

  Natural language support but running in an English locale

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

> 

However, we will not be able to run RStudio on the image as it stands. This is because RStudio needs to create particular settings and database files at locations in the filesystem which are read-only in the Singularity image. To fix this, we'll need to create these directories ourselves. Below is a script that does just that, while also running the RStudio server from the Singularity image:

#!/bin/bash

#SBATCH --job-name=Rstudio       # Assign a short name to your job
#SBATCH --output=slurm.%N.%j.out     # STDOUT output file

module load singularity/3.10.4

workdir=$(python -c 'import tempfile; print(tempfile.mkdtemp())')

mkdir -p -m 700 ${workdir}/run ${workdir}/tmp ${workdir}/var/lib/rstudio-server
cat > ${workdir}/database.conf <<END
provider=sqlite
directory=/var/lib/rstudio-server
END

cat > ${workdir}/rsession.sh <<END
#!/bin/sh
export OMP_NUM_THREADS=${SLURM_JOB_CPUS_PER_NODE}
exec /usr/lib/rstudio-server/bin/rsession "\${@}"
END

chmod +x ${workdir}/rsession.sh

export SINGULARITY_BIND="${workdir}/run:/run,${workdir}/tmp:/tmp,${workdir}/database.conf:/etc/rstudio/database.conf,${workdir}/rsession.sh:/etc/rstudio/rsession.sh,${workdir}/var/lib/rstudio-server:/var/lib/rstudio-server"
export SINGULARITYENV_RSTUDIO_SESSION_TIMEOUT=0
export SINGULARITYENV_USER=$(id -un)
export SINGULARITYENV_PASSWORD=$(echo $RANDOM | base64 | head -c 20)

readonly PORT=$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1]); s.close()')

cat 1>&2 <<END

1. SSH tunnel from your workstation using the following command:

   ssh -t -L 8787:localhost:${PORT} ${SINGULARITYENV_USER}@luria.mit.edu ssh -t ${HOSTNAME} -L ${PORT}:localhost:${PORT}

   and point your web browser to http://localhost:8787

2. log in to RStudio Server using the following credentials:

   user: ${SINGULARITYENV_USER}
   password: ${SINGULARITYENV_PASSWORD}

When done using RStudio Server, terminate the job by:

1. Exit the RStudio Session ("power" button in the top right corner of the RStudio window)
2. Issue the following command on the login node:

      scancel -f ${SLURM_JOB_ID}
END

singularity exec --cleanenv -H ~/data:/home/rstudio docker://asoberan/abrfseurat:latest-x86_64 /usr/lib/rstudio-server/bin/rserver \
            --server-user ${USER} --www-port ${PORT} \
            --auth-none=0 \
            --auth-pam-helper-path=pam-helper \
            --auth-stay-signed-in-days=30 \
            --auth-timeout-minutes=0 \
            --rsession-path=/etc/rstudio/rsession.sh 
printf 'rserver exited' 1>&2

Let's go through the script step-by-step to understand what it's doing.

workdir=$(python -c 'import tempfile; print(tempfile.mkdtemp())')

mkdir -p -m 700 ${workdir}/run ${workdir}/tmp ${workdir}/var/lib/rstudio-server

cat > ${workdir}/database.conf <<END
provider=sqlite
directory=/var/lib/rstudio-server
END

This part of the script uses Python to create a temporary directory, which is then populated with the directories to bind-mount into the Singularity container wherever a writable filesystem is necessary.

The latter portion of the script is making a file in the temporary directory, database.conf, with the contents you see. These settings are used by RStudio to configure the database.

cat > ${workdir}/rsession.sh <<END
#!/bin/sh
export OMP_NUM_THREADS=${SLURM_JOB_CPUS_PER_NODE}
exec /usr/lib/rstudio-server/bin/rsession "\${@}"
END

chmod +x ${workdir}/rsession.sh

Here, the script makes another script in the temporary directory, rsession.sh, with the contents you see. The script sets OMP_NUM_THREADS to prevent OpenBLAS (and any other OpenMP-enhanced libraries used by R) from spawning more threads than the number of processors allocated to the job. Then it makes this script executable.

export SINGULARITY_BIND="${workdir}/run:/run,${workdir}/tmp:/tmp,${workdir}/database.conf:/etc/rstudio/database.conf,${workdir}/rsession.sh:/etc/rstudio/rsession.sh,${workdir}/var/lib/rstudio-server:/var/lib/rstudio-server"
export SINGULARITYENV_RSTUDIO_SESSION_TIMEOUT=0
export SINGULARITYENV_USER=$(id -un)
export SINGULARITYENV_PASSWORD=$(echo $RANDOM | base64 | head -c 20)

readonly PORT=$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1]); s.close()')

This portion sets a couple of environment variables. The variables that begin with SINGULARITY_ are used when we invoke the Singularity program, while those that begin with SINGULARITYENV_ are made available inside the Singularity image.

SINGULARITY_BIND is outlining the bind-mounts that should be created when we run the Singularity image. The bind-mounts are the temporary directories we made.

SINGULARITYENV_RSTUDIO_SESSION_TIMEOUT sets the session timeout for RStudio; setting it to 0 disables idle-session timeouts.

SINGULARITYENV_USER is storing the user which will be used in RStudio. In this case it's ourselves.

SINGULARITYENV_PASSWORD stores the password which will be used later in RStudio. The password is generated from bash's built-in $RANDOM variable, encoded with base64.

PORT is finding an unused port number and storing it for later usage.

cat 1>&2 <<END

1. SSH tunnel from your workstation using the following command on your workstation:

   ssh -t -L 8787:localhost:${PORT} ${SINGULARITYENV_USER}@luria.mit.edu ssh -t ${HOSTNAME} -L ${PORT}:localhost:${PORT}

   and point your web browser to http://localhost:8787

2. log in to RStudio Server using the following credentials:

   user: ${SINGULARITYENV_USER}
   password: ${SINGULARITYENV_PASSWORD}

When done using RStudio Server, terminate the job by:

1. Exit the RStudio Session ("power" button in the top right corner of the RStudio window)
2. Issue the following command on the login node:

      scancel -f ${SLURM_JOB_ID}
END

This part of the script prints out information to the user so they can remember how to port-forward and what the login information for RStudio is.

singularity exec --cleanenv -H ~/data:/home/rstudio docker://asoberan/abrfseurat:latest-x86_64 /usr/lib/rstudio-server/bin/rserver \
            --server-user ${USER} --www-port ${PORT} \
            --auth-none=0 \
            --auth-pam-helper-path=pam-helper \
            --auth-stay-signed-in-days=30 \
            --auth-timeout-minutes=0 \
            --rsession-path=/etc/rstudio/rsession.sh 
printf 'rserver exited' 1>&2

This final piece is where Singularity actually runs the RStudio server program in asoberan/abrfseurat using all of the configuration created earlier in the script.

Save this script somewhere on the cluster and submit it to a compute node using Slurm. Then read the contents of the Slurm output file, and you'll find instructions for port forwarding from your workstation so you can access RStudio at http://localhost:8787.

sbatch seurat_script.sh

cat slurm-<id>.out

# Follow instructions

Docker

  • Founded in 2010

    • Most popular container engine

    • Available on Linux, Mac, Windows

    • Requires root privileges

Docker manages Dockerhub, a central repository where people can upload Docker images to share with the wider community.

We're going to grab an image from Dockerhub and use it for some examples. A very basic image available on Dockerhub is the Debian image, which provides a bare-bones Debian environment.

Differences from Docker

Before we use Singularity, we must understand that it works differently from Docker in very subtle ways:

  1. Images are files

  2. Images are read-only

  3. Singularity automatically bind-mounts your home directory

1. Images are files

Singularity can manage images itself so you never have to see where or how they're installed. However, images in Singularity can also be created as a SIF file that you manage just like any other file.

Since Singularity uses SIF files for images, Docker images will need to be converted to SIF. Thankfully, this feature is built into Singularity and will be invoked automatically when we run a Docker image in Singularity.

Managing SIF files ourselves can be useful for having one single image shared between an entire lab, instead of each lab member downloading their own image for the exact same tools.
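For instance, assuming a lab keeps its SIF files in a shared directory (the path here is hypothetical), any member could run a tool directly from the shared image:

```shell
# /net/ourlab/containers is a hypothetical shared lab directory
singularity exec /net/ourlab/containers/debian_latest.sif echo 'Hello, World!'
```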

2. Images are read-only

Just like Docker images, SIF files contain their own filesystem with the environment needed to run whatever program or programs are packaged in it. However, whereas Docker's filesystem lets you make temporary changes to this filesystem, the Singularity image's filesystem is read-only. Therefore, you won't be able to create or delete any files inside the image when you start an interactive session in it.

The read-only filesystem can make running images designed for Docker cumbersome, as we will see later.

3. Singularity automatically bind-mounts your home directory

When you enter a Singularity image, Singularity automatically bind mounts your home directory onto the home directory inside the image, so everything in your home directory remains writable. You can use the tools available in the Singularity image to work on files in your own home directory.

  • Sylabs Cloud - cloud.sylabs.io

  • Singularity HPC Library - singularityhub.github.io

Docker Installation

The easiest way to install Docker is to download Docker Desktop. Docker Desktop is a nice GUI front-end for Docker: you can see your containers, images, and active builds, and log in to your Docker Hub account. It also comes with the Docker command line tools you'll need for building Docker images.

Download Docker Desktop for your computer and follow the instructions to install it.

On Mac, you can access the Docker command line tools by calling docker from Terminal.

On Windows, the Docker command line tools will be invoked by calling docker.exe from the Command Prompt.

For brevity's sake, the rest of this material will refer to the Docker command line tools by calling docker. If you're following along on Windows, make sure to replace docker with docker.exe.
