This is a multi-stage Dockerfile that builds and packages the TensorRT-LLM library.
Let's break it down section by section:
Base image and environment setup
ARG BASE_IMAGE=nvcr.io/nvidia/pytorch
ARG BASE_TAG=24.02-py3
ARG DEVEL_IMAGE=devel
FROM ${BASE_IMAGE}:${BASE_TAG} as base
ENV BASH_ENV=${BASH_ENV:-/etc/bash.bashrc}
ENV ENV=${ENV:-/etc/shinit_v2}
SHELL ["/bin/bash", "-c"]
This section declares build arguments for the base image (the NVIDIA PyTorch container from NGC) and its tag, then starts the base stage from that image.
It also sets the BASH_ENV and ENV variables so that shells spawned in the container source the expected startup files, and the SHELL instruction makes Bash the shell used by subsequent RUN instructions.
Development stage
FROM base as devel
COPY docker/common/install_base.sh install_base.sh
RUN bash ./install_base.sh && rm install_base.sh
COPY docker/common/install_cmake.sh install_cmake.sh
RUN bash ./install_cmake.sh && rm install_cmake.sh
COPY docker/common/install_ccache.sh install_ccache.sh
RUN bash ./install_ccache.sh && rm install_ccache.sh
This stage builds upon the base image and installs necessary dependencies, CMake, and ccache.
The installation scripts are copied into the image and executed using RUN instructions. After each script is run, it is removed to keep the image size small.
PyTorch installation
This section installs PyTorch using the provided installation script. The TORCH_INSTALL_TYPE build argument selects how PyTorch is installed; the default, "skip", leaves the version bundled with the base image in place.
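Based on the description above, this part of the Dockerfile plausibly looks like the following; the script name install_pytorch.sh follows the pattern of the earlier install scripts but is an assumption, since the excerpt does not show it:

```dockerfile
# TORCH_INSTALL_TYPE selects how PyTorch is installed; "skip" keeps the
# version already present in the NGC base image (script name assumed).
COPY docker/common/install_pytorch.sh install_pytorch.sh
ARG TORCH_INSTALL_TYPE="skip"
RUN bash ./install_pytorch.sh $TORCH_INSTALL_TYPE && rm install_pytorch.sh
```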
Release stage
The release stage starts from the development image and sets the working directory to /app/tensorrt_llm.
It copies the wheel package built in the previous stage and installs it using pip.
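A minimal sketch of the start of the release stage, assuming the wheel-building stage is named wheel and writes its output to a build/ directory under /src/tensorrt_llm (both are assumptions not confirmed by the excerpt):

```dockerfile
FROM ${DEVEL_IMAGE} as release
WORKDIR /app/tensorrt_llm
# Copy the wheel built in the (assumed) "wheel" stage and install it,
# deleting the wheel afterwards to keep the layer small.
COPY --from=wheel /src/tensorrt_llm/build/tensorrt_llm*.whl .
RUN pip install tensorrt_llm*.whl && rm tensorrt_llm*.whl
```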
The README, documentation, and include files are copied into the image.
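These copies might look like the following; the exact source paths (for example cpp/include) are assumptions:

```dockerfile
COPY README.md ./
COPY docs docs
COPY cpp/include include
```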
A RUN instruction creates symbolic links for the TensorRT-LLM shared libraries and runs ldconfig so the dynamic linker can find them.
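One common way to express this step, assuming the shared libraries are shipped inside the installed Python package under a libs directory (an assumption), is:

```dockerfile
# Symlink the shared libraries bundled in the installed wheel and
# register their directory with the dynamic linker (paths are assumptions).
RUN ln -sv $(python3 -c 'import tensorrt_llm, os; print(os.path.join(os.path.dirname(tensorrt_llm.__file__), "libs"))') lib && \
    echo "/app/tensorrt_llm/lib" > /etc/ld.so.conf.d/tensorrt_llm.conf && \
    ldconfig
```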
The benchmark files and examples are copied from the wheel stage into the release image. Some unnecessary files are removed, and the examples directory is given write permissions.
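A sketch of those copies, again assuming the build stage is named wheel and the usual source paths (assumptions); the removal of unnecessary files is omitted here because the excerpt does not say which files are deleted:

```dockerfile
# Benchmarks come from the build stage; examples from the build context.
COPY --from=wheel /src/tensorrt_llm/benchmarks benchmarks
COPY examples examples
# Make the examples directory writable for users of the image.
RUN chmod -R a+w examples
```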
Finally, the GIT_COMMIT and TRT_LLM_VER build arguments are used to set environment variables in the image.
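This final step might look like the following; the GIT_COMMIT and TRT_LLM_VER build arguments come from the text above, while the ENV variable names are assumptions:

```dockerfile
# Record build provenance in the image (ENV names are assumptions).
ARG GIT_COMMIT
ARG TRT_LLM_VER
ENV TRT_LLM_GIT_COMMIT=${GIT_COMMIT} \
    TRT_LLM_VERSION=${TRT_LLM_VER}
```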
This multi-stage Dockerfile allows for efficient building and packaging of the TensorRT-LLM library, keeping build-time dependencies out of the final release image.