TensorRT-LLM Dockerfile
This is a multi-stage Dockerfile that builds and packages the TensorRT-LLM library.
Let's break it down section by section:
Base image and environment setup
This section selects the NVIDIA PyTorch container as the base image and parameterizes its tag. It also sets environment variables for Bash configuration and shell initialization, and the SHELL instruction makes Bash the default shell for all subsequent instructions.
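A minimal sketch of this setup is shown below; the exact default tag and environment variable names are assumptions, since the article does not list them:

```dockerfile
# Illustrative defaults; the real values are passed as build arguments.
ARG BASE_IMAGE=nvcr.io/nvidia/pytorch
ARG BASE_TAG=23.08-py3

FROM ${BASE_IMAGE}:${BASE_TAG} as base

# Have every RUN instruction source the Bash configuration
# and shell initialization files (variable names are assumptions).
ENV BASH_ENV=/etc/bash.bashrc
ENV ENV=/etc/shinit_v2
SHELL ["/bin/bash", "-c"]
```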
Development stage
This stage builds upon the base image and installs necessary dependencies, CMake, and ccache.
The installation scripts are copied into the image and executed using RUN
instructions. After each script is run, it is removed to keep the image size small.
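The copy-run-remove pattern looks roughly like this; the script names and paths are illustrative, not taken from the actual repository:

```dockerfile
FROM base as devel

# Hypothetical script names: each script is copied in, executed,
# and deleted in the same layer so the image stays small.
COPY docker/common/install_base.sh install_base.sh
RUN bash ./install_base.sh && rm install_base.sh

COPY docker/common/install_cmake.sh install_cmake.sh
RUN bash ./install_cmake.sh && rm install_cmake.sh

COPY docker/common/install_ccache.sh install_ccache.sh
RUN bash ./install_ccache.sh && rm install_ccache.sh
```

Deleting each script in the same RUN instruction that executes it matters because a file removed in a later layer still occupies space in the earlier one.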
TensorRT installation
This section installs TensorRT using the provided installation script.
The script takes various arguments for the versions of TensorRT, CUDA, cuDNN, NCCL, and cuBLAS.
These arguments are passed as build arguments to the Dockerfile.
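Sketched below is how those version arguments might be declared and forwarded; the argument and script names are assumptions, and the values would be supplied at build time (e.g. `docker build --build-arg TRT_VER=...`):

```dockerfile
# Hypothetical build arguments for the component versions.
ARG TRT_VER
ARG CUDA_VER
ARG CUDNN_VER
ARG NCCL_VER
ARG CUBLAS_VER

COPY docker/common/install_tensorrt.sh install_tensorrt.sh
RUN bash ./install_tensorrt.sh \
      --TRT_VER=${TRT_VER} \
      --CUDA_VER=${CUDA_VER} \
      --CUDNN_VER=${CUDNN_VER} \
      --NCCL_VER=${NCCL_VER} \
      --CUBLAS_VER=${CUBLAS_VER} && \
    rm install_tensorrt.sh
```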
Additional dependencies
This section installs additional dependencies, namely Polygraphy and mpi4py, using their respective installation scripts.
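These installs likely follow the same copy-run-remove pattern; the script names here are assumptions:

```dockerfile
# Hypothetical script names for the extra dependencies.
COPY docker/common/install_polygraphy.sh install_polygraphy.sh
RUN bash ./install_polygraphy.sh && rm install_polygraphy.sh

COPY docker/common/install_mpi4py.sh install_mpi4py.sh
RUN bash ./install_mpi4py.sh && rm install_mpi4py.sh
```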
PyTorch installation
This section installs PyTorch using the provided installation script. The TORCH_INSTALL_TYPE argument specifies the type of installation (the default is "skip").
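A sketch of this step, with an assumed script name; with the default value "skip", the script would leave the PyTorch build that ships with the NVIDIA base image untouched:

```dockerfile
ARG TORCH_INSTALL_TYPE="skip"

# Hypothetical script name; the install type is forwarded as an argument.
COPY docker/common/install_pytorch.sh install_pytorch.sh
RUN bash ./install_pytorch.sh ${TORCH_INSTALL_TYPE} && rm install_pytorch.sh
```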
Wheel building stage
This stage builds the TensorRT-LLM wheel package.
It starts from the development image and sets the working directory to /src/tensorrt_llm.
The necessary source files and directories are copied into the image.
The build_wheel.py script is run with the specified build arguments to create the wheel package.
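The wheel stage can be sketched as follows; the copied paths and the default build flags are assumptions made for illustration:

```dockerfile
FROM devel as wheel
WORKDIR /src/tensorrt_llm

# Illustrative source directories; the real Dockerfile lists its own.
COPY benchmarks benchmarks
COPY cpp cpp
COPY scripts scripts
COPY tensorrt_llm tensorrt_llm
COPY setup.py requirements.txt ./

# Hypothetical default arguments for the wheel build.
ARG BUILD_WHEEL_ARGS="--clean"
RUN python3 scripts/build_wheel.py ${BUILD_WHEEL_ARGS}
```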
Release stage
The release stage starts from the development image and sets the working directory to /app/tensorrt_llm.
It copies the built wheel package from the previous stage and installs it using pip.
The README, documentation, and include files are copied into the image.
The RUN instruction creates symbolic links for the TensorRT-LLM libraries and updates the library configuration.
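The core of the release stage might look like this; the wheel path inside the previous stage, the link target, and the ld.so.conf file name are all assumptions:

```dockerfile
FROM devel as release
WORKDIR /app/tensorrt_llm

# Install the wheel produced by the wheel stage (path is an assumption).
COPY --from=wheel /src/tensorrt_llm/build/tensorrt_llm*.whl .
RUN pip install tensorrt_llm*.whl && rm tensorrt_llm*.whl

# Link the packaged shared libraries and register them with the
# dynamic linker so dependent binaries can find them.
RUN ln -sv "$(python3 -c 'import pathlib, tensorrt_llm; \
      print(pathlib.Path(tensorrt_llm.__file__).parent / "libs")')" lib && \
    echo "/app/tensorrt_llm/lib" > /etc/ld.so.conf.d/tensorrt_llm.conf && \
    ldconfig
```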
The benchmark files and examples are copied from the wheel stage into the release image. Some unnecessary files are removed, and the examples directory is given write permissions.
Finally, the GIT_COMMIT and TRT_LLM_VER build arguments are used to set environment variables in the image.
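This last step can be sketched as below; the names of the resulting environment variables are assumptions, only the build-argument names come from the Dockerfile:

```dockerfile
# Record build provenance in the image environment.
ARG GIT_COMMIT
ARG TRT_LLM_VER
ENV TRT_LLM_GIT_COMMIT=${GIT_COMMIT} \
    TRT_LLM_VERSION=${TRT_LLM_VER}
```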
This multi-stage Dockerfile allows for efficient building and packaging of the TensorRT-LLM library, separating the development dependencies from the final release image.