tensorrt_llm folders

TensorRT-LLM/tensorrt_llm

The tensorrt_llm package is organized into the following folders and modules, each covering a distinct part of the framework.

auto_parallel

This folder likely contains code for automatically parallelizing models, enabling efficient execution across multiple GPUs or nodes.

commands

This folder contains build.py, the command-line entry point behind the trtllm-build tool.

hlapi

HLAPI stands for High-Level API. This folder contains higher-level abstractions over the framework, offering a simpler, more user-friendly interface.
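
As a rough illustration of what this looks like in practice, the sketch below builds and queries a model in a few lines. The class names, import path, and the checkpoint directory are assumptions; the HLAPI has changed between releases, so verify against your installed version.

```python
# Hedged sketch of the high-level API; exact names vary by release.
from tensorrt_llm.hlapi.llm import LLM, ModelConfig

# Hypothetical local checkpoint directory.
config = ModelConfig(model_dir="./llama-2-7b-hf")
llm = LLM(config)  # builds or loads the engine behind the scenes

for output in llm.generate(["What does TensorRT-LLM do?"]):
    print(output.text)
```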

layers

This folder contains implementations of the neural network layers used to define models, such as attention, feed-forward, and normalization layers.
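
A model definition composes these layers much like PyTorch modules. A minimal sketch, assuming the Embedding and RmsNorm layers and the constructor arguments found in recent releases:

```python
from tensorrt_llm.module import Module
from tensorrt_llm.layers import Embedding, RmsNorm

class TinyBlock(Module):
    def __init__(self, vocab_size=32000, hidden_size=4096):
        super().__init__()
        self.embedding = Embedding(vocab_size, hidden_size)
        self.norm = RmsNorm(normalized_shape=hidden_size)

    def forward(self, input_ids):
        # Calls add TensorRT layers to the network being defined;
        # nothing executes eagerly as it would in PyTorch.
        return self.norm(self.embedding(input_ids))
```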

models

This folder contains the model architectures supported by the framework, such as LLaMA, GPT, BLOOM, and other Transformer-based language models.
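
As an illustration, recent releases let you instantiate a supported architecture directly from a Hugging Face checkpoint. The method name and the checkpoint path below are assumptions to check against your version:

```python
from tensorrt_llm.models import LLaMAForCausalLM

# Hypothetical local Hugging Face checkpoint directory.
model = LLaMAForCausalLM.from_hugging_face("./llama-2-7b-hf")
```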

plugin

This folder contains custom TensorRT plugins developed for TensorRT-LLM. Plugins extend TensorRT with custom layers or operations that are not natively supported.
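
Plugins are switched on through the network's plugin configuration. A sketch in the style of older releases (newer ones assign attributes instead, e.g. network.plugin_config.gpt_attention_plugin = "float16"):

```python
from tensorrt_llm import Builder

builder = Builder()
network = builder.create_network()

# Enable the fused GPT attention plugin for FP16 inference.
network.plugin_config.set_gpt_attention_plugin(dtype="float16")
```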

quantization

This folder contains code for the quantization techniques used in the framework, such as post-training quantization and quantization-aware training, which reduce model size and improve inference performance.
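
One concrete piece of this folder is QuantMode, a flag set describing which quantization features are active. A small sketch; the classmethod below exists in recent releases, but check your version:

```python
from tensorrt_llm.quantization import QuantMode

# INT8 weight-only quantization, a common post-training scheme.
mode = QuantMode.use_weight_only(use_int4_weights=False)
print(mode.is_weight_only())  # True
```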

runtime

This folder contains the runtime components of the framework, responsible for loading built engines, executing them, and managing the inference process.
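
For example, recent releases provide a ModelRunner that loads a built engine directory and runs generation. The engine path here is hypothetical:

```python
import torch
from tensorrt_llm.runtime import ModelRunner

runner = ModelRunner.from_dir(engine_dir="./engine_output")  # hypothetical path
batch_input_ids = [torch.tensor([1, 2, 3], dtype=torch.int32)]
outputs = runner.generate(batch_input_ids, max_new_tokens=32)
print(outputs)
```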

tools

This folder likely contains various utility tools and scripts used in the development, testing, and deployment of the TensorRT-LLM framework.

_common.py, _ipc_utils.py, _utils.py

These files contain common utility functions, inter-process communication helpers, and other internal helpers used throughout the framework.

builder.py

This file contains the Builder class and the core functionality for compiling model definitions into TensorRT engines, handling the conversion and optimization process.
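
A minimal sketch of that flow, assuming the Builder API of recent releases (the config arguments shown are illustrative):

```python
from tensorrt_llm import Builder
from tensorrt_llm.network import net_guard

builder = Builder()
network = builder.create_network()

with net_guard(network):
    # Model definition code runs here, adding layers to `network`
    # (e.g. calling a tensorrt_llm model's forward pass).
    pass

config = builder.create_builder_config(name="demo", precision="float16")
engine = builder.build_engine(network, config)  # serialized TensorRT engine
```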

executor.py

This file contains the execution logic for running inference on the TensorRT engines, managing memory, and handling input/output data.

functional.py

This file contains functional-style operations used to define model graphs, similar to the functional API in frameworks like PyTorch; each call adds layers to the network being built rather than executing eagerly.
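
A small sketch of the style, assuming the Tensor, matmul, and silu symbols from functional.py (constructor arguments may differ by version):

```python
import tensorrt as trt
from tensorrt_llm import Builder
from tensorrt_llm.network import net_guard
from tensorrt_llm.functional import Tensor, matmul, silu

builder = Builder()
network = builder.create_network()

with net_guard(network):
    # Declaring Tensors registers network inputs; ops record layers.
    x = Tensor(name="x", dtype=trt.float16, shape=[1, 16])
    w = Tensor(name="w", dtype=trt.float16, shape=[16, 16])
    y = silu(matmul(x, w))  # adds MatMul and SiLU layers to the graph
```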

graph_rewriting.py

This file contains the graph rewriting techniques and optimizations applied to TensorRT-LLM networks after they are defined, to improve performance and efficiency.

logger.py

This file contains the shared logging utilities and configuration used for debugging, monitoring, and tracking execution of the framework.
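
Usage is straightforward; a sketch assuming the shared logger object and its set_level method:

```python
from tensorrt_llm.logger import logger

logger.set_level("verbose")  # e.g. "error", "warning", "info", "verbose"
logger.info("starting engine build")
```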

lora_manager.py

This file contains code for managing Low-Rank Adaptation (LoRA) weights, used to apply fine-tuned adapters to pre-trained models.

mapping.py

This file defines the Mapping class, which describes how a model is distributed across GPUs: the world size and the tensor-parallel and pipeline-parallel ranks of each process.
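
A short sketch of how a rank describes its place in a multi-GPU job:

```python
from tensorrt_llm import Mapping

# Rank 0 of a 4-GPU job: 2-way tensor parallel x 2-way pipeline parallel.
mapping = Mapping(world_size=4, rank=0, tp_size=2, pp_size=2)
print(mapping.tp_rank, mapping.pp_rank)
```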

module.py, parameter.py

These files contain the Module base class and the Parameter abstraction used to define networks and manage model weights in the framework.
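
A minimal sketch of the pattern, assuming Parameter accepts a shape and a dtype string as in recent releases:

```python
from tensorrt_llm.module import Module
from tensorrt_llm.parameter import Parameter

class Scale(Module):
    def __init__(self, hidden_size=4096):
        super().__init__()
        # Declares a named weight; real values are bound separately,
        # when weights are loaded into the built engine.
        self.weight = Parameter(shape=(hidden_size,), dtype="float16")

    def forward(self, x):
        # .value yields a constant tensor usable in the graph.
        return x * self.weight.value
```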

network.py

This file contains the Network class, a wrapper around TensorRT's INetworkDefinition that tracks the graph as a model is defined, along with the net_guard context manager used during building.

profiler.py

This file likely contains profiling utilities and tools for measuring and analyzing the performance of the TensorRT-LLM models and framework.

top_model_mixin.py

This file may contain a mixin class or utility functions for working with top-level models in the TensorRT-LLM framework.

version.py

This file contains the package version string (exposed as tensorrt_llm.__version__) and related metadata.
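
The simplest use is checking the installed version:

```python
import tensorrt_llm

print(tensorrt_llm.__version__)
```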

Together, these folders and files provide the functionality of the TensorRT-LLM framework, enabling large language models to be compiled and executed efficiently with TensorRT.
