run.py for inference
The `run.py` file is a script for running inference with a pre-built TensorRT engine. It takes various command-line arguments to configure the inference process and generates output based on the provided input. Let's analyze the key arguments passed to the script and their significance; an example invocation follows the list.
Key arguments:
- `--max_output_len`: The maximum length of the generated output sequence.
- `--max_attention_window_size`: The attention window size that controls the sliding-window attention or cyclic KV cache behavior.
- `--sink_token_length`: The length of the sink token.
- `--log_level`: The logging level for the script.
- `--engine_dir`: The directory containing the pre-built TensorRT engine.
- `--use_py_session`: Whether to use the Python runtime session instead of the C++ session.
- `--input_text`: The input text to be used for generation.
- `--input_file`: An alternative to `--input_text`, allowing input to be read from a CSV or Numpy file.
- `--max_input_length`: The maximum length of the input sequence.
- `--output_csv`, `--output_npy`: Files to store the tokenized output in CSV or Numpy format.
- `--output_logits_npy`: File to store the generation logits in Numpy format (only when `num_beams == 1`).
- `--output_log_probs_npy`, `--output_cum_log_probs_npy`: Files to store the log probabilities and cumulative log probabilities in Numpy format.
- `--tokenizer_dir`, `--tokenizer_type`, `--vocab_file`: Configuration for the tokenizer.
- `--num_beams`: The number of beams to use for beam search (use `num_beams > 1` for beam search).
- `--temperature`, `--top_k`, `--top_p`, `--length_penalty`, `--repetition_penalty`, `--presence_penalty`, `--frequency_penalty`: Parameters for controlling the generation process.
- `--early_stopping`: Whether to use early stopping during beam search.
- `--debug_mode`: Whether to turn on debug mode.
- `--streaming`, `--streaming_interval`: Configuration for streaming mode.
- `--prompt_table_path`, `--prompt_tasks`: Configuration for prompt tuning.
- `--lora_dir`, `--lora_task_uids`, `--lora_ckpt_source`: Configuration for LoRA (Low-Rank Adaptation).
- `--num_prepend_vtokens`: The number of default virtual tokens to prepend to each sentence.
- `--run_profiling`: Whether to run profiling iterations to measure inference latencies.
- `--medusa_choices`: Configuration for Medusa decoding.
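Putting the most common of these flags together, here is a minimal sketch of launching `run.py` from Python. Only flags documented above are used; the engine and tokenizer paths are placeholders for your own setup.

```python
# Minimal sketch: invoking run.py with flags documented above.
# The paths below are placeholders; point them at your own
# engine and tokenizer directories.
import subprocess

cmd = [
    "python", "run.py",
    "--engine_dir", "./llama_engine",        # holds rank0.engine and config.json
    "--tokenizer_dir", "./llama_tokenizer",  # placeholder tokenizer directory
    "--input_text", "Explain TensorRT engines in one sentence.",
    "--max_output_len", "64",
    "--num_beams", "1",       # num_beams > 1 switches to beam search
    "--temperature", "0.8",   # sampling parameters documented above
    "--top_k", "50",
    "--top_p", "0.95",
]
subprocess.run(cmd, check=True)
```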
Relationship to the LLaMA engine:
- The `run.py` script is designed to perform inference using a pre-built TensorRT engine, such as the `rank0.engine` file generated from the LLaMA model.
- The `--engine_dir` argument specifies the directory containing the TensorRT engine file, which is loaded by the script for inference.
- The `config.json` file contains the configuration details of the LLaMA model used to build the TensorRT engine, including the model architecture, data types, vocabulary size, and other hyperparameters.
- The `run.py` script uses the configuration from `config.json` to set up the appropriate tokenizer, input parsing, and output processing based on the LLaMA model's specifications (a sketch of inspecting this file follows the list).
- The script also supports additional features like LoRA, prompt tuning, and Medusa decoding, which can be configured through the respective command-line arguments.
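To make the `config.json` relationship concrete, here is a minimal sketch of inspecting the file before running inference. The key names are illustrative assumptions: the exact layout varies across TensorRT-LLM versions (older engines nest model details under `builder_config`, newer ones under `pretrained_config`), so treat this as a starting point rather than a fixed schema.

```python
# Minimal sketch: inspecting the engine's config.json.
# Key names are assumptions; the schema differs across
# TensorRT-LLM versions.
import json
from pathlib import Path

engine_dir = Path("./llama_engine")  # placeholder path
with open(engine_dir / "config.json") as f:
    config = json.load(f)

# Fall back between the two layouts seen in practice (assumption).
model_cfg = config.get("pretrained_config") or config.get("builder_config", {})
print("vocab_size:", model_cfg.get("vocab_size"))
print("dtype:", model_cfg.get("dtype", model_cfg.get("precision")))
```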
In summary, the `run.py` script is a generic inference script that can be used with any pre-built TensorRT engine, including the LLaMA engine. It takes the necessary configuration from the `config.json` file and the `rank0.engine` file to perform inference and generate output based on the provided input and specified arguments.
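As a closing illustration, here is a minimal sketch of consuming output saved with `--output_npy`. The filename and tokenizer path are placeholders, and the array layout (commonly `[batch_size, num_beams, seq_len]`) is an assumption; check your run.py version if the decoded text looks wrong.

```python
# Minimal sketch: decoding tokenized output saved via --output_npy.
# The [batch, beams, seq_len] layout is an assumption.
import numpy as np
from transformers import AutoTokenizer

output_ids = np.load("outputs.npy")                             # placeholder filename
tokenizer = AutoTokenizer.from_pretrained("./llama_tokenizer")  # placeholder path

batch_size, num_beams, _ = output_ids.shape
for b in range(batch_size):
    for beam in range(num_beams):
        text = tokenizer.decode(output_ids[b, beam], skip_special_tokens=True)
        print(f"sequence {b}, beam {beam}: {text}")
```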