
Dynamo / torch.compile

Torch-TensorRT provides a backend for the torch.compile API introduced in PyTorch 2.0. The following examples demonstrate several ways to use this backend to accelerate inference.
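As a minimal sketch of what this backend looks like in practice: importing torch_tensorrt registers a TensorRT backend for torch.compile, which can then be selected by name. The model, tensor sizes, and the fall-back logic below are illustrative, not taken from any of the linked tutorials; the sketch assumes torch is installed, and uses TensorRT only when torch_tensorrt and a CUDA device are both available, otherwise running eagerly so it still executes.

```python
import torch

# Toy model standing in for a real network (hypothetical example).
class MLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(64, 128),
            torch.nn.ReLU(),
            torch.nn.Linear(128, 10),
        )

    def forward(self, x):
        return self.net(x)

model = MLP().eval()
x = torch.randn(8, 64)

# Importing torch_tensorrt registers its torch.compile backend.
# Fall back to plain eager execution when TensorRT or a CUDA
# device is unavailable, so this sketch runs anywhere.
try:
    import torch_tensorrt  # noqa: F401
    assert torch.cuda.is_available()
    model, x = model.cuda(), x.cuda()
    backend = "tensorrt"
except (ImportError, AssertionError):
    backend = "eager"

compiled = torch.compile(model, backend=backend)
with torch.no_grad():
    out = compiled(x)
print(tuple(out.shape))  # (8, 10)
```

Because torch.compile traces lazily, the first call to `compiled(x)` triggers compilation (TensorRT engine building when the backend is available); subsequent calls with compatible shapes reuse the compiled artifact.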

Torch Compile Stable Diffusion

Torch Export with Cudagraphs

Compiling a Transformer using torch.compile and TensorRT

Refit TensorRT Graph Module with Torch-TensorRT

Torch Compile Advanced Usage

Compiling ResNet using the Torch-TensorRT torch.compile Backend

Deploy Quantized Models using Torch-TensorRT

Using Custom Kernels within TensorRT Engines with Torch-TensorRT
