# Using TensorRT

This document outlines the general process of AI inference acceleration with TensorRT on an OVP8xx device.

## Building a TensorRT container

There are two options:

- Use a base NVIDIA container and import the runtime libraries directly from the firmware. This is the preferred method and is described below (see the Dockerfile sketch after the compatibility matrix).
- Use a complete NVIDIA container that includes the TensorRT libraries directly. This is not recommended, since container sizes increase dramatically.

### NVIDIA base containers

NVIDIA provides L4T-based containers with TensorFlow that can be downloaded directly from [their container catalog](https://ngc.nvidia.com/catalog/containers/nvidia:l4t-tensorflow). TensorFlow should be used with the corresponding recommended version of JetPack. The recommendations can be found on the [TensorFlow for Jetson website](https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform-release-notes/tf-jetson-rel.html).

#### Compatibility Matrix
| VPU Hardware | VPU Firmware | L4T Version | JetPack Version | TensorFlow | PyTorch | Machine learning |
| --- | --- | --- | --- | --- | --- | --- |
| OVP81x | 1.10.13 | R32.7.5 | 4.6.5 | nvcr.io/nvidia/l4t-tensorflow:r32.7.1-tf2.7-py3<br>nvcr.io/nvidia/l4t-tensorflow:r32.7.1-tf1.15-py3 | nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.10-py3<br>nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.9-py3 | nvcr.io/nvidia/l4t-ml:r32.7.1-py3 |
| OVP81x | 1.4.30 | R32.7.3 | 4.6.3 | nvcr.io/nvidia/l4t-tensorflow:r32.7.1-tf2.7-py3<br>nvcr.io/nvidia/l4t-tensorflow:r32.7.1-tf1.15-py3 | nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.10-py3<br>nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.9-py3 | nvcr.io/nvidia/l4t-ml:r32.7.1-py3 |
| OVP80x | 1.4.32 | R32.4.3 | 4.4.0 | nvcr.io/nvidia/l4t-tensorflow:r32.4.3-tf2.2-py3<br>nvcr.io/nvidia/l4t-tensorflow:r32.4.3-tf1.15-py3 | nvcr.io/nvidia/l4t-pytorch:r32.4.3-pth1.6-py3 | nvcr.io/nvidia/l4t-ml:r32.4.3-py3 |
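
As a minimal sketch of the preferred approach, the Dockerfile below builds on one of the base images from the matrix above and relies on the firmware to supply the TensorRT and CUDA runtime libraries when the container starts. The image tag matches the OVP81x / firmware 1.10.13 row; `detect.py` is a hypothetical placeholder for your own inference script.

```dockerfile
# A minimal sketch, not a complete application.
# Base image taken from the compatibility matrix (OVP81x, firmware 1.10.13 row).
# It ships TensorFlow but not TensorRT: the TensorRT and CUDA runtime
# libraries are expected to come from the device firmware, mounted into
# the container by the NVIDIA container runtime.
FROM nvcr.io/nvidia/l4t-tensorflow:r32.7.1-tf2.7-py3

WORKDIR /app

# detect.py is a hypothetical placeholder for your own inference script.
COPY detect.py /app/

ENTRYPOINT ["python3", "/app/detect.py"]
```

For the firmware's runtime libraries to be visible inside the container, it must be started with the NVIDIA container runtime, e.g. `docker run --rm --runtime nvidia <image>`.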