The following describes how our AI engineering department leveraged the NVIDIA Jetson ecosystem to rapidly deploy accurate, cost-effective, commercial-ready, and industrial-grade UAV solutions.
The core challenge is that neural networks demand massive computational and memory resources, which is why they are typically run in a data center rather than at the edge.
TensorRT is a C++ library built on the CUDA parallel programming model that optimizes trained neural networks for deployment in production-ready embedded systems. It achieves this by compressing the neural network into a condensed runtime engine and by reducing the precision of floating-point and integer operations to suit the target GPU platform. This translates directly into lower latency, better power efficiency, and smaller memory consumption in deep learning-enabled systems, all of which are essential in video recognition applications.
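The effect of reduced-precision inference can be illustrated with a small sketch. This is not TensorRT's actual implementation (TensorRT performs calibration internally during engine building); it is a simplified stand-in showing how symmetric per-tensor INT8 quantization shrinks FP32 weights by 4x while keeping the reconstruction error bounded by the quantization step:

```python
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor INT8 quantization of FP32 weights (illustrative)."""
    scale = np.abs(weights).max() / 127.0  # map the largest magnitude to +/-127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an FP32 approximation of the original weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)

q, scale = quantize_int8(w)
w_approx = dequantize(q, scale)

print(w.nbytes, "->", q.nbytes)  # 4096 -> 1024 bytes: a 4x memory reduction
print(np.abs(w - w_approx).max() <= scale)  # rounding error stays within one step
```

On memory-constrained Jetson modules, this kind of precision reduction is what makes real-time video inference feasible; TensorRT additionally fuses layers and selects GPU-specific kernels, which the sketch above does not attempt to model.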
A variety of computer vision chipsets are available on the market, but what ultimately sets Jetson products apart is their vibrant software ecosystem. For AI, this means access to TensorRT and the broader suite of NVIDIA tooling for accelerating inference workloads on NVIDIA GPUs.
TensorRT is part of the NVIDIA JetPack SDK, a software development suite that includes an Ubuntu Linux OS image, Linux kernel, bootloaders, libraries, APIs, an IDE, and other tools. NVIDIA updates the SDK continuously to help accelerate the AI development lifecycle.