Running DNNs on resource-constrained edge devices is challenging because of the high performance and energy overhead they incur. Offloading DNNs to the cloud, however, can result in unpredictable performance due to long, uncontrolled wide-area network latency. Discover how HCL used an object detection model based on SSD Inception V2 to achieve a 3X improvement in inference time with the Intel® Distribution of OpenVINO™ toolkit.
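
For context, the sketch below shows what running an SSD-style detection model with the OpenVINO Runtime Python API can look like. This is an illustrative example, not HCL's actual pipeline: the IR file name, input shape, and confidence threshold are assumptions, and the .xml/.bin files would first be produced by converting the trained SSD Inception V2 model with OpenVINO's model conversion tooling.

```python
# Minimal OpenVINO inference sketch (assumed file names and shapes).
import numpy as np
from openvino.runtime import Core

core = Core()
# Load the converted Intermediate Representation (hypothetical file name).
model = core.read_model("ssd_inception_v2.xml")
compiled = core.compile_model(model, device_name="CPU")

# SSD Inception V2 commonly takes a 300x300 image; random data here
# simply exercises the pipeline (assumed NCHW layout after conversion).
input_tensor = np.random.rand(1, 3, 300, 300).astype(np.float32)
results = compiled([input_tensor])

# SSD-style outputs are typically rows of
# [image_id, label, confidence, x_min, y_min, x_max, y_max].
detections = results[compiled.output(0)]
for det in detections.reshape(-1, 7):
    if det[2] > 0.5:  # confidence threshold (assumed)
        print(f"label={int(det[1])} conf={det[2]:.2f}")
```

In practice, much of the reported speedup comes from running the converted, optimized IR on Intel hardware rather than the original framework graph; the same runtime code can target other devices (e.g., GPU) by changing `device_name`.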