Optimizing Edge Analytics to Reduce Latency Issues in Inferencing

Running DNNs on resource-constrained edge devices is challenging because it incurs high performance and energy overhead. Offloading DNNs to the cloud for execution, however, can result in unpredictable performance due to uncontrolled wide-area network latency. Discover how HCL used an object detection model based on SSD Inception V2 to achieve a 3X improvement in inference time using the Intel® Distribution of OpenVINO™ toolkit.
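As a rough illustration of the general workflow (not HCL's exact pipeline), the sketch below loads an OpenVINO IR model and runs synchronous inference on the CPU using the Inference Engine Python API. The model file names, input shape, and device choice are assumptions for illustration only.

```python
import numpy as np
from openvino.inference_engine import IECore

# Hypothetical IR files produced by the Model Optimizer from a
# TensorFlow SSD Inception V2 frozen graph (file names are assumptions).
MODEL_XML = "ssd_inception_v2.xml"
MODEL_BIN = "ssd_inception_v2.bin"

ie = IECore()
net = ie.read_network(model=MODEL_XML, weights=MODEL_BIN)
exec_net = ie.load_network(network=net, device_name="CPU")

input_blob = next(iter(net.input_info))
output_blob = next(iter(net.outputs))

# SSD Inception V2 typically expects a 1x3x300x300 NCHW input;
# a random tensor stands in for a preprocessed camera frame here.
frame = np.random.rand(1, 3, 300, 300).astype(np.float32)

result = exec_net.infer(inputs={input_blob: frame})
# SSD-style detection output: each row is
# [image_id, label, confidence, x_min, y_min, x_max, y_max].
detections = result[output_blob]
print(detections.shape)
```

Running the optimized IR on-device avoids the wide-area round trip to the cloud entirely, which is the latency source the case study targets.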



