Exploring Azure Spatial Analysis containers

Azure’s Cognitive Services are a quick and easy way to add machine learning to many different types of applications. Available as REST APIs, they can be quickly hooked into your code using simple asynchronous calls from their own dedicated SDKs and libraries. It doesn’t matter what language or platform you’re building on, as long as your code can make HTTP calls and parse JSON documents.
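To give a flavor of how little is involved, here’s a minimal sketch of calling one of these REST APIs directly, without an SDK. The endpoint, key, and image URL are placeholders, and the Computer Vision v3.2 “analyze” operation is used as the example; the JSON shape in `sample` is a trimmed illustration of the kind of tags document the service returns, not a captured response.

```python
import json
import urllib.request

# Placeholder values -- substitute your own Cognitive Services resource.
ENDPOINT = "https://example.cognitiveservices.azure.com"
KEY = "your-subscription-key"

def build_analyze_request(image_url: str) -> urllib.request.Request:
    """Build (but don't send) an HTTP POST for the image-analysis API."""
    body = json.dumps({"url": image_url}).encode("utf-8")
    return urllib.request.Request(
        url=f"{ENDPOINT}/vision/v3.2/analyze?visualFeatures=Tags",
        data=body,
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/json",
        },
        method="POST",
    )

def top_tags(response_json: str, threshold: float = 0.8) -> list:
    """Parse the JSON document the service returns, keeping confident tags."""
    doc = json.loads(response_json)
    return [t["name"] for t in doc.get("tags", []) if t["confidence"] >= threshold]

# An illustrative fragment of the response shape:
sample = '{"tags": [{"name": "person", "confidence": 0.99}, {"name": "indoor", "confidence": 0.62}]}'
print(top_tags(sample))  # ['person']
```

Sending the request is then one `urllib.request.urlopen(...)` call; the point is that the whole round trip is plain HTTP and JSON.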

Not all applications have the luxury of a low-latency connection to Azure. That’s why Microsoft is rolling out an increasing number of its Cognitive Services as containers, for use on appropriate hardware that may only have intermittent connectivity. That often means systems with a relatively high-end GPU, as the neural networks underlying the ML inferencing models require a lot of compute. Even so, such a system can be very small indeed: think of Intel’s NUC9 hardware paired with an Nvidia Tesla-series GPU.

Packaging Azure Cognitive Services

At the heart of the Cognitive Services suite are Microsoft’s computer vision models. They handle everything from image recognition and analysis to object detection and tagging to character and handwriting recognition. They’re useful tools that can form the basis of complex applications and feed into either serverless Azure Functions or no-code Power Apps.

Microsoft has taken some of its computer vision modules and packaged them in containers for use on edge hardware, where low latency is essential or where regulations require data to be held inside your own data center. That means you can use the OCR container to capture data from pharmacies securely or use the spatial analysis container to deliver a secure and safe work environment.

With businesses struggling to manage social distancing and safe working conditions during the current pandemic, tools like the Cognitive Services spatial analysis container are especially important. With existing camera networks or relatively low-cost devices, such as the Azure Kinect camera, you can build systems that identify people and show whether they are working safely: keeping away from dangerous equipment, maintaining a safe separation, or staying in a well-ventilated space. All you need is an RTSP (Real Time Streaming Protocol) stream from each camera you’re using and an Azure subscription.
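Each camera feed is wired into the container as an RTSP URL in the container’s operation configuration. As a rough illustration, the sketch below builds one person-distance operation per camera; the operation ID and parameter names (`VIDEO_URL`, `VIDEO_SOURCE_ID`, `VIDEO_IS_LIVE`) follow Microsoft’s documented conventions for the spatial analysis container, but the exact configuration schema and the camera addresses here are assumptions, so check the current documentation before using them.

```python
import json

# Hypothetical camera inventory: (source id, RTSP stream URL).
cameras = [
    ("loading-bay", "rtsp://10.0.0.11:554/stream1"),
    ("assembly-line", "rtsp://10.0.0.12:554/stream1"),
]

def spatial_analysis_config(cameras):
    """Build one person-distance operation per RTSP camera stream.

    The dictionary shape is a sketch of the per-operation settings the
    spatial analysis container takes in its deployment configuration.
    """
    return {
        "operations": [
            {
                "operationId": "cognitiveservices.vision.spatialanalysis-persondistance",
                "parameters": {
                    "VIDEO_URL": url,          # the camera's RTSP stream
                    "VIDEO_SOURCE_ID": source_id,
                    "VIDEO_IS_LIVE": True,     # live stream, not a recording
                },
            }
            for source_id, url in cameras
        ]
    }

print(json.dumps(spatial_analysis_config(cameras), indent=2))
```

Adding a camera is then just another entry in the inventory list; the container fans out one processing graph per stream.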

Setting up spatial analysis

Getting started with the spatial analysis container is easy enough. It’s intended for use with Azure IoT Edge, which manages container deployment, and requires a server with at least one Nvidia Tesla GPU. Microsoft recommends its own Azure Stack Edge hardware, as this now offers a T4 GPU option. Using Azure Stack Edge reduces your capital expenditure, as the hardware and software are managed from Azure and billed through an Azure subscription. For test and development, a desktop is good enough; the recommended hardware is a fairly hefty workstation-class PC with 32GB of RAM and two Tesla T4 GPUs with 16GB of GPU RAM.

Copyright © 2020 IDG Communications, Inc.