

AWS DeepLens brings video artificial intelligence to every developer

Jun 08, 2023


The future of deep learning is not a hotdog — it’s at the edge

The AWS re:Invent 2017 keynote from Andy Jassy introduced the world to Amazon’s new DeepLens device — a fully programmable video camera that lets you run deep learning models locally to analyze and take action on what it sees. The state-of-the-art camera will be retailing for $250 starting in April and is available now for preorder.

After the announcement, AWS opened several hands-on labs for re:Invent participants to explore and experiment with the new device. On initial inspection, the exterior is nothing special. The DeepLens device is basically a camera mounted on top of a white box with some ports and buttons — but the magic is inside.

It’s a snappy little box running Ubuntu with 8GB of RAM. DeepLens comes preconfigured with AWS Greengrass, so the processing happens on the device instead of in the cloud.

AWS Greengrass is software that lets you run local compute, messaging, data caching, sync, and machine learning capabilities. Connected devices can run AWS Lambda functions, keep device data in sync, and communicate with other devices securely — even when not connected to the Internet.
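
To make that concrete, here’s a minimal sketch of what a Greengrass-deployed Lambda might look like. This isn’t DeepLens-specific code, just the general shape of a local function; I’m assuming the Greengrass Core SDK for Python (greengrasssdk), and the topic name is a hypothetical placeholder.

```python
# Minimal Greengrass Lambda sketch (Python). Assumes the Greengrass Core
# SDK is installed on the device; the topic name is a hypothetical placeholder.
import json
import greengrasssdk

# The 'iot-data' client routes messages through the local Greengrass core,
# so publishing keeps working even without an Internet connection.
client = greengrasssdk.client('iot-data')

def lambda_handler(event, context):
    # Echo whatever the device reported to an MQTT topic.
    client.publish(
        topic='deeplens/demo/events',  # hypothetical topic
        payload=json.dumps({'seen': event})
    )
    return
```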

The key thing to realize is that DeepLens is not just a video camera — it’s the world’s first deep-learning-enabled developer kit. The programmable camera continuously monitors the video stream, uses an inference model to find items you define, and then takes subsequent actions in the cloud as needed.

The video camera comes equipped with several pre-trained models to help with object detection, people detection, activity detection, and so forth.

The architecture is designed to keep all the heavy lifting and bandwidth-hungry activities on the device. DeepLens uses the cloud to manage downstream connections and event handling, and the new SageMaker tool to manage the mundane and difficult parts of machine learning.

From a workflow perspective, this is actually very well conceived — reminiscent of the clear guided flows in other complex AWS tools like Elastic Beanstalk.

When you log into the AWS DeepLens console, setting up the device is as easy as selecting a training model, choosing the inference Lambda functions, and deploying to the device. After unboxing, you can be live in just a few minutes using nothing but the console.
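
To give a feel for what one of those inference Lambda functions looks like, here’s a stripped-down sketch based on the on-device awscam API that AWS demonstrated at launch. The model path, input size, and confidence threshold are my own placeholders; treat this as the shape of the loop rather than a drop-in sample.

```python
# Sketch of a DeepLens inference loop (Python), assuming the on-device
# awscam module shown at launch. Model path, input size, and threshold
# are hypothetical placeholders.
import cv2
import awscam

MODEL_PATH = '/opt/awscam/artifacts/my-model.xml'  # hypothetical path
INPUT_SIZE = 300   # assumed input resolution for an SSD-style detector
THRESHOLD = 0.5    # assumed confidence cut-off

# Load the optimized model onto the device's GPU.
model = awscam.Model(MODEL_PATH, {'GPU': 1})

while True:
    ret, frame = awscam.getLastFrame()   # grab the latest video frame
    if not ret:
        continue
    resized = cv2.resize(frame, (INPUT_SIZE, INPUT_SIZE))
    raw = model.doInference(resized)     # run the model locally
    detections = model.parseResult('ssd', raw)['ssd']
    # Keep only confident detections; each entry carries a label,
    # a probability, and bounding-box coordinates.
    hits = [d for d in detections if d['prob'] > THRESHOLD]
    # ...draw bounding boxes or publish the results from here...
```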

In the AWS training labs, we used pre-configured models, which are the brains of the whole machine learning process. Our lab was built around the dubious Silicon Valley-inspired ‘hotdog or no hotdog’ task, and — just like in the TV show — it didn’t really work well.

Mustard or none, bun or no bun — it didn’t make much difference. Our model failed dismally at the hotdog challenge.

But this is day one — and the more useful task of identifying people worked surprisingly well. As the photos below show, DeepLens was able to take a live video feed and comfortably find all the people with no issues at all.

Impressively — even in a packed training room — the bounding boxes were consistently accurate even at weird angles.

DeepLens integrates with the AWS IoT world, so you can wire up events to Lambda, send notifications through SNS (email, SMS, etc.), or take any other action within the ecosystem. The sheer speed of doing all the work on the device opens up a realm of possibilities that wouldn’t be feasible if every frame had to be processed in the cloud.
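
As a sketch of that wiring: an AWS IoT rule can forward the device’s MQTT messages to a cloud-side Lambda, which fans the event out to SNS subscribers. The topic ARN below is a hypothetical placeholder; the boto3 publish call itself is the standard SNS API.

```python
# Cloud-side Lambda sketch (Python/boto3): triggered by an AWS IoT rule
# on the DeepLens topic, it fans the detection out to SNS subscribers.
# The SNS topic ARN is a hypothetical placeholder.
import json
import boto3

sns = boto3.client('sns')
TOPIC_ARN = 'arn:aws:sns:us-east-1:123456789012:deeplens-alerts'  # hypothetical

def lambda_handler(event, context):
    # 'event' is the JSON payload the DeepLens published via IoT.
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject='DeepLens detection',
        Message=json.dumps(event)
    )
    return {'status': 'sent'}
```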

To be fair, you will still need familiarity with model building for machine learning to take full advantage of DeepLens, although I can imagine a marketplace for models emerging fairly quickly.

Even with SageMaker, there’s no getting around the fact that you’ll need some training to grasp machine learning concepts. This product is aimed squarely at developers, not the general public. We’re only scratching the surface of what will be possible once it gets loose in the wild.

Workshop attendees received a complimentary DeepLens device. I’ll continue playing with it over the next few weeks and provide an update, although I probably won’t be using hotdogs.

Did you get a chance to experiment with DeepLens at re:Invent, or plan to order the device to explore deep learning? I’d love to hear your thoughts on DeepLens and learn about your projects in the comments below!