Getting High Resolution Images from Video

Using an API is like playing with Lego blocks. Ideally, the individual components should be small and useful on their own. They should also be clear in their purpose. Most importantly for me, they should be able to connect together to form more specialized blocks. This blog post will cover how to create new functionality from existing blocks in the Eagle Eye API.

Step 1: Define the Problem

Inside the Eagle Eye API we provide low resolution preview images and the full high resolution video. Some API users need images with higher resolution than the previews, but do not need to play the video. This sample was written for a construction equipment rental company. They needed detailed images, but because they are on a metered cellular connection, pulling full video was not practical.

Step 2: Understand Available Resources

Instead of transmitting the entire video, this solution only sends up a fraction of the video and extracts a full resolution image from it. This means they can use the preview images for a low resolution overview, then request a high resolution image when needed. Because they are recording the video, if something critical does happen they are still able to request the full video from the bridge.

By combining these two basic streams, we can build new functionality.

Step 3: Combine Into Something New

The core idea is fairly simple. If you send this new service a camera, a timestamp, and an auth key, it will return a link to the full resolution image. You make an HTTP GET to "/api/pull_frame/device_id/start_timestamp/auth_key" and then retrieve the image with an HTTP GET to "/api/download/device_id-start_timestamp.jpg".
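To make the request shape concrete, here is a small sketch that builds the two URLs described above. The host name and the helper function names are assumptions for illustration; the path shapes match the endpoints in this post.

```python
# Sketch of the two-request flow. HOST is an assumption: wherever you
# deployed this service.
HOST = "http://localhost:5000"

def pull_frame_url(device_id, start_timestamp, auth_key):
    """URL that asks the service to pull a clip and extract a frame."""
    return f"{HOST}/api/pull_frame/{device_id}/{start_timestamp}/{auth_key}"

def download_url(device_id, start_timestamp):
    """URL that returns the extracted full resolution image."""
    return f"{HOST}/api/download/{device_id}-{start_timestamp}.jpg"
```

You would GET the first URL to trigger the extraction, then GET the second to fetch the resulting JPEG.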

We are using ffmpeg to extract the image. Ffmpeg is known as the Swiss Army knife of video conversion, but it is also known for its opaque options. Behind the scenes we run the following command: "ffmpeg -i {local_filename} -ss 00.00 -vframes 1 -y tmp/{device_id}-{start_timestamp}.jpg".
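As a sketch of how that command can be driven from Python, the following builds the argument list from the post and runs it with `subprocess`. The function names and filenames are placeholders, not part of the sample's actual code.

```python
import subprocess

def build_ffmpeg_command(local_filename, device_id, start_timestamp):
    """Build the ffmpeg command from the post: seek to the start of the
    downloaded clip and write out a single frame as a JPEG."""
    return [
        "ffmpeg",
        "-i", local_filename,   # the short video clip pulled from the API
        "-ss", "00.00",         # seek to the very start of the clip
        "-vframes", "1",        # extract exactly one frame
        "-y",                   # overwrite the output file if it exists
        f"tmp/{device_id}-{start_timestamp}.jpg",
    ]

def extract_frame(local_filename, device_id, start_timestamp):
    # check=True raises CalledProcessError if ffmpeg exits nonzero,
    # e.g. when the clip is truncated or corrupt
    cmd = build_ffmpeg_command(local_filename, device_id, start_timestamp)
    subprocess.run(cmd, check=True)
```

Seeking to `00.00` means the frame comes from the beginning of the clip, so the timestamp you request is effectively the timestamp of the extracted image.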

Other than calling ffmpeg, most of the functionality is just standing up a Flask server to handle the two endpoints.
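A minimal sketch of what those two Flask endpoints can look like. The `fetch_clip` helper, which downloads the short video clip from the Eagle Eye API, is left as a stub since its details depend on your account setup; everything else follows the URL shapes and ffmpeg command above.

```python
import os
import subprocess
from flask import Flask, jsonify, send_from_directory

app = Flask(__name__)
os.makedirs("tmp", exist_ok=True)  # where extracted frames are written

def fetch_clip(device_id, start_timestamp, auth_key):
    """Stub (assumption): pull a short video clip from the Eagle Eye API
    and return its local filename."""
    raise NotImplementedError

@app.route("/api/pull_frame/<device_id>/<start_timestamp>/<auth_key>")
def pull_frame(device_id, start_timestamp, auth_key):
    local_filename = fetch_clip(device_id, start_timestamp, auth_key)
    # Extract one full resolution frame from the start of the clip
    subprocess.run(
        ["ffmpeg", "-i", local_filename, "-ss", "00.00",
         "-vframes", "1", "-y",
         f"tmp/{device_id}-{start_timestamp}.jpg"],
        check=True,
    )
    return jsonify({"image": f"/api/download/{device_id}-{start_timestamp}.jpg"})

@app.route("/api/download/<filename>")
def download(filename):
    # Serves the extracted JPEG; returns 404 if it was never extracted
    return send_from_directory("tmp", filename)
```

Keeping the extraction and the download as separate endpoints means the heavy work happens once, and the resulting image can be fetched (or re-fetched) cheaply.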

What else can we do with this?

This is a simple little example that you can run on your own infrastructure and call as needed. The code is available on GitHub. There are plenty of things a high resolution image can be used for. It is common to use this to create a full resolution timelapse. You could also do a first pass of analytics on the preview images, then request the high resolution image as needed for additional processing.

I hope you found this helpful. Please feel free to reach out to me directly at