Generating Time Lapses (part 2)
This is the second of a three-part series. In the previous post we talked about how to generate a time lapse from a camera's preview images. We also looked at two examples of how it can give you an overview of what happened throughout the day.
In this post we are going to look at another strategy for deciding which images to show. We will be creating the time lapse based on activity instead of time, while keeping the total length at the same 60 seconds.
In addition to providing preview images, the Eagle Eye VMS also generates special previews called thumbnails. Thumbnails are the same resolution as preview images but are intended to best represent the object in motion. We do this by tracking when the object is largest and most centered inside the frame.
Imagine a person walking across the screen from left to right. A thumbnail would be generated when the person is in the middle of the screen. Thumbnails are created in addition to preview images, so we would still have the expected preview images during this time.
The other important thing to know about thumbnails is that we generate them for each object in motion. If your lobby camera has people continuously walking in front of it for ten minutes, you would want a thumbnail for each person walking through, not just a single thumbnail for the entire ten-minute period.
With all this explained, let's look at how we can use thumbnails to generate a different type of time lapse video.
This example is written in Python but the concept is the same for other languages. We will be making HTTP calls to our REST API, downloading the needed images, and then providing this as an input to ffmpeg. All of these are standard processes and tools.
Step 1: Login and get the list of images
After you have logged in, we can get the list of images. We can simply grab a list of all the thumbnail images from the start of the day until the end. I am also making sure the dates are in the EEN time format (YYYYMMDDhhmmss.nnn): start time = 20190301000000.000 and end time = 20190301235959.999.
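As a small helper, here is one way to produce timestamps in that EEN format from a Python datetime (a sketch; the function name is my own):

```python
from datetime import datetime

def to_een_timestamp(dt: datetime) -> str:
    """Format a datetime as YYYYMMDDhhmmss.nnn, the EEN time format."""
    return dt.strftime("%Y%m%d%H%M%S.") + f"{dt.microsecond // 1000:03d}"

# Start and end of March 1st, 2019
start = to_een_timestamp(datetime(2019, 3, 1, 0, 0, 0, 0))
end = to_een_timestamp(datetime(2019, 3, 1, 23, 59, 59, 999000))
```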
We will be calling our Get List of Images endpoint. This call requires that you pass it the camera_id, start_timestamp, end_timestamp, and asset_class. We are going to be working with thumbnail images, so the asset_class will be 'thumb'.
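A minimal sketch of that call might look like the following. It assumes you already have an authenticated requests.Session from the login step; the base URL and endpoint path are assumptions based on the parameters described above, so check them against the API documentation for your account:

```python
API_BASE = "https://login.eagleeyenetworks.com"  # assumed; your account subdomain may differ

def list_thumbnails(session, camera_id, start_timestamp, end_timestamp):
    """Fetch the list of thumbnail images for a camera over a time range.

    `session` is an authenticated requests.Session (or compatible object).
    """
    resp = session.get(
        f"{API_BASE}/asset/list/image",  # assumed endpoint path
        params={
            "id": camera_id,
            "start_timestamp": start_timestamp,
            "end_timestamp": end_timestamp,
            "asset_class": "thumb",  # thumbnails, not preview images
        },
    )
    resp.raise_for_status()
    return resp.json()
```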
Getting the list of thumbnails for this time range will give back a list of images based on the activity in front of the camera. Depending on the camera and motion, this may be more or fewer images than we expected. Our goal is still to compress this down to a video of at most 60 seconds. In situations where there are not enough thumbnails during that time period, we will shorten the length of the video. [Part 3 will show strategies so that we can always generate a 60 second video]
Step 2: Downloading the images
Before we start downloading the thumbnail images we need to look at how we will generate the time lapse. If we show 10 thumbnail images per second of time lapse video we will only need 600 images (60 seconds * 10 frames per second). The challenge is to figure out which 600 to show.
If there are fewer than 600 thumbnails, we will need to slow down how quickly we show each frame. If there are more than that, we can do the same as we did with preview images in the previous post.
To figure out which images we want, we start with the entire list of images for the time period and divide the number of images by the number of images we are going to use. For example, assume there are 10,000 thumbnails in the requested time period. Our math becomes 10,000 / 600 ≈ 17, meaning we would use one frame out of every 17 thumbnails. We refer to this number as the step.
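The step calculation above can be sketched in a couple of lines (the function name is mine):

```python
def pick_step(total_images: int, frames_needed: int = 600) -> int:
    """Return the sampling step: keep one thumbnail out of every `step`."""
    return max(1, round(total_images / frames_needed))

step = pick_step(10_000)  # 10,000 / 600 ≈ 16.7, rounded to 17
```

When there are fewer thumbnails than frames needed, the step bottoms out at 1 and we keep every image.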
We can now go through the list of thumbnail images, saving every 17th image to disk. To keep the files straight, I named them with the camera ESN and the EEN timestamp. The EEN timestamp is handy because it sorts correctly alphabetically.
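Selecting every Nth image and building the filenames might look like this (a sketch; the helper names and the `frames` output directory are my own choices, and the actual download of each image is left to the API client):

```python
import os

def select_frames(image_list, step):
    """Keep every `step`-th entry from the full thumbnail list."""
    return image_list[::step]

def frame_filename(esn, een_timestamp, out_dir="frames"):
    """Name frames as ESN_timestamp.jpg; EEN timestamps sort alphabetically."""
    return os.path.join(out_dir, f"{esn}_{een_timestamp}.jpg")
```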
NOTE: The API will throttle the total number of requests per second. It will return an HTTP status code of 429 if you're requesting too much, too quickly.
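One common way to handle 429 responses is a retry loop with exponential backoff. This is a generic sketch, not something the API requires:

```python
import time

def get_with_retry(session, url, params=None, max_retries=5):
    """GET a URL, backing off and retrying when throttled with HTTP 429."""
    for attempt in range(max_retries):
        resp = session.get(url, params=params)
        if resp.status_code != 429:
            return resp
        time.sleep(2 ** attempt)  # wait 1s, 2s, 4s, ... between retries
    return resp  # give up after max_retries, returning the last response
```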
Step 3: Generating the time lapse video
FFmpeg is a terrific tool and is my Swiss-army knife for dealing with video. It can take an input and convert it to almost any output. In this case we are going to pass a list of images in as the input and get a movie as the output. FFmpeg can be very intimidating, but with some reading it will start to make sense.
ffmpeg -framerate 10 -pattern_type glob -i '*.jpg' -y -r 30 -pix_fmt yuv420p out.mp4
This is the command we used previously. We will be re-using most of it, but we need to add some logic to scale the input framerate if there are fewer thumbnails than we need. We do this by dividing the number of images by the desired length of the video, which gives us a scale factor to apply to the input FPS setting. We round up because we can't show partial frames.
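Putting that framerate scaling together with the ffmpeg command might look like this (a sketch; the `frames` glob and function name are my own assumptions):

```python
import math

def build_ffmpeg_cmd(num_images, target_seconds=60, output="out.mp4"):
    """Scale the input framerate so the video is at most target_seconds long.

    Rounds up, since we can't show partial frames; with exactly 600 images
    this reproduces the original 10 fps command.
    """
    fps = max(1, math.ceil(num_images / target_seconds))
    return [
        "ffmpeg", "-framerate", str(fps),
        "-pattern_type", "glob", "-i", "frames/*.jpg",
        "-y", "-r", "30", "-pix_fmt", "yuv420p", output,
    ]
```

The resulting list can be handed to `subprocess.run(cmd, check=True)`. With 200 thumbnails, for example, the input framerate becomes ceil(200 / 60) = 4 fps, producing a 50 second video.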
Step 4: Putting it all together
On the right are three examples I've generated from my driveway. The first video is a 60 second video showing the previews throughout a 24 hour period. The second video is showing thumbnails during the same 24 hour period. It is only 20 seconds because we didn't slow down the input framerate yet. The last video is showing thumbnails and is slowed down to be 60 seconds.
I've included the Python script I used to generate this. It can be downloaded from GitHub. The example script requires a username and password to log in, a camera ESN to know which camera to use, and the time range we want to get images for.
You can run it locally or you can run it in the included Docker container. The README file has instructions for both methods.
What else can we do with this?
In the next article we will look at how to highlight the activity even further.
I hope you found this helpful. Please feel free to reach out to me directly at firstname.lastname@example.org