I spent a while using some techniques I've learned for astrophotography. I trimmed the file in Avidemux, exported frames with PIPP, stacked around 10 tracked frames in AutoStakkert, then finally sharpened the result in RegiStax. Unfortunately there's not much detail to work with.
In astronomy, frames are often stacked to get more light, but in this case we already have plenty of light.
They are also stacked to counter atmospheric turbulence: the technique uses lots of very similar frames and takes the best bits from each. But we don't have significant atmospheric turbulence here.
The main problem is lack of detail: the codec has compressed the detail out of the image, so none of the frames have "best bits" for the algorithms to select. The detail is simply missing:
![1633771743647.png](https://dashcamtalk.com/forum/data/attachments/58/58377-5e745d1538181096e9f18aabca11843d.jpg?hash=XnRdFTgYEJ)
![1633771789188.png](https://dashcamtalk.com/forum/data/attachments/58/58378-c9c008c3c2e72a58e3b82e212ce9c522.jpg?hash=ycAIw8LnKl)
(original | enlarged)
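For anyone curious how the "take the best bits" stacking works in principle, here's a minimal sketch. It's my own illustration, not what AutoStakkert actually does internally: rank frames by a simple sharpness score (gradient variance), keep the best fraction, and average them so random noise cancels while shared detail survives.

```python
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    """Estimate sharpness as the variance of the image gradient.
    Sharper frames have stronger local contrast, so higher variance."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.var(gx) + np.var(gy))

def stack_best_frames(frames, keep_fraction=0.5):
    """Rank frames by sharpness, keep the best fraction, average them.
    Averaging suppresses random noise while keeping shared detail."""
    ranked = sorted(frames, key=sharpness, reverse=True)
    n = max(1, int(len(ranked) * keep_fraction))
    return np.mean(ranked[:n], axis=0)
```

This only helps when the frames differ by noise or turbulence; when the codec has already discarded the detail, every frame is missing the same information and the average is just as empty.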
Any detail you see in that plate is largely compression noise, not real image detail.
Also, there is not really enough resolution to read the plate anyway. If the frames had not been heavily compressed, we might have been able to use super-resolution, where the movement/vibration between frames provides data for calculating extra resolution. But with the detail compressed out of the image, there is nothing to calculate from.
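To show why the movement between frames matters, here's a toy "shift-and-add" super-resolution sketch, assuming the sub-pixel shifts are already known (real tools also have to estimate the shifts and usually deconvolve afterwards). Each low-res frame's samples are placed onto a finer grid at its offset, then everything is averaged:

```python
import numpy as np

def shift_and_add(frames, shifts, factor=2):
    """Naive multi-frame super-resolution by shift-and-add.
    frames: equally sized low-res frames.
    shifts: each frame's known (dy, dx) offset in low-res pixels.
    factor: upsampling factor for the fine output grid."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    weight = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Where this frame's samples land on the fine grid.
        oy = int(round(dy * factor)) % (h * factor)
        ox = int(round(dx * factor)) % (w * factor)
        ys = (np.arange(h) * factor + oy) % (h * factor)
        xs = (np.arange(w) * factor + ox) % (w * factor)
        acc[np.ix_(ys, xs)] += frame
        weight[np.ix_(ys, xs)] += 1
    # Average where samples landed; leave uncovered cells at zero.
    return np.where(weight > 0, acc / np.maximum(weight, 1), 0.0)
```

The whole trick depends on the frames carrying genuinely different samples of the scene. Heavy compression makes the frames agree on the same smeared blob, so there is nothing extra to recover.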
The dashcam needs to collect more data for that to be readable.
This is the frame with the most detail:
![1633772269338.png 1633772269338.png](https://dashcamtalk.com/forum/data/attachments/58/58379-d383f0668cb75e8291b87cf4723da6d5.jpg?hash=04PwZoy3Xo)
It is the best frame mainly because the windscreen wiper was blocking some of the light, so the plate has a darker exposure, which resulted in the compression algorithm keeping more of the data. This frame maybe does have enough data for stacking, but there is only one of it, and the resolution is still marginal anyway.
If this frame had come from a 2K sensor, it might have been readable.
If a higher bitrate had been used, the other frames might have been as readable as this one, and then stacking might have helped.
If a higher framerate had been used, we might have had a second frame like this one and been able to do some stacking.
But the best answer is to use more resolution and more bitrate in the camera so that the data we need is recorded in the first place. Then there is no need for any image processing: you can just read the plate.
Also helps to use larger plates with bigger characters!
So for anyone buying a new camera, the minimum specification should be 2K or higher resolution and a 25 Mb/s or higher bitrate; then you have a reasonable chance of catching plates, although it still depends on lighting conditions and other circumstances. 1080p at 16 Mb/s is maybe OK if you live in the UK, where the plates and their characters are much bigger.
Another point: if you try using AI algorithms to read the plates, as some software does, they are making informed guesses at what the plate is most likely to be, and guesses cannot be used as evidence in court!