HELP! HIT AND RUN! Can almost make out the plate!!!

Status
Not open for further replies.
Stacking can be very successful when nothing in the scene is moving, but normally you don't need it when nothing is moving.
In this video there are insufficient/no frames suitable for stacking.

No, that is not true. You obviously don't understand how this works. You can use multiple screen captures of the same poor-quality image to create a stack that can be processed into a usable image. It can also work well with certain adjacent frames that show movement. In fact, that is how it is often used for astrophotography, with images shot as the earth spins on its axis. In many cases, slightly off-register images improve your chances of detail recovery, depending upon which image operators you apply.
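To make the idea concrete, here's a minimal NumPy sketch of mean stacking. The "plate patch" here is purely synthetic (my own made-up demo data, not anything from the video); it just shows how averaging aligned frames suppresses random noise:

```python
import numpy as np

def stack_frames(frames):
    """Average same-sized uint8 frames to suppress random noise."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    for f in frames:
        acc += f.astype(np.float64)
    return np.clip(acc / len(frames), 0, 255).astype(np.uint8)

# Synthetic demo: ten noisy copies of the same flat "plate" patch.
rng = np.random.default_rng(0)
truth = np.full((20, 60), 128, dtype=np.uint8)
noisy = [np.clip(truth + rng.normal(0, 25, truth.shape), 0, 255).astype(np.uint8)
         for _ in range(10)]
stacked = stack_frames(noisy)
# The stacked patch ends up much closer to the clean one than any single frame.
```

Of course, real footage first needs the frames registered (aligned) before averaging, which is the hard part that tools like AutoStakkert do for you.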

There is a reasonable likelihood that the almost discernible license plate in the screen shots above can be processed to make it readable.

The following is from AutoStakkert.

target1.png

target2.png
 
I'm not for a second going to claim I know what I'm doing, but I extracted 250 frames from the video and ran them through https://www.autostakkert.com/wp/enhance/. While it appeared to fix other plates, it didn't make the SUV's plate readable. Again, I have never done this before and could be doing it all wrong.

So take my effort with a grain of salt.

FYI, this is the clearest frame of all I extracted... unfortunately it's just barely unreadable, no matter what I do.

New Project_f223.png
 
I think you've done a good job with difficult dawn footage. Unfortunately the odds are against you.

As already mentioned Nextbase's poor mounting solution has caused too much vibration.
 
I think you've done a good job with difficult dawn footage. Unfortunately the odds are against you.

As already mentioned Nextbase's poor mounting solution has caused too much vibration.

I'm trying to play with the stacking programs but not really sure how to use them all that well...But honestly, I'm not optimistic on getting a plate. I'll be happy to upload the 253 image files I extracted from the mp4 containing the SUV if someone wants to take a stab at this who has experience.
 
253 frames would be overkill. More layers in the stack is not necessarily better; too many and you'll likely just add noise. As I mentioned earlier, you need a full understanding of imaging fundamentals, how operators work, and other factors. There is a significant learning curve.
 
I searched for "AI license plate recognizer" and came up with the term Automatic License Plate Recognition. After looking at a few links I searched for "ai license plate recognition for blurry photos" and came across a .com website called PlateRecognizer which had an option to upload an image for free. Unfortunately, when I uploaded the image HonestReview extracted from the video the site said it didn't detect a license plate. Most of the examples on their page are easily readable by humans, so I guess you're not really gaining much with their software in terms of interpreting blurry images that aren't readable by humans.
 
I offer up the 253 images if someone wants to go through them, pick the best ones, and see if anything can be done. As I said before, I messed around with the stacking program https://www.autostakkert.com/wp/enhance/ but I don't have a clue what I'm doing here.

As we have an individual on here who professes ease and expertise with stacking programs, and who believes the license plate can be extracted, I'll pass the torch on to them.

Honestly, I'm with @Nigel and others in doubting that enough data exists. But for the OP's sake, I hope I am 100% wrong. I'd be pissed if someone hit my car and took off!

So, to the person who feels this can be done: here's your chance to shine.

Link to images (Rar File): https://sprend.com/download?C=b60f80994c974355b9cb7fe881e7e857

Max of 5 Downloads Permitted from Sprend. Don't download unless you intend to assist with Stack please.
 
Another option here is Video Cleaner, a free video forensics client that is designed for situations like this and is used by law enforcement. This software has been discussed previously here on DCT and a few members have tried it with mixed results. As with other software of this type one needs a solid understanding of imaging fundamentals, controls and parameters and there is a significant learning curve.

LINK: VIDEO CLEANER

videocleaner1.jpg

videocleaner.jpg

 
I spent a while using some techniques I've learned for astrophotography. I trimmed the file in Avidemux, exported frames using PIPP, stacked around 10 tracked frames in AutoStakkert then finally sharpened the result in RegiStax. Unfortunately there's not much detail to work with.

211004_032801_932s_lapl7_ap3.jpg
1633743698154.png
 
You're fighting a losing battle.

In that top screengrab even the stationary street lights have serious vibration issues.
 
If you had had side cameras, those might very well have given a plate capture.
But throwing more cameras at a potential problem is not possible or desirable for everyone.
 
I'd be going back to the scene and see if there are any CCTV cameras around and try and grab something from them.
 
I spent a while using some techniques I've learned for astrophotography. I trimmed the file in Avidemux, exported frames using PIPP, stacked around 10 tracked frames in AutoStakkert then finally sharpened the result in RegiStax. Unfortunately there's not much detail to work with.

View attachment 58572
View attachment 58573

In astronomy, frames are often stacked to get more light, but in this case we already have plenty of light.

They are also stacked to remove atmospheric turbulence where it uses lots of very similar frames and takes the best bits from each, but we don't have significant atmospheric turbulence.

The main problem here is lack of detail due to the detail being compressed out of the image by the codec, so none of the frames have best bits to be selected by algorithms, the detail is simply missing:

1633771743647.png
1633771789188.png
(original | enlarged)

Any detail you see in that plate is largely compression noise, not real image detail.
Also, there is not really enough resolution to read the plate anyway. If the frames had not been heavily compressed, then we might be able to use super-resolution, where the movement/vibration between frames provides data for calculating extra resolution; but with the detail compressed out of the image, there is nothing to calculate from.
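For anyone curious what "movement between frames provides data for extra resolution" means, here's a toy shift-and-add super-resolution sketch in NumPy. It assumes the sub-pixel shifts are known exactly, which real footage never gives you (they'd have to be estimated), so treat it as an illustration of the principle only:

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Scatter each low-res frame onto a factor-times-finer grid at its offset."""
    h, w = frames[0].shape
    hi = np.zeros((h * factor, w * factor))
    weight = np.zeros_like(hi)
    for f, (dy, dx) in zip(frames, shifts):
        ys = (np.arange(h) * factor + int(round(dy * factor))) % (h * factor)
        xs = (np.arange(w) * factor + int(round(dx * factor))) % (w * factor)
        hi[np.ix_(ys, xs)] += f
        weight[np.ix_(ys, xs)] += 1
    return hi / np.maximum(weight, 1)

# Demo: sample an 8x8 "truth" image at four half-pixel offsets, then rebuild it.
factor = 2
truth = np.arange(64, dtype=float).reshape(8, 8)
shifts = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
frames = [truth[int(dy * factor)::factor, int(dx * factor)::factor]
          for dy, dx in shifts]
recon = shift_and_add(frames, shifts, factor)
# With all four offsets present, recon recovers the full fine grid exactly.
```

The point being: the sub-pixel detail has to exist in the low-res frames in the first place. If the codec has already thrown it away, there is nothing for the algorithm to reassemble.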

The dashcam needs to collect more data for that to be readable.

This is the frame with the most detail:
1633772269338.png
It is mainly the best frame because the windscreen wiper was blocking some of the light, so the plate has a darker exposure, which resulted in the compression algorithm keeping more of the data. This frame maybe does have enough data for stacking, but there is only one of it, and the resolution is still marginal anyway.

If this frame had been from a 2K sensor then it may have been readable.
If more bitrate had been used then the other frames may have been as readable as this one, and then stacking might have helped.
If more framerate had been used then we may have had a second of these frames and been able to use some stacking.

But the best answer is to use more resolution and more bitrate in the camera so that the data we need is recorded, and then there is no need for any image processing, you can just read the plate.

Also helps to use larger plates with bigger characters!

So for anyone buying a new camera, the minimum specification should be 2K or higher resolution and 25Mb/s or higher bitrate; then you have a reasonable chance of catching plates, although it still depends on lighting conditions and other circumstances. 1080 resolution at 16Mb/s is maybe OK if you live in the UK, where the plates and text are much bigger.
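As a rough sanity check on those two specs (assuming 30 fps, and taking "2K" to mean 2560x1440; both are my assumptions, not figures from this thread):

```python
def bits_per_pixel(width, height, fps, mbps):
    """Average encoded bits available per pixel per frame."""
    return mbps * 1e6 / (width * height * fps)

bpp_1080 = bits_per_pixel(1920, 1080, 30, 16)  # about 0.26 bits/pixel
bpp_2k = bits_per_pixel(2560, 1440, 30, 25)    # about 0.23 bits/pixel
```

Interestingly, the per-pixel bit budget is similar in both cases; the win from the 2K spec is that a plate covers more pixels, so each character gets more total data.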

Another point to make, if you try using AI algorithms to read the plates, as some software is doing, they are making informed guesses as to what it is most likely to be, and guesses can not be used as evidence in court!
 
If this was at a normal commute time then I’d go back there for a few days and wait, chances are that you’ll see the car again.
 
In astronomy, frames are often stacked to get more light, but in this case we already have plenty of light.

That is one possible function of stacking in astrophotography, but not what I was trying to achieve here.

They are also stacked to remove atmospheric turbulence where it uses lots of very similar frames and takes the best bits from each, but we don't have significant atmospheric turbulence.
No turbulence, but each frame has its own random noise due to ISO gain and compression. I was attempting to use stacking to increase the signal-to-noise ratio, by keeping those 'best bits' and averaging out the noise, then sharpening the resulting image. However, there simply wasn't enough useful data to work with.
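For what it's worth, that averaging effect is easy to verify numerically: averaging N frames with independent noise cuts the noise standard deviation by roughly sqrt(N). A quick NumPy check on pure synthetic noise (no real footage involved):

```python
import numpy as np

# Sixteen frames of pure synthetic noise, std 10, no signal at all.
rng = np.random.default_rng(1)
frames = np.array([rng.normal(0, 10, (100, 100)) for _ in range(16)])

single_std = frames[0].std()
stacked_std = frames.mean(axis=0).std()
# Averaging 16 frames should cut the noise std by roughly sqrt(16) = 4.
```

The catch, as noted below, is that heavily compressed video reuses data between keyframes, so the per-frame noise is not truly independent and the real-world gain is smaller.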
 
The main problem here is lack of detail due to the detail being compressed out of the image by the codec, so none of the frames have best bits to be selected by algorithms, the detail is simply missing
Agreed
 
No turbulence, but each frame has its own random noise due to ISO gain and compression. I was attempting to use stacking to increase the signal-to-noise ratio, by keeping those 'best bits' and averaging out the noise, then sharpening the resulting image. However, there simply wasn't enough useful data to work with.
The sensor noise is being optimised out and really only changes at keyframes, so you are only getting two frames per second that are worth stacking for sensor noise reasons.

Mainly the best frame because the windscreen wiper was blocking some of the light, so the plate has darker exposure, which has resulted in the compression algorithm keeping more of the data. This frame maybe does have enough data for stacking, but there is only one of it, and resolution is still marginal anyway.
Actually, much of the improvement when the wiper passes appears to be because it effectively reduces exposure time, and so reduces motion blur, especially that which is coming from the "music". Image stacking can't decrease motion blur, but it might pick out the one frame that is sharpest; of course, we can also do that by stepping through the frames and looking. In astrophotography with atmospheric turbulence, different parts of the image appear best in different frames, but here, the frame with the least blur from the bass is the same frame all over the image.
 
If this was at a normal commute time then I’d go back there for a few days and wait, chances are that you’ll see the car again.

Indeed there is a chance the guy is just as much a creature of habit as the rest of us.
Before I started using the event button to save things, I knew which 3-minute segment to click on to get to an event at so-and-so intersection, or very close to it.
My drives are so regular that I know that when the footage gets to the #9 segment of a drive, that is when I enter so-and-so town.
I learned that from all the years of using mental notes and scrubbing through footage to get to just that section of road where I remembered something of interest happened.
Now I can relax my brain and not forget things, which I did several times in the old days, but I have made it a habit to say why I have pressed the event button, because otherwise you don't notice some small things.

And I have sometimes sat watching a clip thinking "why the hell did you save this", only to finally realize it was not something right in front of my car but a bit further out, or off to the side.
So you sit there looking at the middle of the footage expecting to find car-related stupidity, and then the reason for pressing is something on the sidewalk way off to the side.

A few days ago I saw a kid on an electric scooter, with a huge boombox on his back (playing loud), not wearing the helmet that is mandatory here, at least for electric scooters.
 
I spent a while using some techniques I've learned for astrophotography. I trimmed the file in Avidemux, exported frames using PIPP, stacked around 10 tracked frames in AutoStakkert then finally sharpened the result in RegiStax. Unfortunately there's not much detail to work with.

View attachment 58572
View attachment 58573

As suspected from my original images, there aren't enough usable frames. The single frame I captured was the clearest one of the entire video. The wipers helped sharpen the plate for an instant by blocking out some light.

You can see that the vibrations from the bass, plus the Nextbase hanging from a stalk, made everything else unusable. You succeeded in sharpening the SUV, but gathered no more useful info regarding the plate.

Your effort was valiant, but there wasn't much more you could do without additional usable frames.
 
It's a crappy intersection for sure. I definitely think the black car was at fault. However, I don't think @leaf_Huntington was driving defensively. The gray car to @leaf_Huntington's left brakes and @leaf_Huntington appears to speed up to prevent the black car from cutting in front of him/her. Typically when you see another car brake unexpectedly you slow down.

I can somewhat understand being caught off guard by a car cutting across, especially if this is your first time at that intersection. However, if you frequently drive through that intersection, then you pretty much know what to expect. It seems like this might be a pretty common occurrence due to the merging of the two roads.
I would be surprised if a car dashing across two lanes is a regular occurrence. I doubt that driving frequency at the intersection has anything to do with it.
The arrows on the ground are very clear.
 