That seems to be the story of AI: something unknown is coming, sooner than you expect.
That was the story last century, and it is still the story today. It's not quite as bad as nuclear fusion, since AI is now performing a few useful functions, but it is not arriving very fast; it is mainly marketing!
I believe your dismissive assessment of Artificial Intelligence is utterly lacking in understanding, knowledge and historical perspective. In addition, I think characterizing AI as merely having a "few useful functions" and brushing it off as marketing hype is woefully naïve.
Artificial Intelligence as we know it today began with a seminal 1950 paper by Alan Turing, "Computing Machinery and Intelligence," exploring the mathematical possibility of machine intelligence. It became a potential practical reality because, by 1950, computers were poised to enter a new era. Before 1949, computers lacked a key prerequisite for intelligence: they couldn't store commands, only execute them. Computers could be told what to do but couldn't remember what they did.
68 years ago, in 1956, at Dartmouth College in Hanover, New Hampshire, not far from where I live, a germinal event occurred: the convening of the Dartmouth College Artificial Intelligence Conference, originally known as the Dartmouth Summer Research Project on Artificial Intelligence. Key figures in computer science at the time, such as John McCarthy (Dartmouth), Marvin Minsky (MIT), Nathaniel Rochester (IBM), Claude Shannon (Bell Labs), and others, offered workshops, presentations, and discussions. It was at this historic conference that the term "Artificial Intelligence" was introduced, and it was at this conference that the academic field of research into artificial intelligence was founded.
As with many world-changing technological developments, such as nuclear energy (fission or fusion), modern air travel, space flight, interplanetary exploration, medicine, and many others, it can take generations before a development like AI comes fully to fruition.
Many prominent scholars, futurists, and computer scientists, such as Ray Kurzweil and the author and professor Vernor Vinge, have predicted that AI would not begin to come into its own until the mid-2020s to about 2030. In other words, right about now! Vinge pointed out in his 1993 paper that "The acceleration of technological progress has been the central feature of this century." In 1993 he predicted that it would begin to happen within 30 years, which is exactly now.
Ray Kurzweil, who is known for many technological accomplishments and accurate predictions, is best known for Kurzweil's Law, otherwise known as the Law of Accelerating Returns, which describes the tendency for technological advances to feed on themselves exponentially, increasing the rate of further advance and pushing well past what one might sensibly project by linear extrapolation of current progress.
This is not a new concept, but in no field of endeavor in history has it been more applicable than Artificial Intelligence, and this is why I said what I stated in the first sentence of this post in response to your comments.
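To make the linear-vs.-exponential distinction concrete, here is a toy sketch. The 40%-per-year growth rate, the starting index of 100, and the 10-year horizon are arbitrary assumptions for illustration, not Kurzweil's actual figures:

```python
# Toy illustration: linear extrapolation vs. compounding ("accelerating") growth.
# The rate, starting value, and horizon are arbitrary assumptions.

def linear_forecast(start, yearly_gain, years):
    # Naive straight-line projection: add the same increment every year.
    return start + yearly_gain * years

def compounding_forecast(start, rate, years):
    # Accelerating-returns-style projection: each year's progress
    # builds on everything that came before.
    return start * (1 + rate) ** years

start = 100.0  # arbitrary "capability" index today
for years in (1, 5, 10):
    lin = linear_forecast(start, yearly_gain=40.0, years=years)
    exp = compounding_forecast(start, rate=0.40, years=years)
    print(f"year {years:2d}: linear={lin:8.1f}  compounding={exp:8.1f}")
```

In year one the two projections agree (both reach 140), but by year ten the compounding curve is nearly six times the linear one, which is exactly the gap between sensible-sounding extrapolation and what the Law of Accelerating Returns describes.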
"AI is now performing a few useful functions"
A few useful functions?
AI is currently being deployed in more fields than you could possibly count: medical diagnosis and advanced surgery, product design, manufacturing, inspection and quality control, security, banking, finance, investing, automotive technology, weather prediction, astronomy and space exploration, bio-engineering, energy management, food science and production, and on and on.
AI is already here!
The AI market is expected to reach $407 billion by 2027; it already generated about $87 billion in revenue in 2024.
We are fast approaching an historic tipping point where Artificial Intelligence will dramatically reshape every aspect of the world as we know it, along with the everyday experience of everyone in it.
A dashcam is a recording device; dashcams improve over time, but don't expect some futuristic AI Tech Wave to fundamentally change them, only to enable them to record better and to improve usability a little.
Despite your dismissive attitude, AI is coming to consumer electronics and small devices in a big way, and this will accelerate as AI chips become faster, more powerful, cheaper, smaller, and more energy efficient.
I can't say exactly when or what new capabilities and features will eventually appear in dash cams, but I am certain they will be significant and will probably arrive sooner than most expect. For example, it is likely that the machine-learning image analysis and enhancement already in widespread use in astronomy and medical diagnostics will eventually arrive in some form in dash cameras. When that happens, the problem of license plate capture in challenging situations will cease to be a problem. AI-enabled cameras will analyze image data, learn from it, make processing decisions, and perform actions based on that analysis using intelligent algorithms and machine-learning techniques. A blurry license plate will suddenly be rendered quite readable. And yes, as recording devices, dash cams have improved over time, but according to the Law of Accelerating Returns, these improvements will come sooner and be far more profound than many of us are expecting.
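For a rough sense of the kind of processing such a pipeline builds on, here is a minimal pure-Python sketch of classical unsharp masking, a simple contrast-boosting filter. It is only a stand-in for illustration: the learned enhancement discussed above (super-resolution networks and the like) is far more capable, and the tiny 5x5 "image" and sharpening amount here are arbitrary assumptions:

```python
# Minimal sketch: classical unsharp masking on a tiny grayscale grid.
# A stand-in illustration only; real learned enhancement is far more capable.

def box_blur(img):
    # 3x3 mean filter; edge pixels are averaged over their available neighbors.
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def unsharp_mask(img, amount=1.0):
    # Sharpened = original + amount * (original - blurred):
    # adds back the detail that blurring removed, boosting edge contrast.
    blurred = box_blur(img)
    return [[img[y][x] + amount * (img[y][x] - blurred[y][x])
             for x in range(len(img[0]))]
            for y in range(len(img))]

# A soft vertical edge, loosely like one blurry stroke of a plate character.
soft_edge = [[0, 0, 50, 100, 100]] * 5
sharp = unsharp_mask(soft_edge, amount=1.5)
print([round(v, 1) for v in sharp[2]])
```

The bright side of the edge is pushed brighter and the dark side darker, so the stroke becomes easier to read; a trained network does this adaptively, per scene, which is what makes the license-plate scenario plausible.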
It is at times like this that I think about the noted author Arthur C. Clarke's "Three Laws," of which the third law (my personal favorite) is perhaps the best known.
The laws are:
1. When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
3. Any sufficiently advanced technology is indistinguishable from magic.