
Is AI UNDERhyped or OVERhyped? According to former Google CEO Eric Schmidt, AI is definitely underhyped, but the volume of AI-related content we’re bombarded with every day is far from negligible. AI-related fatigue is becoming a thing.
That’s why the AI Codecamp starting on June 16 is focused only on what matters. We’re neither overhyping AI nor underhyping it, just giving it the attention it deserves in the context of what it means for you, the customer.
Wondering what you can expect? Here are just some of the sessions Developer Advocate Ian Barker has lined up on the AI Codecamp 2025 schedule:
1) Engineering Applications Using ANN in C++Builder
2) What Do You Need to Know to Start Using AI
3) Different Models for Different Solutions
4) Implementing Your Own Edge AI With Delphi
5) AI Beyond the Code: Maximize Your Productivity
6) Developing in the AI Era! Tips and Tricks
How to Sign Up
Don’t miss the chance to join Developer Advocate Ian Barker for a number of great sessions from June 16 to 20.
Sign Up Now
Eric is correct: AI is UNDER-hyped. It’s about time computers were able to provide humans with a new, easier world to live in! I’ve waited decades for this and I’m very excited about AI. Just a few years ago AI was laughable, but now it’s ready to serve us. Doubting its usefulness is only going to cause us problems. Embracing it and accepting the changes is the way to enjoy the future, now.
Well, I think the problem is one of perspective: what are LLMs, a very specific form of AI, actually doing, and are we cognizant of the underlying truth of it? They can do magical things, but, like all magic, much of it produces a result which appears to be one thing when, in fact, it’s entirely another. If I appear to levitate an elephant and you can’t work out how I did it, that doesn’t mean I really possess magical abilities. The same goes for LLMs: they produce such fantastic results that they give every appearance of “the LLM is thinking” when, in fact, it is not. The classic example is, of course, the strawberry test, which fooled a lot of LLMs and exposed the fact that they are tokenizing, not reasoning.
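To make the tokenization point concrete, here’s a minimal C++ sketch. The token split and the integer IDs shown are hypothetical (real BPE tokenizers differ in exactly where they cut), but the principle is the same: the model receives opaque token IDs, not letters, which is why a letter-counting question can trip it up.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

int main() {
    const std::string word = "strawberry";

    // A program works on characters, so counting the r's is trivial.
    const auto rCount = std::count(word.begin(), word.end(), 'r');
    std::cout << "Character view: " << rCount << " r's\n"; // prints 3

    // An LLM never sees characters. A BPE-style tokenizer first chops
    // the text into chunks (this particular split is hypothetical)...
    const std::vector<std::string> chunks = {"str", "aw", "berry"};

    // ...and the model then receives only opaque integer IDs, one per
    // chunk (these IDs are made up for illustration). The letters inside
    // each chunk are not directly visible to the model.
    const std::vector<int> tokenIds = {4971, 672, 19772};
    std::cout << "Model's view:   ";
    for (int id : tokenIds) std::cout << id << ' ';
    std::cout << '\n';
    return 0;
}
```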
They are superb tools and yes, we can probably refer to them as a form of AI, but they are not Artificial General Intelligence, yet. We must treat the answers we get with skepticism and caution, much as we would with a new intern. With a fresh new intern you can often spot the flaws and code smells because, overall, the code does not bear the traits of maturity that come from banging out code, getting it wrong, learning, relearning, and improving. LLMs, by contrast, can often produce code which bears all the traits of experience yet is completely wrong. It is wrong in plausible ways, with an air of polish and efficacy that is beguiling, but it is still just as wrong as the naive, chunky code produced by an inexperienced coder.
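As a hypothetical illustration of “polished but wrong” (not taken from any model’s actual output): the binary search below reads like seasoned, textbook-quality code, yet it carries a classic latent bug that its polish does nothing to reveal.

```cpp
#include <iostream>
#include <vector>

// Looks like textbook code: clean names, tight logic, nothing smells.
int binarySearch(const std::vector<int>& sorted, int target) {
    int low = 0;
    int high = static_cast<int>(sorted.size()) - 1;
    while (low <= high) {
        // The hidden flaw: (low + high) can overflow int once the array
        // is large enough -- the famous bug that sat in the JDK's binary
        // search for years. The safe form is low + (high - low) / 2.
        int mid = (low + high) / 2;
        if (sorted[mid] == target) return mid;
        if (sorted[mid] < target) low = mid + 1;
        else                      high = mid - 1;
    }
    return -1; // not found
}

int main() {
    const std::vector<int> data = {2, 3, 5, 7, 11, 13};
    std::cout << binarySearch(data, 7) << '\n'; // prints 3
    return 0;
}
```

It works perfectly on every small test you throw at it, which is exactly the intern-versus-experience trap described above: plausibility is not correctness.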
We’ll cover this a lot more in the AI Codecamp.