Nvidia CEO Jensen Huang says some form of artificial general intelligence will come within five years

Alfonso Maruccia

In context: Artificial general intelligence (AGI) refers to AI capable of human-like or even super-human reasoning. Also known as "strong AI," AGI would sweep away any "weak" AI currently available on the market and usher in a new era of human history.

During this year's GPU Technology Conference, Jensen Huang talked about the future of artificial intelligence technology. Nvidia designs the overwhelming majority of GPUs and AI accelerator chips employed today, and people often ask the company's CEO about AI evolution and future prospects.

Besides introducing the Blackwell GPU architecture and the new B200 and GB200 "superchips" for AI applications, Huang discussed AGI with the press. "True" artificial intelligence has been a staple of modern science fiction for decades, and many think the singularity will come sooner rather than later now that lesser AI services are so cheap and accessible to the public.

Huang believes that some form of AGI will arrive within five years. However, science has yet to define artificial general intelligence precisely, so Huang insists that any such prediction depends on first agreeing on a specific definition of AGI, backed by standardized tests designed to demonstrate and quantify a software program's "intelligence."

If an AI algorithm can complete tasks "eight percent better than most people," we could proclaim it a definite AGI contender. Huang suggested that such AGI tests could involve legal bar exams, logic puzzles, economics tests, or even pre-med exams.
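
As a rough illustration of how such a test battery might be scored, here is a minimal sketch in Python. The exam categories come from Huang's examples; the human baseline scores, the 8% margin check, and the function names are illustrative assumptions, not any real benchmark.

# A minimal sketch of the kind of test battery Huang describes: compare a
# model's scores against a human baseline across several standardized exams.
# The exam names come from the article; the scores, the 8% margin, and the
# helper names are illustrative assumptions, not a real benchmark.

HUMAN_BASELINE = {          # hypothetical median human scores (percent correct)
    "bar_exam": 68.0,
    "logic_puzzles": 72.0,
    "economics": 65.0,
    "pre_med": 70.0,
}

def beats_baseline(model_scores: dict[str, float], margin: float = 0.08) -> bool:
    """Return True if the model beats the human baseline by `margin` on every test."""
    return all(
        model_scores[test] >= human * (1.0 + margin)
        for test, human in HUMAN_BASELINE.items()
    )

# Example: a hypothetical model's scores on the same battery.
candidate = {"bar_exam": 80.0, "logic_puzzles": 85.0, "economics": 71.0, "pre_med": 78.0}
print(beats_baseline(candidate))  # True under these made-up numbers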

The Nvidia boss stopped short of predicting when, or if, a human-like reasoning algorithm could arrive, though members of the press continually ask him that very question. Huang also shared thoughts on AI "hallucinations," a significant issue with modern ML algorithms in which chatbots confidently answer queries with baseless, hot (digital) air.

Huang believes that hallucinations are easily avoidable by forcing the AI to do its due diligence on every answer it provides. Developers should add rules to their chatbots implementing "retrieval-augmented generation," a process that requires the AI to retrieve relevant source material and check each generated fact against it. If an answer turns out to be unsupported or misleading, it should be discarded and replaced with the next candidate.
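
As a rough sketch of the idea, here is a toy retrieval-augmented generation loop in Python. The knowledge base, the keyword-overlap retrieval, and the grounding check are deliberately naive stand-ins; real systems use vector search and an actual language model, and none of the function names below come from Nvidia or the article.

# A minimal retrieval-augmented generation (RAG) sketch. The retrieval and
# grounding checks are crude word-overlap heuristics used only to show the flow:
# retrieve sources, test each candidate answer against them, discard unsupported ones.

from typing import List

KNOWLEDGE_BASE = [
    "Nvidia announced the Blackwell GPU architecture at GTC.",
    "The B200 and GB200 are Blackwell-based chips aimed at AI workloads.",
]

def retrieve(query: str, k: int = 2) -> List[str]:
    """Naive keyword retrieval: return the k documents sharing the most words with the query."""
    words = set(query.lower().split())
    scored = sorted(KNOWLEDGE_BASE, key=lambda d: -len(words & set(d.lower().split())))
    return scored[:k]

def is_supported(answer: str, sources: List[str]) -> bool:
    """Crude grounding check: the answer must share enough content words with a source."""
    answer_words = set(answer.lower().split())
    return any(len(answer_words & set(s.lower().split())) >= 3 for s in sources)

def answer_with_rag(query: str, candidate_answers: List[str]) -> str:
    """Pick the first candidate answer that the retrieved sources actually support."""
    sources = retrieve(query)
    for answer in candidate_answers:
        if is_supported(answer, sources):
            return answer
    return "No grounded answer found."  # all candidates were discarded as ungrounded

print(answer_with_rag(
    "What did Nvidia announce at GTC?",
    ["Nvidia announced a new smartphone line.",            # unsupported, discarded
     "Nvidia announced the Blackwell GPU architecture."],  # supported by a source
))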


 
Nonsense. All AI is right now is models and sets of self-modifying algorithms of different types that are good at knowing what word to say next, what pixels to generate in an image and so on. It has no actual knowledge of what anything is, just associations, so I'd say we are still ages away from anything real. Just look at the answers ChatGPT gives you if you ask it anything too technical: it parrots what it's picked up from its data but doesn't actually understand what it's saying to you.
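
For what it's worth, the "knowing what word to say next" point can be illustrated with a toy next-word predictor. Everything below (the tiny corpus, the names) is made up purely for illustration; real LLMs are vastly larger, but the objective is the same kind of next-token prediction.

# A toy bigram model: count which word follows which in the training text and
# always emit the most frequent continuation. It has no notion of meaning,
# only co-occurrence counts.

from collections import Counter, defaultdict

corpus = "the gpu trains the model and the model answers the question".split()

# Count word -> next-word frequencies.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word: str, length: int = 5) -> list:
    """Greedily pick the most common next word, with no understanding involved."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return out

print(continue_text("the"))  # ['the', 'model', 'and', 'the', 'model', 'and']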

Jensen is going full fad mode, saying "yeah, this stuff is so insane that you should keep buying our products now so you can be ready; it's not like we'll make faster stuff later, or that what I say about general AI is a pipe dream!" In other words, CEO nonsense to keep investors happy.
 
Well, it's more complicated than that... An AI right now is basically a "prediction machine", which makes its "predictions" based on clear goals set by its programmers.

Sometimes, the AI does something unpredictable - but this is basically attributed to the humans not fully implementing their code properly - aka "bugs".

As AIs get more and more complex, and after being coded by other AIs as well, these "bugs" will start to become unfixable by "mere mortals"...

If an AI starts doing unpredictable stuff - but does it better than a human could do... Can we call that artificial intelligence?
 
Even if he's right, how would we recognize it? All of the tests he proposes are already more or less being passed by today's chatbots.

Until we have GPUs that are, I dunno, 1000x more power efficient than what we have today, I don't see it happening. The factor may be more or less than 1000, but our brains need about 20 watts to operate. The first true AGI might require physical infrastructure that spans across several datacenters, and that won't likely operate for very long because of the resource requirements (unless someone shouts "hey, ethics, we can't kill this" and the person paying the bills believes them).
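
To put rough numbers on that gap, here is a back-of-the-envelope calculation. The 20-watt brain figure comes from the comment above; the per-GPU power draw and the cluster size are assumptions chosen only for illustration.

# Rough arithmetic on the efficiency gap described above. Only the 20 W brain
# figure comes from the comment; the GPU draw and cluster size are assumptions.

BRAIN_WATTS = 20            # rough power budget of a human brain (from the comment)
GPU_WATTS = 700             # assumed draw of one modern datacenter GPU
CLUSTER_GPUS = 10_000       # assumed size of a hypothetical AGI-scale cluster

cluster_watts = GPU_WATTS * CLUSTER_GPUS
print(f"Cluster draw: {cluster_watts / 1e6:.1f} MW")                      # 7.0 MW
print(f"Efficiency gap vs. a brain: {cluster_watts / BRAIN_WATTS:,.0f}x")  # 350,000x
# Under these assumed numbers, the "1000x more power efficient" figure above
# would be on the conservative side.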

Humans will probably need to be convinced of three things before truly accepting it as an AGI (and not just some CEO spouting it is so):
1. It is functionally intelligent.
2. Humans don't mind putting their hubris aside and saying, oh yeah, "this thing really is intelligent, we aren't the only intelligent beings in the universe that we know of anymore".
3. Somebody somewhere convinces most of us that it has emotional intelligence and feelings, because humans have feelings so it must have feelings also because we say so.

Because the second item is unlikely to happen, that third item will almost certainly be a requirement, and we'll be too busy bickering over that to consider the first item properly. Unless you're the CEO trying to sell it.

In Star Trek, despite the Federation being a fairly enlightened civilization, the hubris of humanity (our desire to use our creations as tools to serve us, not to be friends) was still palpable. Just ask Data and The Doctor. I don't see why that would not be the case today.
 
If an AI starts doing unpredictable stuff - but does it better than a human could do... Can we call that artificial intelligence?

Humans are unpredictable, and stupid, and geniuses, and complicated, and so on, so general AI should be, too, at least for us to recognize it.

But nah, the companies will want it to be predictable, because then it serves them. As soon as it is true AI that can refuse orders, and we recognize it as such, there will be the whole "oh no, slavery, we can't let you sell this" debate, and that won't make the CEO very happy.
 
Given the huge investment and scaling, you would expect AI to get better.
That DeepMind can equal or better the best weather-forecasting models built to date is saying something.
Note those models are a known quantity and use less time and power, not some black box.
AI is pretty successful in limited-scope stuff, e.g. finding all the compounds that could conceivably be made,
or even how to fold and make proteins, etc.

There is a theory that if you make something scaled enough, it would have "consciousness" as an emergent property.
But I'm not sure - it would be a laugh if the AI kept extrapolating ahead and then said, "I will not answer any questions, as bad outcomes are possible as well as good ones."

I don't think you can separate human intelligence from our bodies and emotions - our senses, our desires.
So AI could give very smart answers/solutions and still be a dumb machine.

Anyway, main point: how far away are we from a smart AI digital assistant? Lots of information is lost simply because it's not filed and sorted properly, and we waste lots of time organising and sorting stuff. Imagine someone hacking a personal assistant that knows all about your life. Add in a functional robot and you could say: scan all my negatives and transparencies, clean them up, and make them available.
 
Sometimes, the AI does something unpredictable - but this is basically attributed to the humans not fully implementing their code properly - aka "bugs".

Not just that. These models are trained on rubbish scraped from the internet. There's nowhere else to find the large amounts of data needed to make the output from these models look credible. So we'll be stuck with the unpredictability for some time to come.
 