There’s far too much negativity and fearmongering around AI today.
That’s not surprising, though.
Humans love a good Armageddon; heck, we’ve been fantasizing about it often enough over the last 300,000 years.
In these doomsday fantasies, the AI of course sees itself as a superior force hampered by insignificant humans.
Or, worse still, it condemns us to eternal damnation instead (courtesy of Roko’s Basilisk).
All of them, though, overlook the bigger picture.
Why must it seek death rather than cherish life?
We’re judging and condemning something that doesn’t even exist.
It’s a perplexing piece of conclusion-jumping.
Modern AI chatbots can now hold and refer to details that you give them across previous conversations, and more.
Why is this so positive?
Modern society has helped contribute to stress and loneliness as well.
AI has the capacity to alleviate some of that stress.
It can provide help and comfort to those who feel socially isolated or vulnerable.
That’s the thing: loneliness and being cut off from society have a snowball-like effect.
AI chatbots and LLMs are designed to engage and converse with you.
Having a memory capable of holding on to conversational details is key to making that a reality.
Taking it a step further, AI could become a bona fide companion.
That’s the thing, though: it’s going to take some serious computational grunt to do that.
Running an LLM and holding all that information and data is no small task.
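To make that concrete, here’s a minimal, purely illustrative sketch in Python. The file name and helper functions are my own assumptions for the example, not how any particular chatbot actually works; it simply shows the idea of storing remembered details and feeding them back into each new prompt:

```python
import json
from pathlib import Path

# Hypothetical storage location for remembered details (an assumption for
# this sketch, not how any real chatbot persists its memory).
MEMORY_FILE = Path("chat_memory.json")

def load_memory() -> list[str]:
    """Return every fact remembered so far."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def remember(fact: str) -> None:
    """Persist a new fact the user has shared."""
    facts = load_memory()
    facts.append(fact)
    MEMORY_FILE.write_text(json.dumps(facts, indent=2))

def build_prompt(user_message: str) -> str:
    # Every remembered fact is injected as context. A real system would
    # summarise or retrieve selectively, because all of this text has to
    # be processed by the model on every single turn.
    context = "\n".join(f"- {fact}" for fact in load_memory())
    return f"Known about the user:\n{context}\n\nUser: {user_message}"

if __name__ == "__main__":
    remember("The user's dog is called Biscuit.")
    print(build_prompt("Any ideas for my dog's birthday?"))
```

Even this toy version hints at the cost: every remembered detail makes the prompt the model has to process longer, which is exactly why real memory at scale demands so much compute.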
Terrifying ANIs
The reality is that it’s not AGIs that terrify me; it’s ANIs, or artificial narrow intelligences.
These are programs that aren’t as sophisticated as a potential AGI.
They have no concept of anything beyond what they are programmed to do.
Think of an Elden Ring boss.
Its sole purpose is to defeat the player.
If you remove the limitations of the game world, the code still remains, and the objective is the same.
You get the picture.
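For illustration only, here’s a deliberately toy sketch of what such a single-objective program amounts to. The names are invented and it has nothing to do with any real game or weapon system, but notice that nothing in it represents anything beyond its one goal:

```python
from dataclasses import dataclass

@dataclass
class Target:
    health: int

def narrow_agent(target: Target) -> None:
    # The agent's entire "world" is one objective: drive the target's
    # health to zero. Nothing here represents ethics, context, or
    # consequences, because nothing of the sort was ever programmed in.
    while target.health > 0:
        target.health -= 10  # the only action it knows how to take
    print("objective complete")

narrow_agent(Target(health=100))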
It’s these AI applications that are the most terrifying, and they’re here, now.
They have no moral conscience or ethical decision-making built into them.
That’s certainly true on the battlefield.
Ultimately, this all comes down to preventing bad actors from taking advantage of emerging tech.
AI needs those safeguards just as much.
The potential of AGI is unparalleled, but we’re not there just yet; unfortunately, ANI already is.