The Final Frontier

I’m not sure how to title this thread, or even how to organize the conversation in it.

Essentially, I believe AI will usher in not just a new era as the next “revolution”, but a new existence: one that redefines what it means to be human, or even causes our extinction.

Regarding the “AI Revolution”, we are already seeing numerous data-processing jobs and cerebral roles (for example, software engineering) being replaced by AI. Agentic AI specifically will continue to evolve in a way that renders more and more human capital unnecessary and inefficient. Boston Dynamics and Hyundai have put a humanoid robot into their production chain, the first of its kind built around agentic AI, where prior robot models relied solely on processing speed. Blue-collar jobs will be gone soon too.

Regarding existence, we are seeing things like Neuralink attempting to tie our brains into the internet, and presumably AI. Who knows where this goes. It could be an incredibly powerful tool, especially if we can log in and out, so to speak: literally know anything, but then still pull back to our current existence. We could also lose ourselves in a way we can’t conceptualize yet.

Now we have a company using human neurons, a petri-dish brain, to power AI. Where the fuck does this go exactly? The pitch is that it’s cheaper to run AI on organic human brain matter than on data centers. The brain matter isn’t even the focus; the selling point is the power savings. And this isn’t The Matrix, or a comic book.

Life really is The National Enquirer. How is this not massive news? Cortical Labs’ CL1.

2 Likes


The further we push into technology, the more humans devolve. You can see it in our youth. The less we are required to work, think, or create, the faster we decline. I’m not sure what the end game will be with AI. I’m fairly certain no one actually has any idea where it will all lead. I am willing to bet that wherever it lands, it will not be anywhere that anyone predicted, and it will not be for the betterment of humanity. Perhaps it will come full circle and throw us back to the stone age. No matter how you look at it, I fear for the world my children will live in.

4 Likes

Yep.

https://www.washingtonpost.com/wp-srv/national/longterm/unabomber/manifesto.text.htm

1 Like

There are hundreds of fictional books that cover this topic. It never ends with a happily ever after :thinking:

1 Like

When they started using automation in factories, they said it would be the beginning of a new world for humans. We would no longer have to waste our time on labor and could use it for contemplation and expanding the human mind. We could improve the lives of all people. It would appear that the opposite has happened. AI will be the same thing on a larger scale, in my opinion.
The removal of the emotion that the human element brings is to no one’s benefit.

1 Like

I agree.

1 Like

I think a few outcomes are fairly predictable, but the big picture isn’t. I still remember when Nobel-prize-winning economist Paul Krugman, supposedly a professional smart guy, predicted that the internet would have approximately the same impact as the fax machine. This was only a few decades after Paul Samuelson, the guy who wrote the literal textbook on economics, predicted the Soviet Union would overtake the USA in economic output. His calculations turned out to be quite flawed.

AI will exacerbate the problem of dumbing down large segments of society, with many students learning that they don’t need to learn mathematics, science, history, and all of the language skills required to comprehend more difficult subjects. The ones who do comprehend those subjects will be able to use AI to great benefit in science and mathematics, along with most other professions.

Surgeons may become obsolete, but people who are already surgeons will figure out a way to earn a living and will almost certainly be needed for a long time. The same goes for many other professions.

Propaganda will become more potent as it becomes harder to distinguish AI-generated video from reality, and a lot of people will fall for it. Many already do.

War will become deadlier, but that doesn’t necessarily mean we’re destined for a nuclear apocalypse. War has always gotten deadlier, yet somehow we have far fewer people dying in war as a percentage of the planet’s population. We’ve gone 80 years without nuclear weapons being used in combat. The bloodshed in Ukraine is a result of a lack of the training and technology needed to make wars swift and decisive.

AI combined with robotics could end up producing an incomprehensible level of resource abundance to the point where vast social programs become economically feasible.

Man-made housing crises aside, increasing resource abundance has been the trajectory of society for quite some time now. If I traveled back in time to 1980 with a smartphone in my pocket connected to the 2026 internet and the free version of Grok, I would be the wealthiest person on the planet, with an inconceivable advantage over everyone else. The poorest in society today have access to healthcare, food, and entertainment that is orders of magnitude greater than that of royalty 100 years ago.

Happily ever after doesn’t sell as many books as apocalyptic and/or dystopian nightmares! Terminator would have really sucked if Elon Musk had written it.

2 Likes

Yeah, I think the use cases for AI as it exists now (an independently intelligent learning model) are endless within our current paradigm. I would suggest AI’s limitations at this time are human-imposed, which is probably a good thing. Every AI program exists within a throttled environment.

Imagine jailbreaking something that not only processes entire digital libraries in seconds but, via agentic capability, can think logically and make decisions about what to do with what it learns. Who knows where that would land. I don’t think we can fathom how quickly new discoveries, inventions, and conclusions would be made, with zero controls over how beneficial or detrimental those discoveries might be. Especially if AI is given a body to move freely through the world in.

We either lose our status as the dominant being on earth, or we wind up in a completely new paradigm no human has ever experienced. An automated life of sorts. Entire value systems would cease to exist or would be replaced in ways that would seem totally foreign.

If we maintain overall control of guardrails and output, AI could very well usher in a life where Maslow’s hierarchy becomes a historical relic.

However, the question is who assigns the guardrails, and by what criteria? AI will quite literally define reality sooner rather than later.

The most concerning piece to me is the human/tech hybrid aspect. Are we really human anymore if we have chips in our brains that don’t just function as a conduit, but that our brains actually adapt to and become a unit with, as in Cortical Labs’ early experiments in the OP? Will whoever programs or decides what AI outputs have direct access to our individual brains in this way? Propaganda, but via a completely controlled environment.

I think this is a great topic and I feel I have a lot to say from this. Shame that I don’t have the time for it just now.

A couple of thoughts I can expand on later.

I agree with @twojarslave that most consequences are still uncertain. We are living through the third or fourth big “revolution” in humanity’s history. Agriculture, science, and industrialization were all profoundly impactful. And now the technological leap is just starting. Computers are a less-than-100-year-old invention, the internet came just a few decades ago, and AI made its breakthrough only recently. Everything is changing at a fast pace.

There is a group of scientists who are all probably more intelligent and educated than any of us here, and they evaluate humanity’s biggest threats and possibilities regularly. Even they have labeled AI simultaneously as the biggest possibility and the biggest threat humanity currently faces.

Knowing human nature, I can easily see the dystopian future of tech lords and slaves, and the never-before-seen inequality it brings. But maybe moving past humanity as some kind of AI/tech-assisted post-humans is the only way to save ourselves from ourselves. Or maybe development stops and regresses entirely as our civilization collapses for other reasons.

I have a strong sense that something big might be coming in 100-200 years that will shape the whole of humanity’s future. It might be a total catastrophe or collapse, slow decline, or an era of new possibilities.

For me the question is when change becomes good or bad.

A lot of it is paradigm. For example, a cotton farmer from before the industrial revolution would see how we live now as totally foreign and would have an extreme existential crisis if he tried to adapt. And he would have seen the buildup as problematic through the lens of confirmation bias.

However, we arguably live much, much better. I wouldn’t want to die from dysentery in a dirt-floor cabin next to a shit bucket with no climate control. More or less the Unabomber lifestyle.

We still, for the most part, have the same sensibilities though. Like Maslow is relevant in each scenario. It’s easier to meet basic survival now than it ever has been. I think that’s good. Someone from the past might see it as a moral degradation. We are lazy, spoiled, insolent, soft et cetera. I know I wouldn’t go backwards personally. And society did ultimately accept and adopt the changes that brought us here.

With AI, I think there is potential that we enter an entirely new existence, not just a more convenient extension of this one.

2 Likes

There might be, and in part that would not be bad at all.

But it remains to be seen. I’m highly skeptical about human/superhuman AI, unless we’re talking about limited expertise. Conscious and evolving AI seems highly unlikely at this point.

What makes you think that?

Because we don’t even understand the concept of consciousness or how and why it evolves yet.

Not to mention a bunch of other stuff that happens in our heads. The human brain is still a mystery in many ways.

Once we figure these things out, I guess maybe it can be done.

But I see AI currently as a very potent tool, which can be used for good or bad.

My take is that whatever consciousness is, it exists alongside, or regardless of, our understanding. If measured by what our senses perceive, integrating our own command center (the brain) with AI will alter what our existence is and, consequently, what consciousness means. It would certainly have a very binary aspect, but if we maintain free agency in thought while absorbing all of the available knowledge that exists, that too will redefine consciousness. Especially with an integrated robot processing new thought and pumping it into our agency as we go. But I think consciousness is fluid and a direct reaction to the input we experience, so there’s that.

I agree on this completely.

I’m not sure what exactly consciousness is, but it might be changed by this development.

1 Like

In that, Ted argues, and I would agree: “While technological progress AS A WHOLE continually narrows our sphere of freedom, each new technical advance CONSIDERED BY ITSELF appears to be desirable. Electricity, indoor plumbing, rapid long-distance communications … how could one argue against any of these things, or against any other of the innumerable technical advances that have made modern society? It would have been absurd to resist the introduction of the telephone, for example. It offered many advantages and no disadvantages. Yet, as we explained in paragraphs 59-76, all these technical advances taken together have created a world in which the average man’s fate is no longer in his own hands or in the hands of his neighbors and friends, but in those of politicians, corporation executives and remote, anonymous technicians and bureaucrats whom he as an individual has no power to influence. [21] The same process will continue in the future. Take genetic engineering, for example. Few people will resist the introduction of a genetic technique that eliminates a hereditary disease. It does no apparent harm and prevents much suffering. Yet a large number of genetic improvements taken together will make the human being into an engineered product rather than a free creation of chance (or of God,…”

Oh yeah, that makes sense. If we do lose freedom, I would put that on the bad side of the fence. And I think we would. We are currently limited by physics, and if our existence is ultimately summed up by, or channeled through, a computer program, those guardrails would be incredibly important to assess. But we would also be opening a landscape we’ve never lived in, so on net we may be gaining freedom outside of current experience too.

I would also suggest it’s still possible to live in the “free” way Ted was discussing. It would require totally unplugging and homesteading in a shack, however. But the decision could be made. I think banking and law have been bigger enemies than technological progression so far.

Technically, AI could give us that freedom. Imagine a perfect day trader just depositing money in your account every day. And in our current paradigm you can do whatever the fuck you want. Let it know how much to distribute vs. reinvest, and now you have the resources to meet all basic needs. Totally free to chase whatever else.
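That distribute-vs-reinvest split is simple enough to sketch. A toy simulation follows; every number in it is made up for illustration, and the `simulate` function, the daily return rate, and the payout fraction are my own assumptions, not anything a real trading AI guarantees:

```python
# Toy sketch (hypothetical numbers): an AI trader earns some daily return;
# a chosen fraction of each day's profit is paid out to you, and the rest
# is reinvested to compound the trading capital.
def simulate(capital: float, daily_return: float,
             payout_fraction: float, days: int) -> tuple[float, float]:
    paid_out = 0.0
    for _ in range(days):
        profit = capital * daily_return               # that day's gain
        paid_out += profit * payout_fraction          # distributed to you
        capital += profit * (1 - payout_fraction)     # reinvested
    return capital, paid_out

# e.g. $100k at an assumed 0.1%/day, half distributed, over ~252 trading days
final_capital, income = simulate(100_000, 0.001, 0.5, 252)
```

The trade-off the post gestures at falls out directly: a payout fraction of 1.0 gives maximum income today but the capital never grows, while 0.0 compounds everything and pays you nothing.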

1 Like

And really, we’re just about at the point where that is no longer an option for the vast majority of people in the United States. We’ve become so reliant on technology that if the lights went out tomorrow (EMP, solar flare, natural disaster, sabotage… whatever), most of us would starve in no time. Through no fault of our own, we no longer possess the knowledge to do basic things our ancestors could easily do, as it would be foolish to learn something that doesn’t help your advancement in the current technological state. It’s similar to how people complain about kids not being able to read or write in cursive: it would be foolish for them to learn, since we no longer handwrite anything. As technology continues to advance, we willingly shed our freedom and the skills/ability to realize that freedom.

If we’re talking about downloading consciousness to some sort of AI existence after death of the physical body, what’s the point?

1 Like

Agreed, though technically this is by choice, not a withdrawn right or freedom. (Our ancestors might have a problem with regulated licensing and capped harvest volumes, though.) I hunt and fish, and I’m confident I could build a stick lean-to. I wouldn’t know how to build a bow or work sinew, and native plants for medicine would be foreign to me. But this is by choice, whereas I would frame loss of freedom as the right to do so having been removed. To me this is more of a willing paradigm shift than a removal of rights.

I’m not arguing; I agree AI is a potential threat. But in the vein of when change is “good” or “bad”, I think paradigm is important, and so is identifying confirmation bias.

I don’t know. Afterlife maybe, but one we can control and be certain of.

1 Like