[quote]Severiano wrote:
It’s self-defeating if you see it through. The way I see it, AI can always be improved to the point that some other version of AI becomes superfluous and is considered weak.
[/quote]
The question (of the weak AI hypothesis) is whether the machine experiences consciousness, not so much the extent to which the machine has free will, or how good the AI is.[/quote]
Could a sufficiently evolved AI improve itself?
And can it really become better once a certain intelligence is achieved?
What does “better” even mean?
@Jewbacca
Strong words from the creator of the “Ask Moshe” thread
[quote]Schwarzfahrer wrote:
Could a sufficiently evolved AI improve itself?
And can it really become better once a certain intelligence is achieved?
What does “better” even mean?
[/quote]
Yes, because a sufficiently evolved AI (sufficient for what?) would have to be able to learn. At minimum, strong, human-like AI would need to be able to improve itself in the same way we are able to improve ourselves.
You will have to be the one to define “better,” because you’re asking the question. If I have processors at a given speed, I can increase the cycles per second, or I can reduce the size of the transistors, or I can introduce hyperthreading, or I can parallelize operations, or I can introduce specialty chips that perform certain calculations very quickly, etc. etc. All of that would be within the capability of a human-level AI, if that’s what we’re talking about. And machines have the advantage that they are much more plug-and-play than we are.
There is a theoretical computational limit given a finite amount of power and space. I don’t think human beings are anywhere close to that limit.
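For a rough sense of where that theoretical limit sits, the Landauer bound (k·T·ln 2 joules per bit erased, a real result from thermodynamics of computation) gives a back-of-the-envelope number. The constants below are real physics; the 1-watt power budget is just an illustrative assumption of mine:

```python
# Back-of-the-envelope Landauer limit: the minimum energy needed to
# erase one bit of information at temperature T is k_B * T * ln(2).
import math

k_B = 1.380649e-23            # Boltzmann constant, J/K (exact, SI 2019)
T = 300.0                     # room temperature, kelvin
e_bit = k_B * T * math.log(2) # joules per bit erased (~2.87e-21 J)

# With an assumed 1-watt power budget, the ceiling on irreversible
# bit operations per second is simply 1 / e_bit (~3.5e20 per second).
erasures_per_watt = 1.0 / e_bit
```

Real hardware today is many orders of magnitude above e_bit per operation, which is one way of putting numbers on the claim that we are nowhere near the limit.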
[quote]Schwarzfahrer wrote:
Could a sufficiently evolved AI improve itself?
And can it really become better once a certain intelligence is achieved?
What does “better” even mean?
[/quote]
Yes, because a sufficiently evolved AI (sufficient for what?) would have to be able to learn. At minimum, strong, human-like AI would need to be able to improve itself in the same way we are able to improve ourselves.
You will have to be the one to define “better,” because you’re asking the question. If I have processors at a given speed, I can increase the cycles per second, or I can reduce the size of the transistors, or I can introduce hyperthreading, or I can parallelize operations, or I can introduce specialty chips that perform certain calculations very quickly, etc. etc. All of that would be within the capability of a human-level AI, if that’s what we’re talking about. And machines have the advantage that they are much more plug-and-play than we are.
There is a theoretical computational limit given a finite amount of power and space. I don’t think human beings are anywhere close to that limit.[/quote]
Would you consider a synthetic replication of cell death necessary, or par for the course, for sentience?
[quote]spar4tee wrote:
Would you consider a synthetic replication of cell death necessary, or par for the course, for sentience?[/quote]
Artificial neural networks often use pruning algorithms that are similar to neuron pruning/synaptic pruning in biological brains. Who knows if that is a necessary characteristic of sentience? It is necessary to reduce the size of the artificial neural network so that it works in reasonable amounts of time given the hardware we have right now. But my guess is that if one had the hardware, there are clever ways to re-purpose neurons that would otherwise have been pruned.
[quote]Severiano wrote:
Lol, that’s straight up Descartes homework. Bro do your reading.
[/quote]
The only substantial thing I’ve ever learned of Descartes is that quotation and the fact that he lived during the Enlightenment. Pretty sure that’s all that actually matters. I don’t take philosophy. I read whatever piques my interest, not what someone tells me to read. I consider suggestions but don’t follow orders.[/quote]
I think you are putting Descartes before the horse.
[quote]Severiano wrote:
Lol, that’s straight up Descartes homework. Bro do your reading.
[/quote]
The only substantial thing I’ve ever learned of Descartes is that quotation and the fact that he lived during the Enlightenment. Pretty sure that’s all that actually matters. I don’t take philosophy. I read whatever piques my interest, not what someone tells me to read. I consider suggestions but don’t follow orders.[/quote]
I think you are putting Descartes before the horse.[/quote]
What do you mean?
[quote]Severiano wrote:
It’s self-defeating if you see it through. The way I see it, AI can always be improved to the point that some other version of AI becomes superfluous and is considered weak.
[/quote]
The question (of the weak AI hypothesis) is whether the machine experiences consciousness, not so much the extent to which the machine has free will, or how good the AI is.[/quote]
Could a sufficiently evolved AI improve itself?
And can it really become better once a certain intelligence is achieved?
What does “better” even mean?
@Jewbacca
Strong words from the creator of the “Ask Moshe” thread
[/quote]
What does better mean? Exactly!!
So we are trying to create AI with consciousness: something that can make decisions on its own based on experience and internal knowledge/problem-solving ability, a posteriori and a priori, with a sense of selfhood. Those are the roots of self-awareness, and making decisions on that basis is very close to, if not the same thing as, volition, at least to some folks.
I think that so long as the framework of computers is 0s and 1s we won’t be able to program consciousness, but we will be able to mimic it so that it seems conscious, via small tweaks, more code, and people anthropomorphizing.
It’s the chess program… How do you improve it? Maybe we give it a camera that measures a person’s reactions to certain moves: eye movement and body language. Does the program of 0s and 1s ignore the body-language information, via its own decision-making process, when the live chess master spanks it 5 games in a row? Will the program itself realize that the live chess master is using its programming to his advantage by showing weak when he’s strong and vice versa, or does it merely mimic that realization with a piece of code that tells it to shut down its camera when it loses 5 times in a row?
Imagining a super consciousness, an AI superior to our own intelligence, I think it would require something similar to human volition, with improved senses and processing power. Think Neo while inside the Matrix.
[quote]Severiano wrote:
Lol, that’s straight up Descartes homework. Bro do your reading.
[/quote]
The only substantial thing I’ve ever learned of Descartes is that quotation and the fact that he lived during the Enlightenment. Pretty sure that’s all that actually matters. I don’t take philosophy. I read whatever piques my interest, not what someone tells me to read. I consider suggestions but don’t follow orders.[/quote]
I think you are putting Descartes before the horse.[/quote]
What do you mean?[/quote]
[quote]Severiano wrote:
I think that so long as the framework of computers is 0s and 1s we won’t be able to program consciousness, but we will be able to mimic it so that it seems conscious, via small tweaks, more code, and people anthropomorphizing.
[/quote]
Imagine a race of aliens saying “so long as the framework is meat, there’s no way to achieve consciousness, but such creatures will be able to mimic consciousness via evolutionary tweaks,” or “so long as the framework is carbon…”
My opinion is that to rule out machine consciousness, one has to start by proving that there are aspects of brain function that are impossible to simulate. That’s the starting point. If you can prove that, then you need to show how those aspects are necessary for conscious awareness, which is an even harder job.
EDIT: To clarify, I mean a full physical simulation down to the atomic level. If a simulation of the brain at the atomic level cannot produce consciousness, then it seems unlikely that higher levels of abstraction will be more successful.
We are hard-core analyzers, so we usually realize the justice system is broken; it is the definition of inconsistent as far as theories of justice go. So we tend to be interested in law, but then laugh at it, then at ourselves… We smart ourselves out of money and high-paying careers.
But, teach a scumbag or scoundrel philosophy and they will make millions.
[quote]sgdiablo wrote:
I hear there are a lot of high paying jobs in the philosophical field…[/quote]
If you aren’t an idiot and have actually unique and organized ideas, you can write books or become a professor…
[quote]sgdiablo wrote:
I hear there are a lot of high paying jobs in the philosophical field…[/quote]
If you aren’t an idiot and have actually unique and organized ideas, you can write books or become a professor…[/quote]
Nooooo you can still be an idiot and be a professor
[quote]sgdiablo wrote:
I hear there are a lot of high paying jobs in the philosophical field…[/quote]
If you aren’t an idiot and have actually unique and organized ideas, you can write books or become a professor…[/quote]
Nooooo you can still be an idiot and be a professor [/quote]
LOL