
The Singularity Is Near - Raymond Kurzweil

Bon
Timmy Nerd · Feb 22, 2006 · Birmingham
Just finished reading this, very interesting to say the least. It hypothesises that we are the last generation that will be limited by our physical bodies, and that by the year 2069 we as a race will be able to obtain the equivalent of immortality via technology.


Is anyone else familiar with any of his work? And if so, how accurate do you think these predictions may be?


For those unfamiliar with the concept of the technological singularity: it basically states that as technology advances, the rate of advance itself keeps increasing, because we can produce a machine which can produce a more intelligent machine, which can in turn produce an even more intelligent machine, and so on.
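To picture that runaway curve, here's a rough toy loop (made-up numbers, nothing taken from the book): assume each machine can design a successor that is some fixed factor "smarter" than itself, and capability explodes after surprisingly few generations.

```python
# Toy illustration of recursive self-improvement -- the starting value and the
# 1.5x improvement factor are invented purely for the sake of the example.
def recursive_improvement(start=1.0, factor=1.5, generations=20):
    capability = start
    history = [capability]
    for _ in range(generations):
        # each machine designs the next one, slightly better than itself
        capability *= factor
        history.append(capability)
    return history

if __name__ == "__main__":
    for gen, cap in enumerate(recursive_improvement()):
        print(f"generation {gen:2d}: capability {cap:,.1f}")
```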


Part of this process means we will be able to use nanotechnology to replace the connections in the human brain, and once the brain is entirely made up of nanites it would be completely downloadable.


I think the concept is not that far-fetched, but what does it bring in the way of religion and faith? Many believe in a God of sorts out of fear of death, and the hope of something after they die.


What happens when you remove the mortal coil of humanity?
 

Bon
Timmy Nerd · Feb 22, 2006 · Birmingham
It would never happen because of the amount of money to be made in medicine and insurance.
I take it you don't have any clue about the subject then, or didn't even think to have a quick read up on it, as you would soon find out why your post is so unrelated to the topic.
 

paintball_scots
Platinum Member · Feb 12, 2008 · Glasgow
OK, never mind my other post; I read something else that made more sense than the way you explained it.

How can you make a machine that is more intelligent than a human? How can you teach a computer emotions, or to be creative? How can a machine have an imagination when it is programmed by a human? The imagination has no rules to follow, so how would a machine interpret that?

How would a machine make another machine more intelligent?

For example, if all a machine knew was that 1+1=2, then how would it teach another machine to do more than that?
 

Gadget
Platinum Member · Jul 16, 2002 · Essex, UK
Try and watch Caprica (the BSG prequel) - from the synopsis that I've read, it takes exactly the idea you outlined (human consciousness becoming something which can be ported from the human mind to other storage media) and runs with it to create the Cylons. I think it's a possibility, but I can't see it happening as soon as 2069.
 

Bon
Timmy Nerd · Feb 22, 2006 · Birmingham
OK, never mind my other post; I read something else that made more sense than the way you explained it.

How would a machine make another machine more intelligent?

Well, clearly you didn't, or you'd know the answer to that question also :rolleyes:


Try and watch Caprica (the BSG prequel) - from the synopsis that I've read, it takes exactly the idea you outlined (human consciousness becoming something which can be ported from the human mind to other storage media) and runs with it to create the Cylons. I think it's a possibility, but I can't see it happening as soon as 2069.
Never been a fan of the series, but I might give it a punt. As for it not happening this soon, why not? Just look at the curve of processing power over the last 10 years; it is scarily fast.
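For a rough sense of that curve: the usual Moore's-law rule of thumb is a doubling roughly every two years (the exact period varies depending on who you ask), and that compounds quickly. A quick back-of-envelope sketch, assuming that two-year figure:

```python
# Back-of-envelope Moore's-law compounding -- the two-year doubling period is
# the commonly quoted rule of thumb, not a figure from this thread.
years = 10
doubling_period = 2                      # years per doubling (assumption)
growth = 2 ** (years / doubling_period)  # 2^5
print(f"{years} years -> roughly {growth:.0f}x")  # ~32x
```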
 

Bon
Timmy Nerd · Feb 22, 2006 · Birmingham
Come on Bon, computers haven't even passed Turing's test, which was devised in the fifties.

You think intelligence is just a matter of more processing power?

How many Beowulf clusters must you put in parallel before it evolves useful independent thought?
Independent thought won't come by power alone; it will come from the program. But once that first step is taken and you can ask it what it can improve, you then add more power so it can work out how to make itself better, and the cycle begins. The turning point will be WHEN that AI comes to fruition.

10 years ago AMD and Intel were saying home PCs were as powerful as they could ever be; now look at them.

The Turing test was almost passed by a program 2, maybe 3 years ago. However, it is a weak test in that it requires the program to simulate stupidity, which is counter-productive to creating AI in the first place.
 
I'm not convinced. Turing's test, as you say, is very weak, probably the first step. Yet it's still unbeaten.


If I teach my cat to bark loud enough, will it become a dog? No, just a more annoying cat.

If you make a computer powerful enough, that doesn't give it the capability to think. It just becomes a more powerful computer.
Giving it a program is possible. Giving it a goal for improvement is possible.
Letting it decide its own goals: impossible.


I have some old recordings from the 1980s. One British company had some very primitive results in voice recognition. They claimed that in under 5 years you would be able to have a full two-way conversation with their machine.
Of course this never happened; the mistake they made was to think that the computer would be as good at human things as it was at computery things.
In reality, computers aren't even comparable to infants when measured in terms of independent thought.
 

Bon
Timmy Nerd · Feb 22, 2006 · Birmingham
I'm not convinced. Turing's test, as you say, is very weak, probably the first step. Yet it's still unbeaten.
Turing's test compares how good a computer is at imitating a person. This is a stupid idea; there is nothing to be gained from making a program imitate a human. Hell, go talk with ALICE, http://alice.pandorabots.com/ - it responds on pre-defined parameters, yet you can have a chat with it. Does that denote intelligence? According to Turing's test it does. But that's just stupid.
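A scripted bot in that spirit is only a few lines of code. This toy sketch has nothing to do with ALICE's actual rule set (which uses AIML); it just shows the same idea of canned pattern-to-response matching holding up a "conversation" without anything like understanding:

```python
import re

# Toy keyword-matching bot -- NOT ALICE's real rules, just an illustration that
# a table of canned pattern -> response pairs can carry a chat with no understanding.
RULES = [
    (re.compile(r"\bhello\b|\bhi\b", re.I), "Hello there! How are you today?"),
    (re.compile(r"\bhow are you\b", re.I), "I'm doing fine, thanks for asking."),
    (re.compile(r"\bweather\b", re.I), "I hear it's lovely out, but I wouldn't know."),
]
FALLBACK = "That's interesting. Tell me more."

def reply(message: str) -> str:
    for pattern, canned in RULES:
        if pattern.search(message):
            return canned          # first matching rule wins
    return FALLBACK

if __name__ == "__main__":
    print(reply("Hi, what's the weather like?"))   # hits the hello/hi rule first
    print(reply("Explain the singularity to me"))  # nothing matches -> fallback
```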


If you make a computer powerful enough, that doesn't give it the capability to think. It just becomes a more powerful computer.
Who ever argued otherwise?

Giving it a program is possible. Giving it a goal for improvement is possible.
Letting it decide its own goals: impossible.
Why not?

I have some old recordings from the 1980s. One British company had some very primitive results in voice recognition. They claimed that in under 5 years you would be able to have a full two-way conversation with their machine.
Of course this never happened; the mistake they made was to think that the computer would be as good at human things as it was at computery things.
Back to the imitation point.

In reality, computers aren't even comparable to infants when measured in terms of independent thought.
You mean when you compare them on whether they can imitate humans?




And thus it becomes apparent why Turing's test is flawed, and why people actually researching AI disregard it as a test of intelligence.

It's limited to a predefined context. What if it's not designed to talk to humans?

Skynet syndrome: the Turing test favors acting humanly over acting logically. OK, if I put an AI in charge of nukes, do I want an AI that will act humanly or logically?