
Emotions in a Computer???

TEKLOFTY

I don't know if any of you are into anime, but I once watched Ghost in the Shell 2 on a plane and found it interesting. It basically explores this exact type of situation in the future, and the converse (where humans have software upgrades and electronic enhancements etc.). Where is the line drawn between man and machine? Besides being an interesting watch it's also full of excellent violence and cartoon boobs. I recommend it.

I have a slight interest in anime, but to develop the point: if you want to see what is, in my opinion, the best representation of AI in fiction, then I suggest you read some books by Iain M. Banks. Within his structured world of 'The Culture', man and machine live side by side in a utopian and hedonistic existence; some of the concepts he deals with are exactly what I imagine AI would be like, should we ever develop it to a sentient extent.
 

Pmr Man

otherwise known as Bing!
Apr 24, 2008
279
0
0
satans layby- MILTON KEYNES
Robbo, you're bang on the money with what I meant, but a lot more articulate. The only thing is, most humans and animals learn through pain and discomfort, e.g. if a toddler was bad and the mum took away a fun toy, the toddler would find this bad. Whereas a computer doesn't have nerves (pain) or wants, and even if it did, how would it react if it didn't get what it wanted? So in theory, if a computer became sentient, could it be like a bad two-year-old, because it knows it won't feel bad either way?
EDIT: what would determine the wants of a computer, as it doesn't have nerves? I'm kind of going round in circles, but are any of you on the same train of thought?
 

TEKLOFTY

Robbo, you're bang on the money with what I meant, but a lot more articulate. The only thing is, most humans and animals learn through pain and discomfort, e.g. if a toddler was bad and the mum took away a fun toy, the toddler would find this bad. Whereas a computer doesn't have nerves (pain) or wants, and even if it did, how would it react if it didn't get what it wanted? So in theory, if a computer became sentient, could it be like a bad two-year-old, because it knows it won't feel bad either way?
This depends on your definitions of pain and discomfort. We have no idea what a sentient computer might construe as painful or uncomfortable and as such we can't really postulate how a sentient being such as this might react and learn.
 
John, I think we need to view AI as an emergent property here; the notion that we can program a computer to be happy or sad is nonsensical, but I do know this much, mate... I think whenever we look at complex systems, whether they be organic or electronic, it is just a matter of complexity as to when those systems become sentient.

It hasn't happened yet, as far as we know, in a digital sense, and that qualifier is quite important, because if your research is being funded and a computer does become self-aware, the last thing the people in charge of that research are gonna do is announce it to everyone... think about it, mate: those people would have, in effect, a device worth zillions.
Not in itself, but because of what it could produce.

And so, if we assume for the time being that it hasn't been achieved, I am proposing that computers will eventually experience emotions (or approximations of emotions) when the nature of their programming becomes sufficiently complex to provoke an emergent property of what we call consciousness.


Just how the computer will let us know it can think is another topic... it's intriguing stuff, because the psychology books will have to be rewritten everywhere to accommodate the 'nature' of a computer's personality... hmmm... mind-bending stuff indeed!
Is that all that's required though?
Once a system becomes complex enough that it appears convincingly sentient, then it is?

To me that's mimicry.
The Turing test was designed to test exactly that (can a computer mimic a human within certain boundaries?).
Last time I checked, nobody had taken the prize.

The Turing test will be conquered, I'm sure.
But it still leaves a question in my mind. At what point does a good mimic become the real thing?



I'll use a much more mature and well-understood technology to try and clarify the point I made in the last post about the limitations of computers: the problem of handling analog values in digital systems.

An analog value such as 1/3 is very easy for us to comprehend.
But a computer has no concept of dividing something into thirds. It has to represent it digitally.

So you decide a whole 1 is represented as 1024 in digital (10 bits). Then 1/3 will be 341, which is 0.33 of a bit out from the real value.
If we represent the whole 1 as 65536 in digital (16 bits), then 1/3 is 21845 - still about 0.33 of a bit out, but that error is now a far smaller fraction of the whole (roughly 5 parts per million instead of about 3 per 10,000).
The more resources you use, the lower the error.
But it requires infinite computer resources to achieve zero error.

Ultimately you have to choose a level of approximation and pull strange tricks (like floating point representation) to deal with something a person can do so very easily.
No matter what level of development, a digital computer will never be able to deal with analog values as accurately as we do.
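To make that concrete, here is a rough Python sketch (purely illustrative, function name and bit widths are mine) of the fixed-point idea above: treat the whole as 2^bits equal steps, round 1/3 to the nearest step, and watch the error shrink with more bits without ever reaching zero.

def fixed_point_third(bits):
    whole = 2 ** bits                    # e.g. 1024 for 10 bits, 65536 for 16 bits
    approx = round(whole / 3)            # nearest representable step to 1/3
    error = abs(approx / whole - 1 / 3)  # how far the approximation is from the true value
    return approx, error

for bits in (10, 16, 32):
    approx, error = fixed_point_third(bits)
    print(f"{bits} bits: 1/3 ~ {approx}/{2 ** bits}, error {error:.1e} of the whole")

More bits means a smaller error, but 1/3 has no exact finite binary representation, so the error never actually hits zero - which is the point being made here.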

My feeling is the same limitation may be found in AI systems.



The more I learn about computers, the more I become aware of their limitations and of how they are impressively proficient at certain tasks and annoyingly useless at others.

PS Nobody check my maths :)
 

Kat

Hmmm, good question.
I'm really interested in robotics (as a side interest from my interest in neuroscience), especially a bloke called Kevin Warwick from the University of Reading.
Some of his experiments have convinced me that computers will eventually be able to act as completely functional humans, although the emotions one is tricky, as then you have to ask whether the emotions would be genuine or 'programmed'.

Anyway, for anyone that's interested:

http://www.kevinwarwick.com/index.asp


Not only did this guy create an implant that allowed him to switch on lights, turn on his computer, open doors etc. just by moving his hand (imagine the impact for disabled people), he also wired his nervous system up to the internet, allowing him to move a robotic hand (that copied the movement of his own) in New York while he was in Reading. He also communicated telegraphically with his wife by inserting electrodes into them both and linking their nervous systems, and he created a real sixth sense with said electrodes, as he could sense how far away things were even when completely blindfolded. Imagine the impact that would have for blind people.

Anyway watch the video it's amazing!

I know it's not emotions, but to me, the development of these things as we speak, and certainly in the next year, leaves me with no real doubt (well, maybe a little bit) that eventually emotions will be viable.

Didn't they already destroy one computer (well, split it up) once it showed enough logical thinking to beat chess masters, even with tricks thrown into the game?
 

NSKlad

Is that all that's required though?
Once a system becomes complex enough that it appears convincingly sentient, then it is?

To me that's mimicry.
Sounds about right....

You can simulate things with a computer, but ultimately all it is doing is switching things on and off, fetching digits from memory, and following orders which somebody has already given it.
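A rough sketch of that "fetching digits from memory and following orders" picture (hypothetical Python, made-up three-instruction machine, not any real processor): a tiny fetch-decode-execute loop where everything the machine ever does was already written into the program.

def run(program):
    # `program` is the list of orders somebody has already given the machine.
    acc = 0                              # a single working register
    pc = 0                               # program counter: which order is next
    while pc < len(program):
        op, arg = program[pc]            # fetch the next order from memory
        if op == "LOAD":                 # decode and execute it
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "PRINT":
            print(acc)
        pc += 1                          # move on to the following order
    return acc

run([("LOAD", 2), ("ADD", 3), ("PRINT", None)])   # prints 5 - nothing more, nothing less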

An analog value such as 1/3 is very easy for us to comprehend.
But a computer has no concept of dividing something into thirds. It has to represent it digitally.
Yep. To use a really unsophisticated way of describing stuff, which everybody is going to mock me for but I think is just about adequate :) computers are digital, we are analogue. Our thoughts, for example, could be analogue, but the computer only captures snippets of them, in the same way that a computer records sound from a microphone. It doesn't matter how many hundred times a second the computer samples the sound, it is still not getting the whole thing. No doubt somebody will prove me wrong somewhere, but nobody is perfect. :D
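And a minimal sketch of that microphone analogy (again illustrative Python, with arbitrary frequencies and rates of my choosing): sample a continuous tone at a finite rate and the computer only ever sees isolated snapshots of the wave, never the wave itself.

import math

def sample_tone(freq_hz, sample_rate_hz, duration_s):
    # Take sample_rate_hz snapshots per second of a continuous sine wave.
    n = int(sample_rate_hz * duration_s)
    return [math.sin(2 * math.pi * freq_hz * k / sample_rate_hz) for k in range(n)]

coarse = sample_tone(440.0, 8_000, 0.01)    # 80 snapshots of a 440 Hz tone
fine = sample_tone(440.0, 48_000, 0.01)     # 480 snapshots of the same tone
print(len(coarse), len(fine))               # more samples, but still only samples

A higher sample rate gives a closer approximation, but between any two samples the original analogue signal simply isn't recorded.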
 

Matski

I think a lot of it must come down to learning. At the moment, computers run software, and this software is programmed to do specific things. The software never learns anything in the 'human' sense of the word; it can store preferences etc., but it is always following a programmed decision-making process.

For example, Deep Blue was a supercomputer designed to beat any chess player at chess. It could process around 200 million positions per second - that's pretty incredible, yet Garry Kasparov still managed to beat it overall. Why? A) the guy is a genius; B) because Deep Blue, like any computer, could not 'learn' Kasparov's style of play; it could only calculate every possible move and outcome and pick the one most likely to win or give an advantage. Kasparov, on the other hand, could get a feel for how the computer would play - even an incredibly complex one like Deep Blue. Deep Blue is now old news, but newer chess software takes a similar approach.
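For illustration, here is a bare-bones sketch of that "calculate every possible move and outcome and pick the best one" idea - a toy minimax search in Python, with a made-up game standing in for chess. This is not Deep Blue's actual code (its real search and evaluation were far more sophisticated), just the underlying brute-force principle.

def minimax(state, depth, maximising, moves, evaluate):
    # Exhaustively explore every line of play to a fixed depth, score the
    # leaves with a static evaluation, and back the scores up the tree.
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)
    scores = [minimax(s, depth - 1, not maximising, moves, evaluate) for s in options]
    return max(scores) if maximising else min(scores)

# Toy stand-in for a real game: each move adds 1 or 2 to a running total and
# the maximising player prefers higher totals. No learning, just enumeration.
moves = lambda s: [s + 1, s + 2]
evaluate = lambda s: s
best_score, best_move = max(
    (minimax(m, 3, False, moves, evaluate), m) for m in moves(0)
)
print("best first move:", best_move, "guaranteed score:", best_score)

The machine never develops a 'feel' for its opponent; it just re-enumerates the tree every time, which is exactly the contrast being drawn here.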

One of the issues is how knowledge and its forms are transferred. Humans can handle both tacit and explicit knowledge - explicit forms are numbers and text etc., whereas tacit forms really only come from learning, e.g. try telling someone precisely how to ride a bike when they have never done so before. No matter how well you describe it, even if you give them all the variables, they will have to learn for themselves to some extent. The same goes for other things like face recognition. A computer can store a specification, but not learn the exact form, as this requires tacit knowledge, which currently only a brain can store.

For a computer to 'do' emotion, I think learning will have to come first, as emotions are tacit: they have to be developed through experience and are not predefined. If autonomous learning is possible (I doubt we are there yet, or will be for a long time), then emotions will surely be possible at some point too - the latest research is looking more towards bio-cybernetics, so maybe the answer will be found there.
 
Yeah Matski, that's one essential step.
But I fear it's one among a hundred others, all of which require a fundamental change in the way computers operate.

To put that into perspective, computers are still operating on the Turing Machine principle, which was first described in 1936.
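For anyone wondering what that 1936 principle boils down to, here is a minimal illustrative sketch in Python (the rule table is made up, not any historical machine): a tape of symbols, a read/write head, and a fixed table telling the machine what to write, which way to move, and which state to enter next.

def run_turing_machine(tape, rules, state="start"):
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = tape.get(head, "_")      # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        tape[head] = write                # write the new symbol
        head += 1 if move == "R" else -1  # move the head one cell
    return "".join(tape[i] for i in sorted(tape))

rules = {
    ("start", "0"): ("1", "R", "start"),  # flip 0 -> 1, move right
    ("start", "1"): ("0", "R", "start"),  # flip 1 -> 0, move right
    ("start", "_"): ("_", "R", "halt"),   # hit a blank: stop
}
print(run_turing_machine("0110", rules))  # -> 1001_

Everything in the sketch is just rule-following over symbols on a tape; that, in essence, is the principle the post is referring to.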