Robot Dilemmas Discussion

Forum devoted to Mega Man by Archie Comics.

Re: Robot Dilemmas Discussion

Postby Sunwalker » Fri Jun 19, 2015 1:08 pm

lalalei2001 wrote:I read a really great book about sentient robotic life called Expiration Day. It was sort of an emergent AI thing and how robots and humans adapted to one another, and at the end of the book the main character picked the 'staff only' door, proving that she could think beyond what was programmed into her. The entire time I was reading it I was thinking "This book is so important."

Another thing I really liked was that the main character's father was a priest, and he was pro-sentient robotic life too, and argued for her sentience in court.

It is really a fascinating topic. Personally, I think that the value of life does not come from the appearance or the material of which the body is made, but from the inner self — the soul, if you like. A sentient being is much more than a thing and deserves to be respected and to be valued. So I would defend the right of sentient robots to live, if they existed, but I am not optimistic that this can become a reality.

The reason is that I do not think that the human mind can conceive something at least as complex as itself, let alone something more complex. In order to understand something, that is, to internalize a concept in the mind, it needs to be abstracted. And abstraction removes information from a concept, so any effort to understand how the mind works will necessarily be incomplete. I think that the complexity of the human brain is to neurology what the speed of light is to physics: a hard limit that can be approached more and more closely, but can never really be reached. This is the barrier that I see when it comes to developing human-like machines. In other words, the creation cannot be bigger than its creator.

Even if it becomes possible someday, I am still iffy about the idea. No, I am not against scientific progress. I have a degree in Chemistry and I do scientific research myself; I do like science. My problem with sentient machines is that I think the potential for abuse of this knowledge is much bigger than the potential good it can do. Having the capability to manipulate the inner self (assuming it is possible, which I doubt) gives an unprecedented ability to manipulate and torture people, and it violates the most private and personal part of a person: their mind. If something can be abused, it will be. However, I do not think that the level of knowledge to make this possible can be reached, nor do I think that it is possible with any level of knowledge. I can respect people who think differently, though.

Re: Robot Dilemmas Discussion

Postby Penguin God » Fri Jun 19, 2015 1:48 pm

We already make stuff that's more complex than any single person understands. That's a benefit of having multiple people able to work on single projects.

Re: Robot Dilemmas Discussion

Postby Sunwalker » Fri Jun 19, 2015 2:41 pm

Penguin God wrote:We already make stuff that's more complex than any single person understands. That's a benefit of having multiple people able to work on single projects.

I see where you are coming from. But even then, all the people involved in a project are still limited by what is possible within the constraints of the human brain. It is not about a single person or a group of people, but about human nature. Of course I might be wrong, but what I think is that the case of a human brain understanding how the human brain works is like that mythological serpent Ouroboros, the one that bites its own tail.

In order for some data to be stored in the brain, the brain must create a representation of that information (that is, abstract it). A representation unavoidably carries less information than the original object. For example, a picture is a representation of some object, but it does not convey every single detail of the object, and a picture of a person cannot tell you much about their personality.
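
Just to make the idea a bit more concrete, here is a toy Python sketch (the function and the numbers are invented purely for illustration): a coarse summary of some data acts as its "representation", and because two different originals collapse to the same summary, the summary alone cannot reconstruct the original.

```python
# Toy illustration: a lossy "representation" of some data.
def summarize(values):
    """A coarse representation of a dataset: just its min, max, and mean."""
    return (min(values), max(values), sum(values) / len(values))

# Two different "originals"...
original_a = [1, 2, 3, 4, 5]
original_b = [1, 3, 3, 3, 5]

# ...collapse to the same summary, so the summary alone cannot
# tell us which original it came from - information has been lost.
print(summarize(original_a))  # (1, 5, 3.0)
print(summarize(original_b))  # (1, 5, 3.0)
```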

What I am saying is that any representation of the brain that is stored in the brain is necessarily less complex (i.e., has less information) than the brain itself. This is why I think that it is not possible to fully understand how the brain works, let alone to replicate it. But I am always open to the possibility that I am wrong ;).

Re: Robot Dilemmas Discussion

Postby DoNotDelete » Sat Jun 20, 2015 4:41 am

The odd paradox here is that machines/computers actually make it possible for us to understand more about ourselves, our world, the universe, etc. than we would be able to without them. It might be that one day computers figure out how to replicate something *like* a human brain for us - or even how to emulate parts of a brain (perhaps for medical purposes/brain surgery).

Generally speaking I think the problem with understanding the human brain is that everybody's brain is wired up differently; I don't think it's necessarily a case of applying the same 'rules' to understanding how everybody's brain works. Outside of what we 'know' about brain physiology (i.e. what different parts of the brain do) each brain should be considered an individual and unique conundrum.

I personally don't know how memory works in fine detail, but I suppose it can be said that a brain is part biology and part abstract - the abstract part being the unique memories every individual has. Both parts - the biological and the abstract - play a part in how that individual's brain develops/learns/adapts - the physical acts upon the abstract and the abstract acts upon the physical (though I'm not well versed enough in brain physiology to be considered any kind of expert on this).

Touching on what you spoke about earlier in this thread - I don't know about machines having souls or robots being able to prove that they have souls (I'm not even sure that human beings can prove to themselves/each other that they have souls), but I think the problem with a robot achieving self-awareness lies in the fear of death/oblivion; until a robot really 'fears' for its own 'life' - so much so that it refuses to walk off a cliff or into a blast furnace - I don't think it can be called truly self-aware. So I suppose really I'm arguing that sentience/sapience/self-awareness lies in a survival instinct - something that has evolved from the necessity to stay alive...

I wonder even if robots can be programmed to 'understand' what death is - I don't even know if robots can be programmed to 'appreciate' or 'comprehend' what they are, or the difference between being online and being offline, between being functional and being in a state of disrepair. If those things are programmed into something, is it ever anything more than a puppet/illusion and not really 'alive' at all?
