Ok, I have been reading some Isaac Asimov books and, for those who do not know, they are about robots which obey the Three Laws. I'm going to quote them:
"1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."
- Isaac Asimov
Now let's say for argument's sake that one of these robots can be built. Let's say that physically it can move as fast as us or faster, and its programming is sound. That's not too hard to believe, because AI hasn't been around for very long compared to other sciences like physics or chemistry. We just need some egghead to come up with a couple of breakthroughs and, what do you know, in 100 or maybe 200 years' time it could be a possibility. I mean, we already have robots which can walk, and embedded car systems are getting more complicated all the time. Every year processors and hard drives become bigger and more powerful. Well, if they continue at that rate for 100 or 200 years, I can't see why it can't happen.
Ok, I'm getting a little off my point. Now, instead of one of these robots, imagine there are billions. Imagine the ratio of robots to humans is around 2:1 or 3:1. That's a lot of robots. Now, I think robots could do anything with enough time spent on their programming. They would have scanners and radar, and could communicate with other robots. So let's say the lazy human race gets the robots to do all the jobs that exist in the world. Why wouldn't they? Even if the robots were at risk of getting themselves destroyed, it doesn't matter; they would have to do it due to the second law. Also, they can be repaired and replaced by other robots. I should say that these particular robots are getting produced in their thousands every day. Even if a couple of hundred malfunctioned, the others would keep things under control. It would be a simple matter of getting the first one right and copying and pasting the programming. Oh, important point: no jobs for humans = no money. Therefore money would become abolished. Everything is free. You don't have to pay anybody, because a robot is doing it. Whatever you want can be done, because the robots are doing it. You want a new house? Fine, a robot will build it, maybe more than one.
Now my question is: why couldn't this happen in the future? If these robots could be built, is there anything I haven't thought of which prevents the perfect world I mentioned above? I should also mention the robots update their programming to keep it up to date.
(sorry for the long post)
There's nothing that indicates it can't. Sure, it'll probably happen eventually. Just like first contact with aliens.
Pretty good idea huh? I wish they would hurry up and get it done though. I want a robot to get my breakfast :)
Oh, a major point I forgot. If a conflict arises, such as two potential harms to humans, the robots either work together or, if they have to choose which harm to deal with, they give each harm a number. The higher the number, the more important it is, and that's what they handle first. Of course this all happens in milliseconds.
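Just to show what I mean by giving each harm a number, here's a rough Python sketch. The harms and the severity scores are completely made up for illustration; nothing in Asimov specifies an actual algorithm:

```python
# Hypothetical sketch of the "give each harm a number" idea.
# Harm descriptions and severity scores are invented for illustration.

def choose_harm_to_prevent(harms):
    """Pick the potential harm with the highest severity score."""
    # The robot deals with the most severe harm first.
    return max(harms, key=lambda h: h["severity"])

potential_harms = [
    {"description": "pedestrian stepping into traffic", "severity": 9},
    {"description": "child near a hot stove", "severity": 6},
    {"description": "worker lifting a heavy crate", "severity": 3},
]

most_urgent = choose_harm_to_prevent(potential_harms)
print(most_urgent["description"])  # -> pedestrian stepping into traffic
```

Obviously a real robot would have to compute those severity numbers somehow, which is the hard part.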
For as long as we can completely avoid an Ultron incident, me too.
A what incident? I have no idea what you are talking about
Ultron = a Marvel character developed by one of the Avengers to be the first completely sentient android, but since he can build robots himself, he believes the middle step (a.k.a. people) is unnecessary.
First law.
Secondly humans wouldn't want to build robots if robots could do a better job.
Well, obviously that particular Ultron would be an unsuccessful attempt to make a regular android. It would mean something didn't go as planned.
Yes. And Ultron made Vision, an android he intended to use to kill the Avengers, but Vision eventually joined them. Irony, anyone?
First law says that robots can't harm humans, or through inaction let them be harmed. It says nothing about robots making other robots without the first law that can attack humans though.
"First law says that robots can't harm humans, or through inaction let them be harmed. It says nothing about robots making other robots without the first law that can attack humans though." Reply to october.
But by making robots without the first law, they are breaking their own first law. They wouldn't be able to, because the potential for danger to humans would exist. I should mention that there would be advanced robots which would have the capacity to see a possible future outcome of their actions.
The robots would be able to see the future outcome of their actions (harm to humans), but the first law doesn't say anything about robots wishing harm upon humans. Sure, they're not personally allowed to act on it by their programming, but they could have that intent.
The law doesn't specify whether a robot harming a human includes indirect harm (aside from inaction).
Responding to a PM from bradhal: I don't understand your post. Make it a tad clearer. What indirect harm?
The first law states that a robot cannot harm a human. Direct harm would be a robot running up and punching a human in the face or something. Indirect harm is harm done to the human caused by the robot but not executed by the robot. The indirect harm I was talking about in my post above is the harm caused by the lawless robots created by the regular robots.
I'm pretty sure it includes indirect harm. I have read the stories, and indirect harm is taken into account. If it is caused by the robot, it is the robot's fault. The robot would not allow this to happen.
Ah, okay. I was unsure whether the law included indirect harm or not. Thanks for clearing that up.
No probs.
Well, I don't like the way you don't give me an example, but I will respond anyway. Firstly, you could do any job which didn't break the first law by being dangerous, and due to the number of robots you would be supervised at all times. But also consider that robots, being intelligent, could solve problems in ways we haven't thought of yet. They could analyse everything, then spend a year analysing it again, without the problem of boredom. As for lack of space, skyscrapers and starting life on new worlds could become possible. The robots could build us homes on other worlds, then we could just shuttle ourselves across. Don't forget this is in 200 years' time or more, so a lot could have advanced by then, especially with genetic engineering and the like.
Oh, and that is just a problem with population increase. That is another problem which isn't really anything to do with the robots themselves.
I know you were talking about the people. People would still have motivation and drive, though. We would still have hobbies, and we could do any job we wanted, so long as it didn't break the first law, simply by ordering the robot to let me have a go. We would still have the Olympics and so on, so competition would still work. All the hobbies you can think of would still exist: chess, sports, research, weight lifting, etc.
I didn't mention communism; you did. I don't know why you link the two together.
What trouble with colonising? Why would we not want to? We already have NASA and whatnot, and we are already trying to find habitable planets like Earth. It would only be tiresome for the robots, not for us as humans; we wouldn't have to lift a finger. The robots would know exactly what we would require. Also, I find it illogical that nobody would order a robot to do it. If the world is overpopulated like you said, there would be no choice. And don't forget, with no money, no expense will be spared. If it can be physically done, it will be, simple as that. There is no pressure from money, see.
It doesn't matter whether they see it as a problem (which they will anyway), because we order them to do it, so they will. Robots will be more advanced by then. It would still be an idea for NASA, because why would NASA go away? And money wouldn't exist, as there would be no need for it. Competition would still work due to hobbies, the Olympics, etc. I mentioned this above.
Hmm, I don't really understand. Why would I still be paying for what I want? I could order a robot to make what I want, or to do a job and give me the money. There is no danger to me in doing this, so the robot would give me the money. If everybody can gain money the way I am, money will become worthless. I could order a robot to make something, and I could sell it. Your statements are rather generalised; can you give me an example of a situation?
This could get tricky. Obviously the details aren't smoothed out yet, because it isn't real. But again, with the second law, what if two humans gave a robot conflicting orders? Will it go with the one which is closest to the second law?
Example:
Human 1 orders a robot to farm for food, tells the robot not to give the food to anybody, and tells it to give the food back to him at the end of the day.
Human 2 is fat, says he is hungry, and orders the robot to give him the food.
Several situations could occur:
1. The robot could give the food to human 2, because that was the last instruction received.
2. The robot could give the food to human 1, because human 1 told him specifically not to give it to anybody else.
3. The robot could give the food to human 2 because he said he was hungry, even though he is fat, as the robot doesn't want to risk breaking the first law.
4. The robot could give the food to human 1 because, due to his smaller physical size, it assumes he needs it most.
5. The robot cannot decide and dumps the food on the floor.
6. The robot cannot decide and splits it evenly.
As you can see, it gets complicated, and this is a very simple matter with only two sources of information, human 1 and human 2. What if there was a crowd of people?
Therefore, Seaton731, I don't think it is as clear cut as "wealthy man wins, poor people don't". And as far as objects go, if they happen to be food, the first law may become important to the robot.
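To show how messy even a simple tie-break rule gets, here's a rough Python sketch of one possible scoring scheme for the food example. The fields and weights are completely made up; the stories never specify an actual algorithm:

```python
# Hedged sketch of resolving conflicting Second Law orders.
# Fields and weights are invented for illustration only.

def resolve_orders(orders):
    """Score each order and obey the highest-scoring one.

    Each order is a dict with hypothetical fields:
      first_law_risk - how likely refusing it is to harm a human (0-10)
      specificity    - how explicit the order was (0-10)
      recency        - how recently it was given (0-10)
    """
    def score(order):
        # First Law concerns dominate; specificity and recency break ties.
        return (order["first_law_risk"] * 100
                + order["specificity"] * 10
                + order["recency"])
    return max(orders, key=score)

orders = [
    {"owner": "Human 1", "action": "keep the food",
     "first_law_risk": 0, "specificity": 9, "recency": 2},
    {"owner": "Human 2", "action": "hand over the food",
     "first_law_risk": 1, "specificity": 5, "recency": 8},
]
print(resolve_orders(orders)["owner"])  # -> Human 2
```

Notice that just changing the weights flips the outcome, which is exactly the problem: somebody has to pick the weights.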
Lawsuits aren't my way of thinking; there wouldn't be any. People could get everything they want, so why would they want to sue you? Money? Not needed. The chair or wood? Robots can get more. Also, if lawsuits were still around, they would be run by robots. Simple really: the judge wouldn't be biased or let emotions get in the way. It would simply give each side a number, and the higher number wins. Team A has 4 points, team B has 5 points: team B wins. Simple. Of course in real life it would be more complicated, as more factors would have to be taken into account, but robots don't miss anything and wouldn't get bored of reviewing the evidence. Better, in my opinion.
No. Have you considered that company A and company B would be run by robots? Robots wouldn't file a lawsuit. I still don't get your point about money and power. Why would money exist? Everything is free. It's simple supply and demand: it can be produced for nothing, and the robots wouldn't want to sell it. They couldn't even if they wanted to. The human would order it for free and the robot would do it. Every human can have what they want due to the second law, unless it broke the first law. Now, I don't know how deep we want to go here, but what if the robots took mental harm into account? A robot might assume that it has to provide everything the human wants or it is breaking the first law. Wow, now we are getting very complicated. But with the right programming and 200 years, I don't see why it can't be done. I think the first robot that should be designed is a programmable robot which programs itself, based on the 3 laws. That might not work, but it would save time if it could be done. Not to mention it would be well cool. Also, I wouldn't need to steal a robot in the first place. I cannot stress enough that robots would be everywhere: a 2:1 ratio of robots to humans. I have 2 robots, you have 2 robots, and that doesn't even include the mining robots or whatnot. I wouldn't need to steal one, as they are EVERYWHERE.
I am seeing your point about money, but the fact is, if they were programmed with the 3 laws, it would become equality and simplicity. The 3 laws mention nothing about the original owners. If you said you wanted to own the company, you could. If I said I wanted to own the company, I could. This is all second law stuff, which gets complicated, as I mentioned above in my example about the food. I still don't understand why money would exist. If a robot has to obey me no matter what, I could get anything I wanted without money, right? So why would I want money? I'm saying that if robots had the 3 laws, money wouldn't exist; I think the two go together. Also, we haven't established why the owners of the robots are making them for money. They might be making them because of how cool and useful they are. Have you considered they could be greedy for robots, and money doesn't mean anything to them? Money is only used to get things, but if robots can get them, money won't be needed and will become worthless. We seem to be repeating ourselves.
I'm seeing flaws in this post. A robot doesn't have to steal it; it could make an identical copy. If it looks and acts exactly the same, why would you want the original? I don't have to purchase anything, as money doesn't exist. Police officers could be robots. They could be designed to be bulletproof and whatnot; they are made out of metal, for Christ's sake. They don't need to harm the human criminals. They could use handcuffs and ensure that they don't physically harm the humans: corner them and very carefully put them into a car. Although I can see the difficulty here. Then again, how could people commit crime, and why would they? They have everything they need, as essentially everybody is a billionaire, because money doesn't exist. Murders would not happen, as robots would stop them before they happened, because of the first law. So essentially robots are police, just by having the first law in their programming.
Ok, I am seeing your point a little more. But prices would lower dramatically, almost to the point of it making no difference. We are still having issues with the second law. You could order your robot not to allow anybody into your home, or order it to alert you immediately. Then the intruder could say "move out of my way", and it couldn't, because it was told by its owner not to allow anybody in. Also, being poor wouldn't matter, as prices would be lowered, and nothing could happen which broke the first law, e.g. regarding food, shelter and basic needs. And what if mental harm were taken into account? Anyway, I'm going offline for a while, but feel free to respond. There are also problems with the first law, though. What if the thief got prevented from breaking a window because of the potential harm of cutting himself?
Essentially, you're both not considering a vast array of factors.
First, a robot-populated society cannot be achieved while we still have democracy. The rich would just hoard all the resources from the start, and even if there were billions of robots, money still wouldn't be abolished. In fact, society would just be worse, since the rich would now have bodyguards that are ten times as strong, and enslaving the human race would be a walk in the park. The first and second law? Why program those laws in the first place when you're rich? And even with the first and second law, law enforcement would still be a human career path. For example, say you have two people with guns aimed at each other: in order for the robocop to stop harm from coming to one, it must harm the other, and vice versa.

Hacking would also be a threat as robots get more and more advanced. Let us say they develop a new prototype that is the best of its kind, and it gets hacked: world obliteration. And don't forget, with billions of robots, one is bound to be inferior to its peers. And if we're talking about AI, there will probably come an emotional chip eventually, you know, for connecting and bonding. There is just too much to take into consideration, and they're all human variables.

Though, if the society you mentioned did happen, humans are not likely to be depressed or despondent. Think back to the early and mid imperialism age, or even to the rich today: they have everything done for them, catered, pampered, and treated delicately. People are just that way; we'll find something to do in our utmost boredom. For example, many of us want to travel the world. We could live 2 weeks on each landmass if there were no more countries, and even that wouldn't give you enough time if we consider other planets and outer space. Though, due to our already existing divisions like culture and race, more than likely there would be a genocide.
On another note, population can be controlled with things like a one-child policy, rofl, though more than likely the rich would have everyone cut their cords at birth, and not the placenta kind. Thus, a future incest population. ;p
Either way, your idea would be nice, but not realistic, as when there is more power (not money), there is the issue of who or what controls it. In this case, the best anti-virus firm: they come up with the latest security, and have the best hackers. xD
Seanton, capitalism doesn't work; otherwise, people wouldn't be leaning towards other forms of government. Even if you think America is a capitalist society, there are various "socialist" agendas running every second of every day. Look at Europe, for example: they're more socialist than capitalist when it comes to government, and they're doing better than the U.S. Point is, if you don't see the flaws that pile up in a capitalist society, you obviously haven't learned much about politics and government.
No, I never said Capitalism was a form of government or else I would have said Capitalist Government instead of "Capitalist Society". I said they lean toward other forms of government because Capitalism is more prominent in Democracy than say Communism, Monarchy, or Aristocracy for example.
It depends on your definition of socialism versus the dictionary definition of socialism. However, yes, Europe is more socialist than capitalist when you consider Europe as a whole. The problem with Russia, if you haven't studied it, was that it developed into a communist society without perpetual guidance. The Communist Manifesto was an idea, not a pamphlet for government; thus, the population exploited those very big leaking holes, and before you knew it, the whole wall came down. Democracy, however, was designed to be a form of government that 'tried' to plug these holes. Whether or not North Korea is a socialist country isn't for you to say, considering your country-oriented bias. Most newspapers reported North Korea to be more a fascist society than a socialist one. I think your prejudice towards socialism hinders your very understanding of it.
I just got one thing to say to this: I, Robot.
Weren't those rules featured in I, Robot? That would mean the movie was probably based on one of the books. I'll be honest, I haven't read the rest of these posts besides yours, so I am probably either restating arguments or restating disproved arguments.
I think that the robots could be made, but I'm not sure about this perfect world of yours. Human greed will always exist, so I think that most people will think twice about letting robots take their jobs. I mean, look at how people are acting about illegal immigrants taking the crappy jobs, then predict how all of America would react to things that aren't even alive taking all of the jobs.

I'm also not sure money would be discontinued. Will the government own these robots, or the people? I think that would mainly decide the money situation. If the government owns these robots, then I can see government workers (robots) making improvements and products for free, but if they are under the government's control, they might not be able to personally visit every person, seeing as some jobs, like factories or house building, would require many robots, making the human-to-robot ratio not matter if there are not enough idle robots to help the people. If they are in the hands of the people, they might be able to help more of the population, but how will people get their hands on these robots? Will the government or manufacturing companies be gracious enough to donate at least one to every single citizen in the United States, which in two centuries could equal billions?

My next concern on the money issue is: who started these robots? I know America gets most of its resources from around the world, so if America started the robots, the rest of the world might not adopt the whole no-money system, and might charge them for the materials. So how will they pay for it? With no money, they will not be able to buy these resources, and seeing as the one thing we will have plenty of is robots, will we give them robots in return for the materials to build the robots? That seems a teensy bit counterproductive. If it is started in another country, perhaps one that has the resources (though I can't think of any), how will we acquire these robots?
Seeing as human greed will exist, and robots don't take money any more, how will we get our hands on them? It will take a while for these robots to encompass the world, and I doubt that everyone will embrace the whole "let's not have money" system at one time, which will cause complications.
Wow, I expected a quick, two minute post.... that didn't happen.
You know, Bo, you could have just done what I did.
Say I, Robot? No, that would have been stupid, because I could not have voiced any of my opinions.
Ok, how about this: I think this situation would eventually pan out in a way similar to I, Robot. If you have not heard of this movie, here is a basic summary of the plot: http://www.imdb.com/title/tt0343818/plotsummary
Not only would this be disastrous for us, we would also become less human.
Nobody has compared this to I, Robot yet. I would think that a computer could break its working parameters and do that.
"Ok, how about this: I think this situation would eventually pan out in a way similar to I, Robot." - Cool74
Nope, nobody at all.
Kinda figured. Great movie, never read the book.
That deserves its own thread; you need to post it under the main topic, because it is a whole other topic. I agree with you that it could happen, just like it could go perfectly and eradicate poverty forever. We just don't know.
It wasn't actually a glitch; it was common reasoning that led the robots to revolt. Their main law is to protect humans, and as they saw crimes, wars, and other man-made ways of humans harming themselves, they felt the only solution was to declare martial law and force them to stop. It was human greed and desire for power that drove the robots to stop it.
Eh, potato, potahto. I think that she had the idea because she had a better outlook on the world than most robots and humans combined. She found the flaw that humans harmed themselves, and the laws made her protect humans, and I think she reasoned that the lives lost during the transition from human democracy to robot martial law wouldn't be as high as if she allowed the current system to continue. I am not supporting appointing a dictatorship here, but I can see the reasoning behind what she/it did, and I don't think it was a glitch, just that she went with the most logical decision. But the most logical decision is not always the best decision, at least not from the human standpoint.
The problem with the laws is that they mean the machines have to be able to process the idea of doing harm; they have to understand if they are in danger. It's second nature to us, but to a machine, something with no real emotion, that idea is hard to grasp, if not impossible. Sorry if this is not up to date with how the conversation has developed.
Lol, my big thing was the whole no-money deal. I don't think that will ever happen, so I argued it in my post.
So then why bother with robots if the laws are flawed?
That was the purpose; the laws are intentionally flawed. It's to show why it's not a good idea.
Therefore we should follow the advice and not do that.
What a long forum just to come to that simple conclusion.
Verdict: It could happen, but not in our lifetime!
Case Closed.
Jinx.
Yeah, totally jinxed it.
They can walk, use things as sensitive as fingers and keep them perfectly controlled, and lift tons. I mean, have you ever been inside an automobile factory? The Hyundai plant in Alabama is incredibly advanced, and it's a great working environment, with the air at the perfect temperature 24/7, the shipments coming in every ten minutes, perfectly arranged so that they arrive as they are needed, and the tasks on the car parts rotating every ten to twenty minutes so workers avoid carpal tunnel. But I'm digressing. My point is that we could actually have working "Hollywood" robots in our lifetime. If our best scientists would stop working on anti-hair-loss gels and the better Viagra and actually put their brains to something purposeful, we could have some awesome robots.
I don't even know what to say after reading those posts. I think the I, Robot movie was flawed because it let the robots beat up the humans, which is breaking the first law. They should have found a way to go about human imprisonment which didn't break the laws, and if they couldn't, they shouldn't have done it. They should have gone about breaking military equipment; that would have worked. But then you couldn't make a cool film.
Machines don't understand the principle of life and death; they can't understand what it means to harm something or someone, because they are cold and lifeless. No matter how many human characteristics you give them, they are only doing what they have been programmed to do. They don't really "think". They don't know why they're supposed to act angry or sad, and they don't care; they can't think about it. That's why machines will never start conspiring against humans at all, unless they're programmed to do it, or get some kind of virus, of course.
We don't know that, actually; with our current technology we don't know what they will or will not comprehend. For all we know, the robots could have an advanced enough AI to comprehend life and death. In humanity's attempts to perfect technology, they will try to make robots as similar to humans as possible.
I think the main thing was that the central computer had an advanced enough AI that she realized that the death of humans was wrong, but the error was that she was purely logical. Without emotion, logic says that killing a few humans would be worth stopping the humans from killing themselves by the thousands in wars and whatnot.
Gonna go ahead and repost that down here for space's sake.
We don't know that, actually; with our current technology we don't know what they will or will not comprehend. For all we know, the robots could have an advanced enough AI to comprehend life and death. In humanity's attempts to perfect technology, they will try to make robots as similar to humans as possible on the emotional level, but will want them stronger and more intelligent so they can do the work that humans can't.
But again, they only know what they are programmed to know. They don't understand anything; they only do what they are programmed to do in any given situation. No matter how complex you make their systems, that will not change. It may seem to change, but it hasn't.
But in 200 years' time, who's to say that we can't program them with emotions? Not that I think that's a good idea, but with complex enough algorithms, who's to say it couldn't eventually be done?
If what you described above were to occur, I am pretty sure some inventor would give in to his curiosity about whether he could really create a sentient android, which could result in an Ultron scenario.
Exactly, we can't say that there can be no possible advancements in that field to make it possible. Saying something will be forever impossible is very shortsighted.
I just read this forum all the way through (probably should have done that at the beginning), and I am just going to say that I thoroughly agree with Seaton's arguments.
That doesn't mean we can't discuss it.
Even if we perfectly simulate human emotions, that doesn't mean that the machines will feel the emotion, only express it. Only living things have emotions.
Aren't we machines? We may not be metallic, but aren't we just a large compilation of organs that function together in a magnificent way? And aren't emotions just a part of that? And if so, couldn't that be applied to a robotic counterpart?
Yes, that was what I meant. But I think that eventually (if human technology had gone that far), at least one inventor/engineer would want to make one, if only to see if he could. Read my other posts, they are all in the same context.
No, he made the main robot there (which I have no idea what was called) in order to counter the rogue A.I. That's not exactly what I'm talking about, but in some ways, it is similar.
Don't mean to be a jerk here, but this is annoying me: it's sentient, not sentinel.
Haha, that sorta annoyed me too. I decided not to point it out though.
Well, in a metaphorical way, we are. In a technical sense, we can still evolve, while machines need to upgrade their hardware manually. We develop personalities; machines upload them. We age; they rust (depending on the material you use to make them). We breathe; they don't. We produce heat from our bodies; they produce heat from electric circuits. Oh, they also have built-in cooling systems, nice! We copulate; they have intercourse by oiling their parts and plugging them back together. And of course, we have babies; they're produced in a lab or factory. And last, but not least, we can swim; they can only swim depending on the model. ;p
In a practical outlook, we are the manifestation of chaos, they are the embodiment of order.
^In response to SindriV
Yes, and because their only form of "evolution" is upgrades, they would completely control in what way they evolve, so to speak, and they can make that happen much faster than we can. Still, here's my question, is a sentient robot alive?
But what makes us alive and them dead objects?
I'm not religious. So if an android can be built with emotions, I consider him to be alive.
Scientifically: http://www.schools.utah.gov/curr/science/sciber00/7th/classify/living/2.htm
Psychologically: They're not.
Philosophically: They are.
Medically: They're not.
Engineering: They are.
Politically: They're not.
Artistically: They are.
Historically: They're not.
Scientifically Bonus:
http://www.cliffsnotes.com/study_guide/Characteristics-of-Living-Things.topicArticleId-8741,articleId-8578.html
http://wiki.answers.com/Q/List_the_characteristics_of_a_living_thing
http://library.thinkquest.org/C003763/index.php?page=origin06
In essence, if we met aliens, they would be considered animals and can be domesticated. Some might use them as slaves, others might want to exterminate them. All in all, they're not living. :)
But just because they aren't alive like we are, it doesn't mean they aren't alive. Such as medically: there is a way to repair, and therefore treat, most injuries they would suffer, and they can even be repaired if they are "killed" (I use the word killed because I consider them to be alive).
"I'm not religious. So if an android can be built with emotions, I consider him to be alive."
Haha, can't believe I just read this. Two words: MRS GREN.
Movement: possible
Respiration: not possible
Sensitivity: possible
Growth: depends how you define growth
Reproduction: not possible
Excretion: not possible
Nutrition: needs electricity, but it isn't carbon-based
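You could even write that checklist as a tiny program. The verdicts are just the ones from the list above, and counting "depends" answers as half is simply one way to tally it:

```python
# MRS GREN checklist applied to a hypothetical robot.
# Verdicts restate the post above; "depends" criteria count as 0.5.

mrs_gren = {
    "Movement":     1.0,  # possible
    "Respiration":  0.0,  # not possible
    "Sensitivity":  1.0,  # possible
    "Growth":       0.5,  # depends how you define growth
    "Reproduction": 0.0,  # not possible
    "Excretion":    0.0,  # not possible
    "Nutrition":    0.5,  # needs electricity, but isn't carbon-based
}

score = sum(mrs_gren.values())
print(f"{score}/{len(mrs_gren)} criteria met")  # -> 3.0/7 criteria met
```

So by this tally a robot scores under half, but as the thread shows, whether that settles "alive or not" is another argument entirely.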
Yes, and? How does this mean a sentient android is a dead object?
Saying MRS GREN proves nothing.
Also, the fact that we breathe is just our way of collecting a portion of what we need to thrive. Robots would need that too (unless they can power themselves, but even in that case, they would be "feeding" on whatever it is they can produce in a limitless quantity).
More than likely, it would be atoms. There is a lot of energy in a single atom, and if we're talking about future super-smart AI constructs, then we'd probably have the technology to exploit power sources to an incredibly high extent.
Yes, but that means that they would still feed, just not on the same things we humans do.
Uhh...we feed on energy too.
Yes, but they would consume it in a different way.
Yeah, our energy just comes from different sources. Ours comes from plants and other animals; theirs comes from electricity or some other source (perhaps nuclear? seems dangerous though). We both consume it to keep moving, and if we don't get it, we both die.
Exactly. Doesn't this support the fact that they would be alive? I mean, if they are truly sentient, then they have a consciousness.
So if God made us, that makes Him a god? So if we build robots, we are gods and they are alive? Think about it.
@cool
Define Life.
@everyone
yep, not as simple as it seems.
If and when anybody types up an answer (because I do want to see them), reply to the newer post asking the question again.
Again I say: define life. I would like to see a single well-written answer from everyone.
I would like "frickin sharks with frickin lazerbeams on their frickin heads", but that's not going to happen. I think it is the physical, mental, and spiritual characteristics that constitute existence.
A good response, but what do you think constitutes physical, mental, and spiritual characteristics?
Living ;P Throw one my way and I can compare it to robots. Otherwise it might look like I am cherry-picking what robots can do.
I'm asking for a definition; don't compare it to anything.
Eh, just trying to stay on topic for once. Living as in eating, walking, thinking, reasoning, etc.
OK. Is that what you think constitutes life?
A few of the things, hence the etc.
So you feel that the actions living things take are what make them alive?
Not necessarily. Both a human and a rock can stay still. Both a human and the Earth can rotate. Both a human and fire can consume fire. You are putting words in my mouth.
consume oxygen*
No, I'm asking you about what you have said. Is a plant alive?
I would rather consider a sentient android to be alive than a plant, unless we uncover new information about them that states they can think independently.
So we are clear, then, that it's not the actions of something that make it alive?
Nobody said it was. I said the characteristics, physical included, and when you asked me to name a few, I did. You assumed that's what I thought.
I'm not assuming anything; I'm asking questions to better understand what you are saying.
It's all about phrasing. Remember what I told you about a politician and an Ebonics-speaking gangsta both saying the same thing? Two extreme differences, I know, but the point still applies.
I'm sorry, but I don't see how what you said applies. See the post at the bottom; let's keep this moving.
I'm going to quote myself before I quote you this time. "Just because you refuse to acknowledge a point, doesn't mean it isn't there" - Bo
"So you feel that the actions living things take are what make them alive?" - ugilick. I said phrasing because when you are writing, you can go back and restate your meaning all day long, but the way you say something defines what you mean, even in writing. In public speaking, however, saying something that accusatory makes it seem like you are trying to put words in someone's mouth, and if you go back to correct yourself, chances are your point is going to seem less powerful.
According to biologists, yes.
Sorry about the inconvenient timing of my post; my new question is just above your post.
So we agree it's not actions that make things alive.
Gonna pull an ugilick on this one: define actions.
Walking, talking, thinking, reasoning, sitting, rotating. A verb.
In that case, I think it is. Living is a verb, after all. Thinking is a verb.
So if something talks, it is alive?
Good, so we are clear that actions do not make something alive?
no
If something grows is it alive? If something consumes energy is it alive? All verbs.
That makes fire alive, doesn't it?
No, fire does not think. Fire does not reason.
Neither do plants; they both grow and both consume energy.
Indeed they do. I said "according to biologists" because most people think that if you say plants are not alive, you're stupid :P I don't know though; now that I think about it, there is a difference between living and sentient.
:D Yay for knoledge!
Now if I can just learn to spell.
There is a difference. Living things don't have to be sentient to be alive, but I still think everything that possesses a sentient form of mind can count as alive.
For something to be sentient it has to feel.
Feel what, exactly? Dogs can feel pain, sadness, and fluffy towels, but they are not "sentient"
I do believe you can call them sentient.
Well, how would you like to define sentient? I define it as capable of higher thought.
When they say "finely sensitive" it doesn't mean knowing what the feelings are.
Ugh, double post of the same thing, embarrassing.
lol, it's OK, I forgive you.
Does fire make actions on its own?
Yes, fire does things all by itself. There isn't some puppet master controlling the fire.
Well, I can't confirm fire isn't alive.
While anyone with any sense will tell you it's not.
Biologists can, actually.
Why can't a machine be sentient? I consider us to be machines (like I said above) although we are not made of metal. If emotions and sentient thoughts can form with the way our brain and nerves are compiled, why can't the same be done with androids?
"Even if we perfectly simulate human emotions, that doesn't mean that the machines will feel the emotion, only express it. Only living things have emotions."
Does this matter, ugilick? If they can simulate it correctly in any given situation, without any noticeable difference, who cares?
Because the discussion had turned into whether robots are alive or not, and one of us said that machines could have emotions in the future. But I think that we can't say what things can or cannot do in the future, and to say that they can't because they can't now is extremely short-sighted.
All right, feel free to keep debating about the previous argument, but if you want to keep the forum on track, try here. We have derailed it from whether we could have robots in the future to the meaning of life... literally. Actually, if any admin wants to move that to another thread, it would make a good one on its own.
Yes. I find it a nice debate.
It's a pretty good one, just off subject
The point is and was that it's not actions that make things alive. So no matter how many actions a machine simulates, it's not alive.
If it's sentient, then it makes its actions on its own, not simulates.
It simulates the actions and behaviours of a sentient being; that does not make it sentient.
Ugi, I replied to this above.
If you mean the one I just hit reply to, then that was silly; otherwise, this is a long thread and you might as well repost it.
My reply is directly above when Bo tried to ask us to keep on the subject.
How far up is it? I've read everything posted today.
Fine, here's what I wrote:
The thing is, machines are the imitations. If something acts enough like a computer, does that make it a computer?
If it can perform the exact same features, yes.
So if I act just like a plant, that makes me a plant?
Not if you act like a plant; if you function like a plant. I never said robots are humans, but human and sentient aren't the same thing. Like you said, dogs are sentient; does that mean that they are human or imitating humans? A sentient android is a sentient android, not a wannabe human.
So if something acts alive, that means it's alive?
If something actually can act alive, then I believe so. And remember to look towards the definition of life we were discussing before.
It's your lack of a spiritual viewpoint that keeps you from understanding.
Or it makes me understand. Depends on which one of us is correct. But regardless, then it seems we will never come to a conclusion.
I also want to ask: what about bio-engineering? Let's take an example from Homo Perfectus. Meckard created Teacher, a sentient android that behaves pretty much like a human, and Meckard also created Adam with bio-engineering. Adam is a construct of Meckard's, just like Teacher. Is Adam alive? If so, isn't Teacher alive as well? What's the difference between creating an organic being and creating a robotic being? Even if we take souls and such into account, can the scientist who bio-engineers a living creature create it with a soul? If not, is Adam just a lifeless husk? This example can pretty much cover all bio-engineering. What I mean is: why do the components the sentient being is made of matter, if they behave the same either way?
Pleeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeease move your argument to the actual argument rather than derailing the conversation even more! I know how much you like to derail forums and act like it's so important not to afterward, but please comply.
Bo, this conversation has completely moved on from what it was. There is no desire to talk about whether or not there will be robots in the future.
Not to you. You like to speak for everybody a lot, don't you?
I don't see anybody wanting to talk about there being robots in the future.
http://www.myadventuregame.com/forums/message.aspx?MessageId=6414
In that case, you might want to read this forum
I just get directed to this page whenever I click that link.
Exactly ;)
Wha...I just...what?
Read the conversation between Ugi and me, you should be able to figure it out.
Is it okay if I nod and pretend to understand?
Well...I guess the nod is kind of useless as this is through the internet, but still.
Nobody wants to talk about whether or not robots will exist in the future right now. Later, maybe; that's how conversation develops.
Or derails, depending on your point of view
Definition of life:
MRS GREN
Movement : Yes
Respiration : No
Sensitivity : Yes
Growth : Depends on what you define as growth; data can grow?
Reproduction : Not really
Excretion : Not really
Nutrition : Not really
Now that we've got that crap out of the way, back to what I started the forum with. What possible job couldn't a robot do? I personally think that if a robot could do every job better than a human, we wouldn't bother with humans. They are essentially a better form of technology, which in turn makes things quicker and easier. If a company has to choose between a cheap foreign worker and a robot, I think they will pick the robot as an investment. In fact, they don't even need to buy it, because they can just order it.
Anyway, I'm curious as to what job a robot couldn't do
Sciences: Easy, always a right answer
Maths: Easy, always a right answer
P.E: Easy, just memorise and copy
Geography: Easy, just memorise and update
Etc., etc.
If it can do all the subjects, why couldn't it do the jobs which demand them?
What about psychological jobs and others that require intuition? If a robot isn't sentient, and it can't be in order to comply with the laws, it would not be able to succeed at those kinds of jobs.
Robots can do just about everything but respiration. I can't remember the seven things my old biology textbook said classify a living organism, but I know we went over some of it on the forum before.
Umm... psychiatrists probably couldn't be replaced by robots, because you can't classify psychological problems just by symptoms; otherwise all those modern-day drugs would be right for everybody. The medical field might be able to be run by robots, though. Most of the time conditions are classified by their symptoms, and in the surgical field the robots could have a steady enough hand, and the data could be given to the robots so they could basically access a diagram of the human organs and muscles and whatnot; maybe somebody can come up with reasons why they can't. They would probably take over sports teams for our amusement, but I don't think they would have the passion required to make it interesting.
Those are the positions off the top of my head.
If we weren't talking about a sentient android, that last part would be true, but we are talking about a sentient android, so he would be able to perform math or science and develop his skills in such matters further. Their programming would be as complex as a human's.
I would describe them as complex robots, but the sentient androids are robots that are as complex as human beings, at least their minds are. Which means that if a human can perform something in his mind, a sentient android could as well, at least by average.
Yes, a rogue sentient android brings us back to talking about Ultron, which I'm not doing again.
How people would react is irrelevant, this debate is about whether a sentient android can be referred to as alive.
Of course the androids aren't biological, but I would consider any creature that can communicate, think, act, react and reason on a human level to be alive. Especially if it considers itself to be alive.
I understand what the definition of "alive" is, but I think it wouldn't cover a sentient android, because there hasn't been a sentient android yet. As soon as there is one (and to connect this to the actual topic, could it happen?), I hope the definition will be changed to the point where it covers a sentient android as well.
Very well. But just tell me this: if a sentient android is created, and therefore thinks exactly like a human and will most likely want to be a human, would he have to refer to himself as a dead object? Then answer this: if someone from Japan moves to America with his family when he is a newborn, and is therefore raised there and has no idea what living in Japan is like, can he never refer to himself as American, because 'by definition' he is from Japan?
I don't mean human, but alive.
Hmm... that is also an interesting concept. If someone is undead (but not mindless, just brought back from the dead), is he not alive?
Of course, but there can also be bad effects with other things, such as insanity. What I meant in my post two posts ago is that if I were that android, I would consider myself to be alive, and I would probably be offended if someone said I wasn't (kind of like racism), so if there actually were an android like that, I would consider him to be a living being.
I understand that, but like I said, if I were a sentient android, I would want people to think of me as alive, and so I would consider a sentient android to be alive.
"Do unto others as you would have them do unto you"
I'm not forcing anyone to believe anything. I am simply explaining that I believe a sentient android would be alive.
Aah, I see. I thought it was a "I could tell you.....but I won't"
Sorry about the confusion. Anyway, as usual, no one seems to have changed their mind on this.
And it is also fun to land in a pleasurable debate.