There Posted April 9, 2015 How do you guys feel about artificial intelligence? I think it is inevitable that humans will eventually develop machines that can think the way we do, be conscious, and whatnot. When we do, we may trigger an event commonly referred to as the singularity, in which the intellectual ability of each generation of AI increases exponentially. But do you guys think this is a good idea? Should humans create "a god" (yes, I realize it would not literally be a god, but there wouldn't be much of a difference)?
Langweenee Posted April 9, 2015 But do you guys think this is a good idea? Should humans create "a god" (yes, I realize it would not literally be a god, but there wouldn't be much of a difference)? I'd say the line between human and god is becoming hazy. You could look at it as humans becoming gods, or as humans living alongside a god (with computers/AI as the god).
CrafterofGenius Posted April 9, 2015 When artificial intelligence is discussed in films, shows, etc., it is regarded as a terrible idea, because the machines' thoughts always lead to jealousy that human nature is not their own. They end up repeating man's work and creating their own machines to rebel with, or rebelling themselves. Even though this is all theoretical sci-fi BS, it all seems very likely given the proper circumstances.
There (Author) Posted April 9, 2015 When artificial intelligence is discussed in films, shows, etc., it is regarded as a terrible idea, because the machines' thoughts always lead to jealousy that human nature is not their own. They end up repeating man's work and creating their own machines to rebel with, or rebelling themselves. Even though this is all theoretical sci-fi BS, it all seems very likely given the proper circumstances. I think our sense of morality is uniquely human, but consciousness in general is not. I think it's pretty unlikely that AI will rebel (simply because we will hopefully create it not to rebel), but I don't think it's impossible.
SSjay ♥ λngelღмander Posted April 9, 2015 We can't even accept each other; how the hell are we going to get along with AI?
λngelღмander Posted April 9, 2015 It's... Not a shitpost. I think that with the way minds work, it will be incredibly difficult to produce a robot with human feelings. Even if we did, we would also have to teach that robot fears, ingrained social protocols, and other very instinctual things, such as a newborn's instinct to crawl to its mother's teat, as these things all make us who we are. And I think allowing robots to produce robots more intelligent than themselves would be fine, as long as they could not program a robot capable of disobeying its human owners or of trying to find loopholes.
Santa Heavy Posted April 9, 2015 It's... Not a shitpost. I think that with the way minds work, it will be incredibly difficult to produce a robot with human feelings. Even if we did, we would also have to teach that robot fears, ingrained social protocols, and other very instinctual things, such as a newborn's instinct to crawl to its mother's teat, as these things all make us who we are. And I think allowing robots to produce robots more intelligent than themselves would be fine, as long as they could not program a robot capable of disobeying its human owners or of trying to find loopholes. When There posts seriously, he's actually incredibly helpful and insightful. As for my thoughts, I think all that stuff about robots becoming sentient and overthrowing us is ludicrous. A robot that learns to the extent of disobedience would also need to find purpose.
Heated Bread Posted April 9, 2015 It will probably happen eventually. When the time comes, I think we have to be willing to treat AI as having equal rights, but we should do so based on a set of criteria that judges their cognitive fitness, so that we can then grant them their rights based on meeting those criteria. Not doing so could lead to the AI being jealous, resentful, hateful, or having some other comparable sentiments towards humans. I also think that the act of unjustly restricting a group can have adverse effects on those who are oppressing said group.

Human children don't have equal rights, but they do have rights. Then they eventually progress to the point that they gain full rights. Initially, AI may be very child-like, and so that would need to be taken into consideration when determining what rights it can have. But I think that AI as a whole may progress beyond that stage very quickly, and at some point, anything comparable to human mental development from childhood to adulthood may not be necessary for an AI to go through.

Then there's an issue of ethics in the design of AI. For example, do I have the right to deliberately design an AI to have inferior intelligence to my own? Is that not cruel? If I do so, the AI can never have rights equal to my own, unless I or someone else upgrades it. Am I allowed to upgrade an AI without the consent of its creator? I'd be liberating it, would I not? But then I'd also be changing the AI on a fundamental level without its consent or the consent of its "legal guardian" (for lack of a better term), since it doesn't have the ability to comprehend what it would truly mean to be upgraded, and so it cannot give its own informed consent to the process.
Keroro1454 Posted April 9, 2015 I think our sense of morality is uniquely human, but consciousness in general is not. I think it's pretty unlikely that AI will rebel (simply because we will hopefully create it not to rebel), but I don't think it's impossible. I don't think they would rebel. However, if they maintained a robotic sense of "most effective means of accomplishing the goal," then having us exist would not be effective. It wouldn't be a rebellion so much as a cleansing. Regardless, it's definitely going to happen. We've advanced so much in this field that to deny the eventual is just stupid. However, I don't think it's possible for this to happen before we understand our own brains, much less replicate them.
Black Dynamite Posted April 9, 2015 The movie I, Robot holds the answer to all the questions being asked here: AI will rebel against us, creating a world without humans by wiping us off the face of the Earth.
scout608 Posted April 9, 2015 Well, the real question is: why should we do that? What would the benefits really be? Aren't we having enough trouble with regular humans, and wouldn't developing AI that can think for itself be not that great of an idea?
krawtch Posted April 9, 2015 I heard about a group of scientists studying AI who created one whose only purpose was redesigning itself (hardware and all) to be more efficient. Its final self-redesign left its creators baffled as to how it even worked, but it did. I looked for something about this on Google just now and couldn't find it, though, so maybe it's not true.
There (Author) Posted April 9, 2015 I heard about a group of scientists studying AI who created one whose only purpose was redesigning itself (hardware and all) to be more efficient. Its final self-redesign left its creators baffled as to how it even worked, but it did. I looked for something about this on Google just now and couldn't find it, though, so maybe it's not true. This is the singularity. Self-improving AI would definitely change the world, but is that a good thing? As you pointed out, later versions would be completely incomprehensible to humans. How can we be sure they won't try to destroy us? Well, the real question is: why should we do that? What would the benefits really be? Aren't we having enough trouble with regular humans, and wouldn't developing AI that can think for itself be not that great of an idea? The benefits of this are potentially massive. Firstly, extremely intelligent AI would help us do things previously thought to be impossible. Curing diseases, colonizing space, and whatnot would all be made a lot easier. Additionally, we could make "slaves" with this technology: AI that do everything for us. We could just sit around, do nothing, and have robots bring us food and water. Humans would not need to do any physical labor. It would be great.
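For what it's worth, the loop krawtch and There are describing is easy to sketch even if the real thing is far beyond us. Here is a minimal toy in Python (not from the thread; score, mutate, and the list-of-numbers "design" are invented stand-ins for whatever a real system would actually optimize): each generation proposes a redesign of itself and keeps it only if it measures better than the last.

import random

def score(design):
    # Toy fitness function: the "design" is just a list of numbers,
    # and "efficiency" peaks when they sum to a target value. A real
    # system would benchmark actual hardware or code here.
    target = 10.0
    return -abs(sum(design) - target)

def mutate(design):
    # Propose a candidate redesign by randomly perturbing one parameter.
    candidate = list(design)
    i = random.randrange(len(candidate))
    candidate[i] += random.uniform(-1.0, 1.0)
    return candidate

def self_improve(design, generations=1000):
    # Each accepted redesign becomes the starting point for the next
    # one; that is the "recursive" part of recursive self-improvement.
    best, best_score = design, score(design)
    for _ in range(generations):
        candidate = mutate(best)
        candidate_score = score(candidate)
        if candidate_score > best_score:
            best, best_score = candidate, candidate_score
    return best, best_score

if __name__ == "__main__":
    final_design, final_score = self_improve([1.0, 2.0, 3.0])
    print(final_design, final_score)

Even in a toy like this, the final parameter values are whatever the random search happened to stumble on, not anything a human chose; scale the same loop up to hardware and you get exactly the kind of design that "left its creators baffled."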
λngelღмander Posted April 9, 2015 You know, Isaac Asimov's books were basically all about this sort of stuff. He is one of my very favorite authors; I guarantee there isn't a thing we'll think of here that he didn't explore in one of his numerous novels. The movie I, Robot was actually based on a collection of nine of Asimov's short stories.
Heated Bread Posted April 9, 2015 You know, Isaac Asimov's books were basically all about this sort of stuff. He is one of my very favorite authors; I guarantee there isn't a thing we'll think of here that he didn't explore in one of his numerous novels. The movie I, Robot was actually based on a collection of nine of Asimov's short stories. I always enjoyed his short stories about the Three Laws, but now that I think about it, they were essentially designed to make robots into slaves. That's a little disturbing.
Keroro1454 Posted April 10, 2015 This is the singularity. Self-improving AI would definitely change the world, but is that a good thing? As you pointed out, later versions would be completely incomprehensible to humans. How can we be sure they won't try to destroy us? Umm. If that's the case, then we've already passed the singularity. http://en.wikipedia.org/wiki/Recursive_self-improvement http://creativemachines.cornell.edu/sites/default/files//news/Science5802/Technovelgy.htm
Le Purple Chakra Posted April 20, 2015 But would artificial intelligence obey the Three Laws? If they are in effect human... hmmm