grimdercell Posted March 27, 2013 So here's a topic based on the Theory of Knowledge: can a machine know? Machines, as we all know, are made of metal, electronic components, and so on, and do not display the same emotions as human beings and other living things. However, machines are still able to perform the tasks that have been set and programmed into them. Given the code and other tweaks that allow a machine to perform its tasks, do machines actually think when doing something? Can they know how to do a specific task?
GSC159753 Posted March 27, 2013 Makes me wonder how long until we have machines programming and tweaking other machines with no human input.
The Killjoy Posted March 27, 2013 No.
Puddilicious Posted March 30, 2013 No.
F CHARLIE Posted April 24, 2013 No. Upvote, we barely know what we're doing.
cleverpun Posted April 24, 2013 Performing tasks isn't an indicator of sentience. You can get mice to run mazes via classical conditioning; that doesn't make them sentient or capable of analyzing their task.
~shenanigans Posted April 24, 2013 What is your definition of "know"? If you mean, can a machine learn from mistakes, then yes, it can. If a specific set of actions does not work for a specific problem, then the machine can program itself to not repeat those actions when given those conditions. For example, if you ask a machine "1+1=?" and it replies "4", you tell it that it's wrong. It replies "3", and you tell it it's wrong again. It replies "2" and you tell it it's correct. Then the machine "knows" that 1+1=2. Building on this, one can imagine that a machine which has seen very many situations can combine all possible solutions and choose an optimal one. In this regard, the machine can make decisions. Provided the machine has infinite memory and processing power, it can pull off a near human-like act.
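The trial-and-error loop described above (try an answer, get told it's wrong, never repeat that mistake) can be sketched in a few lines of Python. This is a toy illustration only; the function name `teach` and all of its details are invented for this example, not taken from any real system.

```python
# A toy "learning from feedback" loop: the machine tries candidate answers
# and remembers which ones were rejected, so it never repeats a mistake.

def teach(question, candidates, is_correct):
    """Try candidates in order, dropping any that the teacher rejects."""
    rejected = set()
    for answer in candidates:
        if answer in rejected:
            continue
        if is_correct(question, answer):
            return answer, rejected
        rejected.add(answer)  # remember this mistake so it isn't repeated
    return None, rejected

# The "1+1=?" example from the post: the machine guesses 4, then 3, then 2.
answer, mistakes = teach("1+1", [4, 3, 2], lambda q, a: a == 2)
print(answer)    # 2
print(mistakes)  # {3, 4}
```

Whether remembering rejected answers counts as "knowing" that 1+1=2 is exactly the TOK question: the program ends up with the right behaviour without anything resembling understanding.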
Puddilicious Posted April 24, 2013 Quoting ~shenanigans: "Provided the machine has infinite memory and processing power, it can pull off a near human-like act." lol @ infinite memory and processing power needed to pull off a near human-like act.
☣Long218☢ Posted April 24, 2013 No. why the fk do you sell festive minigun for 1.88
~shenanigans Posted April 25, 2013 The machine would have to evaluate decisions based on prior experiences, so it would need a large sample size in order to choose the most relevant answer. Storing a large amount of data would require a lot of memory, and evaluating the best answer would require a lot of processing power. Since it does not have first-person consciousness, this is the only way it can potentially make judgments. Of course, if a machine is specialized for a certain task, then it becomes much easier.
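The "decide by looking up the most similar prior experience" idea above is basically nearest-neighbour recall, and it can be sketched briefly. Everything here (the `recall` function, the temperature example) is made up for illustration; the point is that the quality of the answer depends entirely on how much experience is stored, which is why the memory requirement comes up.

```python
# Memory-based decision making: store past (situation, outcome) pairs and
# answer a new situation with the outcome of the most similar stored one.

def recall(memory, situation):
    """Return the outcome whose stored situation is closest to `situation`."""
    best = min(memory, key=lambda pair: abs(pair[0] - situation))
    return best[1]

# "Experience": temperatures (in °F) seen before, and the action that worked.
memory = [(30, "wear a coat"), (15, "bring a jacket"), (0, "stay inside")]
print(recall(memory, 28))  # wear a coat
```

With three memories the machine's "judgment" is crude; with millions it starts to look competent, which is the trade-off between sample size, memory, and processing power described in the post.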
grimdercell Posted April 25, 2013 (Author) I agree that a machine will need lots of memory and processing power if it's going to pull off a near human-like act. The human mind can be considered limitless at the moment, seeing that humans continue to challenge the impossible and have succeeded on many occasions. Right now, machines are mostly being given specific tasks. Take, for example, the IBM supercomputer Deep Blue that was matched against World Chess Champion Garry Kasparov. Deep Blue was reprogrammed by several IBM programmers and chess experts between rounds during the match against Kasparov.
The fact that Deep Blue was able to recognize its mistakes after being programmed to analyse thousands of tactical chess moves was an example of a machine pulling off a near human-like act, but only in the game of chess. That Deep Blue lost the 1996 match to Kasparov, only to win the 1997 rematch, shows that it was able to learn from its mistakes in the previous rounds and take on the fixes its programmers gave it. In my opinion, only when no human can defeat a machine such as Deep Blue, which was constantly being reprogrammed after a loss to rectify its mistakes, will machines be on the pathway from performing one specific task to becoming a humanoid with the ability to think and act like a human.
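The "analyse thousands of tactical chess moves" part of the Deep Blue story is game-tree search, and its core, minimax, fits in a few lines. To be clear, this is a hedged sketch of the general technique, not Deep Blue's actual code: the real system added massive search depth, pruning, and hand-tuned evaluation hardware, and the toy game below is invented purely for illustration.

```python
# Minimax: score a position by assuming the maximizing player picks the
# best child position and the opponent picks the worst one, alternating
# down to a fixed search depth.

def minimax(state, depth, maximizing, moves, evaluate):
    """Score `state` by searching `depth` plies ahead."""
    children = moves(state)
    if depth == 0 or not children:
        return evaluate(state)            # leaf: use the static evaluation
    scores = [minimax(c, depth - 1, not maximizing, moves, evaluate)
              for c in children]
    return max(scores) if maximizing else min(scores)

# Toy game (not chess): a state is a number, a move adds 1 or doubles it,
# the game ends at 8 or above, and higher numbers favour the maximizer.
moves = lambda s: [s + 1, s * 2] if s < 8 else []
score = minimax(2, 3, True, moves, lambda s: s)
print(score)  # 8
```

Nothing in the search "understands" the game; it mechanically grinds through positions, which is why the thread's question of whether such a machine *knows* chess remains open even after it beats the world champion.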
Santa Heavy Posted April 25, 2013 I'd honestly think no. We program them to "know", and we appear to have no organic AI to build off of.
This topic is now archived and is closed to further replies.