By Jamieson D.
AI stands for Artificial Intelligence: technology that performs tasks which would normally require human intelligence. AI is already being integrated into our everyday lives, including self-driving cars, computers, Google, and even our school.
While working on this subject, I wanted more insight into what other people thought of AI, especially people with knowledge on the matter. I first asked a teacher what they thought AI was and how it should be used in the future.
“AI is interesting, because the definition of AI changes constantly; but if I had to choose a singular definition, it would be ‘when technology starts to do things that humans would normally do.’ For the next question, I see AI being used for things humans can’t do because AI and humans can handle different amounts of data.”
I then asked a second source, Blach's Technology Specialist, Mr. Finn, the same questions.
“Because of the vast amount of knowledge AI possesses, AI can be used to solve problems and keep us safer by assisting us in everyday tasks; it may even help us discover how humans can live longer.”
I then asked both what moral laws we should put into action. Even though AI is not able to think for itself (yet), we need to address the subject now so that AI and the human race can peacefully coexist.
The best-known rules are not actual laws but fiction: Asimov's Three Laws of Robotics. “A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”
The teacher said, “This is the most difficult part of AI left, because it’s really a grey area. Your moral laws and my moral laws are different. So at some point we will need to decide what moral values can be accepted by all humans and implemented into code for AI to follow.”
Mr. Finn commented, “When getting into moral laws you also get more into philosophy and how one’s mind thinks. But this is why we need to have discussions on what moral standards to hold robots to.”
To summarize, AI laws are not something that can be settled with a simple yes or no, and we should keep discussing this topic until there is truly a set of moral rules set in stone.
Now let me ask you a question: how do you think AI should be limited, and how do you think it helps us? Remember, the future is now.