
Understanding the Different Definitions and Perceptions of Artificial Intelligence

May 10, 2023 | 0 comments


Boddington (2018) defines artificial intelligence as the branch of computer science concerned with simulating intelligent behavior in computers. Graham (2003), by contrast, defines artificial intelligence as a machine’s capacity to adapt to and imitate intelligent human behavior. Graham (2003) further explains that the term describes the ability of a computer-controlled robot or digital computer to perform tasks normally associated with intelligent beings. Furthermore, Graham (2003) points out that beings are intelligent insofar as they can adapt to changing situations and circumstances.


However, it is vital to note that definitions of artificial intelligence shift according to the objectives a particular artificial intelligence system is meant to attain (Boddington, 2018). Boddington (2018) states that development and investment in artificial intelligence are driven by three major objectives. The first is to build a system that thinks and functions like a human. The second is to attain systems that function without replicating the process of human reasoning (Boddington, 2018). The third, according to Boddington (2018), is to use human reasoning as a reference model without making its replication the project’s end goal. Therefore, different definitions emerge alongside different understandings and perceptions of artificial intelligence and its purpose in today’s world.

Current perceptions of what artificial intelligence is and what it will become must be discussed in light of today’s political climate, since whether the embrace of artificial intelligence is a good or bad thing depends on who is asked. According to Müller (2016), quite a number of people are skeptical of artificial intelligence in its current state and harbor rational fears about it. On the other hand, another significant group has fully embraced development and investment in artificial intelligence as the future staple of many industries.

Müller (2016) points out that, traditionally, the idea of artificial intelligence was considered ambiguous and difficult to pin down to a single concept, raising the question of what makes artificial intelligence intelligent. For many, self-awareness is sufficient. However, that is an almost trivial criterion, as many computers are already aware of their existence, not in an introspective way but through their capacity to monitor their own functions and calculations. Moreover, Müller (2016) mentions that when reviewing its processes, a computer must be aware of its own actions so that it can review them for purposes such as debugging.

Tegmark (2017) argues that artificial intelligence is often given a range of tasks and then judged on its ability to behave as a human being would. This is the basis of the Turing test; however, some would argue that a machine with pre-programmed humanlike responses, built to carry on a conversation with a human, does not satisfy the definition of artificial intelligence. Tegmark (2017) explains that such a machine would not think actively, nor would it generate original responses grounded in thought, perspective, opinion, empathy, and bias, all of which are present in real human responses. This highlights a substantial limitation that has always surrounded the growth, development, and advancement of artificial intelligence: the inability to replicate the many factors that make humans what they are (Tegmark, 2017).
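To make concrete what “pre-programmed humanlike responses” means, here is a minimal, purely illustrative sketch of a canned-response bot. The keywords and replies are invented for this example; the point is that nothing resembling thought, opinion, or empathy sits behind the output.

```python
# Illustrative only: a toy keyword-matching chatbot of the kind the
# paragraph above describes. All keywords and replies are invented.
CANNED_REPLIES = {
    "hello": "Hi there! How are you today?",
    "weather": "I love sunny days, don't you?",
    "sad": "I'm sorry to hear that. Tell me more.",
}

def reply(message: str) -> str:
    """Return the canned reply for the first matching keyword,
    or a generic fallback that merely sounds engaged."""
    text = message.lower()
    for keyword, response in CANNED_REPLIES.items():
        if keyword in text:
            return response
    return "That's interesting. Can you say more about that?"

print(reply("Hello, machine!"))  # keyword match, not comprehension
print(reply("My dog died."))     # fallback: no empathy is involved
```

However fluent such replies may sound, the program is only looking up strings, which is exactly why many argue a system like this fails the spirit of the Turing test.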

Inducing real emotions in a machine remains impossible; steps may be taken to configure machine responses that mimic emotion, but the reality is that a computer does not get sad, happy, jealous, or angry, among other human emotions. Tegmark (2017) emphasizes that a machine cannot experience much of what humans do, such as jealousy, extreme joy, anxiety, or extreme devastation. Without these capacities, artificial intelligence cannot empathize with anyone. For example, Siri, Amazon Alexa, and other devices intended to produce humanlike dialogue are used mainly to solve problems and answer commands, like “Siri, please call Jesse” or “Hey Alexa, what is the score of the Winnipeg Jets game today?” These devices work well for such purposes, are self-aware in a limited sense, and do learn over time.

However, what they learn is based upon their actual human users, as the devices are programmed to remember patterns related to their owners, such as when the owner comes home from work or what kind of music the user prefers.

Boddington (2018) argues that these devices cannot respond to ethical dilemmas with original answers because, as artificial intelligence, they have not experienced what a human experiences in a lifetime and cannot record or adopt a human’s perspective. Therefore, artificial intelligence will not have the capacity to answer most ethical dilemmas effectively, such as “Would you steal food to feed your starving family?” or “Whose life is more important, a child’s or a senior citizen’s?” These dilemmas thus raise significant issues for artificial intelligence.

Nevertheless, self-driving cars are a technological creation already in use and being implemented on a large scale. Elon Musk’s Tesla is a major contributor to the growth of artificial intelligence and the advancement of self-driving vehicles. However, popular as they are, the vehicles are not perfect, and they frequently require updates to existing models as well as new models, just like traditional vehicles.

The interaction between human-driven and self-driving cars on the road is far from seamless, since human error is inevitable and occurs daily. Human error can create a number of controversial ethical dilemmas for artificial intelligence-powered cars. They are programmed to make the safest decision; however, it is often a choice between the lesser of two bad outcomes. Suppose a self-driving Tesla has an automotive fault, such as a broken brake system, that is separate from its artificial intelligence, and its only options are either to run a stop sign into a busy intersection or to crash into a crowd of people on the sidewalk. Artificial intelligence is not configured with ethical standards, and this basis for its decision-making makes it unpredictable. This decision-making design flaw applies to many other industries in which the use of artificial intelligence is increasing rapidly.
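The “safest decision” logic described above can be sketched as a toy least-harm rule. The options, scores, and weights here are entirely invented for illustration; no real autonomous-driving system works this simply. The sketch shows the core problem: a purely numeric rule encodes no ethical standard.

```python
# Illustrative only: a toy "least-harm" decision rule. The option names
# and expected-injury numbers are invented for this example.
OPTIONS = {
    "run_stop_sign": {"expected_injuries": 3},
    "swerve_into_crowd": {"expected_injuries": 5},
}

def least_harm(options: dict) -> str:
    """Pick the option with the lowest expected-injury score.
    A numeric rule like this cannot weigh *whose* harm matters or why,
    which is the ethical gap the essay describes."""
    return min(options, key=lambda name: options[name]["expected_injuries"])

print(least_harm(OPTIONS))  # chooses whichever invented score is lower
```

Change the invented numbers slightly and the car’s choice flips, which is precisely why decisions derived only from such scores appear unpredictable and ethically arbitrary.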

Today, the service sector is adopting artificial intelligence in its production processes, creating an environment where it could thrive, since everyday obstacles and morally ambiguous situations are less common there. The idea that computers, or even artificial intelligence, never make mistakes is not quite true, although they make far fewer mistakes than humans. AI and robotics are not subject to human error or to simple human needs such as eating, drinking water, sleeping, and excreting waste. Service industries may see a complete shift from human workers to AI in the near future. Although AI systems are not yet completely self-reliant, they are already being deployed to replace human workers in roles such as call centers, order-taking, and troubleshooting, among other responsibilities and tasks.

Businesses that require round-the-clock service, like gas stations, hotels, and fast-food restaurants, would benefit greatly from using artificial intelligence in place of human workers, and more businesses would be able to stay open longer. Mistakes by human employees are far too common; AI can make business much more efficient and provide substantially better customer service. Moreover, there is significantly less waste because fewer orders are prepared incorrectly. One major obstacle preventing this from already being a reality is the audio interface: miscommunication is common because machines are poor at distinguishing unclear speech, especially from people with strong accents, people speaking a language other than the one programmed, and people who use slang or have speech impediments.

All in all, artificial intelligence stands at the forefront of technological developments that could lead to a significant revolution; several people, among them great scholars, refer to it as the electricity of the twenty-first century. However, it is vital for professionals and researchers to remain aware of the social and ethical implications this particular technological advancement poses. Nevertheless, we are responsible for creating, developing, and investing in robotics and artificial systems that have helped, and continue to empower, humanity.


Boddington, P. (2018). Towards a code of ethics for artificial intelligence. Springer-Verlag New York Inc.

Graham, I. (2003). Artificial intelligence. Heinemann Library.

Müller, V. C. (2016). Fundamental issues of artificial intelligence. Springer.

Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. Knopf.
