
AI for Skeptics
Autonomous vehicles, once thought to be a fever dream, are slowly becoming more prominent in our society. However, the idea of a fully self-driven vehicle, where the humans inside are only passengers, is still not a reality.
What we have today are vehicles with driver-assistance features, such as lane detection, adaptive cruise control, and forward collision warnings. While certain vehicles support something called “full self-driving,” the truth is that the driver must constantly supervise the vehicle’s movements and be ready to take over control at a moment’s notice.
Autonomous Vehicles
Virtual assistants (VAs), such as Alexa, Siri, Cortana, or Google Assistant, are powerful tools that build on the abilities of the search engine. They are typically activated by a spoken command phrase but can also be triggered by other means, such as text or other devices integrated with the VA.
When engaged, a VA can carry out a variety of tasks that increase productivity and convenience. For example, it can automate everyday chores, including managing a calendar, sending messages, or even making phone calls on your behalf. If it’s integrated with your home and other smart devices, it can arm a security alarm, turn the lights on or off, or adjust the temperature. It can even provide entertainment, from playing music and podcasts to audiobooks and videos.
Despite the vast variety of tasks they can complete, VAs struggle with complex requests, depend on an internet connection, and can run into language or cultural barriers.
Virtual Assistants
Whether it’s Google, DuckDuckGo, Baidu, Yandex, or many others, search engines all function in the same basic way. A query is provided, they scour millions of indexed web pages to find the information most relevant to that query, and the results are returned as a ranked list to browse.
Google, for example, goes a step further by having an AI summarize content from several sources, giving you the answer you need without requiring you to browse the list of results, although that list is still available.
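To make the idea concrete, here’s a minimal sketch of relevance ranking, assuming nothing more than keyword counting. The page names and text are invented, and real engines use inverted indexes, link analysis, and machine-learned ranking rather than this toy scoring.

```python
# Toy relevance ranking: score each page by how often the query terms
# appear in it, then return matching pages best-first.
def rank(query, pages):
    terms = query.lower().split()
    scores = {}
    for url, text in pages.items():
        words = text.lower().split()
        # A page's score is the total count of query terms it contains.
        scores[url] = sum(words.count(t) for t in terms)
    # Highest score first; pages with no matching terms are dropped.
    return [u for u, s in sorted(scores.items(), key=lambda kv: -kv[1]) if s > 0]

pages = {
    "a.example": "adopt a rescue dog today",
    "b.example": "dog training tips for your new dog",
    "c.example": "best cat toys reviewed",
}
print(rank("dog training", pages))  # ['b.example', 'a.example']
```

Note that the list comes back ranked: the page mentioning both query terms outranks the page mentioning only one, which is the behavior described above.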
Search Engines
Also known as Artificial Super Intelligence, this AI would be able to learn, think, make judgments, and possess mental abilities that far surpass any human being’s. It would have the ability to feel its own emotions, have its own needs and desires, and hold its own beliefs, a state also known as being self-aware. Neither super AI nor self-aware AI exists outside of theory.
Super AI
These can do everything a reactive AI can do but also have the added ability to recall recent experiences in the short term. This allows a limited memory AI to adapt its future decisions. For this AI to retain more than short-term memory, it must be trained on a continuous stream of data. However, it can’t go beyond its programming, such as processing a prompt from a user, and the quality of its response depends on the quality of the prompt.
A common example of limited memory AI can be found in autonomous vehicles. They interpret the world around them, tracking other vehicles, road signs, speeds, and their proximity to surrounding objects.
Limited Memory AI
These are machines that have no memory and are limited to a very specific task. Two well-known examples of reactive AI are Deep Blue, the chess-playing AI system, and the Netflix recommendation engine.
Deep Blue knows the rules of chess but only responds to the input of a player moving pieces on the board. It can’t remember its last opponent and can’t adapt to a player’s strategy. However, it can process huge amounts of mathematical data in seconds to estimate the probabilities of the various moves a player might make on their next turn, and react accordingly.
The Netflix recommendation engine has access to your search history and your viewing history and, based on the trends and relationships it finds between these two sets of data, provides a list of media you might enjoy. These recommendations can be refined by likes and dislikes, giving the engine a higher degree of accuracy.
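A reactive recommender of this kind can be sketched in a few lines. The catalog, titles, and genres below are hypothetical; the point is only that the output is a fixed reaction to the watch history plus the likes and dislikes, with nothing remembered or learned between calls.

```python
# Hypothetical mini-catalog mapping titles to genre tags.
catalog = {
    "Space Quest":  {"sci-fi", "adventure"},
    "Baking Duel":  {"reality", "cooking"},
    "Star Fleet":   {"sci-fi", "drama"},
    "Knife Skills": {"cooking", "documentary"},
}

def recommend(watched, liked_genres, disliked_genres):
    # The genres seen in the watch history form the taste profile.
    profile = set().union(*(catalog[t] for t in watched))
    scores = {}
    for title, genres in catalog.items():
        if title in watched or genres & disliked_genres:
            continue  # react to explicit dislikes by filtering titles out
        # Overlap with the history, plus a bonus for explicitly liked genres.
        scores[title] = len(genres & profile) + len(genres & liked_genres)
    return sorted((t for t in scores if scores[t] > 0), key=lambda t: -scores[t])

print(recommend({"Space Quest"}, {"sci-fi"}, {"reality"}))  # ['Star Fleet']
```

Identical inputs always produce identical recommendations, which is exactly what makes the system reactive rather than adaptive.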
Reactive AI
Also known as weak AI or Artificial Narrow Intelligence, these AIs can be trained to do a single task or a narrow set of tasks, and can usually perform them far more efficiently than a human. For example, ChatGPT is a form of narrow AI. Despite all the power it seems to have, it’s limited by the text input of the user. It’s important to note that this is the only form of AI that exists today. Most, if not all, generative AI tools fit in this category as well. Adobe Firefly, for example, is an AI image generator that may eventually gain the ability to turn these images into videos with sound.
Narrow AI
This form of learning is based on trial and error, or “learning by doing.” For example, an AI is given the task of creating an image of a dog. It tries forty times, and only two of those attempts come close to the intended outcome. Those two are marked as positive while the rest are marked as negative. The AI then repeats this task dozens more times. With each positive outcome, it learns what is an acceptable output and what is an unacceptable one.
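The trial-and-error loop described above can be sketched with a classic “multi-armed bandit” learner. The three actions and their reward probabilities are invented stand-ins for “ways of drawing a dog”; the agent never sees them directly and must discover the best action purely from positive and negative feedback.

```python
import random

random.seed(0)
actions = ["A", "B", "C"]
reward_prob = {"A": 0.1, "B": 0.8, "C": 0.3}  # hidden from the agent
value = {a: 0.0 for a in actions}             # estimated reward per action
count = {a: 0 for a in actions}

for _ in range(2000):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: value[a])
    reward = 1 if random.random() < reward_prob[action] else 0  # feedback
    count[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    value[action] += (reward - value[action]) / count[action]

best = max(actions, key=lambda a: value[a])
print(best)  # the agent settles on the action rewarded most often
```

The occasional exploration is what lets the agent escape an early bad guess, mirroring the repeated attempts in the dog-image example.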
Reinforcement Learning
This form of learning requires no labeled input data, and there is no specific learning goal in mind. The AI is given a collection of data and allowed to organize it into categories based on the attributes, relationships, and trends it can find. The key here is that the AI does this without any human supervision.
Unsupervised learning is becoming the more popular method for training AI, as labeled data is harder to obtain. An example of this in action would be a business analyzing its customers’ demographics and how they interact with its digital space.
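A minimal sketch of that customer-segmentation idea, assuming made-up (age, monthly spend) pairs: k-means clustering splits the customers into groups with no labels and no human-defined categories. The centers are seeded deterministically here to keep the demo reproducible; real implementations use random or k-means++ starts.

```python
# Invented customer data: (age, monthly spend).
customers = [(22, 30), (25, 35), (24, 28),   # younger, lower-spend shoppers
             (58, 90), (61, 95), (55, 88)]   # older, higher-spend shoppers

def kmeans(points, iters=10):
    centers = [points[0], points[-1]]  # deterministic seeds for the demo
    for _ in range(iters):
        # Assignment step: each point joins the cluster of its nearest center.
        clusters = [[], []]
        for p in points:
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers]
            clusters[d.index(min(d))].append(p)
        # Update step: each center moves to the mean of its cluster.
        centers = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
                   for c in clusters if c]
    return clusters

young, older = kmeans(customers)
print(young)  # [(22, 30), (25, 35), (24, 28)]
print(older)  # [(58, 90), (61, 95), (55, 88)]
```

No one told the algorithm that “young” and “older” groups exist; the split emerges from the structure of the data alone.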
Unsupervised Learning
This form of learning requires the input of labeled data from a human (or user) with a specific learning goal in mind. Here’s a quick example.
An AI needs to learn what a dog is. The user provides the AI with several pictures of dogs, each one labeled with the breed and the color of their fur.
Over time, and as more images are provided to the AI, it will begin to identify the similarities between dogs of the same breed and the differences between other dog breeds (such as patterns in their coat, fur color varieties, body shape tendencies, and so on). It will eventually be able to identify a dog’s breed with relatively little input, such as a blurred image of a dog racing through a family photo.
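The same dog example can be sketched as supervised learning over labeled feature vectors. The (weight kg, height cm) numbers and breed labels below are invented for illustration; a simple 1-nearest-neighbor rule then classifies an unlabeled dog by the closest human-labeled example.

```python
# Human-labeled training data: ((weight_kg, height_cm), breed).
training = [
    ((30.0, 60.0), "labrador"),
    ((32.0, 58.0), "labrador"),
    ((7.0, 25.0),  "dachshund"),
    ((9.0, 22.0),  "dachshund"),
]

def classify(features):
    # Predict the label of the closest training example (squared distance).
    def dist(example):
        vec, _label = example
        return sum((a - b) ** 2 for a, b in zip(vec, features))
    _vec, label = min(training, key=dist)
    return label

print(classify((29.0, 57.0)))  # labrador
print(classify((8.0, 24.0)))   # dachshund
```

The labels do the teaching: with more labeled examples and richer features, the same principle scales up to recognizing a breed from a blurry photo.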
Supervised Learning
We’ve all seen the popular culture references to artificial intelligence (AI). Whether it’s the iconic HAL 9000 from 2001: A Space Odyssey piloting the mighty ship, the Terminator franchise and its time-traveling destroyers, or the more recent Avengers: Age of Ultron with its sentient AI bent on destruction, humanity has long been fascinated by AI and robotics.
While these are compelling stories, empowered by the magic of Hollywood, the reality isn’t quite as dramatic but still has gravity. As we read on, we’ll try to define what AI is, describe how it works, show the types of AI tools that currently exist, highlight where they exist in everyday technology, and describe who created these tools and who controls them.
Five Essential Questions About AI
We find that a combination of these two answers helps explain what AI is, and why it’s so difficult to define at the same time. AI is a machine or computer system that uses human-crafted code to complete a task. The semantic argument is that AI can’t have intelligence in the same way that you or I do, though there’s a possibility of that changing in the future. As it stands, there’s no such thing as an intelligent computer. It’s only as smart as the humans who coded it.
A Brief Summary
AI is not rooted solely in computer science, data analysis, or engineering. It’s also heavily tied to philosophy, psychology, ethics, and even linguistics. John McCarthy, a computer scientist known as the Father of AI and the man who coined the term “artificial intelligence,” defines it in the following way.
“[Artificial intelligence] is the science and engineering of making intelligent machines, especially intelligent computer programs.”
However, there’s a deep philosophical question that needs to be answered to really understand AI. What is intelligence? Is it something that only mammals possess, or can a machine have it? If it can, to what extent?
McCarthy believes that “Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines…The problem is that we cannot yet characterize in general what kinds of computational procedures we want to call intelligent. We understand some of the mechanisms of intelligence and not others.”
For a more modern take, Google’s definition of AI classifies it as a field of science in which computers and machines are built to reason, learn, and act in ways that would normally require human intelligence, including cases where the data is too vast for humans to analyze.
Does the machine processing this data have intelligence only because it’s working on more than a human could possibly understand? If humans coded the technology that powers the AI, is it only as intelligent as those who created it? Could it surpass this intelligence?
The Long Answer
AI is a machine or computer system that can perform a task that would normally require a human’s input. A common example of this is an internet search engine, such as Google or Bing. You provide the AI algorithm with a question, it scours the internet, and it returns the series of results that best fit your question.
Another familiar example is Netflix. If you watch a certain type of show, the AI algorithm finds other shows similar to what you’re watching, using a coded set of criteria. You can refine its recommendations by liking or disliking shows.
The Short Answer
AI has many definitions, depending on who or what you ask, but we can boil it down to two answers – the “short answer” and the “long answer.”
What is AI?
When an AI is created, it knows nothing. For it to start being useful, in terms of helping individuals or society, it must be taught through a process called machine learning, in which the AI learns and trains itself on huge amounts of data. This is akin to a human learning a new language or training for a marathon.
The three most common training models that are used with AI are supervised learning, unsupervised learning, and reinforcement learning. There’s also a subset of machine learning that many are talking about right now called deep learning.
How Does AI Work?
Deep learning requires the creation of a neural network, which is similar in concept to how the human brain works. A series of nodes (called neurons), each responsible for learning a specific feature of the data, are connected and arranged into layers. As the data (labeled or unlabeled) moves through the neurons, and then through each layer of the network, the AI learns more complex attributes of the data. There’s no limit to how many neurons and layers can exist in a neural network, but a network must have more than three layers to be considered a “deep neural network” capable of deep learning.
For example, say a neural network is given an image of a dog. The first layer of the network might identify the basic physical aspects of the image, such as its edges and boundaries. The second layer would build on this knowledge by finding the lines and simple shapes within the image. The third layer would add further to this by learning combinations of shapes and colors, and so on.
Deep learning empowers AI to engage with the world around it, such as with facial recognition or with technology in autonomous vehicles. It is also used to power image and/or audio recognition and creation.
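A single forward pass through a tiny network makes the layer-by-layer picture concrete. The weights and inputs below are fixed by hand purely for illustration; a real network would learn them from data via backpropagation.

```python
import math

def layer(inputs, weights, biases, activation):
    out = []
    for w_row, b in zip(weights, biases):
        # Each neuron computes a weighted sum of its inputs plus a bias...
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        # ...then passes it through a nonlinear activation function.
        out.append(activation(z))
    return out

relu = lambda z: max(0.0, z)
sigmoid = lambda z: 1 / (1 + math.exp(-z))

x = [0.5, -1.0]                                             # two input features
h1 = layer(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1], relu)  # hidden layer 1
h2 = layer(h1, [[1.0, 1.0]], [0.0], relu)                   # hidden layer 2
y = layer(h2, [[2.0]], [-1.0], sigmoid)                     # output score
print(round(y[0], 3))  # 0.881
```

Each `layer` call plays the role of one stage in the dog-image example: earlier layers transform raw features, and the final layer produces a single score from everything below it.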
Deep Learning
While there are many different types of AI tools out there, there are three fundamental categories that all AI tools fall under: artificial narrow intelligence, artificial general intelligence, and artificial super intelligence. Within these three categories, there are four sub-types of AI: reactive, limited memory, theory of mind, and self-aware (Coursera, n.d.; IBM, 2025).
What are the Types of AI?
This doesn’t exist in the real world today, but the theory suggests that an AI with theory of mind functionality would be able to understand and process thoughts and emotions. This in turn would affect how the AI behaves as it personalizes how it reacts to entities around itself. It suggests that this kind of AI would be able to simulate real human relationships.
Theory of Mind AI
Also known as strong AI or Artificial General Intelligence, this form of AI would be able to apply what it has learned in the past, and the skills it has acquired, to complete new tasks without humans training the underlying models that power it. This, in theory, would allow a general AI to complete any cognitive task that a human being could. But it does not exist today; it remains theoretical.
General AI
Unsurprisingly, AI is everywhere. From your computer to your car, you can likely find something powered by AI or at least borrowing power from AI. Perhaps the most common AI you might interact with is the search engine, closely followed by the virtual assistant, and possibly even the autonomous vehicle.
Where is AI Used in Everyday Technology?
“Who owns AI?” is a question that we should all be asking. These systems are given incredible amounts of data and unprecedented access to our homes and lives. Who or what exactly are we letting in?
Who Created and Controls AI?
The three main players are big tech corporations, academics, and governments around the world. Academics study AI within laboratories and create hypothetical scenarios to test the power of AI. Governments work with academics and corporations to create regulations that help control how it is used. Corporations secure venture capital and hire top talent (mostly from academia) to help their own research and development of this technology.
There’s an unspoken conflict happening between academia, including institutions like Stanford, MIT, and Carnegie Mellon, and big tech corporations, like Google, Apple, Microsoft, and Facebook, to name a few. The corporations have invested heavily in AI research and development, going so far as to create their own AI systems and release them to the public, which feeds them immense amounts of data. Academics, however, are the ones who keep trying to push the boundaries and limitations of AI, working on concepts like the theory of mind.
The corporations are currently winning the battle. More money (and data from people like you and me) is flowing toward these companies as they incorporate AI into their business strategies. There’s a genuine fear in academia (and to a certain degree in government) that the power held by these corporations will stifle the pursuit of knowledge in AI. There are growing calls to break up these tech monopolies, or at least to put regulations in place to help protect the public.
While AI was first created in the halls of academia, it is currently concentrated within the confines of a select few corporations. It’s up to people like you and me not only to judge whether these corporations have too much power but also to act and help push legislation that would regulate and limit that power. AI can truly change the world – the real question is for better, or for worse?