AI is a powerful tool that mimics our intelligence but is often misunderstood due to sci-fi portrayals. The technology itself is not that dangerous; the key question is whether people use it ethically.
Artificial Intelligence has become a buzzword that bombards us from everywhere. We have AI-powered cameras and AI-powered smartphones; change the wording a bit and we find smart homes, smart cars, smart lightbulbs. But what is it, in general? How does it work? And, more importantly, should we be afraid of it?
There are numerous myths and misunderstandings around Artificial Intelligence, largely because our world looks more and more like what we know from science fiction. Sometimes it's hard to tell what is science now and what is still fiction.
An excellent example of this is 2001: A Space Odyssey. The movie came out in 1968, a year before the Moon landing, yet it presented spaceflight as something casual. Of course, we haven't conquered space yet, but other technological advances shown there have become reality: video calls and talking to a computer by voice. What's more, the artificial intelligence HAL 9000 resembled the voice assistants we know today.
HAL 9000 is also an excellent example of a different aspect: why we fear AI. To put it simply, it was a villain, just like many other famous fictional AIs. Most AI characters in sci-fi have one aim: to destroy humankind. And since our world now contains high tech that previously existed only in fiction, it's no surprise that people are afraid.
Another issue that doesn't help is that AI sells. And I'm not writing here about movies or other entertainment; I mean the marketing of nearly every tech novelty.
For a long time, outside of entertainment, Artificial Intelligence was a scientific term in computer science, referring to "intelligent" algorithms, mostly ones that try to mimic specific aspects of how the human mind works. But nowadays, nearly everything that does something smart gets called artificial intelligence.
And here, I’m not telling you that there are no AI algorithms behind your robot vacuum cleaner. It’s just too often misused, just because it sells well. That makes the topic a bit harder to understand. That’s why we should just look at the encyclopedia definition, simplify it and straighten all misunderstandings.
Artificial Intelligence (in short: AI) is, briefly speaking, the ability of a digital machine (such as a computer) to do tasks commonly associated with intelligent beings. Strictly speaking, the name should be reserved for projects that aim to create systems with intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from previous experience.
The intelligence of AI is frequently reduced to a few aspects. Here's one sample list of the parts we can distinguish; just remember that there's no strict division between them, and they are very often combined with one another.
I think it’s an aspect that is the most popular today. Generally, by learning, we mean algorithms that can find a way to work by rote learning (more accessible) the problem’s solution or generalizing it (harder but gives better results). It’s so popular that we even have a separate branch of knowledge — Machine Learning.
To understand the difference between rote learning and generalization, let's look at how we could create an AI that plays chess. With the first approach, our algorithm would simply memorize, for each position seen during training, the best move. With generalization, we'd expect the algorithm to develop a strategy for playing, not just remember moves. A perfect example here is AlphaZero, DeepMind's program that can learn to play games. It learned chess by self-play, knowing nothing about the game beyond its rules, and it went on to use unfamiliar tactics that surprised other players, like sacrificing a queen and a bishop just to get a better position [source].
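To make the distinction concrete, here's a minimal Python sketch of both styles on a deliberately toy problem (predicting y = 2x; the problem and the one-parameter "model" are my own illustrative choices, not how a chess engine works):

```python
# Toy contrast between rote learning and generalization.
training_data = {1: 2, 2: 4, 3: 6}

# Rote learning: memorize every (input, answer) pair seen in training.
def rote_predict(x):
    return training_data.get(x)  # returns None for anything unseen

# Generalization: fit a rule from the data and apply it everywhere.
# Here, a single weight estimated by averaging y/x over the samples.
weight = sum(y / x for x, y in training_data.items()) / len(training_data)

def general_predict(x):
    return weight * x

print(rote_predict(3))      # 6    -> memorized during training
print(rote_predict(10))     # None -> never seen, the rote learner is lost
print(general_predict(10))  # 20.0 -> the learned rule extrapolates
```

The rote learner is perfect on the data it has seen and helpless beyond it, while the generalizing one handles inputs it has never encountered.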
As Encyclopedia Britannica says: "To reason is to draw inferences appropriate to the situation." In this case, the program should draw inferences from known logical rules and from the current knowledge coming from input data. Some readers may already know this aspect of AI from expert systems.
Rules for reasoning can be hard-coded, loaded from a database, or learned. Let's walk through a simple inference. Our system has one rule: "the hamster is either in the cage or in the carrier." We receive the input "the hamster is not in the cage," so we infer that the hamster is in the carrier.
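Here's a minimal Python sketch of that inference (the rule set and the string format are invented just for this example, not a real expert-system engine):

```python
# A tiny rule-based inference: the hamster is in exactly one of these places.
locations = {"cage", "carrier"}

def infer(fact):
    """Given a fact like 'hamster is not in the cage', deduce where it is."""
    ruled_out = {place for place in locations if f"not in the {place}" in fact}
    remaining = locations - ruled_out
    if len(remaining) == 1:
        return f"hamster is in the {remaining.pop()}"
    return "cannot decide"

print(infer("hamster is not in the cage"))  # -> hamster is in the carrier
print(infer("hamster escaped"))             # -> cannot decide (no such rule)
```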
Here we can spot a weakness of AI: it knows only the world we taught it, whether through training or hard-coded knowledge. Any state outside the known ones is perceived as an error or as something impossible. Going back to the hamster example, the AI will never consider that the hamster might gain its freedom unless we add another rule about it.
I’d say that it’s the most common application of AI. Most often, it’s understood as a search between possible actions to create a sequence of them that may allow reaching a specific goal. It’s used to solve costly problems regarding computation, so finding their exact solution in the usual ways would take a lot of time. And when I say, “a lot of time,” I mean something like millions of years. We have techniques to compute it more reasonably, like a few minutes or even seconds. Just the downside of it is that we probably won’t find the best solution, but a good-enough one (or more mathematically speaking: local minimum/maximum instead of global).
As an example of a computationally heavy problem, consider building routes for a delivery service. A route starts at a warehouse, delivers packages to (let's say) a hundred points, and returns to the warehouse. This is known as the "traveling salesman problem." With 100 points on the map, we would need to check roughly 9×10^157 possible routes (100!, the factorial of 100) to find the best one. There's no way we could store and check them all. But with a suitable algorithm, e.g., simulated annealing, finding a good-enough solution is a matter of seconds.
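To give a feel for how such a technique works, here's a minimal Python sketch of simulated annealing on random points (the point set, starting temperature, and cooling rate are arbitrary choices for illustration, not tuned values):

```python
import math
import random

random.seed(42)
points = [(random.random(), random.random()) for _ in range(100)]
# For scale: math.factorial(100) is about 9.33e157 possible closed routes.

def tour_length(order):
    """Total length of the closed route visiting points in the given order."""
    return sum(
        math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

order = list(range(len(points)))
best = tour_length(order)
temperature = 1.0

while temperature > 1e-4:
    # Propose a small change: reverse a random segment of the route.
    i, j = sorted(random.sample(range(len(order)), 2))
    candidate = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
    delta = tour_length(candidate) - best
    # Always accept improvements; accept worse routes with a probability
    # that shrinks as the temperature drops (that is the "annealing").
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        order, best = candidate, best + delta
    temperature *= 0.999

print(f"good-enough route length: {best:.3f}")
```

A few thousand cheap iterations replace checking 100! routes; the result is almost certainly not optimal, but it's good enough, which is exactly the trade-off described above.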
The most impressive, the most interesting, and the most dangerous aspect of Artificial Intelligence. Perception is all about analyzing the environment based on senses, whether the sensory organs are real or artificial. This aspect covers all of computer vision, voice-related algorithms, and more, but those two purposes are the most popular right now.
Since you probably have a grasp of what perception applications are, I will just explain what I meant by impressive, interesting, and dangerous.
Impressive, because we are somewhat used to all the other AI aspects, sometimes not even recognizing them as intelligence. For a regular person, there's no magic in finding a route on a map. But when an algorithm can identify objects in a photo and describe or modify them, it's just WOW. For example, a few years ago we had a boom for how-old: everyone wanted to know how old the AI thought they were in a given photo.
Interesting, because from a developer's perspective, creating an algorithm capable of perception is a fascinating challenge. Even when the task is a basic one, it still makes an impression.
Dangerous, because in an era with CCTV cameras nearly everywhere, we may feel threatened by AI that recognizes what's happening on live video. Of course, there are positive aspects too, since we can quickly learn when something goes wrong, but in the wrong hands it can be used for control and surveillance. That's one area where we should be afraid, though perhaps not of Artificial Intelligence itself, but of the people using it.
Understanding natural language is one of the staples of AI algorithms. This aspect has most often been seen as the one that removes the boundary between machines and humankind. Hence, the Turing Test for a perfect artificial intelligence is based on holding a conversation with a human. There is also a separate branch of knowledge here: Natural Language Processing.
Currently, we have algorithms that can write like humans, GPT-3 being just one example. However, this is merely repeating known patterns, not a proper understanding of language. The philosophical side of this can be found in John Searle's Chinese room thought experiment. In short, it asks whether a machine literally understands a language or merely simulates the ability. Even when a program fakes being human convincingly (let's say it passes the Turing Test), that still doesn't mean it understands the language.
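To show how plausible-sounding replies can come from no understanding at all, here's an ELIZA-style Python sketch; the patterns are invented for illustration and are far cruder than GPT-3, but the idea of producing language without comprehension is the same one Searle questions:

```python
import re

# ELIZA-style responder: replies come purely from pattern matching,
# with no understanding involved. Patterns invented for illustration.
rules = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {}?"),
]

def reply(utterance):
    for pattern, template in rules:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1))
    return "Tell me more."

print(reply("I feel watched"))    # -> Why do you feel watched?
print(reply("The sky is blue"))   # -> Tell me more. (nothing matched)
```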
When talking about Artificial Intelligence, we must define what we are aiming for with our research. Most often, we distinguish strong and weak (applied) AI.
Strong AI, also known as Artificial General Intelligence, is the attempt to make a machine indistinguishable from humans in terms of thinking. This kind of AI would be multipurpose and have very general knowledge. Simply speaking, it's the kind we know from fiction and the way most people perceive AI.
As far as we officially know, no strong AI currently exists. It's a tricky topic. On one side, it would require enormous computational power to mimic whole human behavior, and the scientists working on it would have to figure out how to make it algorithmically possible at all. On the other side, there are philosophical aspects (like the Chinese room argument, but we can also talk about Artificial Consciousness) and ethical ones that we can't avoid. Still, some experts claim we may see the first strong AIs around 2050.
Weak AI (or applied AI) is the opposite approach to creating artificial intelligence. Here, we develop intelligent systems specialized in solving specific problems, and that's what we currently have in terms of AI. So if an algorithm can recognize a face, it won't also find a route on a map. Of course, nothing prevents us from combining different solutions; that's how voice assistants work, for example (separate AIs for voice recognition, language processing, and finding answers), as sketched below.
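Here's a hypothetical sketch of such a composition; every function below is a stub standing in for a separate specialized model, not any real assistant's API:

```python
# Hypothetical composition of narrow ("weak") AIs into one assistant.
# Each function is a stub standing in for a separate specialized model.

def speech_to_text(audio: bytes) -> str:
    return "what's the weather"         # stand-in for a speech-recognition AI

def understand(text: str) -> dict:
    return {"intent": "weather_query"}  # stand-in for a language-processing AI

def solve(request: dict) -> str:        # stand-in for a task-specific solver
    return "Sunny, 21°C" if request["intent"] == "weather_query" else "Sorry?"

print(solve(understand(speech_to_text(b"..."))))  # -> Sunny, 21°C
```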
As I’ve told in the beginning, evil AI is a prevalent motif in fiction. I think that most of us can enumerate a few examples of Sci-Fi where we had AI rebellion, war of AI versus humankind, or any other similar topic. But as I’ve just written, such AI is classified as Strong AI, and it’s something we still haven’t achieved. So, is there anything to be feared of?
Yes. AI itself is not dangerous, but it can be used maliciously. I mentioned control and surveillance based on CCTV cameras when discussing perception, but surveillance can also be based on Internet behavior. The ubiquitous algorithms behind modern advertising networks do something that can be perceived as surveillance: they track what you do on the Internet and which pages you browse, just to serve you personalized ads. Of course, if it's only for advertising, we can consider it a lesser evil, but the same mechanism can be used for anything.
Another infamous example is deepfakes. That is, very generally speaking, using artificial intelligence techniques to create fake images, videos, and sounds. The topic became famous when pornography appeared on the Internet with actors' faces swapped to depict celebrities. We can call it a harmful, bad "joke," but it was just one sample use. This way, we can fake interviews, speeches, acting; whatever comes to mind. In entertainment it's a great tool, because we can "revive" dead actors or rejuvenate living ones algorithmically. We can also find creative uses, like the music video for the song SELF by Steven Wilson, where the singer's face is swapped with the faces of famous people so they look like they are singing the song. But in the wrong hands, we can only imagine how much harm can be done.
So, we shouldn’t fear Artificial Intelligence, but people using it. That’s why we always need to talk about the ethics of AI usage.
Artificial Intelligence is a fascinating and broad topic in Computer Science. It's somewhat distorted by what we know from fiction, but we should remember that it's all about algorithms and computation. In its current state, it's not something to be feared; as always, it's the people behind it who deserve our caution. Despite many advances in this field of science, we still have much to discover.
In the end, to show you why this topic is so broad and exciting, I'd like to leave you with a quote from Hans Moravec: "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."
(This article is an English translation, with some modernization, of https://swistak.codes/sztuczna-inteligencja-a-co-to-a-komu-to-potrzebne/)
Check out the latest content on YouTube, where I discuss the topic in the 'Synergy Caffe' series.
https://www.youtube.com/embed/45e4oJbIs1Q