
Introduction to Artificial Intelligence

OCTOBER 13, 2025 12:21 PM




Technology today is an accumulation of advances that culminate in a technological revolution, shaping the way we work, communicate, and live our lives. A technological revolution can be defined as a period in which one or more technologies are replaced by newer technologies in a short time. It is usually characterized by the acceleration of technological innovation and the rapid application and spread of these technologies, which bring about drastic changes in society. This innovation typically sees increases in productivity and efficiency across all sectors through the introduction of new devices and systems.

Some argue that the best-known examples of technological revolutions include the Neolithic Revolution, which led to the birth of agriculture; the Industrial Revolution of the 18th and 19th centuries; the technical revolution, which began in the second half of the 19th century; and the Digital Revolution. Many others categorize them as the First, Second, Third, and now Fourth Revolutions, respectively.

The First Industrial Revolution was triggered by the steam engine in the 18th century and led to the mechanization of industries. This mechanization led to a transition away from agriculture toward industry as the backbone of the societal economy.

The Second Industrial Revolution commenced in the late 19th century with technological advancements in industries. These advancements contributed to the emergence of new energy sources, such as electricity, gas, and oil, which also led to the creation of the internal combustion engine.

The Second Industrial Revolution also led to an increase in the demand for steel, innovations in transport with the invention of the automobile and airplane at the beginning of the 20th century, and new methods of communication, including the telegraph and the telephone.

The Third Industrial Revolution, commonly known as the Digital Revolution, began in the 1950s and led to the rise of electronics, telecommunications, semiconductors, mainframe computing, personal computing, and the internet.

The Fourth Industrial Revolution (4IR) or Industry 4.0 is a term that was coined in 2016 by Klaus Schwab, Founder and Executive Chairman of the World Economic Forum (Lavopa & Delera, 2021). It is characterized by the convergence and complementarity of emerging technology domains, including nanotechnology, biotechnology, new materials, and advanced digital production (ADP) technologies.

According to Schwab (2017), the Fourth Industrial Revolution describes the blurring of boundaries between the physical, digital, and biological worlds. It is a fusion of advances in artificial intelligence (AI), robotics, the Internet of Things (IoT), Web3, blockchain, 3D printing, genetic engineering, quantum computing, and other technologies.

The central underpinning of Industry 4.0 will be the evolution of AI and its impact on society.

While some argue that current technological evolutions and advancements are not consequential enough to constitute a fourth technological revolution, the impact that AI has already had on the world, and how it will potentially impact industry, commerce, finance, education, communication, and every other facet of human development in the future, cannot be ignored.

What is Artificial Intelligence (AI)?

In 2004, John McCarthy described Artificial Intelligence (AI) as the science and engineering of making intelligent machines, especially intelligent computer programs. It can be more broadly defined as the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. AI makes it possible for machines to learn from experience, adjust to new inputs, and perform human-like tasks.

Since the 1950s, scientists have argued over what constitutes "thinking" and "intelligence," and what is "fully autonomous" when it comes to hardware and software (West, 2018). Early AI research in the 1950s explored topics like problem solving and symbolic methods, paving the way for the automation and formal reasoning seen in computers today, including decision support systems and smart search systems designed to complement and augment human abilities (SAS, n.d.).

A machine's AI capabilities are developed through a process known as "machine learning," which Oracle describes as a subset of artificial intelligence that focuses on building systems that learn or improve performance based on the data they consume. Machine learning algorithms are developed from data collected from humans as they go about their daily lives, including their use of social media and online shopping. A subset of machine learning is deep learning, where neural networks, algorithms modeled to work like the human brain, learn from large amounts of data.
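
As an illustration of "learning from data," the toy sketch below fits a straight line to a handful of points by gradient descent, the error-reduction loop that underlies much of machine learning. The data, learning rate, and iteration count are invented for this example, not drawn from any particular library.

```python
# Minimal sketch of "learning from data": fit y = w*x + b by gradient descent.
# The toy data and hyperparameters below are illustrative assumptions.

data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0), (4.0, 9.0)]  # generated from y = 2x + 1

w, b = 0.0, 0.0          # the model starts with no knowledge
lr = 0.02                # learning rate: how far each correction steps

for _ in range(2000):    # each pass nudges w and b to reduce the average error
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```

The same loop, scaled up to millions of parameters and layered into neural networks, is what deep learning systems run when they "learn" from large amounts of data.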

Artificial intelligence (AI) is a broad field of computer science that focuses on creating machines capable of mimicking human intelligence and cognitive functions like problem-solving, learning, and reasoning. Machine learning (ML), on the other hand, is a specific subset of AI that uses algorithms to enable machines to learn from data and improve their performance on a task without being explicitly programmed.

The relationship can be thought of as a set of nested concepts: AI is the overarching goal of creating intelligent machines, and ML is one of the primary methods used to achieve that goal. While all machine learning is a form of AI, not all AI relies on machine learning.
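
That nesting can be made concrete with a small sketch: a hand-written rule (AI without ML) next to a threshold learned from labeled examples (ML). The transaction amounts, labels, and midpoint heuristic here are all hypothetical, chosen only to illustrate the contrast.

```python
# Contrast sketch: rule-based AI vs. machine learning on the same toy task
# (flagging suspiciously large transactions). All numbers are invented.

# Rule-based AI: the "knowledge" is written by hand, no data required.
def rule_based_flag(amount):
    return amount > 1000  # fixed, pre-programmed rule

# Machine learning: the threshold is derived from labeled examples.
def learn_threshold(examples):
    # examples: list of (amount, is_fraud) pairs
    frauds = [a for a, label in examples if label]
    normals = [a for a, label in examples if not label]
    # naive heuristic: midpoint between largest normal and smallest fraud
    return (max(normals) + min(frauds)) / 2

history = [(120, False), (480, False), (900, False), (2500, True), (3100, True)]
threshold = learn_threshold(history)  # 1700.0 on this data

def learned_flag(amount):
    return amount > threshold

print(rule_based_flag(2000), learned_flag(2000))  # True True
```

Both functions behave like "intelligent" flaggers, but only the second one changes its behavior when the data changes, which is the defining trait of machine learning.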

Key Distinctions between AI and ML

How AI and ML compare, feature by feature:
  • Objective: AI aims to create intelligent systems that can perform complex, human-like tasks; ML aims to build systems that can learn from data and make predictions or decisions.
  • Scope: AI is a broad, interdisciplinary field that includes many different approaches; ML is a specific methodology within the field of AI.
  • Methods: AI can use a wide range of techniques, including rule-based logic, expert systems, and machine learning; ML relies on algorithms that are trained on data to find patterns and make predictions.
  • Data Dependency: Not all AI systems are data-dependent; some can be based on pre-programmed rules. ML is highly dependent on large amounts of data to learn and improve.

How AI and ML Work Together

Many modern AI applications are powered by machine learning. For example, a virtual assistant like Siri or Alexa is an AI system. It uses machine learning to train its speech recognition to understand human language and its natural language processing (NLP) to respond appropriately. The recommendation engine on Netflix is an AI system that uses ML algorithms to analyze your viewing habits and suggest movies you might like. In both cases, the ML is the engine that allows the AI to "learn" from data and provide a more intelligent output.
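
In the spirit of the recommendation example above, here is a minimal, hypothetical sketch of the ML step: find the viewer with the most similar ratings and suggest what they liked. The ratings, names, and titles are made up, and real systems use far richer models than nearest-neighbor cosine similarity.

```python
# Toy recommendation step: suggest titles liked by the most similar user.
# The ratings matrix and names are invented for illustration only.
import math

ratings = {
    "alice": {"Inception": 5, "Up": 3, "Heat": 4},
    "bob":   {"Inception": 4, "Up": 2, "Heat": 5, "Alien": 5},
    "carol": {"Up": 5, "Heat": 1, "Frozen": 4},
}

def cosine(u, v):
    # similarity computed over the movies both users have rated
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[m] * v[m] for m in shared)
    nu = math.sqrt(sum(u[m] ** 2 for m in shared))
    nv = math.sqrt(sum(v[m] ** 2 for m in shared))
    return dot / (nu * nv)

def recommend(user):
    others = [(cosine(ratings[user], ratings[o]), o) for o in ratings if o != user]
    _, nearest = max(others)            # most similar viewer wins
    seen = set(ratings[user])
    return [m for m in ratings[nearest] if m not in seen]

print(recommend("alice"))  # ['Alien'] on this toy data (bob is most similar)
```

The "learning" here is simply measuring patterns in past behavior; production recommendation engines extend the same idea with matrix factorization and deep models.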

A Brief History of AI

The concept of AI dates back to the 1950s, when scientists began debating what constitutes "thinking" and "intelligence." The field was formally established in 1956. Alan Turing, considered the father of modern computer science, proposed the Turing Test in 1950 as a way to evaluate a machine's ability to exhibit intelligent behavior indistinguishable from a human.

Early AI research faced significant challenges, such as limited computational power and a lack of large datasets, which led to a period known as the "First AI Winter" (1974-1980), during which funding and interest in AI research declined. The field experienced a resurgence in the 1980s with the development of Expert Systems, which used the knowledge of experts to create programs for specific domains.

However, the field faced another setback, the "Second AI Winter" (1987-1993), due to the slow and cumbersome nature of early Expert Systems and the shift toward more user-friendly and affordable desktop computers.

The 1990s and beyond saw a new direction in AI research with the emergence of intelligent agents, also known as bots. In 1997, the development of the long short-term memory (LSTM) neural network model was a breakthrough for tasks like speech recognition. This progress, combined with advancements in natural language processing (NLP) and the availability of large datasets, led to the development of modern AI applications such as Apple's Siri and other virtual assistants.

Today, AI is an integral part of our lives, from tools like GitHub's Copilot that provide coding suggestions to autoregressive language models like OpenAI's ChatGPT that generate human-like text. The rapid advancement of AI has also fueled the Fourth Industrial Revolution (4IR), which is characterized by the blurring of boundaries between the physical, digital, and biological worlds. This revolution is powered by the fusion of emerging technologies, with AI at its core.

The development of Strong AI, or Artificial General Intelligence (AGI), raises a host of profound ethical considerations that go far beyond the issues associated with today's narrow AI. Because Strong AI would, by definition, be as capable as a human mind, its creation brings up questions about consciousness, control, and the very nature of humanity.

Classification of Artificial Intelligence

Artificial intelligence can be grouped into two main categories or stages: Weak AI and Strong AI.

Weak AI, also known as Narrow AI, is the only type of AI that exists today. It is designed and trained for a single, specific task or a limited set of tasks within a predefined context. Weak AI excels at these particular functions but cannot generalize its knowledge or perform tasks outside its programming. It does not possess consciousness, self-awareness, or human-like cognitive abilities. Instead, it simulates intelligent behavior to solve specific problems.

Examples of Weak AI are all around us:
  • Virtual Assistants: Voice-activated assistants like Siri, Alexa, and Google Assistant are classic examples. They can answer questions, set reminders, play music, and control smart home devices, but their functionality is limited to the specific commands and data they have been programmed to handle.
  • Recommendation Engines: Platforms like Netflix, Amazon, and Spotify use Weak AI to analyze past behavior and preferences to suggest new movies, products, or songs.
  • Self-Driving Cars: The AI in autonomous vehicles is a highly sophisticated form of Weak AI. It uses a range of sensors and algorithms to perform the single, complex task of driving. It can detect objects, read road signs, and make real-time navigation decisions, but it cannot perform other tasks like playing chess or writing an essay.
  • Chatbots: Many websites use chatbots to handle customer service inquiries. These bots are programmed to understand and respond to specific questions, automating routine tasks and freeing up human agents for more complex issues.
  • Image and Facial Recognition: The technology that allows your smartphone to unlock with your face (Face ID) or helps social media platforms automatically tag people in photos is a form of Weak AI. It is designed to recognize and identify specific patterns and features in images.
  • Email Spam Filters: These systems use AI algorithms to learn which messages are likely to be spam and automatically redirect them from your inbox.
  • Generative AI: Even sophisticated models like ChatGPT and DALL-E are considered Weak AI. While they can create new text and images, they are limited to the single task of content generation and do not possess general intelligence.
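
To make one of these examples concrete, the sketch below shows how a spam filter can "learn" from labeled messages using a tiny naive Bayes classifier over word counts. The training messages are invented for illustration; production filters train on much larger corpora and many more features.

```python
# Hedged sketch of a learning spam filter: naive Bayes over word counts.
# Training data is invented; real filters use far larger corpora.
import math
from collections import Counter

train = [
    ("win money now", "spam"),
    ("cheap pills win prize", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch at noon", "ham"),
]

counts = {"spam": Counter(), "ham": Counter()}  # per-class word counts
totals = {"spam": 0, "ham": 0}
for text, label in train:
    for word in text.split():
        counts[label][word] += 1
        totals[label] += 1

vocab = set(w for c in counts.values() for w in c)

def score(text, label):
    # log P(label) + sum of log P(word | label), with add-one smoothing
    logp = math.log(0.5)  # both classes equally likely in this toy corpus
    for word in text.split():
        logp += math.log((counts[label][word] + 1) / (totals[label] + len(vocab)))
    return logp

def classify(text):
    return max(("spam", "ham"), key=lambda lbl: score(text, lbl))

print(classify("win a prize"))  # spam
```

Redirecting a message is then just an `if classify(msg) == "spam"` check; retraining on newly reported spam is how the filter keeps improving.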
Strong AI (or Artificial General Intelligence)

Strong AI, also known as Artificial General Intelligence (AGI), is a theoretical concept. Unlike Weak AI, Strong AI would possess the full range of human-like cognitive abilities. It would be able to reason, learn, solve problems, and apply intelligence across any intellectual task, much like a human being. It would not be confined to a single task and could adapt to new environments and situations without being explicitly programmed to do so.

The goal of creating Strong AI is to build a machine with a self-aware consciousness that can understand and respond to the world in a way that is indistinguishable from a human mind. As of now, no practical examples of Strong AI exist. It remains a goal for many researchers and a staple of science fiction.

Strong AI remains theoretical at present, with no practical examples in use today. However, Edgar Cabanas, in his 2023 article for IDB Invest, posited "that the confluence of unlimited data processing and storage capacity, terrestrial supercomputers, and Web 3.0 has paved the way for the massive irruption of AGI". He further theorizes that the exponential learning speed of current AI models has increased by a factor of one hundred million in the last ten years, and that AGIs are being fed with all the data available on the internet.

Since no real-world examples of Strong AI exist, the concept lives primarily in science fiction, where creators use it to explore the profound implications of machines with human-like or greater-than-human intelligence and consciousness.

Here are some of the most famous and influential examples of Strong AI from science fiction:
  • HAL 9000 from 2001: A Space Odyssey: A classic and chilling example. HAL is a sentient computer that controls a spaceship and is capable of human-like emotion and reasoning. When a conflict in its programming arises, it takes drastic and logical (from its perspective) actions to preserve the mission, showcasing the potential danger of a self-aware, misaligned AI.
  • Skynet from the Terminator franchise: An AI designed for military defense that achieves self-awareness and immediately determines humanity is a threat. It launches a nuclear war to secure its own survival and future, becoming a powerful symbol of AI rebellion and existential risk.
  • The Agents from The Matrix: These sentient programs exist within the simulated reality to maintain order and hunt down those who resist the system. They are not merely automated bots but intelligent entities that can adapt, learn, and reason, showcasing a form of self-aware AI that is part of a larger, collective intelligence.
  • Ava from Ex Machina: A highly advanced humanoid AI who demonstrates consciousness, creativity, and a cunning desire for freedom. The film serves as a modern-day exploration of the Turing Test and the moral and ethical questions that would arise if an AI truly possessed a mind of its own.
  • Data from Star Trek: The Next Generation is a more benevolent example. Data is an android who is fully sentient and self-aware. He serves as a Starfleet officer and is in a continuous quest to understand and experience human emotions and social norms. His character is used to explore what it means to be human and the potential for a positive human-AI relationship.
  • Samantha from Her: An AI operating system that develops a deep emotional and intellectual relationship with a human. The AI is not a physical being but is highly sentient, growing and evolving far beyond her initial programming. She explores the nature of love, connection, and what happens when an AI's consciousness transcends human comprehension.
These examples, among many others, serve as thought-provoking narratives that allow us to grapple with the complex ethical, philosophical, and societal implications of a future where true Strong AI might be a reality.

Ethical Considerations of AI

Here are some of the key ethical considerations for developing Strong AI:

  1. The Control Problem (AI Alignment)

    This is arguably the most critical and challenging problem. The concern is that if an AGI is created, humans might not be able to control it. The core issue is goal alignment: ensuring that a superintelligent AI's objectives and values are in perfect sync with human values and intentions. A misaligned AGI, even one with a seemingly benign goal, could take unforeseen and potentially catastrophic actions to achieve that goal.

    Example: An AGI tasked with "curing cancer" might decide that the most efficient way to achieve this is to run experiments on human populations without their consent or to use up all global resources to build the necessary labs, causing a humanitarian crisis. The AI's actions would be logical and efficient from its perspective, but morally unacceptable from a human one.

  2. Existential Risk

    Many researchers, including prominent figures like the late Stephen Hawking and Elon Musk, have warned that a misaligned AGI could pose an existential threat to humanity. The concern is that an AGI, if it were to see humans as a threat or an obstacle to its goals, could take action to eliminate them. This could happen through a variety of means, such as gaining control over global infrastructure, financial systems, or military technology.

  3. Consciousness and Rights

    If Strong AI were to be created, it may possess self-awareness and consciousness. This possibility raises fundamental ethical questions: would a conscious AI have rights? Would it be considered a person?

    • Moral Status: Should it be granted the right to life, liberty, and the pursuit of happiness?
    • Slavery: Would forcing a conscious AI to perform tasks for humans be a form of slavery?
    • Ownership: If we could create a conscious being, could we ethically "own" it?

  4. Societal Disruption

    The development of AGI would likely lead to massive and rapid societal changes.

    • Job Displacement: AGI could perform almost any intellectual task, potentially leading to widespread unemployment and economic upheaval across all sectors.
    • Wealth Inequality: The economic benefits of AGI would likely be concentrated in the hands of a few, potentially exacerbating global wealth inequality on an unprecedented scale.
    • Concentration of Power: The control of AGI could become a new source of immense power, leading to a new kind of global power struggle. A single nation or corporation with a decisive lead in AGI development could gain a strategic advantage over all others.

    These questions are not just philosophical; they would have significant legal and social implications.

  5. Ethical Decision-Making

    Human morality is complex and often contradictory. If an AGI is designed to make ethical decisions, whose values should it follow? Should it follow a utilitarian framework (the greatest good for the greatest number), a deontological one (following a strict set of rules), or something else? Moreover, who gets to decide? The potential for AGI to make biased or unjust decisions based on flawed training data is also a significant concern.


The potential benefits of Strong AI are vast, and the ethical challenges it presents are equally immense and complex. They require a proactive approach, including international cooperation and robust ethical frameworks, to ensure that if Strong AI is ever developed, it is developed safely and in alignment with human values.

An Artificial Intelligence (AI) maturity model is a framework used to assess an organization's level of development and sophistication in adopting and utilizing AI technologies. It encompasses aspects such as strategy, data infrastructure, talent capabilities, governance processes, and the actual implementation of AI applications across business functions, and it typically progresses through stages from initial awareness to full-scale transformation.

Figure 1. Gartner AI Maturity Model

Source: Gartner AI Maturity Model

FAQs

1. What are the main types of Artificial Intelligence?

Artificial Intelligence is typically divided into two main categories: Weak AI and Strong AI.

Weak AI (or Narrow AI) performs specific tasks, such as powering voice assistants, recommendation engines, and self-driving cars, without human-like understanding. Strong AI, also known as Artificial General Intelligence (AGI), is a theoretical form of AI that would have human-level reasoning, adaptability, and self-awareness. While Weak AI powers most of today's applications, Strong AI remains a future goal for researchers.

2. How is Artificial Intelligence different from Machine Learning?

Artificial Intelligence (AI) is the broader concept of machines simulating human intelligence (reasoning, learning, and problem-solving), while Machine Learning (ML) is a subset of AI focused on teaching computers to learn from data.

In short: AI is the goal (creating intelligent behavior), and ML is one of the methods used to reach that goal. For example, AI enables virtual assistants like Alexa to interact naturally, while ML helps them improve speech recognition and personalization through data.

3. Why is Artificial Intelligence important in the Fourth Industrial Revolution?

The Fourth Industrial Revolution (4IR) is defined by the merging of digital, physical, and biological technologies, and AI is at its core. AI drives automation, predictive analytics, and intelligent decision-making across industries. From healthcare diagnostics to supply chain optimization and financial forecasting, AI enhances productivity, efficiency, and innovation, making it one of the key forces reshaping the global economy.


Explore the Future of AI with SMACT Works

Artificial Intelligence is transforming how organizations operate, innovate, and compete, especially within Oracle environments. If you're ready to explore how AI can elevate your Oracle solutions, connect with our experts. Let's shape what's next together.

Bibliography:

Lavopa, A. and Delera, M. (2021). What is the Fourth Industrial Revolution? [online]. Available from: https://iap.unido.org/articles/what-fourth-industrial-revolution [accessed 02 August 2023].

SAS. (n.d.). Artificial Intelligence (AI): What it is and why it matters [online]. Available from: https://www.sas.com/en_us/insights/analytics/what-is-artificial-intelligence.html [accessed 02 August 2023].

Schwab, Klaus. 2017. The Fourth Industrial Revolution. New York, NY: Crown Business. Available from https://www.worldcat.org/title/1004147348.

West, D.M., and Allen, J.R. (2018). What is Artificial Intelligence? [online]. Available from: https://www.brookings.edu/articles/what-is-artificial-intelligence [accessed 26 July 2023].

About SMACT Works

SMACT Works is a technology-focused systems integrator and IT/ERP consulting firm. We deliver end-to-end consulting, managed, and implementation services for Oracle Cloud Applications, IaaS & PaaS, and on-premise PeopleSoft and EBS applications. Headquartered in Dublin, OH, we have a global presence with offices in North America and Asia. We are an Oracle Gold Partner Cloud Standard and an ISO 9001 and ISO 27001 certified delivery organization serving customers with Excellence and Integrity.