
An explanation of AI buzzwords

In this video, TechTarget editor Sabrina Polin explains different AI buzzwords.

Getting dizzy with all the AI buzzwords being thrown around? Don't know the difference between ChatGPT and GPT-4? We can help with that.

Let's start from the top.

AI, or artificial intelligence, is broadly defined as machine systems that aim to simulate human intelligence. It's not one technology, but rather an umbrella term.

Machine learning, or ML, is a subset of AI. ML algorithms independently learn from and detect patterns in data, without being explicitly programmed. Many natural language processing -- or NLP -- applications, for example, rely on ML.
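To make that concrete, here's a minimal sketch of an ML approach to an NLP task -- sentiment classification -- in Python with scikit-learn. The tiny training set is invented for illustration; the point is that no rules are hand-written, and the model picks up word-sentiment patterns from labeled examples on its own.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labeled data -- invented purely for illustration
    texts = ["great product, loved it", "terrible, waste of money",
             "really happy with this", "awful experience, never again"]
    labels = ["positive", "negative", "positive", "negative"]

    # The model learns which words signal which sentiment from the examples;
    # nothing about "great" or "awful" is explicitly programmed.
    model = make_pipeline(CountVectorizer(), LogisticRegression())
    model.fit(texts, labels)

    print(model.predict(["what a great experience"]))  # -> ['positive']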

ML can be contrasted with an older, more primitive form of AI, called "symbolic" or "rule-based" AI -- the kind that beat world chess champion Garry Kasparov in 1997 -- which is limited to explicitly programmed if-then rules. Rule-based AI can also be used in NLP applications.
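For contrast, here's what the same sentiment task looks like in the rule-based style: every condition is programmed by hand, and the system can't handle anything its rules didn't anticipate. The word lists are, again, just illustrative.

    # Rule-based (symbolic) sentiment "AI": every pattern is an explicit
    # if-then rule -- nothing is learned from data.
    POSITIVE_WORDS = {"great", "happy", "loved"}
    NEGATIVE_WORDS = {"terrible", "awful", "waste"}

    def classify(text: str) -> str:
        words = set(text.lower().split())
        if words & POSITIVE_WORDS:
            return "positive"
        if words & NEGATIVE_WORDS:
            return "negative"
        return "unknown"  # rules only cover what we anticipated

    print(classify("what a great experience"))  # -> positive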

Continuing on, deep learning is a subset of machine learning. Deep learning uses layers of neural networks for information processing, with each layer learning increasingly complex representations of data.
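As a rough sketch of what "layers" means in practice, here's a small feed-forward network defined with PyTorch. The layer sizes are arbitrary examples; the idea is that each successive layer transforms the previous layer's output into a more abstract representation.

    import torch.nn as nn

    # A small stack of layers: data flows through them in order, and each
    # layer re-represents the output of the one before it.
    model = nn.Sequential(
        nn.Linear(784, 256),  # layer 1: raw input -> simple features
        nn.ReLU(),
        nn.Linear(256, 64),   # layer 2: simple features -> more abstract ones
        nn.ReLU(),
        nn.Linear(64, 10),    # layer 3: abstract features -> output scores
    )
    print(model)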

So those are the basics. Let's get into how this all plays with the buzzwords we've been inundated with.

Generative AI refers to a class of algorithms that produce content like text, photos, audio and even video. Generative AI largely uses deep learning, along with other ML techniques.

These next terms aren't inherently a subset of any one category above but are necessary to understand before we go any further.

"Foundation model" is a general term for any "off the shelf" AI model that can be fine-tuned for a range of tasks, depending on your goal.

Generative models can have different architectures, depending on the task at hand. Examples include generative adversarial networks, variational autoencoders, recurrent neural networks and transformers -- which we'll dig into next.

Transformer models are well suited to language-related tasks, like text summarization and translation. In recent years, transformers have become the leading architecture for many cutting-edge AI language models.
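You can try a pretrained transformer yourself without training anything. Here's a minimal sketch using Hugging Face's transformers library, which by default downloads a transformer model fine-tuned for summarization; the input text and length settings are just examples.

    from transformers import pipeline

    # Downloads a pretrained transformer summarization model on first run
    summarizer = pipeline("summarization")

    article = ("Transformer models process whole sequences at once using "
               "attention, which lets them weigh how much each word matters "
               "to every other word. This made them the dominant architecture "
               "for language tasks like summarization and translation.")

    print(summarizer(article, max_length=30, min_length=10))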

Which brings us to large language models -- or LLMs: a class of foundation model that typically uses a transformer architecture. LLMs are trained on vast amounts of text data and can serve as a base for specific applications, such as understanding, summarizing and generating text-based content. Examples of LLMs include GPT-3, 3.5 and 4; PaLM; LaMDA; and BERT.
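You can also poke at a small, older LLM directly. GPT-2 is freely downloadable and runs on a laptop -- the same basic idea as GPT-4, at a vastly smaller scale. A quick sketch, again via the transformers library:

    from transformers import pipeline

    # GPT-2: a small, older LLM that continues whatever text you give it
    generator = pipeline("text-generation", model="gpt2")
    print(generator("Large language models are", max_new_tokens=20))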

And finally, we have chatbots. These are the user-facing interfaces that make it possible to use an LLM for content creation. Examples include ChatGPT, Bard and Claude. So, for example, when you prompt ChatGPT to help you write an essay, a meal plan or anything else, ChatGPT generates an answer on the back end by accessing the LLM GPT-4 -- a foundation model that uses a transformer architecture -- and spits it back to you in the chat box.
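That request-response loop is easy to see in code. Here's a minimal sketch of the same pattern using OpenAI's Python client -- it assumes the openai package is installed and an API key is set in your environment, and the model name is just an example.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The "chatbot" part: package the user's prompt as a message and send
    # it to the hosted LLM; the model generates the reply on the back end.
    response = client.chat.completions.create(
        model="gpt-4",  # example model name
        messages=[{"role": "user", "content": "Help me plan a week of meals."}],
    )

    print(response.choices[0].message.content)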

So, how'd we do? Any lingering questions on how all these AI terms and tech relate to each other? Drop them in the comments below and remember to subscribe for more developments in AI.

Lev Craig covers AI and machine learning as the site editor for TechTarget Editorial's Enterprise AI site. Craig graduated from Harvard University with a bachelor's degree in English and has previously written about enterprise IT, software development and cybersecurity.
