What is a natural user interface (NUI)?

A natural user interface (NUI) is an interface that's designed to feel as natural as possible to users as they interact with a computing system or application. NUIs are intuitive, engaging and easy to use because they use natural human capabilities and behaviors, such as gestures, touch, eye movements and voice commands, during human-computer interactions. As a result, they lower the cognitive load for computer users and simplify and enhance the user experience (UX).

Purpose of a natural user interface

Traditional user interfaces rely on either textual commands or artificial graphical controls, like icons, menus and buttons, to enable humans to interact with the digital universe. Moreover, most of these interactions happen either through a keyboard or a mouse.

NUIs aim to remove as many artificial controls as possible, making interactions with digital or electronic devices and applications resemble natural interactions in the real world. They enable people to interact with computers through intuitive and highly familiar human actions, such as touch, gestures and handwriting, in the same way they interact with objects in physical space. NUIs seek to provide an experience that lets users interact with a computer without first learning an artificial control scheme, such as keyboard commands or programming syntax.

NUIs also require little or no understanding of how the underlying system works.

What's natural depends on the user's context and, therefore, may differ from one user to another. This is why it's difficult to create a NUI that feels natural and seamless to every user. Fortunately, many different types of NUIs are available. The ability of these NUIs to accept multimodal inputs increases user engagement and comfort and can enhance the digital experience for more users than is possible with traditional graphical user interfaces (GUIs).

Examples of natural user interfaces

Natural user interfaces are used in a variety of applications and devices. These include the following:

  • Touchscreens. Many computing devices, like smartphones, tablets, ATMs, self-service kiosks, point-of-sale terminals and interactive whiteboards, include touchscreens that let users interact with the device in more intuitive ways than older GUI-based devices. Users can swipe, tap or pinch the screen to achieve various effects and to consume content. Touchscreens are used in a variety of settings, including gaming, education, healthcare, industrial and automotive.
  • Virtual assistants. A virtual assistant, such as Amazon Alexa or Apple Siri, lets users communicate with applications by speaking to them. Powered by voice recognition technology, voice assistants take human voice input to perform a variety of tasks, such as sending emails, displaying the weather, showing a route and creating a travel itinerary. Some voice assistants can use inputs from past conversations to better understand the user and anticipate their needs to improve future interactions.
  • Chatbots. Conversational AI chatbots feature NUIs to interact with users. NUI applications, such as ChatGPT, use generative AI technology, and their underlying models are trained on human language. This enables them to understand users' questions or prompts and then provide personalized and tailored responses to each prompt. Some chatbots, particularly those used in customer service settings, also have voice control options that let users interact with the tech by speaking instead of typing.
  • Augmented reality/virtual reality. AR/VR technologies use natural user interfaces to accept user input in the form of natural human behaviors -- voice commands, eye movements, nods, hand gestures, etc. -- to create immersive environments for different applications.

    Gaming, movies and retail are among the most common applications that combine NUIs with AR/VR to engage with users and enhance their experiences. For example, the AR game Pokémon Go uses smartphones' built-in location tracking and Global Positioning System mapping capabilities to overlay a fun and engaging virtual world onto real-world surroundings. Users simply point their phone at a point in physical space to get a Pokémon to appear on the device screen and use a swiping motion to capture it.

    Apart from games, devices like the Apple Vision Pro headset also blend virtual and physical worlds to create immersive environments for entertainment, collaboration and work. These devices use spatial computing and eye tracking, as well as innovations like virtual displays and keyboards, to enhance the way users interact with and consume digital content and experiences.
  • Facial recognition. Facial recognition technology is used in multiple applications across various sectors. For example, many smartphones include facial recognition to allow users to unlock the device without having to type in a PIN. In secure areas, like military or government premises, facial recognition is used to authenticate users and prevent unauthorized users from gaining access to the premises. Other common applications of facial recognition include surveillance and crime prevention, border control, electronic Know Your Customer and fraud prevention in banks, client digital onboarding, event registration and cardless ATM transactions.
  • Brain-computer interfaces. A BCI facilitates thought-driven movement to reestablish communication between the brain and the body following a traumatic event, such as an accident or a stroke. Paraplegic or paralyzed patients are connected to the BCI system; once a signal to move is created in their brain, the system decodes the signal using AI. It then delivers stimulation to the spinal cord to restore thought-driven movement and enable the patient to slowly recover the ability to walk.
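The touchscreen gestures described above -- tap, swipe and pinch -- ultimately come down to software interpreting raw touch input. The following is a minimal sketch of that idea; the `TouchEvent` type, `classify_gesture` function and threshold value are illustrative, not any platform's real API.

```python
from dataclasses import dataclass

@dataclass
class TouchEvent:
    """A single touch sample: pointer id, position and timestamp (seconds)."""
    pointer: int
    x: float
    y: float
    t: float

def classify_gesture(events, move_threshold=10.0):
    """Classify a touch sequence as 'tap', 'swipe' or 'pinch'.

    A toy heuristic: two or more pointers -> pinch; one pointer that
    travels farther than move_threshold pixels -> swipe; otherwise tap.
    """
    pointers = {e.pointer for e in events}
    if len(pointers) >= 2:
        return "pinch"
    first, last = events[0], events[-1]
    distance = ((last.x - first.x) ** 2 + (last.y - first.y) ** 2) ** 0.5
    return "swipe" if distance > move_threshold else "tap"

# Example: one finger moving 120 px to the right reads as a swipe.
swipe = [TouchEvent(0, 10, 50, 0.00), TouchEvent(0, 130, 50, 0.25)]
print(classify_gesture(swipe))  # swipe
```

Real gesture recognizers are far more elaborate (velocity, pressure, multi-touch geometry), but the principle is the same: translating natural motion into discrete commands the system can act on.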
[Comparison chart: user experience vs. user interface. While the user experience and user interface are separate entities, the design of a UI directly impacts UX.]

Features of a natural user interface

Natural user interfaces have a few defining features, including the following:

  • Targeting of users' natural abilities. NUIs target the abilities and skills that come naturally to human users, such as hand gestures or eye movements.
  • Shallow learning curve. Users should not have to learn a lot about a system to use it. Interaction with the system should be intuitive.
  • Minimalism. It's common for NUIs to employ a minimalistic design to reduce cognitive load and make the device or app easier to use.
  • Input recognition. NUIs process input through a variety of discrete sensors and other recognition technologies, such as cameras, microphones and touchscreens. These technologies recognize human inputs such as voice, fingerprints and iris prints.
  • Immediate feedback. NUIs provide immediate feedback on user actions to make the experience feel responsive and natural. Feedback may be visual, auditory or haptic.
  • Adaptation. Many NUIs are smart products based on machine learning. These products understand and learn from user activity, context and other parameters.
  • Multimodality. Some NUIs support multiple modes of interaction, such as voice, touch and gestures. This flexibility delivers richer, more interactive and lower-friction digital experiences.
  • Accessibility. Accessibility is a key feature of modern NUIs. User-friendly interfaces and support for multimodal inputs enable different types of users to easily use the system.
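Two of the features above -- multimodality and immediate feedback -- can be sketched as a simple input dispatcher that routes each modality to its own handler and always returns a response. The modality names and handler functions here are illustrative assumptions, not a real framework.

```python
# A minimal sketch of multimodal input handling with immediate feedback.
# Handler names and modalities are illustrative.

def handle_voice(data):
    return f"Heard: {data}"

def handle_touch(data):
    return f"Touched at {data}"

HANDLERS = {"voice": handle_voice, "touch": handle_touch}

def dispatch(modality, data):
    """Route an input to its modality handler and return immediate feedback."""
    handler = HANDLERS.get(modality)
    if handler is None:
        return "Unsupported input"  # graceful fallback keeps the UX intact
    return handler(data)

print(dispatch("voice", "play music"))  # Heard: play music
print(dispatch("touch", (120, 40)))     # Touched at (120, 40)
```

A production NUI would add more modalities (gesture, gaze) and richer feedback channels (audio cues, haptics), but the pattern of accepting several input modes and acknowledging every action is the same.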

Command-line interface vs. graphical user interface vs. natural user interface

CLI, GUI and NUI all refer to the underlying technologies that enable a user to interact with a computing system.

With a command-line interface (CLI), the user interacts with the computer using text-based commands. CLI commands follow a specific syntax, so users must memorize them. There are no graphics or icons on a CLI, just text and a command line. CLIs need an artificial means of input, such as a keyboard. CLIs do not require as many system resources as the other two types, since the commands are text-based and, therefore, easy for the system to parse, process and act on.

CLIs are used for a variety of purposes. System admins commonly use CLIs to manage files, access logs and configure servers. They are also used in scripting and to set up many types of automation tasks related to data processing, data backup, system maintenance and more.
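The file-management and log-access tasks mentioned above look something like the following in a POSIX shell. The file names and paths are illustrative.

```shell
# Typical CLI admin tasks (POSIX shell); file names are illustrative.
printf 'INFO start\nERROR disk full\nINFO done\n' > app.log  # sample log
mkdir -p backups                  # create a backup directory
cp app.log backups/app.log.bak    # copy the log before rotating it
grep -c "ERROR" app.log           # count error lines in the log
tar -czf backups.tar.gz backups   # archive the backup directory
```

Each command is terse and must be typed with exact syntax, which illustrates both the power of the CLI for automation and the memorization burden it places on the user.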

With a GUI, the user interacts with a series of graphical elements, such as virtual windows, icons, menus and buttons, to use the digital device or application. There's no need to memorize strict syntax, write commands or use codified text inputs. Instead, all that's needed to interact with the system is an input device, such as a keyboard or a mouse.

Unlike CLIs, which provide only textual output, the GUI gives users visual feedback. The visual feedback enables users to see what's happening on the screen in real time and make any changes necessary to achieve their goals. Since GUIs use visual elements and include graphics and animations that must be processed before output can be produced, they require more system resources than CLIs. However, they require fewer resources than NUIs because the inputs are not dynamic or user-specific, such as voice or fingerprints.

NUIs improve upon the CLI and GUI. With a NUI, the user interacts with the computer through one or more natural actions, including touch, gesture, voice or eye movement. NUIs require little to no instruction or prior training, as users tend to know intuitively how to interact with NUI-based systems. These actions reduce the cognitive load on users and ensure a more intuitive and seamless UX.

The NUI also provides multiple forms of feedback to user actions, including visual, auditory and haptic feedback. This enables users to immediately understand if their actions were successful or, if necessary, to make changes. NUIs often require more system resources than GUIs and CLIs because they rely on various sensors and sensor data to record user input and translate it into instructions that the system can understand and process.

These three UIs can be used in conjunction with one another.

For example, the Windows operating system (OS) has a GUI with buttons, icons and windows. It also has a program called Windows Command Prompt (CMD), which provides a text-based CLI that enables users to interact with the OS. With CMD, users can run programs, create and delete files, navigate through folders and do a lot more to administer, maintain and troubleshoot Windows-based systems.

Apple's macOS has a similar CLI for interacting with the OS called Terminal. It takes in text-based commands to run various tasks, set up automations and execute scripts. Apple devices are also known for providing user-friendly, visually appealing GUIs.

Many Windows and Apple devices also include NUIs that adapt to users and deliver a highly responsive and personalized UX.

GUIs and NUIs also coexist on the mobile side. The two main mobile OSes -- Android and iOS -- include elements of both graphical and natural UIs. Thus, smartphone and tablet users can navigate menus and icons through GUIs. They can also use swiping motions, tap the screen, have the camera capture a picture of their face or even speak to the device -- all features of NUIs -- to get the system to do something for them.

History of natural user interfaces

The two main precursors of the NUI are text-based UIs (TUIs) and GUIs. In the 1960s, American engineer and innovator Douglas Engelbart invented the computer mouse. This invention and his demonstration of it set the stage for the development of GUIs.

Sketchpad, a pioneering interactive computer graphics program conceived by American engineer Ivan Sutherland in the early 1960s, also played a role in subsequent GUI research and development. Sketchpad enabled users to draw on a computer display, visualize program functions, model objects on a screen and manipulate their drawings in real time. It was one of the first programs to use a GUI and paved the way for modern computer-aided design software.

In the late 1960s and early 1970s, the Unix OS was developed at AT&T's Bell Laboratories. This invention, along with the inception of the Advanced Research Projects Agency Network and Ethernet, provided the technological underpinnings that enabled more innovations to emerge, including manufacturing control systems, the internet and GUIs that could deliver complex interactive UX.

The development of TUIs in the 1970s enabled friendlier human-computer interactions. Around the same time, computer scientist Alan Kay proposed the Dynabook concept -- a compact PC for children with a child-friendly GUI and a flat-screen display.

Also during this time, Xerox introduced the Alto, one of the world's first PCs to use a GUI with a mouse and icons. The Alto influenced many aspects of commercial computing, including icon-based GUIs and high-quality text and graphics, and it shaped the design of PCs that came almost a decade later, such as Apple's Lisa and Macintosh.

The Apple Macintosh, introduced in 1984, ushered in a new era of human-centric computing. Featuring a GUI with support for mouse-based inputs, this innovative device -- along with other innovations, like MacPaint software, the HyperCard hypermedia development tool, AppleCD Small Computer System Interface-based CD-ROM drives and LaserWriter laser printers -- further facilitated more tactile and direct interactions between users and computers. These products also paved the way for a more user-centered, graphics-focused approach to the design and development of computer hardware and software.

The development of GUIs accelerated in the 1990s and eventually led to the emergence of NUIs. In 1997, Apple's acquisition of NeXT gave it ownership of NeXTSTEP, an object-oriented, multitasking OS. NeXTSTEP was introduced in 1988 for the first NeXT Computer, and it emphasized intuitive interfaces and an object-oriented design. Apple introduced additional products in the 1990s that influenced modern UI design, notably the OpenDoc component-based framework for compound documents, Mac OS, the iMac all-in-one computer and the QuickTime media player.

Other innovations emerged in the 2000s that introduced even more user-friendly GUIs and NUIs, many developed by Apple. One such innovation was iTools, a suite of free internet-based tools for Mac users that extended the desktop UX to the online space. Apple also published the Human Interface Guidelines, a set of principles and recommendations that help developers create consistent and user-friendly UIs for different Apple platforms.

Since then, work on NUIs has accelerated at a brisk pace. New products and technologies have emerged that adapt to human behavior, ease human-digital interactions and enhance UX. These include voice assistants, like Apple Siri; iOS 7, featuring natural human gestures; the Apple Watch, facilitating glanceable interactions; and Touch ID and Face ID for easy, frictionless user authentication on Apple devices.

More recently, leaps in AI and powerful machine learning models have further expanded NUI development. Large language models (LLMs) trained on vast amounts of data are able to understand human intent, sensibilities and context. Advanced NUIs use these LLMs to predict and adapt to human needs and behaviors.

