pipelining
What is pipelining?
Pipelining is the process of storing and prioritizing computer instructions that the processor executes. The pipeline is a "logical pipeline" that lets the processor perform an instruction in multiple steps. The processing happens in a continuous, orderly, somewhat overlapped manner.
In computing, pipelining is also known as pipeline processing. It is sometimes compared to a manufacturing assembly line in which different parts of a product are assembled simultaneously, even though some parts may have to be assembled before others. Even if there is some sequential dependency, many operations can proceed concurrently, which facilitates overall time savings.
Pipelining creates and organizes a pipeline of instructions the processor can execute in parallel.
The pipeline is divided into logical stages connected to each other to form a pipelike structure. Instructions enter from one end and exit from the other. Pipelining is an ongoing, continuous process in which new instructions, or tasks, are added to the pipeline and completed tasks are removed at a specified time after processing completes. The processor executes all the tasks in the pipeline in parallel, giving them the appropriate time based on their complexity and priority. Any tasks or instructions that require processor time or power due to their size or complexity can be added to the pipeline to speed up processing.
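To make the time savings concrete, here is a small, hypothetical back-of-the-envelope calculation in Python. It assumes an idealized pipeline in which every stage takes exactly one clock cycle and no instruction ever has to wait, which real processors do not achieve.

```python
# Hypothetical illustration: cycle counts with and without pipelining,
# assuming an idealized pipeline where every stage takes one cycle
# and there are no stalls.

STAGES = 4            # e.g., fetch, decode, execute, write back
NUM_INSTRUCTIONS = 10

# Without a pipeline, each instruction occupies the processor for all
# of its stages before the next one can start.
sequential_cycles = NUM_INSTRUCTIONS * STAGES

# With a pipeline, the first instruction takes STAGES cycles to fill the
# pipe; after that, one instruction completes every cycle.
pipelined_cycles = STAGES + (NUM_INSTRUCTIONS - 1)

print(f"Sequential: {sequential_cycles} cycles")   # 40
print(f"Pipelined:  {pipelined_cycles} cycles")    # 13
print(f"Speedup:    {sequential_cycles / pipelined_cycles:.2f}x")
```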
How pipelining works
Without a pipeline, the processor would get the first instruction from memory and perform the operation it calls for. It would then get the next instruction from memory and so on. While fetching the instruction, the arithmetic part of the processor is idle, which means it must wait until it gets the next instruction. This delays processing and introduces latency.
With pipelining, the next instructions can be fetched even while the processor is performing arithmetic operations. These instructions are held in a buffer close to the processor until the operation for each instruction is performed. This staging of instruction fetching happens continuously, increasing the number of instructions that can be performed in a given period.
Within the pipeline, each task is subdivided into multiple successive subtasks. A pipeline phase is defined for each subtask to execute its operations. Like a manufacturing assembly line, each stage or segment receives its input from the previous stage and then transfers its output to the next stage. The process continues until the processor has executed all the instructions and all subtasks are completed.
In the pipeline, each segment consists of an input register that holds data and a combinational circuit that performs operations on it. The output of the circuit is then applied to the input register of the next segment of the pipeline, as in the sketch below.
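The following is a minimal, hypothetical Python model of that structure, not real hardware: each stage is an ordinary function standing in for a combinational circuit, and a list of registers holds the value each stage will consume on the next clock tick.

```python
# Illustrative model of a pipeline as input registers feeding
# combinational stages. Each clock tick, every stage reads its input
# register, computes, and writes the result into the next register.

def stage1(x): return x + 1      # placeholder "combinational circuits"
def stage2(x): return x * 2
def stage3(x): return x - 3

stages = [stage1, stage2, stage3]
registers = [None] * (len(stages) + 1)   # registers[0] is the pipeline input

inputs = [10, 20, 30, 40]
results = []

for tick in range(len(inputs) + len(stages)):
    # Shift data through the pipeline from the last stage backwards so
    # each value moves exactly one segment per clock tick.
    for i in reversed(range(len(stages))):
        if registers[i] is not None:
            registers[i + 1] = stages[i](registers[i])
        else:
            registers[i + 1] = None
    # Feed a new item into the first input register, if any remain.
    registers[0] = inputs[tick] if tick < len(inputs) else None
    # Whatever reaches the final register has completed all stages.
    if registers[-1] is not None:
        results.append(registers[-1])

print(results)   # [19, 39, 59, 79]
```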
Types of pipelines
There are two types of pipelines in computer processing.
Instruction pipeline
The instruction pipeline represents the stages in which an instruction is moved through the various segments of the processor, starting from fetching and then buffering, decoding and executing. One segment reads instructions from the memory, while, simultaneously, previous instructions are executed in other segments. Since these processes happen in an overlapping manner, the throughput of the entire system increases. The pipeline's efficiency can be further increased by dividing the instruction cycle into equal-duration segments.
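One way to visualize the overlap is to chart which stage each instruction occupies on each clock cycle. The sketch below assumes a hypothetical four-stage instruction pipeline -- fetch, decode, execute, write back -- with no stalls.

```python
# Prints a cycle-by-cycle occupancy chart for an idealized four-stage
# instruction pipeline: each instruction enters one cycle after the
# previous one and advances one stage per cycle.

STAGE_NAMES = ["F", "D", "E", "W"]   # fetch, decode, execute, write back
NUM_INSTRUCTIONS = 5

total_cycles = NUM_INSTRUCTIONS + len(STAGE_NAMES) - 1

header = "       " + " ".join(f"c{c + 1:<2}" for c in range(total_cycles))
print(header)

for instr in range(NUM_INSTRUCTIONS):
    row = []
    for cycle in range(total_cycles):
        stage = cycle - instr        # which stage this instruction is in
        if 0 <= stage < len(STAGE_NAMES):
            row.append(f"{STAGE_NAMES[stage]:<3}")
        else:
            row.append("   ")
    print(f"instr{instr + 1} " + " ".join(row))
```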
Arithmetic pipeline
The arithmetic pipeline represents the parts of an arithmetic operation that can be broken down and overlapped as they are performed. It can be used for arithmetic operations such as floating-point operations and multiplication of fixed-point numbers. Registers are used to store any intermediate results that are then passed on to the next stage for further processing.
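As an illustration, a floating-point addition is often described as a sequence of substeps such as comparing exponents, aligning mantissas, adding and normalizing. The hypothetical Python sketch below breaks one addition into those substeps; in a real arithmetic pipeline, each substep would be a hardware stage with its own intermediate register.

```python
# Illustrative decomposition of a floating-point addition into the kind
# of substeps an arithmetic pipeline would implement as separate stages.
# Numbers are represented as (mantissa, exponent) pairs, value = m * 10**e.

def compare_exponents(a, b):
    # Stage 1: pick the larger exponent as the common one.
    return max(a[1], b[1])

def align_mantissas(a, b, exp):
    # Stage 2: shift mantissas so both share the common exponent.
    ma = a[0] / (10 ** (exp - a[1]))
    mb = b[0] / (10 ** (exp - b[1]))
    return ma, mb

def add_mantissas(ma, mb):
    # Stage 3: add the aligned mantissas.
    return ma + mb

def normalize(m, exp):
    # Stage 4: renormalize so the mantissa is in [1, 10).
    while abs(m) >= 10:
        m /= 10
        exp += 1
    while 0 < abs(m) < 1:
        m *= 10
        exp -= 1
    return m, exp

a = (9.5, 3)    # 9.5 * 10^3 = 9500
b = (5.0, 2)    # 5.0 * 10^2 = 500

exp = compare_exponents(a, b)
ma, mb = align_mantissas(a, b, exp)
m = add_mantissas(ma, mb)
print(normalize(m, exp))    # (1.0, 4), i.e. 1.0 * 10^4 = 10000
```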
Advantages of pipelining
The biggest advantage of pipelining is that it reduces the processor's cycle time. This is because it can process more instructions simultaneously, while reducing the delay between completed instructions. Although pipelining doesn't reduce the time taken to perform an instruction -- this would still depend on its size, priority and complexity -- it does increase the processor's overall throughput.
Furthermore, pipelined processors usually operate at a higher clock frequency than the RAM clock frequency, since each stage performs only a fraction of an instruction's work in a given cycle.
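The throughput gain can be quantified with the same idealized assumptions as before: k stages and n instructions take roughly k + (n - 1) cycles when pipelined, instead of n * k cycles when not. The short sketch below shows that, under those assumptions, the speedup approaches the number of stages as the instruction stream grows, even though each individual instruction still takes k cycles.

```python
# Illustrative calculation: the idealized speedup from pipelining
# approaches the number of stages as the instruction stream grows,
# assuming one-cycle stages and no stalls.

STAGES = 5

for n in (1, 10, 100, 1000, 10_000):
    unpipelined = n * STAGES
    pipelined = STAGES + (n - 1)
    print(f"{n:>6} instructions: speedup = {unpipelined / pipelined:.2f}x")
```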
Possible issues in pipelines
Although processor pipelines are useful, they are prone to certain problems that can affect system performance and throughput. Two such issues are data dependencies and branching.
Data dependencies
A data dependency happens when an instruction in one stage depends on the results of a previous instruction but that result is not yet available. This can happen when the needed data has not yet been stored in a register by a preceding instruction because that instruction has not yet reached that step in the pipeline.
Because the required data has not yet been written to the register, the following instruction must wait until it is. This waiting causes the pipeline to stall. At the same time, several empty instructions, or bubbles, go into the pipeline, slowing it down even more.
The data dependency problem can affect any pipeline. However, it affects long pipelines more than shorter ones because, in the former, it takes longer for an instruction to reach the register-writing stage.
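The sketch below is a simplified, hypothetical model of a read-after-write dependency. Each instruction names the register it writes and the registers it reads, a result is assumed to become available a fixed number of cycles after its instruction issues, and an instruction that needs a value earlier than that stalls, inserting bubble cycles.

```python
# Simplified model of read-after-write (RAW) stalls. Each instruction
# writes one register and reads zero or more. A result is assumed to be
# available WRITE_LATENCY cycles after the instruction issues; a later
# instruction that needs it earlier must stall, creating bubbles.

WRITE_LATENCY = 3   # assumed cycles until a result is written back

# (destination register, source registers)
program = [
    ("r1", []),           # r1 = load ...
    ("r2", ["r1"]),       # r2 = r1 + ...   depends on r1
    ("r3", ["r2"]),       # r3 = r2 * ...   depends on r2
    ("r4", []),           # independent instruction
]

ready_at = {}       # register -> cycle its value becomes available
cycle = 0
bubbles = 0

for dest, sources in program:
    # Earliest cycle at which all source registers are available.
    earliest = max([ready_at.get(r, 0) for r in sources] + [cycle])
    bubbles += earliest - cycle           # stall cycles inserted
    cycle = earliest + 1                  # issuing takes one cycle
    ready_at[dest] = earliest + WRITE_LATENCY

print(f"Issued {len(program)} instructions in {cycle} cycles "
      f"with {bubbles} bubble cycles")
```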
Branching
Branch instructions can be problematic in a pipeline if a branch is conditional on the results of an instruction that has not yet completed its path through the pipeline. If the current instruction is a conditional branch, the processor cannot know which instruction to fetch next until the branch is resolved, because the values it needs have not yet been written into the registers.
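A rough way to see the cost is to count the wrong-path instructions that must be discarded when a branch is resolved late. The hypothetical sketch below assumes the branch outcome is known only after a fixed number of stages and that the processor keeps fetching the fall-through path in the meantime; real processors reduce this penalty with branch prediction.

```python
# Hypothetical illustration of a branch (control) hazard: if the branch
# direction is known only at stage BRANCH_RESOLVE_STAGE, every taken
# branch wastes that many already-fetched instructions (they are flushed).

BRANCH_RESOLVE_STAGE = 3      # assumption: branch resolved in stage 3

# Program trace: True marks a taken conditional branch.
trace = [False, False, True, False, True, False, False, True]

cycles = 0
flushed = 0

for is_taken_branch in trace:
    cycles += 1                       # one cycle to issue the instruction
    if is_taken_branch:
        # Wrong-path instructions already in the pipe must be discarded.
        cycles += BRANCH_RESOLVE_STAGE
        flushed += BRANCH_RESOLVE_STAGE

print(f"{len(trace)} instructions, {cycles} cycles, "
      f"{flushed} wrong-path instructions flushed")
```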
Other possible issues during pipelining
In addition to data dependencies and branching, pipelines can also suffer from problems related to timing variations and data hazards. Delays can occur because of timing variations among the pipeline stages, since different instructions take different amounts of time to process. Data hazards arise when multiple instructions are in partial execution and they all reference the same data, which can lead to incorrect results. A third problem relates to interrupts, which affect execution by inserting unwanted instructions into the instruction stream.
Superpipelining and superscalar pipelining
Superpipelining and superscalar pipelining are ways to increase processing speed and throughput.
Superpipelining means dividing the pipeline into a larger number of shorter stages, which increases its speed. Instructions complete at the rate at which the stages are completed. In a pipeline with seven stages, each stage takes about one-seventh of the time required by an instruction in a nonpipelined processor or single-stage pipeline. In theory, it could be seven times faster than a pipeline with one stage, and it is definitely faster than a nonpipelined processor.
Superscalar pipelining means multiple pipelines work in parallel. This can be done by replicating the internal components of the processor, which enables it to launch multiple instructions in some or all its pipeline stages.
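Under the same idealized, no-stall assumptions as earlier, the sketch below contrasts the two approaches: superpipelining shortens the clock period by adding stages, while a superscalar design completes several instructions per cycle by duplicating pipelines. The stage counts, cycle times and issue width are illustrative, not figures for any real processor.

```python
import math

# Idealized time to complete n instructions, in arbitrary time units.
# Baseline: a 5-stage pipeline whose cycle time is 1 unit.

def pipelined_time(n, stages, cycle_time, width=1):
    # Pipeline fill for the first instruction, then `width` completions per cycle.
    cycles = stages + math.ceil(n / width) - 1
    return cycles * cycle_time

n = 1000
baseline      = pipelined_time(n, stages=5,  cycle_time=1.0)            # plain 5-stage pipe
superpipeline = pipelined_time(n, stages=10, cycle_time=0.5)            # more, shorter stages
superscalar   = pipelined_time(n, stages=5,  cycle_time=1.0, width=2)   # two parallel pipes

print(f"baseline:       {baseline:.1f} time units")       # 1004.0
print(f"superpipelined: {superpipeline:.1f} time units")   # 504.5
print(f"superscalar:    {superscalar:.1f} time units")     # 504.0
```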
Learn about parallel processing; explore how CPUs, GPUs and DPUs differ; and understand multicore processors.