decoupled architecture
1. In general, a decoupled architecture is an approach to designing complex systems in which components remain autonomous and largely unaware of one another. Cloud computing is sometimes said to have a decoupled architecture because the cloud provider manages the physical infrastructure but not the applications or data hosted on it. (A brief software sketch of this general sense follows the second definition below.)
2. In computing, the term decoupled architecture also describes a processor design that uses a buffer to separate the instruction-fetch and decode stages from the execution stage. This allows each unit to carry out its work independently of the others, so the memory-access side of the processor can run ahead of the execution side.
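To illustrate the first, more general sense, the following is a minimal sketch, not a prescribed implementation; the order/billing service names and the use of Python's standard queue module are assumptions made for illustration. The two components share only a message queue and never reference each other directly.

```python
# Minimal sketch of decoupled software components: the producer and the
# consumer know nothing about each other, only about the shared queue.
import queue
import threading

message_bus = queue.Queue()          # the only thing the two components share

def order_service():
    """Publishes events without knowing who (if anyone) consumes them."""
    for order_id in range(3):
        message_bus.put({"event": "order_placed", "id": order_id})
    message_bus.put(None)            # sentinel: no more events

def billing_service():
    """Consumes events without knowing who produced them."""
    while (event := message_bus.get()) is not None:
        print("billing order", event["id"])

producer = threading.Thread(target=order_service)
consumer = threading.Thread(target=billing_service)
producer.start(); consumer.start()
producer.join(); consumer.join()
```

Because the queue is the only contract between them, either component can be replaced or scaled without changing the other.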
The buffer in a decoupled architecture separates the program's memory-access and execute functions. The buffer exploits the parallelism between the two to achieve high performance while largely preventing the execution stage from "seeing" memory latency.
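As a rough illustration of this idea, the toy simulation below is an assumption-laden sketch rather than a hardware model: the latency value, buffer size, and unit names are invented for the example. An "access unit" fetches operands into the buffer while an "execute unit" consumes them, so the simulated memory latency overlaps with computation instead of stalling it.

```python
# Toy simulation of a decoupled access/execute pipeline: the access thread
# keeps the buffer populated so the execute thread rarely waits on "memory".
import queue
import threading
import time

MEMORY_LATENCY = 0.01                 # pretend each load takes 10 ms
buffer = queue.Queue(maxsize=8)       # the decoupling buffer between the units

def access_unit(addresses):
    """Fetches operands from 'memory' and queues them for the execute unit."""
    for addr in addresses:
        time.sleep(MEMORY_LATENCY)    # simulated memory latency
        buffer.put(addr * 2)          # the value 'loaded' from that address
    buffer.put(None)                  # signal the end of the stream

def execute_unit():
    """Consumes operands as soon as they are available and computes on them."""
    total = 0
    while (value := buffer.get()) is not None:
        total += value                # the 'execute' work
    print("result:", total)

fetcher = threading.Thread(target=access_unit, args=(range(100),))
executor = threading.Thread(target=execute_unit)
fetcher.start(); executor.start()
fetcher.join(); executor.join()
```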
In theory, a larger buffer can increase throughput. However, larger buffers generate more heat and take up more chip space. In addition, the entire buffer may need to be flushed after a branch misprediction, wasting clock cycles and reducing the effectiveness of the decoupled architecture. For these reasons, general-purpose processors typically rely on out-of-order and multithreaded designs instead.
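The flush cost can be sketched in the same toy style; the names and sizes here are again illustrative assumptions. When the predicted path turns out to be wrong, everything already fetched into the buffer is discarded and must be fetched again along the correct path.

```python
# Sketch of the buffer flush after a branch misprediction: all speculatively
# prefetched entries are thrown away, wasting the work that filled them.
from collections import deque

BUFFER_SIZE = 16
buffer = deque(maxlen=BUFFER_SIZE)    # the decoupling buffer

def prefetch(path):
    """Fill the buffer with operands fetched along the predicted path."""
    for addr in path:
        buffer.append(addr)

prefetch(range(100, 100 + BUFFER_SIZE))      # speculate down the predicted path
branch_actually_taken = False                # ...but the prediction was wrong

if not branch_actually_taken:
    wasted = len(buffer)
    buffer.clear()                           # flush: all prefetched work is lost
    prefetch(range(200, 200 + BUFFER_SIZE))  # refetch along the correct path
    print(f"flushed {wasted} prefetched entries after a misprediction")
```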
Decoupled architectures are typically used in very long instruction word (VLIW) architectures. Because they handle control-intensive code poorly, such as the nested branches found in operating system kernels, decoupled architectures are not used in general-purpose computing.
See also: coupling, loose coupling, multithreading, thread-safe