Network timing: Everything you need to know about NTP

As more applications use IP networks, reliable distribution of authoritative timing information is becoming critical.

In the first part of a two-part series, network engineer Andrew Gallo takes some time to talk about time: specifically, the need for authoritative timing information across data networks. Part one covers the Network Time Protocol and its role; the second part will examine the IEEE's Precision Time Protocol (PTP), sometimes referred to as IEEE 1588, and the ITU's Synchronous Ethernet.

Distribution of timing information -- both time-of-day and frequency reference -- across data networks is one of those low-level, behind-the-scenes functions that rarely gains much attention, even among network operators, whose own work is largely behind the scenes. Yet, as more and more applications use IP networks, reliable distribution of authoritative network timing information is becoming critical.

First, let's sort out the differences between time-of-day and frequency synchronization. It is important for computers, servers, networking elements and other information and communications technology devices to agree on "what time it is." This is the function of the Network Time Protocol (NTP). NTP is an Internet Engineering Task Force standard, currently specified by RFC 5905, designed to keep real-time clocks (sometimes referred to as wall clocks) that are found in various network devices in synchronization.

Frequency synchronization is a less common network function, having become less important as Ethernet eclipsed time-division-based protocols such as SONET/SDH, T1s and digital PBXs. Frequency synchronization, in contrast to NTP, isn't concerned with answering the question, "What time is it?" Instead, it aims to provide a stable, accurate frequency reference -- a metronome-like service of measured ticks. This permits distributed transmitters and receivers -- along with scientific instruments -- to coordinate activities that need to occur simultaneously.

With more real-time applications using IP-over-Ethernet-based networks, and the improvement in the elements carrying that traffic, high-precision frequency synchronization over data networks is becoming an option for organizations that need it.

NTP: What makes it tick?

NTP was originally standardized in RFC 958 in 1985, with the current version (v4) defined in RFC 5905. It uses a client/server architecture with a flexible hierarchical topology, and it supports unicast time requests as well as broadcast and multicast distribution.


At the top of the NTP hierarchy is a server connected to, or getting time directly from, a primary reference source such as the Global Positioning System (GPS) or Galileo (the European Union's global navigation satellite system). This machine is referred to as a stratum-1 NTP server. At each hop through an NTP server, the stratum number increases, up to a maximum of 16, with 16 being unusable (or "insane").

Router hops through the IP network do not affect the NTP stratum level; only hops through NTP servers do.

(A note on terminology: NTP stratum and SONET stratum refer to different characteristics. In the time-division multiplexing world, stratum reflects the quality of a clock -- its ability to remain in sync when free-running, or disconnected from its clock source. In NTP, it refers to the number of server hops away from the primary reference source.)
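To see these values in practice, here is a minimal sketch that queries a public NTP server and prints its stratum, along with the client's estimated offset and delay. It assumes the third-party Python ntplib package is installed; pool.ntp.org is a placeholder for whichever server you actually use.

```python
# Minimal sketch: query an NTP server and report its stratum, offset and delay.
# Assumes the third-party "ntplib" package (pip install ntplib); pool.ntp.org is a placeholder.
import ntplib

client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=4)

print(f"stratum: {response.stratum}")        # NTP server hops from the primary reference source
print(f"offset:  {response.offset:.6f} s")   # estimated difference between local and server clocks
print(f"delay:   {response.delay:.6f} s")    # round-trip delay to the server
```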

NTP distributes Coordinated Universal Time (UTC), leaving conversion into local time zones up to the clients. Under ideal conditions, NTP maintains an accuracy of better than 1 millisecond. Under challenging conditions, its accuracy can degrade to a few dozen milliseconds, or even as much as 200 or 300 milliseconds.

The actual mechanism by which a client synchronizes its clock to an NTP source can get quite complicated. In a nutshell, the client sends a request to a configured server with its current time stored in the payload of the request. The server responds with its own timestamps. This exchange happens periodically, with the client calculating delay (round-trip time to the server), offset (the difference between its own clock and the server's) and jitter, or dispersion (the variation among multiple samples). The client chooses the best-performing servers and uses their time to discipline its local oscillator -- in other words, to keep the real-time clock synchronized to an authoritative reference clock.
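The core of that exchange is simple arithmetic over four timestamps, as defined in RFC 5905: the client's transmit time (T1), the server's receive time (T2), the server's transmit time (T3) and the client's receive time (T4). Below is a minimal sketch of that calculation; the timestamp values are illustrative, not real measurements.

```python
# Sketch of the NTP offset/delay arithmetic from RFC 5905.
# t1 = client transmit, t2 = server receive, t3 = server transmit, t4 = client receive.

def ntp_offset_delay(t1: float, t2: float, t3: float, t4: float) -> tuple[float, float]:
    """Return (offset, delay) in seconds for one client/server exchange."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # how far the client clock is from the server's
    delay = (t4 - t1) - (t3 - t2)            # round-trip network delay, minus server processing time
    return offset, delay

# Illustrative timestamps: client clock about 15 ms behind, roughly 40 ms round trip.
t1, t2, t3, t4 = 1000.000, 1000.035, 1000.036, 1000.041
offset, delay = ntp_offset_delay(t1, t2, t3, t4)
print(f"offset = {offset * 1000:.1f} ms, delay = {delay * 1000:.1f} ms")   # offset = 15.0 ms, delay = 40.0 ms
```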

Network timing used to depend on time-division multiplexing

In the past, network timing meant dealing with time-division multiplexing technologies such as T1s, E1s and SONET/SDH. These transmission technologies mediated access to shared facilities through time slicing. For example, a T1 provided for 24 voice calls between telephone switches by giving each user a 64 Kbps slice of a 1.544 Mbps signal, transmitting 8,000 frames per second. Each frame consisted of 192 data bits plus 1 framing bit. In order for a receiver to know when a bit started and stopped, it had to lock onto the incoming bit stream of the data or payload bits themselves; there was no extra information in the frame -- or between frames -- to provide this function. Frames were sent continuously, even when no user data was being transmitted. As a result, T1 networks had to be designed very carefully to ensure proper frequency agreement throughout the network. Additionally, encoding tricks were needed to ensure that -- in the event of a long string of zeros -- enough ones were sent so the receiver's clock stayed locked. Traffic between ports on the same device, say T1s on a telephone switch, shared a common clock so a voice call could be transmitted through the network error-free and without buffering.
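The T1 arithmetic works out exactly, and the quick check below reproduces it: 24 channels of 64 Kbps account for 1.536 Mbps of voice payload, and adding the framing bit to each of the 8,000 frames per second yields the 1.544 Mbps line rate.

```python
# Quick check of the T1 arithmetic described above.
channels = 24           # voice channels per T1
channel_rate = 64_000   # bits per second per channel (8,000 samples/s x 8 bits)
frames_per_sec = 8_000
bits_per_frame = channels * 8 + 1                # 192 payload bits + 1 framing bit = 193

payload_rate = channels * channel_rate           # 1,536,000 bit/s of voice payload
line_rate = frames_per_sec * bits_per_frame      # 1,544,000 bit/s on the wire

print(payload_rate, line_rate)                   # 1536000 1544000
```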

One of the reasons IP over Ethernet won out over TDM-based networks, even in the wide area, was the ease of deployment: Network-wide synchronization was not needed. Ethernet is a shared, multi-access network that runs asynchronously. There is no need for a central "clock" (really, a frequency source), which makes receivers cheaper to build and networks easier to design and deploy. Still, a receiver needs to know when bits start and stop. To solve this problem, each Ethernet frame includes a preamble of 7 octets -- 56 bits -- of alternating ones and zeros, which allows the receiver's clock to lock on and stabilize to the incoming bit stream. Ethernet receivers therefore synchronize on a per-frame basis rather than a priori. Multiple ports on an Ethernet switch maintain independent oscillators, and traffic passing from one port to another is buffered, if necessary, in order to pass between timing domains.
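As an illustration, the short sketch below builds the 56-bit alternating pattern the preamble carries in transmission order; it is a toy example, not something a host normally generates in software, since the preamble is added and stripped by the network hardware.

```python
# Toy illustration of the 7-octet (56-bit) Ethernet preamble: alternating ones and zeros
# sent ahead of every frame so the receiver's clock can lock onto the bit stream.
# (In practice the preamble is generated and stripped by the NIC hardware.)
PREAMBLE_OCTETS = 7

preamble_bits = "10" * (PREAMBLE_OCTETS * 8 // 2)   # "1010..." in transmission order

print(len(preamble_bits))   # 56
print(preamble_bits)        # 10101010101010101010101010101010101010101010101010101010
```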

Desirable to keep IP network nodes set to same time

While the nodes on an IP network do not require lock-step frequency synchronization, it is desirable to keep their time-of-day clocks set to the same time. The benefits are obvious: Comparing log files for troubleshooting or incident investigation requires a common clock, and distributed databases, including things like Active Directory, use timestamps as one of the mechanisms to resolve simultaneous changes to the same object.

In part two, a look at two other timing initiatives: the Precision Time Protocol and Synchronous Ethernet.

About the author:
Andrew Gallo is a Washington, D.C.-based senior information systems engineer and network architect responsible for design and implementation of the enterprise network for a large university.
