Computers usually use two kinds of clocks. The simplest is tied to the incoming AC power line (110 volts in the U.S., 220 volts in Europe). Every voltage cycle causes an interrupt, so interrupts arrive at the line frequency: 60 times per second in the U.S., 50 in Europe.
The other kind is based on a crystal oscillator. Voltage is applied to a piece of quartz crystal in such a way that it emits a signal at a precise frequency, usually between 5 and 100 MHz (5 million to 100 million cycles per second). The signal is fed to a counter which, after a preset number of ticks, sends an interrupt to the CPU.
A 1 MHz clock sends a signal every microsecond, or every millionth of a second.
The number of bits in the counter (which is a register) is important, as an n-bit register can accumulate only 2^n ticks before flipping back to zero. A 32-bit counter accumulating ticks from a 60 Hz clock will overflow in about two years.
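The overflow arithmetic is easy to check. A quick sketch (the 32-bit and 60 Hz figures are from the text; the conversion to years is ordinary arithmetic):

```python
def overflow_time_seconds(counter_bits: int, tick_hz: float) -> float:
    """Seconds until an n-bit tick counter wraps back to zero."""
    return (2 ** counter_bits) / tick_hz

# A 32-bit counter fed by a 60 Hz clock:
seconds = overflow_time_seconds(32, 60)
years = seconds / (365 * 24 * 3600)  # roughly 2.27 years
```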
Programmable clocks are clocks whose interrupt rate can be controlled by software.
A clock driver for an operating system keeps the time of day and provides timers for programs and the CPU scheduler.
From Andrew S. Tanenbaum's "Modern Operating Systems (3rd Edition)"
Synchronization is done through an exchange of packets between an NTP time server and the computer, known here as the client. The client sends a request packet to the server, which includes the time the packet was sent (called the "originate timestamp"). When the server receives the packet, it sends back another packet with the time it received that packet (the "receive timestamp").
When the client gets the reply, it logs the time again. It can then use its two local timestamps, in conjunction with the server's receive timestamp, to estimate the time.
The current time would then be the receive timestamp, plus half the total travel time (the time the reply was received minus the time the originate timestamp was sent), plus the remote processing time.
A series of these exchanges is executed to validate the time.
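The estimate described above can be sketched in a few lines. This is the simplified two-packet exchange from the text, not the full four-timestamp NTP algorithm; the variable names are illustrative, and server processing time is assumed negligible:

```python
def estimate_server_time(t_sent: float, t_recv_server: float,
                         t_recv_client: float) -> float:
    """Estimate the server's clock at the moment the reply arrives.

    Assumes a symmetric network path and negligible server processing time.
    t_sent        -- originate timestamp (client clock)
    t_recv_server -- receive timestamp (server clock)
    t_recv_client -- time the reply arrived (client clock)
    """
    round_trip = t_recv_client - t_sent
    # The server stamped t_recv_server about half a round trip ago,
    # so its clock has since advanced by round_trip / 2.
    return t_recv_server + round_trip / 2

# The client's clock offset is the estimated server time minus its own:
# offset = estimate_server_time(t1, t2, t4) - t4
```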
NTP gets the time to the client, but how does the operating system adjust its own time to this new time? The IETF put forth a generic way in RFC 1589.
The Unix kernel is alerted by a hardware counter interrupt at some fixed rate. In the RFC 1589 scheme, the OS keeps the time by accumulating microseconds: on each interrupt it adds the number of microseconds per tick, which depends on the frequency of the counter (which, by default, must be some divisor of the CPU frequency). If the tick length is not a whole number of microseconds, the OS periodically adds in a small correction.
For instance, the Ultrix kernel gets interrupted at 256 Hz. Since 256 does not divide a million microseconds evenly, the kernel adds an extra 64 microseconds each second.
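The Ultrix numbers work out as follows (a minimal check of the arithmetic in the text):

```python
HZ = 256                              # timer interrupts per second
us_per_tick = 1_000_000 // HZ         # 3906 microseconds, truncated from 3906.25
accumulated = us_per_tick * HZ        # 999,936 microseconds counted per second
shortfall = 1_000_000 - accumulated   # 64 microseconds the kernel adds back
```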
For Unix systems, the NTP-driven clock adjustments are made using the adjtime() system call. One catch: the clock frequency is changed by the value tickadj, which means the time can be slewed only at that rate. The rounding error can accumulate to such a degree that the time is eventually wrong. As a result, a synchronization daemon must make periodic adjustments.
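A rough sketch of why slewing is slow: if the kernel can lengthen or shorten each tick by only tickadj microseconds, correcting a large offset takes proportionally long. The figures below (tickadj of 5 µs, a 100 Hz tick) are illustrative assumptions, not values from the text:

```python
def seconds_to_slew(offset_us: float, tickadj_us: float, hz: float) -> float:
    """Time needed to slew out a clock offset at tickadj microseconds per tick."""
    max_slew_per_second = tickadj_us * hz  # microseconds correctable per second
    return abs(offset_us) / max_slew_per_second

# Slewing out a 1-second error at 5 us per tick, 100 ticks per second:
# seconds_to_slew(1_000_000, 5, 100) -> 2000 seconds (over half an hour)
```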
The venerable Network Time Protocol needs to be updated. It increments time in chunks too coarse for modern use.
For instance, how do you measure one-way packet delay from one node to another (through the trusty ping command)?
Well, the current Network Time Protocol (NTP) can do it to within about 20 milliseconds. That was fine back in the day, but these days it takes roughly 50 nanoseconds to put a minimum-sized packet onto a 10 Gb/s link, and that packet takes about 6 microseconds to traverse a 1 kilometer link. Millisecond-level precision is nowhere near enough.
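Those packet-timing numbers can be reproduced with back-of-the-envelope arithmetic. The 64-byte minimum frame size and a signal speed of about two-thirds of c are common rules of thumb, assumed here rather than taken from the text:

```python
LINK_RATE_BPS = 10e9        # 10 Gb/s link
MIN_FRAME_BITS = 64 * 8     # minimum Ethernet frame (64 bytes), assumed
PROPAGATION_MPS = 2e8       # ~2/3 the speed of light, typical for fiber

serialization_s = MIN_FRAME_BITS / LINK_RATE_BPS  # ~51 nanoseconds on the wire
propagation_s = 1000 / PROPAGATION_MPS            # ~5 microseconds for 1 km
total_s = serialization_s + propagation_s         # a few microseconds end to end
```

Both figures are thousands of times smaller than NTP's 20-millisecond accuracy, which is the article's point.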
Thinner slices of time could be useful elsewhere too: synchronization at the MAC layer; for an intra-PoP time transmission mechanism used by cable companies; for WiMAX transmitter response times; and so on. Likewise, the electrical and printing industries also need time increments of less than a few microseconds. And the military is looking into large sensor networks, which need to be synchronized as well.