The Limits of NTP Accuracy on Linux (scottstuff.net)
wowczarek 18 hours ago [-]
The problem with these types of posts is that this is an area many are unfamiliar with (at least not in depth), so a string of authoritative statements gets believed at face value. There are so many variables in network time sync that you have to design specifically to minimise them - for example, no multipath and no asymmetry unless you have PTP P2P transparent clocks everywhere.

The author also mixes precision with accuracy and relies on self-reported figures from NTP (chrony says xxx ns jitter). With every media speed change you get asymmetry, which affects accuracy (though not always precision). So your 100M->1G link, for example, will already introduce over 1 us of error (to accuracy!), but NTP will never show you this, and nothing will unless you measure both ends with 1PPS; the only way around it is a PTP BC or TC. There is a very long list of similar clarifications that can be made. For example, nothing is mentioned about message rates / intervals, which are crucial for improving the number of samples the filters work with - and the fact that ptp4l and especially phc2sys aren't great at filtering. Finally, getting time into the OS clock, unless you use PCIe PTM (which practically limits you to newer Intel CPUs and newer Intel NICs), relies on a PCIe transaction with unknown delays and asymmetries, and without PTM (excluding a few NICs) your OS clock is nearly always 500+ ns away from the PHC, you don't know by how much, and you can't measure it. It's just a complex topic and requires an end-to-end, leave-no-stone-unturned, semi-scientific approach to really present things correctly.
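
To put a rough number on the 100M->1G claim, here is a back-of-the-envelope sketch (my own illustration, not from the post): assume a ~90-byte NTP request crossing a single store-and-forward switch between a 100 Mb/s and a 1 Gb/s link, with timestamps taken at the start of the frame as PTP does.

    #include <stdio.h>

    int main(void) {
        /* Assumed frame size: NTP payload plus UDP/IP/Ethernet overhead. */
        const double frame_bits = 90 * 8;
        const double slow = 100e6;   /* 100 Mb/s link */
        const double fast = 1e9;     /* 1 Gb/s link   */

        /* One store-and-forward hop with start-of-frame timestamps: the
         * one-way delay in each direction is dominated by serialisation on
         * the switch's ingress link (the switch must buffer the whole frame
         * before it starts retransmitting). */
        double fast_to_slow = frame_bits / fast;   /* ~0.72 us */
        double slow_to_fast = frame_bits / slow;   /* ~7.2 us  */

        /* A two-way protocol assumes symmetric paths, so half of the
         * asymmetry becomes a constant offset error that averaging and
         * jitter statistics will never reveal. */
        double asymmetry = slow_to_fast - fast_to_slow;
        printf("asymmetry %.2f us, hidden offset error %.2f us\n",
               asymmetry * 1e6, asymmetry / 2 * 1e6);
        return 0;
    }

With these assumptions the hidden offset error comes out at roughly 3 us, comfortably over the 1 us figure quoted above.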

mlichvar 16 hours ago [-]
I agree with most of what you said.

The author has other posts in the series where he tried to measure the accuracy relative to the PHC (not system clock) using PPS: https://scottstuff.net/posts/2025/06/02/measuring-ntp-accura...

Steering with phc2sys the same PHC that chronyd is using for HW timestamping is not the best approach, as that creates a feedback loop (instability). It would be better to leave the PHC running free and just compare the sys<->PHC with PHC<->PPS offsets.
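
For concreteness, a sketch of the arithmetic behind that suggestion (my own illustration; the numbers are made up): with a free-running PHC you measure two independent offsets and chain them, so the clock being steered never feeds back into its own reference.

    #include <stdio.h>

    int main(void) {
        /* Hypothetical measured values, in seconds:
         *   sys_minus_phc: system clock minus the free-running PHC
         *                  (e.g. from repeated paired reads of both clocks)
         *   phc_minus_pps: free-running PHC minus the PPS edge
         *                  (e.g. the NIC timestamping its PPS input with the PHC)
         */
        double sys_minus_phc = 12.340e-6;
        double phc_minus_pps = -3.100e-6;

        /* Chaining the two gives the system clock's error against the PPS
         * without ever steering the PHC, so there is no feedback loop. */
        double sys_minus_pps = sys_minus_phc + phc_minus_pps;
        printf("sys - PPS = %.3f us\n", sys_minus_pps * 1e6);
        return 0;
    }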

> So your 100M->1G link, for example, will already introduce over 1 us of error (to accuracy!), but NTP will never show you this

That doesn't apply to NTP to the same extent as PTP, because it timestamps the end of reception (HW RX timestamps are transposed to the end of the packet), so the asymmetries in transmission lengths due to different link speeds should cancel out (unless the switches are cutting through in the faster->slower link direction, but that seems to be rare).
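
Extending the back-of-the-envelope sketch above to end-of-packet RX timestamps (same assumptions: ~90-byte frame, one switch between a 100 Mb/s and a 1 Gb/s link; my own numbers):

    #include <stdio.h>

    int main(void) {
        const double frame_bits = 90 * 8;
        const double slow = 100e6, fast = 1e9;

        /* With the RX timestamp transposed to the end of the packet, each
         * direction pays for serialisation on *both* links: store-and-forward
         * on the ingress link plus full transmission on the egress link
         * before the last bit reaches the receiver. */
        double fast_to_slow = frame_bits / fast + frame_bits / slow;
        double slow_to_fast = frame_bits / slow + frame_bits / fast;
        printf("store-and-forward asymmetry: %.2f us\n",
               (slow_to_fast - fast_to_slow) * 1e6);   /* zero: it cancels */

        /* If the switch cuts through in the fast->slow direction, the ingress
         * serialisation overlaps the egress one, so that direction only pays
         * for the slow link and the symmetry is broken again - the exception
         * noted above. */
        double fast_to_slow_ct = frame_bits / slow;
        printf("cut-through asymmetry: %.2f us\n",
               (slow_to_fast - fast_to_slow_ct) * 1e6);
        return 0;
    }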

wowczarek 16 hours ago [-]
Yes, I saw those PTP posts, and I saw where the methodology is a bit lacking.

Re. asymmetries cancelling out: OK, I oversimplified, and this is true in theory and often in practice, but having done this with nearly all generations of enterprise-type Broadcom ASICs from roughly 2008 onwards, I know there are so many variations to this behaviour that the only way to know is to precisely measure latencies in one direction and the other for a variety of speed combinations, CT vs. S&F, frame sizes, and even bandwidth combinations, and see. I used to characterise switches for this, build test harnesses, measurement tools etc., and I saw everything: CT one way, S&F the other way, but not for all speed combinations; CT behaviour regardless of enabling or disabling it; even things like latency having quantised step characteristics in increments of, say, X bytes because internally the switching fabric used X-byte cells; and CT only behaving like CT above certain frame sizes. There's just a lot to take into account. There are even cases where a certain level of background traffic _improves_ latency fairness and symmetry, an equivalent of keeping the caches hot.

The author's best bet at reliable numbers would be to get himself a Latte Panda Mu or another platform with TGPIO and measure against 1PPS straight from the CPU. That would be the ultimate answer. Failing that, at least a PTM NIC synced to the OS clock, but that will alter the noise profile of the OS clock.

But you and I know all this because we've been digging deep into the hardware and software guts of those things for years and have done this for a job - but what's a home lab user to do? It's a never-ending learning exercise, and the key is to acknowledge the possible unknowns - and by that I don't mean scientific unknowns, but that we don't know what we don't know - and bloggers sometimes don't do this.

eqvinox 11 hours ago [-]
> Steering the same PHC with phc2sys as chronyd is using for HW timestamping is not the best approach as that creates a feedback loop (instability).

This is standard practice, though, for most PTP slave clocks. The feedback is just factored into the math. (Why? No idea. I just know how the code works.)

Although… it's standard practice in PTP setups that are designed for it. Not NTP… if only there were a specification… :)

I do have to wonder though. Of what use are timestamps from an unsynchronized PHC to chrony? Is it continuously taking twin sys+PHC timestamps to line up things?

anon6362 2 hours ago [-]
Sigh Yep. Dunning-Kruger effect specimens hammer out puff pieces to get their participation awards.

Meanwhile, here are some other articles:

NTP: https://austinsnerdythings.com/2025/02/14/revisiting-microse...

PTP: https://austinsnerdythings.com/2025/02/18/nanosecond-accurat...

https://www.jeffgeerling.com/blog/2025/diy-ptp-grandmaster-c...

Dylan16807 13 hours ago [-]
It's not that the author is mixing accuracy and precision, it's that they only care about precision.

Any asymmetry that is consistent is irrelevant.

petesoper 16 hours ago [-]
The OP says early on that he only needs 10 us.
eqvinox 11 hours ago [-]
ConnectX-6 has PTM, though I have not tested it.
wowczarek 7 hours ago [-]
Do it, that should make a significant difference. See https://www.opencompute.org/wiki/PTM_Readiness for other hardware that supports it; i225/i226 are the most common these days, but a system with TGPIO 1PPS will also show the real picture.
diarmuidc 1 days ago [-]
Why is there no mention of PTP here? If you want accurate time synchronisation in a network just use the correct tool, https://en.wikipedia.org/wiki/Precision_Time_Protocol

Linux PTP (https://linuxptp.sourceforge.net/) and hardware timestamping in the network card will get you into the sub-100 ns range.

jacob2161 24 hours ago [-]
Chrony over NTP is capable of incredible accuracy, as shown in the post. Most users who think they need PTP actually just need Chrony and high quality NICs.

Chrony is also much better software than any of the PTP daemons I tested a few years ago (for an onboard autonomous vehicle system).

eqvinox 21 hours ago [-]
NTP fundamentally cannot reach the same accuracy as PTP, because Ethernet switches introduce jitter due to queueing delays and can report that delay (as residence-time corrections) in PTP but not in NTP.
anon6362 56 minutes ago [-]
With retail hardware, definitely, but there is boundary PTP support with enterprise gear.

For telco gear, there is PTP + SyncE.

erincandescent 21 hours ago [-]
Chrony can do NTP encapsulated inside PTP packets so as to combine the best parts of both protocols
eqvinox 19 hours ago [-]
That's not exactly NTP though ;)

I'll also say PTP is superior since it syncs TAI rather than NTP's UTC. Which probably isn't going to change even with NTPv5.

mlichvar 20 hours ago [-]
chrony can be configured to encapsulate NTP messages in PTP messages (NTP over PTP) in order to get the delay corrections from switches working as one-step PTP transparent clocks. The current NTPv5 draft specifies an NTP-specific correction field, which switches could support in the future if there were demand for it.
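
For reference, a minimal sketch of that configuration (assuming chrony 4.2 or later, where NTP-over-PTP is available as an experimental feature; the hostname is a placeholder and option details may vary between versions):

    # server side
    # exchange NTP messages encapsulated in PTP on UDP port 319
    ptpport 319
    # enable hardware timestamping on all capable interfaces
    hwtimestamp *

    # client side
    ptpport 319
    hwtimestamp *
    server timeserver.example.net port 319 xleave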

The switches could also implement a proper HW-timestamping NTP server and client to provide an equivalent to a PTP boundary clock.

PTP was based on a broadcast/multicast model to reduce the message rate in order to simplify and reduce the cost of HW support. But that is no longer a concern with modern HW that can timestamp packets at very high rates, so the simpler unicast protocols like NTP and client-server PTP (CSPTP) currently developed by IEEE might be preferable to classic PTP for better security and other advantages.

cozzyd 15 hours ago [-]
Some NICs support hardware timestamping (though some only for PTP packets, looking at you, XL710).
ainiriand 18 hours ago [-]
Correct me if I am wrong, but wouldn't that be true only for testing across comparable hardware? Would that be true in scenarios like the one the author describes, where he uses 3 different systems (Threadripper CPU, Raspberry Pi, and a LeoNTP GPS-backed NTP server) and architectures?
secondcoming 21 hours ago [-]
On our GCP cloud VMs, cloud-init installs chrony and uninstalls ntp automatically.
EnnEmmEss 19 hours ago [-]
You're probably looking for https://scottstuff.net/posts/2025/06/10/timing-conclusions/ which discusses the overall conclusions of NTP vs PTP and is the culmination of several blog posts on the topic.
senderista 24 hours ago [-]
PTP is discussed in the concluding article of the series: https://scottstuff.net/posts/2025/06/10/timing-conclusions/
michaelt 23 hours ago [-]
The blog's next post is about PTP, if that's what you're interested in.

The Linux PTP stack is great for the price, but as an open source project it's hamstrung by the fact that the PTP standard (IEEE 1588) is paywalled, and the fact that it doesn't work on WiFi or USB-Ethernet converters (meaning it also doesn't work on laptop docking stations or the Raspberry Pi 3 and earlier).

This limits people developing/using it for fun. And it's the people using it for fun who actually write all the documentation; the 'serious users' at high-frequency trading firms and cell phone networks aren't blogging about their exploits.

RossBencina 19 hours ago [-]
> it doesn't work on wifi

802.1AS-2020 (gPTP) includes 802.11-2016 (wifi) support.

The IEEE's gatekeeping is indeed odious.

The biggest limitation is that many ethernet MACs do not support hardware timestamping. Nor do many entry-level ethernet switches.

For what it's worth, I'm interested in TSN for fun (music, actually), and I'm prepared to buy compatible networking hardware to do it. No difference to gamers spending money on a GPU.

eqvinox 19 hours ago [-]
Most new MACs do it; (cheap) switches are still a problem though.
rendaw 1 days ago [-]
There's a discussion on that in the comments at the bottom of the article, where the author explains why it wasn't analyzed.
eqvinox 21 hours ago [-]
GPS modules need to be put in a special stationary mode (and ideally measured-in to a location for a day or two) to get accurate timing. I'm consistently achieving ca. 10ns of deviation. Hope the author didn't forget this. (But it might also just be crappy GPS modules, I'm using u-blox M8T which are specifically intended for timing.)
mwpmaybe 15 hours ago [-]
Interesting. The serial GPS module currently wired up to my Raspberry Pi doesn't have a stationary mode per se but there is a feature called AlwaysLocate that seems related. I can choose between Periodic Backup/Standby and AlwaysLocate Backup/Standby modes. I'll need to look into this...

ETA: I can also increase the nav speed threshold to 2m/s.

eqvinox 11 hours ago [-]
Not all modules have this feature, and it's also sometimes locked behind feature/license bits. It's obviously not needed for normal GPS use… u-blox timing-targeted modules definitely have it. Some have a "measure-in" mode where you let it sit for a while (days) and it does all the setup automatically. In other cases you actually have to feed things into the module (annoying and error prone…)

It's simply that if you know your location, you can remove that as a free variable from the equations and instead constrain the time further.

RossBencina 19 hours ago [-]
What method do you use to measure 10ns deviation?
eqvinox 19 hours ago [-]
Delta between modules on a scope

(offset values on the hardware timestamp on the immediately connected PTP clock also line up with this)

[Caveat: everything is in the same room with the same ambient temperature drifts…]

azalemeth 1 days ago [-]
My experience with rt Linux is that it can be exceptionally good at keeping time, if you give up the rest of the multitasking micro sleeping architecture. What do you need this accurate time for? I'm equally sure, as acknowledged, the multipath routing isn't helping either.
michaelt 22 hours ago [-]
> What do you need this accurate time for?

Some major uses of high-precision timing, albeit not with NTP, include:

* Synchronising cell phone towers; the system partly relies on precise timing to keep them from interfering with one another.

* Timestamping required by regulators, in industries like high-frequency trading.

* Certain video production systems, where a ten-parts-per-million framerate error would build up into an unacceptable 1.7 second error over 48 hours.

* Certain databases (most famously Google Spanner) that use precise timing to ensure events are ordered correctly

* As a debugging luxury in distributed systems.

In some of these applications you could get away with a precise-but-free-floating time, as you only need things precisely synchronised relative to one another, not globally. But if you're building an entire data centre, a sub-$1000 GPS-controlled clock is barely noticeable.

rkomorn 21 hours ago [-]
> But if you're building an entire data centre, a sub-$1000 GPS-controlled clock is barely noticeable.

Dumb personal and useless anecdote: one of those appliances made my life more difficult for months (at a FAANG company that built its own data centers, no less) for the nearly comical reason that we needed to move it but somehow couldn't rewire the GPS antenna, and the delays kept retriggering alerting that we kept disabling until the expected "it'll be moved by then" time.

So, I guess to make the anecdote more useful: if you're gonna get one, make sure it doesn't hamstring you with wires...

michaelt 21 hours ago [-]
The secret, I'm told, is to make friends with the CCTV/access control team.

They always know the paperwork and contractors needed to get a guy on a cherrypicker drilling holes and installing data cables without upsetting the building owners.

JdeBP 1 days ago [-]
Bear in mind that the author specifically reminds us, halfway down, that the goal is consistency, not accuracy per se. Making all of the systems accurate to GNSS is merely a means of achieving the consistency goal so that event logs from multiple systems can be aligned.
mustache_kimono 1 days ago [-]
> What do you need this accurate time for?

Securities regulation?: https://podcasts.apple.com/us/podcast/signals-and-threads/id...

stinkbeetle 21 hours ago [-]
What is the rest of the multitasking micro sleeping architecture, and how do you give it up to improve time keeping?
aa-jv 19 hours ago [-]
> What do you need this accurate time for?

Scientific and consistent analysis of streaming realtime sensor data.

Been there, done that, shipped the package. Took quite a bit of fun to get it working consistently, which was the main thing.

mschuster91 22 hours ago [-]
> What do you need this accurate time for?

Say you are running a few geographically separated radio receivers to triangulate signals; you want to have all of them as closely synchronized as possible for better accuracy.

myrandomcomment 12 hours ago [-]
The paths on the network with the MLAG are NOT likely the issue. There is a serialization delay difference between the NICs in the NTP servers and the switches: 100M takes more time than 1G, which takes more time than 10G, which takes more time than 40G. Also, the UI kit is store-and-forward switching (not sure about the 10G, but the desktop one is) and the Arista kit is cut-through (at around ~500-byte packets IIRC). The end-to-end paths are not the same given the link speed difference, and that is the source of the variations. The MLAG hashing is in hardware and will not have an effect; also IIRC you can set the LAG hash to be SRC/DST on both L2 and L3 even on an L2 link.
Bender 17 hours ago [-]
They state there is a problem but then state they are happy with what Chrony is doing, so what exactly is the problem they are trying to solve? What on their network requires better than 200 ns, or even 400 ns for that matter? Not in theory but in reality? Also, there are optimizations they are missing in this document [1], such as disabling EEE.

On a more taboo note, while RasPis can be great little time servers, they have more drift and higher jitter, but that should not matter for a home setup and should not be surprising. If jitter is their concern then they should consider using mini-PCs, disabling cpuspeed and all power management, confining/locking the min/max speed to half the CPU's capabilities, and disabling all services other than chrony. It will use more power but would address their concerns. They could also try different models of layer 2 switches. Consumer switches will add some artificial jitter, and that varies wildly by make, model and even batch, but again, for a home network that should not matter. I think they are nitpicking. Perfect is the enemy of good, especially in a day and age when people prefer power saving over accuracy.

[Edit] As a side note, the aggressive min/max poll settings they are using can amplify the inefficiencies of consumer switches and NICs regardless of filter settings, and that can make the graphs more chaotic. They should consider re-testing that on data-center-class servers, server NICs and enterprise-class switches, or just reduce the polling to something reasonable for a home network: minpoll 1 maxpoll 2 for clients, minpoll 5 maxpoll 7 for the edge talking to a dozen stratum 1s with a high combinelimit. Presend should not be required even with default ARP neighbor GC times and intervals. Oh, and if you want to try something fun with the graphs, run chronyc makestep every minute in cron on every node. Yeah, yeah, I know why one would not do that and it's just cheating.

[1] - https://chrony-project.org/examples.html
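
A rough chrony.conf sketch of the polling suggestion above (the hostnames and the number of sources are placeholders, and the combinelimit value is only illustrative):

    # client on the home LAN, pointed at the local edge server
    server ntp-edge.lan iburst minpoll 1 maxpoll 2

    # edge server talking to a set of stratum-1 servers
    server time1.example.net iburst minpoll 5 maxpoll 7
    server time2.example.net iburst minpoll 5 maxpoll 7
    # combine more of the selectable sources instead of just the best few
    combinelimit 8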

wowczarek 14 hours ago [-]
In addition, for the purposes of characterising the system using NTP, ideally one should either avoid any ensembling/combining of sources, because that's just pulling in multiple sources of noise, or prove that doing so does not affect the final results - or, if it does, then by how much.

There's so much more that can be picked apart here because it's an absolute rabbit hole of a topic - for example, saturate the links a little or a little more, especially with bursty traffic in both directions (or do an 80-20 cycle), and watch those measurements go out the window; only with PTP-capable switches at every hop will you survive this. The telecom industry has done this ad nauseam for years, with appropriate standardised measurements, test masks and requirements.

And this whole business is also not fundamentally PTP vs. NTP, because the principles are exactly the same; it's the fact that PTP was designed with hardware timestamping in mind, and it would be no more useful than NTP had NTP gained support for one-step operation, hardware timestamping - and network assistance. But the default PTP profile uses known multicast groups and thus known destination MACs, which made it the easiest entry into hardware packet matching - early "PTP-enabled" NICs only timestamped PTP packets (and most only multicast); only more modern ones can timestamp all packets, and that includes NTP.

And as far as the RasPi goes - for time sync, at least in terms of COTS equipment, Intel is king, but that's because they had smart people working hard for years to purposefully integrate time-aware functionality into the architectures (Hey Kevin and team!) - invariant TSC, ART, culminating with PCIe PTM. But this matters when you're aiming for the tens-of-ns to single-digit-ns region.

You can easily deliver sub-10 ns sync to a NIC, but a huge source of uncertainty is time transfer from your hardware-timestamping NIC to the OS clock. PTM is the only way to do this in hardware; otherwise - with Solarflare being the only non-PTM exception I've worked with - comparing NIC to OS time is literally reading the time register on the NIC and the kernel time in quick succession, in batches (granted, with local interrupts disabled), and then picking the pair of reads that seems to have taken the least amount of time. Unknowns on top of unknowns.
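
A rough user-space sketch of that read-in-quick-succession approach (assuming a PHC exposed as /dev/ptp0; the kernel's PTP_SYS_OFFSET ioctl does essentially the same thing internally, and without being able to disable interrupts from user space the window is only an upper bound on the error):

    #include <stdio.h>
    #include <fcntl.h>
    #include <time.h>
    #include <unistd.h>

    /* Map a /dev/ptpN file descriptor to a dynamic POSIX clock id. */
    #define FD_TO_CLOCKID(fd) ((clockid_t)((((unsigned int)~(fd)) << 3) | 3))

    static double ts_to_s(struct timespec t) { return t.tv_sec + t.tv_nsec / 1e9; }

    int main(void) {
        int fd = open("/dev/ptp0", O_RDONLY);   /* assumed PHC device node */
        if (fd < 0) { perror("open /dev/ptp0"); return 1; }
        clockid_t phc = FD_TO_CLOCKID(fd);

        double best_window = 1e9, best_offset = 0;
        for (int i = 0; i < 100; i++) {
            struct timespec sys1, phc_ts, sys2;
            clock_gettime(CLOCK_REALTIME, &sys1);
            clock_gettime(phc, &phc_ts);        /* PCIe read of the NIC clock */
            clock_gettime(CLOCK_REALTIME, &sys2);

            double window = ts_to_s(sys2) - ts_to_s(sys1);
            /* Assume the PHC read landed in the middle of the window; the
             * true position (and any PCIe delay asymmetry) is unknown. */
            double offset = (ts_to_s(sys1) + ts_to_s(sys2)) / 2 - ts_to_s(phc_ts);
            if (window < best_window) { best_window = window; best_offset = offset; }
        }
        printf("sys - PHC ~ %.0f ns (window +/- %.0f ns)\n",
               best_offset * 1e9, best_window / 2 * 1e9);
        close(fd);
        return 0;
    }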

Bender 13 hours ago [-]
> There's so much more that can be picked apart here because it's an absolute rabbit hole of a topic

That pretty much sums it up, and I agree with everything you stated. There are countless variables that one could spend a lifetime trying to understand, tune and compensate for, and all of that changes with each combination of hardware - and refreshing hardware is inevitable. It can be a never-ending game. I just tune for good enough for my needs, that being slightly better than defaults.

RossBencina 19 hours ago [-]
There was some related discussion a couple of weeks ago here:

Graham: Synchronizing Clocks by Leveraging Local Clock Properties (2022) [pdf] (usenix.org) https://news.ycombinator.com/item?id=44860832

In particular the podcast about Jane Street's NTP setup was discussed.

watersb 1 days ago [-]
Segal's Law:

"A man with a watch knows what time it is. A man with two watches is never sure."

https://en.m.wikipedia.org/wiki/Segal's_law

ofalkaed 19 hours ago [-]
If you actually care about what time it is, you need at least three so you can average them and knock out the error. The Beagle carried 22 when it also carried Darwin; in the nearly 5-year expedition they only lost 30-odd seconds.
nullc 22 hours ago [-]
A person with three or more watches knows what time it is in proportion to the square root of the number of watches.
stinkbeetle 21 hours ago [-]
A person with four watches knows what time it is in proportion to 2?
dspillett 19 hours ago [-]
Sort of. With two watches your confidence is ~1.4 in arbitrary units, with three it is ~1.7, with four ~2, etc. Though this is an ever-growing sequence.

A better model might be to measure the confidence with something like (x-1)/x as this grows, more slowly with each step, towards 1, without really getting there until infinity. With two watches you are 50% of maximum confidence in your time, with three 66%, with four 75%, five->80%, and so on.

JdeBP 1 days ago [-]
A person with two watches finds xyrself suddenly in the messy business of doing full NTP, rather than the much simpler model of SNTP. (-:
klaas- 24 hours ago [-]
Maybe I missed it, but why not just combine PTP and NTP within chrony? It does support that.
sugarpimpdorsey 1 days ago [-]
There are so many inaccurate technical details here I wouldn't know where to begin, let alone write a blog post. Sigh.
ainiriand 1 days ago [-]
Unfortunately I think the same as you. The details provided in the blog post are by no means a way of doing any sort of time benchmark or network I/O benchmark. For starters, he is comparing times from TSC-enabled hardware (x86_64) with Raspberry Pis, which are ARM. Network I/O benchmarking on Linux should be done with system calls to the network cards or input devices and not through the kernel drivers, etc...
Avamander 19 hours ago [-]
> For starters, he is comparing times from TSC-enabled hardware (x86_64) with Raspberry Pis, which are ARM.

Well, that TSC-enabled hardware also has other peripherals (like SMBUS as mentioned in the article) that on the other hand introduce errors into the system.

I personally use an RPi4 with its external oscillators replaced with a TCXO. Some sellers on AliExpress even have kits for "audiophiles" that let you do this. It significantly improved clock stability and holdover. So much so that "chronyc tracking" doesn't show enough decimal places to display frequency error or skew. It's unfortunate though that the NIC does not do timestamping. (My modifications are similar to these: https://raspberrypi.stackexchange.com/a/109074)

I'd love to find an alternative cheap (microcontroller-based) implementation that could beat it.

mwpmaybe 15 hours ago [-]
The CM5's NIC has timestamping, but I'm not sure if there's a TCXO hack for it.
jmpman 1 days ago [-]
I would be extremely interested in reading your blog post. Fascinating topic.
pastage 24 hours ago [-]
Saying this is actually the only sane thing to do.

I personally don't care about anything below 200 microseconds, and I think it was a good article if read critically. I think it does describe why you should not do that at the moment if you have lots of nodes that need to sync consistently.

Having a shared 10 MHz reference clock is great and gives you a pretty good, consistent beat. I never managed to sync other physical sensors to that, so the technical gotchas are too much for me.

There is madness in time.

Edit: changed some orders of magnitude; honestly, I feel happy if my systems are at 10 ms.

ainiriand 24 hours ago [-]
In my opinion, when you want such precision, you need to establish strict constraints on the measurements, for example memory fences: https://www.kernel.org/doc/Documentation/memory-barriers.txt

If you do not do this, the times will never be consistent.

The author produced a faulty benchmark.

Dylan16807 22 hours ago [-]
What benchmark? The only numbers he's measuring himself are on the oscilloscope. Everything else is being measured by chrony. Unless you're talking about a different post on the blog?
ainiriand 21 hours ago [-]
He uses Chrony, which uses system time, and compares those times across different machines. Unless proper setup is done the benchmark is faulty.
Dylan16807 21 hours ago [-]
Chrony is what's comparing the times. Zero code written by the author is running except to log the statistics chrony created. Are you accusing chrony of failing at one of its core purposes, comparing clocks between computers? What could the author do differently, assuming the author isn't expected to alter chrony's code?
ainiriand 21 hours ago [-]
If those times are produced on different architectures, then yes, the comparison can never be accurate enough since the underlying measurement mechanisms differ fundamentally. While the author goes to great lengths to demonstrate very small time differences, I believe the foundation of their comparison is already flawed from the start. I do not want to generate any polemic sorry!
Dylan16807 13 hours ago [-]
But you do or don't think chrony knows how to do the memory barriers and everything else properly?

Making the sync work across existing heterogenous hardware is the goal of the exercise. That can't be a disqualifier.

throwawaysoxjje 1 days ago [-]
It’s wild that they talk about the jitter in the PPS signals but gloss over the jitter of the oscilloscope?
marshray 23 hours ago [-]
The scope should be capturing samples from the three channels synchronously.

It appears to be set to trigger on the bottom trace (it appears still) and then retrospectively display the other two.

magicalhippo 22 hours ago [-]
Given he's using the desktop PPS signal as a trigger reference, and compares the relative times, how would intrinsic scope jitter significantly affect that?
wowczarek 15 hours ago [-]
I would worry less about _trigger jitter_, especially for relative measurements, because unless it's one of those toy scopes, trigger jitter will be negligible for this purpose - so yes, the scope, but less the trigger jitter and more the measurement jitter.

For anything that involves a scope, I think it's good practice to specify how and what you are measuring - namely what termination and what trigger level - and to show what the pulse actually looks like on the scope.

Where you are definitely right is that all those devices can produce completely different-looking pulses, and only by looking at the pulse with a scope that has sufficient bandwidth can you pick the right trigger level - one that lands at a point of the waveform with consistent characteristics: you stay away from the top of the pulse, tend to trigger somewhere mid-ramp that looks clean, and keep your paws away from those AUTO buttons!

...but this is all nitpicking as far as the post goes; this is where lots of electronics people get triggered (badum, tss!) when we have a network in the middle that is, essentially, chaos.

guenthert 20 hours ago [-]
Not sure about the Siglent oscilloscope used here, but my old LeCroy WaveAce 2032 (which Siglent obviously had its hands in) has a trigger jitter of 0.4 ns. I'd think the one used here will be of the same order of magnitude, i.e. negligible.

Uh, the Siglent SDS 1204X-E used here has a "new, innovative digital trigger with low latency" ...

But yes, as others have commented already, if only the relative jitter between the signals is of interest, the trigger jitter itself is inconsequential.

magicalhippo 10 hours ago [-]
> Uh, the Siglent SDS 1204X-E used here has a "new, innovative digital trigger with low latency" ...

Datasheet claims trigger jitter of <100 ps.

nullc 1 days ago [-]
GPS timing modules should have a sawtooth correction value that will tell you the error between the upcoming pulse and GPS time. The issue is that the PPS pulse has to be aligned to the receiver's clock. Using that correction will remove the main source of jitter.
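
A minimal sketch of applying such a correction (assuming a receiver that reports the quantisation error of each pulse, e.g. u-blox's qErr in picoseconds; the values and sign convention here are illustrative - check the receiver's datasheet):

    #include <stdio.h>

    /* Hypothetical captured values for one PPS edge:
     *   pps_ns : captured timestamp of the PPS edge, ns within the second
     *   qerr_ps: receiver-reported quantisation ("sawtooth") error for that
     *            edge, in picoseconds - how far the pulse was shifted because
     *            it had to align to the receiver's own clock grid
     */
    int main(void) {
        long pps_ns  = 123456;
        long qerr_ps = -42000;   /* example: -42 ns */

        /* Shift the captured timestamp by the reported error so the sample
         * reflects GPS time rather than the quantised pulse edge. */
        double corrected_ns = pps_ns - qerr_ps / 1000.0;
        printf("raw %ld ns, corrected %.1f ns\n", pps_ns, corrected_ns);
        return 0;
    }
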
wowczarek 15 hours ago [-]
Applying the sawtooth correction will remove _some_ of the jitter, but at the time-server level, not the network level, and quantisation error is not the main source of jitter here (I'm talking long time constants) - Packet Delay Variation (PDV) and internal time comparisons are. Plus, any decent loop should average the sawtooth and transform it into fluctuations slow enough that they will not have that much effect on what is being measured in the blog post - the output of the time server looks nothing like the raw 1PPS input, at least in the short term it doesn't. Of course the sawtooth should be removed, and let's hope his time servers do it, especially the RPi ones.
nullc 12 hours ago [-]
> any decent loop should average the sawtooth

You can't, really: depending on their relative phases and the resulting aliasing products, the average of the sawtooth error can still have an arbitrary offset which lasts for an arbitrarily long time.

> that they will not have that much effect

Okay, fine, for some definition of 'not much' that's true. But failing to account for it can result in a bigger error than many people expect - and in an annoying way, since when you test, it might be in a state where it averages out okay but later shift into a state where it produces an offset that doesn't average out.

Assuming your receiver outputs the correction it's pretty easy to handle, so long as you know it's a thing.

RossBencina 19 hours ago [-]
Only the expensive ones have the correction capability (e.g. u-blox LEA-M8T); hat tip to tverbeure:

https://news.ycombinator.com/item?id=44147523

Aligning the PPS pulse with an asynchronous local clock is going to require a very large number of measurements, or a high-resolution timer (e.g. a time-to-digital converter, TDOA chip, etc. - there are a few options).

wowczarek 15 hours ago [-]
To an extent. You can get previous-generation GNSS receivers with sawtooth correction for cheap - eBay is full of those, say an old Trimble Resolution whatever, and lots of LEA-6T carrier boards going for the $20 range, and bare modules for less. I would trust those carrier boards more though; less chance of getting a fake module.
nullc 9 hours ago [-]
The timing receivers often have other advantages beyond just sawtooth, including stuff like being able to produce a time pulse with a single working satellite in view once their position is learned.
wowczarek 7 hours ago [-]
Should have clarified: previous generation timing receivers. Yes, self-survey + stationary mode + O/D mode where you can track a single SV and such, are essential for stable time sync, not just sawtooth.

There is an exception re. sawtooth, but only a recent one: the Furuno GT-100, priced between the ZED-F9T and the Mosaic-T, has 200 ps clock resolution and doesn't even provide a quantisation error output.

contingencies 21 hours ago [-]
When it comes to realtime guarantees, bare-metal code on a dedicated IC or MCU is by definition better than anything running on a general-purpose OS on a general-purpose CPU. Even if you tune the hell out of it, the latter will have more room for bugs, edge cases, drift, delayed powerup, supply chain idiosyncrasies, etc. FYI, GNSS processing chips cost $1.30 these days. https://www.lcsc.com/datasheet/C3037611.pdf
Galanwe 23 hours ago [-]
See also https://en.m.wikipedia.org/wiki/White_Rabbit_Project for those interested in PTP / low-latency time sync.