Hello @jaski.tx ! Thank you very much.
So your code runs every X µs, where X is usually 50, correct? What is your application?
Yes; to be precise, X is always 50203.(fractional) ns. The application is such that, every period, a block of 204 bytes is written into an internal queue of the application. The fractional part is carried over, and when it accumulates to a whole unit, the time of the next period is incremented accordingly.
What is the error?
One actual measurement, for instance: after about 3 hours we had written 100 kB more than we should have. I interpret this as "our application is about 25 ms ahead in time after 3 hours", given the desired throughput of 32507936 bps.
I was unable to determine whether this advance in time was due to, say, clock drift, or the granularity of the time the kernel can actually work with, or something else entirely.
What are your requirements? How much jitter can you accept in the timing of your events? What granularity and accuracy do you need?
The actual requirement is that the average throughput into this queue be precisely 32507936 bps. Jitter and occasional spikes are acceptable to some extent.
When I asked the question, what I had in mind was: "could I use some other time source to regulate my primary source?" That is, a time source with perhaps coarser granularity (say, ms) that could nevertheless be used to sense that my application is running ahead of time.
I understand that I might be working at the limits of achievable precision; nevertheless, I decided to ask.
Thank you in advance,