I asked about this a few years ago on SO and there is some good info: https://stackoverflow.com/q/61338780/265521
E.g. Chrome has this comment:

    // Chromium uses a minimum timer interval of 4ms. We'd like to go
    // lower; however, there are poorly coded websites out there which do
    // create CPU-spinning loops. Using 4ms prevents the CPU from
    // spinning too busily and provides a balance between CPU spinning and
    // the smallest possible interval timer.

At the time at least, the 4ms only kicks in after 5 levels of nesting, as mentioned in the article, but there was still a 1ms limit before that. Seems like it has been removed, though, based on jayflux's comment.
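If you want to see what your own browser does, a quick probe like this shows the actual gap between nested setTimeout(0) calls (a rough sketch; the exact numbers depend on the browser and version):

    let last = performance.now();
    let depth = 0;

    (function probe () {
      const now = performance.now();
      // How long this level actually waited for its "0ms" timeout.
      console.log(`depth ${depth}: ${(now - last).toFixed(2)}ms`);
      last = now;
      if (++depth < 10) setTimeout(probe, 0);
    })();

On engines that still clamp, the first few gaps come back near 0ms and the later ones settle around 4ms.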
Because browser developers still have a major incentive to care about not misusing browser users' resources (CPU/battery), and website developers very clearly do not.
This is the natural consequence of a platform having high capability but a low barrier to entry. Conscientious use of resources cannot be assumed, and is in fact the exception rather than the rule, so guardrails must be put in place.
This is an enormous problem with software in general. IMO it's probably because software has been abstracted into the stratosphere to the point that most developers aren't at all aware of resources or even the machine it's running on. That's someone else's problem. I really hate it.
The biggest example of which is shipping Chrome with the application.
As someone who wrote an entire indexedDB wrapper library just to understand the "micro task" issues that are referenced in this blog post, and THEN dedicated a couple hundred words of my readme to explaining this footgun[0], I am so glad to hear about `scheduler.postTask`. That's new information to me!
Thanks for including that example!
[0] https://github.com/catapart/record-setter?tab=readme-ov-file...
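For anyone else who hadn't seen it, usage looks roughly like this. Note that scheduler.postTask isn't available everywhere yet, so the feature check and setTimeout fallback below are my own hedge rather than part of the API:

    // Post a macrotask without the setTimeout nesting clamp, falling
    // back where the Scheduler API isn't supported.
    function postTask(callback, priority = 'user-visible') {
      if (globalThis.scheduler?.postTask) {
        // Resolves with the callback's return value.
        return scheduler.postTask(callback, { priority });
      }
      return new Promise((resolve) => {
        setTimeout(() => resolve(callback()), 0);
      });
    }

    postTask(() => console.log('runs as a task, not a microtask'), 'background');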
I remember reading that high-precision timers can be used for browser fingerprinting and/or for timing attacks, but I didn't find anything specifically about setTimeout()/setInterval() after searching a bit.
Also, loosening the accuracy of timers allows the system to optimize CPU power states and save battery. Again, not sure if that's related here.
Maybe someone else here can add more detail.
You might be referring to the Spectre mitigation changes:
> Timer precision from performance.now and other sources is reduced to 1ms (r226495)
https://webkit.org/blog/8048/what-spectre-and-meltdown-mean-...
https://trac.webkit.org/changeset/226495/webkit
Although you can claw that precision back by enabling cross-origin isolation for your site, at least in Firefox and Chrome, which both quantize high res timers to 100μs in non-isolated contexts but only 5μs in isolated contexts. I'm not sure exactly what Safari does.
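If you want to check what your own context gets: cross-origin isolation requires serving the page with the Cross-Origin-Opener-Policy: same-origin and Cross-Origin-Embedder-Policy: require-corp headers, after which the crossOriginIsolated global flips to true. A rough probe for the resulting timer granularity (actual quantization varies by browser):

    // Estimate performance.now() granularity by finding the smallest
    // nonzero step between successive readings.
    function estimateGranularityMs(samples = 100000) {
      let smallest = Infinity;
      let last = performance.now();
      for (let i = 0; i < samples; i++) {
        const now = performance.now();
        if (now > last) smallest = Math.min(smallest, now - last);
        last = now;
      }
      return smallest;
    }

    console.log('crossOriginIsolated:', self.crossOriginIsolated);
    console.log('~granularity (ms):', estimateGranularityMs());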
The story of web development:
"For the time being, I’ll just do what most web devs do: choose whatever API accomplishes my goals today, and hope that browsers don’t change too much in the future."
It's a strategy that's worked out very well. Standards groups and browsers prioritize backwards compatibility very highly. It's hard to think of any real compatibility breakages in standardized HTML/CSS/JS features (i.e. not third-party plugins like Flash).
Challenge accepted.
https://developer.mozilla.org/en-US/docs/Glossary/blink_elem...
I guess it's the end of days, if tags have stopped blinking.
> And the beast shall come forth surrounded by a roiling cloud of vengeance. The house of the unbelievers shall be razed and they shall be scorched to the earth. Their tags shall blink until the end of days.
— from The Book of Mozilla, 12:10
> Even if you’ve been doing JavaScript for a while, you might be surprised to learn that setTimeout(0) is not really setTimeout(0). Instead, it could run 4 milliseconds later:
Is this still the case? Even with this change? https://chromestatus.com/feature/4889002157015040
I think it's still the case. The 4ms happens if you call setTimeout nested several times. I don't know the exact limit, but it's 5-ish levels deep where that kicks in, IIRC.
Edit: Here's the MDN bit on that, I was correct:
https://developer.mozilla.org/en-US/docs/Web/API/Window/setT...
> browsers will enforce a minimum timeout of 4 milliseconds once a nested call to setTimeout has been scheduled 5 times.
And the link from there to the spec about that:
https://html.spec.whatwg.org/multipage/timers-and-user-promp...
> If nesting level is greater than 5, and timeout is less than 4, then set timeout to 4.
I think that change is talking about the minimum timeout for the first 5 nested calls to `setTimeout(0)`.
Previously the first 5 would take 1ms, and then the rest would take 4ms. After that change the first 5 take 0ms and the rest take 4ms.
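Worth noting the clamp only applies to timers. If you genuinely need a zero-delay macrotask, the old MessageChannel trick still works, since messages aren't throttled the way nested timeouts are (sketch below; where scheduler.postTask is available it's the cleaner option):

    // Schedule a macrotask without the 4ms minimum by bouncing a
    // message through a MessageChannel.
    const channel = new MessageChannel();
    const pending = [];
    channel.port1.onmessage = () => pending.shift()();

    function setZeroTimeout(callback) {
      pending.push(callback);
      channel.port2.postMessage(null);
    }

    setZeroTimeout(() => console.log('ran without the 4ms clamp'));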
Have I not always heard that timeout-based callbacks always run at or after the timeout, but never before?
“Do this {} at least Xms from now”, right?
Yeah, exactly. Timeout-based callbacks register a timer with the runtime, and when the timer is up, the callback gets added to the end of the task queue (so once the timeout is up, you've got to wait for the current loop iteration to finish executing before your callback gets executed).
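Right, and that's easy to demonstrate: schedule a 0ms timeout and then block the main thread, and the callback can't run until the synchronous work finishes (rough sketch):

    const scheduled = performance.now();

    setTimeout(() => {
      // Fires after ~100ms, not ~0ms: the task queue couldn't be
      // serviced while the loop below held the main thread.
      console.log(`waited ${(performance.now() - scheduled).toFixed(1)}ms`);
    }, 0);

    while (performance.now() - scheduled < 100) {
      // Busy-wait to simulate long-running synchronous work.
    }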
Sure, but the nuance here is that there is an (otherwise usable) range of values for which the timers are only ever "after" instead of "at or after". I.e. the lower bound is artificially increased while the upper bound remains unlimited.
I don’t think “artificially increased” is correct. See your sibling. If the runtime waits until expiry, and only then adds the task to the end of the work queue, there’s no point at which any delayed work could happen at expiry except the work to place it on the end of (an empty) queue.
Any busy runtime (e.g. one with lots of parallel tasks, plus anything running less than optimally) will have a delay.
Artificially increased is what's happening when you request a timeout of 0 and the browser always makes it 4 ms or more.
Imagine this code:

    let value = 0;

    (function tick () {
      value += 1;
      console.log(value);
      setTimeout(tick, 1);
    })();

If you let `tick()` run for 1 second, what would you expect the `value` to be? Theoretically, it should be around 1,000, because all you're doing is running a function that increments the `value` and then puts itself back onto the execution queue after a 1 ms delay. But because of the 4 ms delay that browsers implement, you'll never see `value` go above 250, because the delay is being artificially increased.

I always kinda figured that any "timer" in any language would technically need to work that way unless you're running a very fancy real-time system, because multitasking, especially in high-load scenarios, means there just may not be clock cycles available for your task at the exact millisecond you set something to execute at.
So it is with JS; I kinda figured EVERYTHING would need to be heavily throttled in a browser in order to respect the device running that browser.
Background JavaScript processes can really seem to add up across a lot of browser tabs firing up to stay "smart" or "current" like they're the only tab in existence in their user's life.
I'm not sure if many people struggle with browser tabs gone wild. Limiting JavaScript can have varying degrees of success, since it's relative to how the page/site/app is built to begin with.