"The min_granularity setting was renamed to base_slice in this commit in v6 kernel.
The comment says it scales with CPU count and the comment is incorrect. I wonder whether kernel developers are aware of that mistake as they are rewriting the scheduler!
- Official comments in the code says it’s scaling with log2(1+cores) but it doesn’t.
- All the comments in the code are incorrect.
- Official documentation and man pages are incorrect.
- Every blog article, Stack Overflow answer and guide ever published about the scheduler is incorrect."
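For context, here's a rough sketch of the scaling logic the article is talking about, modelled on get_update_sysctl_factor() in kernel/sched/fair.c as I understand it (not verbatim kernel code, and the helper names here are my own):

```c
#include <stdio.h>

/* Integer log2, standing in for the kernel's ilog2() for the cases used here. */
static unsigned int ilog2_u(unsigned int x)
{
    unsigned int r = 0;
    while (x >>= 1)
        r++;
    return r;
}

/* Default (logarithmic) scaling: factor = 1 + log2(cpus), but the CPU count
 * is clamped to 8 first, which is the hard-coded cap being argued about. */
static unsigned int scaling_factor(unsigned int online_cpus)
{
    unsigned int cpus = online_cpus < 8 ? online_cpus : 8;
    return 1 + ilog2_u(cpus);
}

int main(void)
{
    unsigned int counts[] = { 1, 2, 4, 8, 16, 64, 128 };
    for (unsigned int i = 0; i < sizeof(counts) / sizeof(counts[0]); i++)
        printf("%3u cores -> factor %u\n", counts[i], scaling_factor(counts[i]));
    return 0;
}
```

Because the clamp happens before the log is taken, the factor tops out at 4 no matter how many cores are online, which is why the "scales with log2(1+cores)" comments stop being accurate above 8 cores.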
This feels misleading? They're claiming Linux has been hard-coded to 8 cores, but from what they describe in the article it's specifically the scaling of the scheduler's time slices that's capped?
If I understood correctly, the more cores you have, the more you could scale up the time each individual task gets on a CPU core without the end user experiencing extra latency?
I can see that would have a benefit in terms of user perception vs. efficient use of processing time, but it doesn't mean all the cores aren't being used? It just means the kernel is still switching between tasks at, say, 5ms when it could be doing it at 20ms if you have lots of cores and the user wouldn't notice. I can imagine that would be more efficient, but it's definitely not the same as being capped to 8 cores; all the cores and CPUs are still being scheduled, just not in a way that might be the most optimal for some users.
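To put rough numbers on that (this is just me doing the arithmetic, assuming the default base slice is around 0.75ms; check your own kernel's sysctls):

```c
#include <stdio.h>

int main(void)
{
    const double base_ms = 0.75;              /* assumed default base slice */
    unsigned int counts[] = { 2, 4, 8, 16, 64, 128 };

    for (unsigned int i = 0; i < sizeof(counts) / sizeof(counts[0]); i++) {
        unsigned int cpus = counts[i] < 8 ? counts[i] : 8; /* the 8-core clamp */
        unsigned int factor = 1, x = cpus;
        while (x >>= 1)                       /* factor = 1 + floor(log2(cpus)) */
            factor++;
        printf("%3u cores -> slice ~%.2f ms\n", counts[i], base_ms * factor);
    }
    return 0;
}
```

So a 64-core machine ends up with roughly the same ~3ms slice an 8-core machine gets; the slices just stop growing, while the cores themselves are all still in use.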
Is that right? I feel like the title massively overplays the issue if so. It should be fixed, but it doesn't affect how many cores are used or even how fast they work, merely how big the chunks of time each task gets to run and how you can “hide” that from desktop users so the experience feels slick?