[H-GEN] Priorities

Robert Brockway robert at timetraveller.org
Sun Jun 8 04:22:07 EDT 2003



On Sat, 7 Jun 2003, Greg Black wrote:

> I'd have to study the source a bit to be sure what happens
> there, but I suspect that most priority algorithms work in such
> a way that all processes will get a shot at the CPU eventually.

These days, definitely.  I have read in various texts that some algorithms
were susceptible to the "process starvation" problem, and that such
algorithms did end up in production in some OSes.

> However, if a user or a program sets its nice value as high as
> possible and then doesn't get much CPU time, we know who to
> blame :-)

When I was in the maths dept at UQ, any user process left running for too
long (measured in seconds of cputime) would get re-niced.  This was
because the Sun box we were running on had to accommodate 100 simultaneous
users or more.  Interestingly enough, the professors in the department were
specifically excluded from the renicing that went on... :)

I heard about a little tool recently that renices any misbehaving process.
The tool is called "verynice".  It is apparently highly configurable - if
anything looks like it might run away with the CPU, it can get reniced
to keep it in line.
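
I haven't looked at verynice's source, but the basic idea is easy enough
to sketch in userland.  The following is just my own rough illustration,
not verynice itself: walk /proc, and anything that has racked up more
than an arbitrary 30 seconds of CPU time gets pushed to nice 19.  A real
tool would want an exemption list (professors!), a config file, and to
run as root so it can renice other users' processes.

#include <stdio.h>
#include <stdlib.h>
#include <dirent.h>
#include <ctype.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/resource.h>

#define LIMIT_SECONDS 30        /* arbitrary threshold, a la the UQ policy */

int main(void)
{
    long hz = sysconf(_SC_CLK_TCK);     /* clock ticks per second */
    DIR *proc = opendir("/proc");
    struct dirent *de;

    if (!proc)
        return 1;

    while ((de = readdir(proc)) != NULL) {
        char path[64], comm[64];
        long utime = 0, stime = 0;
        FILE *fp;

        if (!isdigit((unsigned char)de->d_name[0]))
            continue;                   /* only numeric entries are PIDs */

        snprintf(path, sizeof path, "/proc/%s/stat", de->d_name);
        fp = fopen(path, "r");
        if (!fp)
            continue;

        /* Fields 14 and 15 of /proc/<pid>/stat are utime and stime in
         * clock ticks; field 2 (the command name) sits in parentheses. */
        if (fscanf(fp, "%*d (%63[^)]) %*c"
                       " %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u"
                       " %ld %ld", comm, &utime, &stime) == 3) {
            long secs = (utime + stime) / hz;
            if (secs > LIMIT_SECONDS) {
                int pid = atoi(de->d_name);
                printf("renicing %s (pid %d, %ld s of CPU time)\n",
                       comm, pid, secs);
                setpriority(PRIO_PROCESS, pid, 19);
            }
        }
        fclose(fp);
    }
    closedir(proc);
    return 0;
}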

> This would not happen.  When a signal is posted for a process, a
> large set of things gets done in the kernel -- e.g., checks to
> see if the signal is being ignored or masked; checks to see if
> the process is blocked on an event or stopped by a signal;
> etc. -- but eventually it determines whether or not the signal
> is to be delivered.  If the signal is to be delivered, and the
> process is runnable but not the current process, it is marked as
> needing to be scheduled, so it will effectively get a very high
> priority and be set to run in order for the signal to be able to
> be delivered/noticed[1].

This is a good point.
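
Out of curiosity I knocked up a quick userland check of it.  It doesn't
prove anything about the kernel path, it just shows the visible effect: a
child at nice 19 sits blocked in pause() while a CPU hog runs at normal
priority, and the interesting number is how quickly the niced child gets
to run its handler and exit once SIGUSR1 is posted, rather than waiting
its turn behind the hog.  The one-second settle time is arbitrary.

#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <sys/wait.h>

static void handler(int sig)
{
    /* write() is async-signal-safe, unlike printf() */
    const char msg[] = "nice-19 child: got SIGUSR1\n";
    (void)sig;
    write(STDOUT_FILENO, msg, sizeof msg - 1);
}

int main(void)
{
    pid_t hog, target;
    struct timeval before, after;
    long usec;

    /* A CPU hog at normal priority keeps the CPU saturated. */
    hog = fork();
    if (hog == 0)
        for (;;)
            ;

    /* The signal target runs at nice 19 and blocks in pause(). */
    target = fork();
    if (target == 0) {
        setpriority(PRIO_PROCESS, 0, 19);
        signal(SIGUSR1, handler);
        pause();                /* returns after a signal is handled */
        _exit(0);
    }

    sleep(1);                   /* let both children get going */

    gettimeofday(&before, NULL);
    kill(target, SIGUSR1);
    waitpid(target, NULL, 0);
    gettimeofday(&after, NULL);

    kill(hog, SIGKILL);
    waitpid(hog, NULL, 0);

    usec = (after.tv_sec - before.tv_sec) * 1000000L
         + (after.tv_usec - before.tv_usec);
    printf("signal posted -> nice-19 child handled it and exited in %ld us\n",
           usec);
    return 0;
}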

> [1] This is true for BSD; I'd be surprised if it was not true
>     for other systems, although I haven't bothered to check.

Without having UTSL'd or RTFM'd, I'm inclined to believe any modern Unix
would act this way.

A bit of empirical evidence now - I run SETI@home at nice 19, ie it is
as nice as it can get.  My system is currently CPU bound (CPU at 100%
constantly) thanks to a CPU-intensive app I'm running (at the system's
standard priority).  SETI@home is consistently pulling 10-12% of the
CPU, which is a bit higher than I would have guessed.  This is on Linux
2.4.20 btw.
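
For anyone who wants to reproduce this without leaving SETI@home running
for hours, here is a small self-contained version of the same experiment:
two identical busy loops, one at nice 0 and one at nice 19, counting
iterations in shared memory for 10 seconds.  The run length is an
arbitrary choice, and on an SMP box you would want to pin both loops to
one CPU for the comparison to mean much; the exact split will depend on
the kernel and its scheduler.

#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/mman.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <sys/wait.h>

int main(void)
{
    /* One shared counter per child, in anonymous shared memory. */
    volatile unsigned long *count = mmap(NULL, 2 * sizeof *count,
                                         PROT_READ | PROT_WRITE,
                                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    int niceness[2] = { 0, 19 };
    unsigned long total;
    pid_t pid[2];
    int i;

    count[0] = count[1] = 0;

    for (i = 0; i < 2; i++) {
        pid[i] = fork();
        if (pid[i] == 0) {
            setpriority(PRIO_PROCESS, 0, niceness[i]);
            for (;;)
                count[i]++;     /* burn CPU, count how often we get to run */
        }
    }

    sleep(10);                  /* arbitrary run length */

    for (i = 0; i < 2; i++) {
        kill(pid[i], SIGKILL);
        waitpid(pid[i], NULL, 0);
    }

    total = count[0] + count[1];
    printf("nice  0: %lu iterations (%.1f%%)\n",
           count[0], 100.0 * count[0] / total);
    printf("nice 19: %lu iterations (%.1f%%)\n",
           count[1], 100.0 * count[1] / total);
    return 0;
}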

This has sparked my interest, and I'll be delving deeper into the
current algorithm used in Linux.  It would be interesting to compare it to
other unixen.

Years ago (1.0.x, 1.2.x ?) I actually patched my kernel with an alternative
scheduling algorithm.  Although the new scheduler was reputed to be "fairer"
with CPU time than the standard Linux scheduler of the day, it was
noticeably slower.  I ran it for about 30 minutes and rebooted to my old
kernel :)

Rob

-- 
Robert Brockway B.Sc. email: robert at timetraveller.org  ICQ: 104781119
Linux counter project ID #16440 (http://counter.li.org)
"The earth is but one country and mankind its citizens" -Baha'u'llah



