
6.1.7

Outline OS resource management techniques: scheduling, policies, multitasking, virtual memory, paging, interrupt, polling.

 

Teaching Note:

Technical details as to how these are carried out will not be required, but it is expected that students will be familiar with these techniques and understand when and why they are used.

 

Sample Questions - FORMER CURRICULUM - probably a bit beyond what is necessary, especially (b):

(a) Define interrupt. [2 marks]
(b) Describe how an interrupt is detected and identified by the processor. [4 marks]

 

JSR Notes

So, we'll take them one at a time, including the "when" and "why" (alluded to in the Teaching Note) they are used:

 

OS Policies (& Mechanisms)

The idea with policies and scheduling is that there are thousands of jobs the hardware of a computer has to do, so which ones are done when, and how are they scheduled to make the overall functioning of the computer as efficient as possible? When designing an operating system, there are several strategies you can take. Keep in mind that interrupts and other techniques can influence or interrupt default scheduling algorithms. Do make sure to look at the "several strategies" link above; it includes strategies such as First Come First Serve, Shortest Job First, Priority Scheduling, and Round Robin Scheduling.

General analogies:

First Come First Serve - Cafeteria with a properly observed queue
Shortest Job First - Grocery store checkout where you let the person with one item go in front of you if you have a lot of groceries
Priority Scheduling - interrupt example of: telephone interruption --> interrupted by person at the door --> interrupted by the fire alarm
Round Robin Scheduling - person who takes care of the house (as in the "a woman's work is never done" situation)

Homework analogies:

First Come First Serve - Mr Rayworth assigned something first, two days ago, before Mr Richards, who only assigned his today.
Shortest Job First - little homework assignments before studying for a big test
Priority Scheduling - HL before SL
Round Robin Scheduling - when you get tired of one subject, switch to another, and then go back to finish off the first

CPU First Come First Serve - click on one icon in the dock, and then another, and another; the one you clicked on first should be given priority. And at startup of the computer, there's the question of which startup items load first: the ones that start to load first, or the smallest (see next).
CPU Shortest Job First - At system startup, it could be that the smallest applications in the Start-up list are allowed to fully load first.
CPU Priority Scheduling - A Facebook message notification. Windows has a "High Priority" setting in Task Manager for determining which processes have priority, and it stops other processes until that one is finished.
CPU Round Robin Scheduling - happens most often, particularly for longer operations, since many things need to be processed basically at the same time.

When the CPU is scheduling, it has to look up the policies to be considered in determining if certain processes or applications have priority over others. So we can see policies as the rules the CPU uses to choose which activities to perform, and in which order.
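Just to make the idea of a policy concrete, here is a minimal Java sketch (nothing like how a real OS is actually written) in which a "policy" is simply a rule for putting jobs in order. The Job class, the job names, the burst times and the priority numbers are all made up for the illustration.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// A made-up Job class: a name, a guess at how long it needs the CPU, and a priority.
class Job {
    String name;
    int burstTime;   // estimated CPU time needed, in milliseconds
    int priority;    // lower number = more urgent

    Job(String name, int burstTime, int priority) {
        this.name = name;
        this.burstTime = burstTime;
        this.priority = priority;
    }
}

public class PolicyDemo {
    public static void main(String[] args) {
        List<Job> jobs = new ArrayList<>(List.of(
                new Job("print spooler", 40, 3),
                new Job("key press", 1, 1),
                new Job("virus scan", 500, 5)));

        // Shortest Job First: the policy is simply "order the jobs by burst time".
        jobs.sort(Comparator.comparingInt((Job j) -> j.burstTime));
        System.out.println("SJF order:      " + names(jobs));

        // Priority Scheduling: the policy is "order the jobs by priority value".
        jobs.sort(Comparator.comparingInt((Job j) -> j.priority));
        System.out.println("Priority order: " + names(jobs));
    }

    static String names(List<Job> jobs) {
        StringBuilder sb = new StringBuilder();
        for (Job j : jobs) sb.append(j.name).append("   ");
        return sb.toString();
    }
}

The scheduler (see the next section) is then the part that actually hands the CPU over to the jobs in that chosen order.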

In understanding how policies and scheduling (see next topic) relate, an analogy would be that it's the school's IB mock exam policies that dictate the mock exam scheduling. The main policy is that HL courses have priority over SL courses. The scheduling is the actual mechanism of Ms Canobie making the schedule with that priority and other policies in mind.

When? Policies just exist; in terms of when they are used: continually by the scheduling mechanisms.

Why? So that more urgent tasks can take priority over less urgent ones, leading to a faster, less error-prone operation of the device.

Note that policies are protocols in a way, since they are sets of rules that the CPU must follow in terms of how various processing is scheduled.

 

OS Scheduling

Whereas policies are what scheduling is based on, the actual process of scheduling itself is the allocation of CPU and other resources to various tasks, via particular hardware and software "mechanisms". The schedule is thus the outcome of the actions of the mechanisms which have been used, adhering to certain policies.

With scheduling, memory and CPU usage, for example, are physically handed over to specific processes/applications for certain amounts of time, based on policy algorithms hard coded in the OS.

A good analogy of scheduling is a traffic cop; he/she continually looks around, and based on certain policies (such as emergency vehicles take priority) decides who should enter the intersection and signals to them to do so. So "you, process over there, come into the CPU now... now you have to leave, and you, over there, yes you, process, it's your turn, come into the CPU... now, you four processes, I want you to swap in and out, equal time, for the next 40 milliseconds..."

So it's the Control Unit (CU) part of the CPU which, using certain OS scheduling policies, implements a schedule of what is processed when.

When? Scheduling occurs continually.

Why? To keep many processes and applications running smoothly and efficiently all at the same time.

 

 

[A final clarification of the difference between the terms scheduling, policies, and another associated term you'll often see, "mechanisms": Policies are the rules used. Scheduling is the action of deciding which processes can occur when, based on the rules. Mechanisms are the software/hardware ways that those actions are implemented. So interrupts are mechanisms; using a queue data structure instead of a list would be choosing a different mechanism to create a schedule according to certain policies.]

Silly final analogy:
A couple of school policies - students have to have lunch cards, and teachers need to have ID cards or visitor badges

Schedule and Mechanism (in this case not followed through - because the cafeteria ladies are too nice, and the teachers are too lazy!) - students use lunch cards for efficient service in the cafeteria, and all adults on campus are identifiable by badges.

 

Sample Question: Outline how an OS uses scheduling and policies to manage the fair and efficient sharing of the CPU.

General Answer: The OS creates schedules for all processes going on at the same time by using certain policies, such as first come first serve in certain situations and shortest job first in others. And in other cases, when certain things happen which need to be urgently attended to, those are allowed to interrupt other on-going processes.

 

 

 

(OS) Multitasking

Multitasking is basically doing (or seeming to do) more than one task at a time. Note that it is also often referred to as multi-programming.

Analogies:
- Juggling many balls at one time: even though you are only using two hands, many balls are being kept in the air.
- Spinning of plates circus act: only one plate is actually being spun at any one time, but because the plate spinner attends to each one in order, re-spinning them, they all keep spinning at the same time.
- Patting your head and rubbing your stomach at the same time: to get this going, you have to consciously think about one and then the other.
- Students doing homework++: you are multitasking only in the same way that a plate spinner spins plates; you go back and forth from one thing to another, from a chat entry, to a YouTube song choice, to your work, to a Skype conversation. But do note that you are actually doing only one thing at a time, just as a (single-core, single-processor) CPU can do only one thing at a time, just as a plate spinner only actually attends to one plate at a time, though all continue to spin.

A multitasking operating system is any type of OS that is capable of running more than one program at a time. Most modern operating systems can do this, but how they do it varies.

"Pseudo" Multitasking

At the most basic level, two applications can be swapped back and forth into the CPU continually and so fast that both seem to be operating at the same time. (This would be similar to the analogies listed above; "plates being spun", but really the CPU only truly attending to one thing at any given fraction of a second.)
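To picture that swapping, here is a toy Java sketch of round-robin style time slicing, with a made-up ready queue of tasks: one "CPU" gives each task a small slice of attention, sends it to the back of the queue, and moves on, so all of the tasks appear to progress at the same time. The task names and slice size are invented.

import java.util.ArrayDeque;
import java.util.Queue;

public class RoundRobinDemo {
    record Task(String name, int workLeft) {}   // workLeft in made-up "milliseconds"

    public static void main(String[] args) {
        Queue<Task> readyQueue = new ArrayDeque<>();
        readyQueue.add(new Task("music player", 5));
        readyQueue.add(new Task("chat client", 3));
        readyQueue.add(new Task("download", 8));

        final int SLICE = 2;   // how much work each task gets per turn

        while (!readyQueue.isEmpty()) {
            Task t = readyQueue.poll();
            int done = Math.min(SLICE, t.workLeft());
            System.out.println(t.name() + " runs for " + done);
            int remaining = t.workLeft() - done;
            if (remaining > 0) {
                readyQueue.add(new Task(t.name(), remaining));   // back of the queue it goes
            } else {
                System.out.println(t.name() + " finished");
            }
        }
    }
}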

The "doing homework, but "multitasking" " example where along with your homework, you are: Facebook chatting, watching things like "11 Ways to be Successful", reading Reddit, checking Netflix, and so on and so on is not true multitasking because more than one thing is not actually being "processed" at the same time. Never-the-less, yes, it's kind of multitasking.

True Multitasking

For the difference between serial jobs, which cannot be multitasked on a computer, and parallel jobs, which can be, see this video. Or think of my example of rotating 8 "pixels" on a piece of paper with one hand/pencil, vs. two hands/pencils.

If a CPU has multiple cores... and also possibly with multi-threading... processes can actually, truly, occur "concurrently", so multitasking nowadays can actually be doing more than one thing at a time. (Again, this would be a good time to take a look at the Activity Monitor utility and see all the things which seem to be going on at the same time.)

Note that in order for true multitasking capabilities of an operating system to be able to be used, the architecture of the CPU must also be capable of multitasking, and so must be the application(s) in use. True multitasking occurs only when all three - OS, CPU, and application - work together to do multiple tasks truly at the same time. For example, in class it was interesting to see from my Dock icon for Activity Monitor that only two of the CPUs were active when I launched Adobe Illustrator, so in that case the application must have been set up to take advantage of two cores, but nothing more.

When? With a modern computer, all the time. One of a myriad of examples would be an anti-virus daemon constantly checking a file being downloaded from the Internet, as it's being downloaded. And within the running of one program itself there are many situations where multitasking is possible. This is particularly true of multimedia applications in which the same thing is done thousands or millions of times, for example rotating a picture (the example used in the video linked above). With two processors, half the pixels can be rotated by one core, while at the same time the other half are being rotated by another core.
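Here is a rough Java sketch of that "half the pixels each" idea, using two Java threads; the pixel array and the "rotate" calculation are just stand-ins, not real image processing. If two cores are free, the two halves really are processed at the same time.

public class ParallelRotateDemo {
    public static void main(String[] args) throws InterruptedException {
        int[] pixels = new int[1_000_000];     // pretend image data

        Thread firstHalf  = new Thread(() -> rotate(pixels, 0, pixels.length / 2));
        Thread secondHalf = new Thread(() -> rotate(pixels, pixels.length / 2, pixels.length));

        firstHalf.start();        // both halves can now be processed concurrently,
        secondHalf.start();       // genuinely in parallel if two cores are free
        firstHalf.join();
        secondHalf.join();

        System.out.println("All pixels processed.");
    }

    // Stand-in for "rotate this range of pixels".
    static void rotate(int[] pixels, int from, int to) {
        for (int i = from; i < to; i++) {
            pixels[i] = pixels[i] + 1;   // placeholder transformation
        }
    }
}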

Note that, this said, many processes cannot be run concurrently, so multitasking is not possible. The easiest thing to think of here is a Netbeans/Intellij program running; one line after another has to be done in sequence (unless you are using Java threads, for example, but that is outside of the scope of what we learned). This is a good time to be reminded that it is the OS which is primarily responsible for determining when multitasking can happen, and for implementing it via the hardware in conjunction with the application.

Why? To allow more than one thing to go on at a time (or at least, for all intents and purposes, be going on at the same time). And considering all the various things we do with our computers, even at the top level we often do have more than one (big) thing going on at a time, like having Skype running while listening to YouTube, doing your homework, and checking Facebook updates. Meantime, behind the scenes (as seen in Activity Monitor) there are lots of other smaller things that are constantly running at the same time.

 

 

Paging

Wikipedia (verbatim, pretty well): In computer operating systems, paging is one of the memory-management schemes by which a computer can store and retrieve data from secondary storage for use in main memory. In the paging memory-management scheme, the operating system retrieves data from secondary storage in same-size (non-contiguous) blocks (called pages). The main advantage of paging over memory segmentation is that it allows the physical address space of a process (in RAM) to be noncontiguous. Before paging came into use, systems had to fit whole programs into memory contiguously, which caused various storage and fragmentation problems.

(To remember what "paging" is, think of it as making an "index page" of a book, or a look-up table.)

There is a great minute or two description of how paging works in this YouTube video starting at about the 2:00 minute mark. Key here is that though the "pages" of a structure in their logical form are treated as if they are contiguous, they actually map to non-contiguous physical (RAM) memory "frames". (Contiguous means to be touching each other, non-contiguous not touching each other; for example, we talk of the 48 states of mainland USA as being contiguous, but Hawaii and Alaska are not contiguous with the other states.) (If you want to watch another video from the same professor on the other memory management scheme, segmentation, it's here.)

Mr Rayworth Summary: paging is a way for a computer to "look at" and work with a program or data structure as if it were logically organized all together in one place, even though in (RAM) memory it physically is not. This is possible because the "pages" that the program or data structure is broken down into are spread around the memory, but each page is able to be found, with its address in memory being recorded in a page table.
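A toy Java sketch of that look-up idea, with a made-up page table and made-up frame numbers: the logical pages are numbered 0, 1, 2... as if contiguous, but the physical frames they map to are scattered around RAM.

import java.util.HashMap;
import java.util.Map;

public class PageTableDemo {
    public static void main(String[] args) {
        Map<Integer, Integer> pageTable = new HashMap<>();
        pageTable.put(0, 7);    // logical page 0 lives in physical frame 7
        pageTable.put(1, 2);    // page 1 lives in frame 2
        pageTable.put(2, 19);   // page 2 lives in frame 19 - not beside frame 2!

        final int PAGE_SIZE = 4096;                 // bytes per page/frame
        int logicalAddress = 1 * PAGE_SIZE + 100;   // byte 100 of logical page 1

        int page   = logicalAddress / PAGE_SIZE;
        int offset = logicalAddress % PAGE_SIZE;
        int frame  = pageTable.get(page);           // look it up in the page table
        int physicalAddress = frame * PAGE_SIZE + offset;

        System.out.println("Logical address " + logicalAddress + " -> physical address " + physicalAddress);
    }
}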


Among the advantages of allowing non-contiguous physical memory blocks: a program no longer has to fit into one single contiguous block of RAM, any free frame anywhere in memory can be used, and the storage and fragmentation problems mentioned in the Wikipedia quote above are avoided.

One disadvantage of paging is that since the physical frames are not beside each other, it may take a relatively long time to "gather them up" when a file is read from the hard drive. To get around this, "defragmentation" can occur when the hard drive and CPU are not busy, in which all of the pieces (frames) of a file are put together in one place.

When? In terms of when, you can assume that all modern computers use paging as their approach to managing main memory.

Why? The advantages are listed above, but they boil down to overall more efficiency in terms of working with memory.

 

 

OS Management of Virtual Memory

Virtual memory is when part of a secondary storage device, such as a hard drive, is used to simulate RAM. By having a "swap" space of virtual memory, more processes can be loaded into (what is seen as) RAM, and so the computer can do more things at once. What is swapped into virtual memory are any parts of a program that are only infrequently used, such as exception handling code, methods that are only rarely used, or parts of arrays which are either empty or not currently being used. When those infrequently used parts of a program are needed, they are swapped back into actual RAM.
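A very rough sketch of that swapping idea in Java (the page numbers and contents are invented): a page that is sitting in the swap space on the hard drive is only brought into RAM the first time it is actually needed.

import java.util.HashMap;
import java.util.Map;

public class VirtualMemoryDemo {
    static Map<Integer, int[]> ramFrames = new HashMap<>();   // pages currently in RAM
    static Map<Integer, int[]> swapSpace = new HashMap<>();   // pages parked on the hard drive

    public static void main(String[] args) {
        swapSpace.put(3, new int[]{42, 43, 44});   // page 3 is rarely used, so it sits on disk

        readPage(3);   // first access: "page fault", swap it into RAM
        readPage(3);   // second access: already in RAM, no disk access needed
    }

    static void readPage(int pageNumber) {
        if (!ramFrames.containsKey(pageNumber)) {
            System.out.println("Page fault: swapping page " + pageNumber + " in from disk");
            ramFrames.put(pageNumber, swapSpace.get(pageNumber));
        }
        System.out.println("Reading page " + pageNumber + " from RAM");
    }
}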

The same professor as referenced above has another excellent YouTube video on virtual memory. What is relevant to you is a little over the first two minutes. Here is a diagram from that video.

[Diagram from the video: virtual memory]

When? Whenever there are more processes in logical memory than will fit in the RAM (i.e. you launch applications and load data which, if fully loaded, would take up more than the available RAM). (And of course, only on systems which support virtual memory, which is pretty well all modern computers and operating systems.)

Why? To allow more processes to be loaded into RAM at one time.

 

 

Interrupts (There is good coverage of this on pages 332 and 348 of the former curriculum yellow and red text book.)

An interrupt is a signal sent to the CPU which interrupts the current process. It is generated by either hardware or software when a problem or an exception occurs. Whatever the problem or exception, it is deemed urgent enough to "Hold the presses!!", as it were.

There is an interrupt register in the CPU which holds the address of the current highest priority interrupt. All other current interrupts along with other previously occurring processes are put onto stacks of "unfinished business". As interrupts are handled, and "popped" off the interrupt stack, eventually the interrupt register will be null, which means the other processes that were going on may resume.
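Here is a toy Java model of that "unfinished business" stack (the process names and the order of arrival are invented): each new, more urgent interrupt pushes the current work onto the stack, and once the interrupts are handled, the stack is popped until the original process resumes.

import java.util.ArrayDeque;
import java.util.Deque;

public class InterruptStackDemo {
    public static void main(String[] args) {
        Deque<String> unfinished = new ArrayDeque<>();   // the "unfinished business" stack

        String current = "normal process: rendering a web page";
        System.out.println("Running: " + current);

        // A hardware interrupt arrives - suspend the current work.
        unfinished.push(current);
        current = "interrupt: keyboard input";
        System.out.println("Interrupted! Now handling: " + current);

        // A higher-priority interrupt arrives before the first one is finished.
        unfinished.push(current);
        current = "interrupt: power failure warning";
        System.out.println("Interrupted again! Now handling: " + current);

        // Handle interrupts most-recent-first, then resume the original work.
        while (!unfinished.isEmpty()) {
            current = unfinished.pop();
            System.out.println("Resuming: " + current);
        }
    }
}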

An analogy would be an Emergency ward in a hospital: as a person with an upset stomach is being looked at, a person with a cut on their arm enters, and that new situation interrupts the conversation with the stomach-ache person. But then, when a person arrives in a lot of pain with a broken collar bone from a big ice hockey hit, that situation interrupts the arm-cut person ("Here, just apply pressure with this", they are told). And then a person arrives who is severely bleeding from a car accident, and that situation naturally interrupts the collar bone situation.

Many hardware interrupts are hard coded onto the ROM of the machine. Based on where they are stored in ROM, each has a particular offset. The address pointing to the code which handles them can be found by a calculation using that offset number.

Meantime, software interrupts will come from the code itself. We regularly code interrupts in the form of Java Exceptions. So, we try a certain block of code, and if a problem occurs (i.e. an exception happens), then that part of the running application is interrupted by the exception, which is handled through the catch block.


In terms of scheduling policies (see above), interrupts would take priority over everything else. And there will be a certain hierarchy of interrupts, with specific interrupts being able to interrupt other interrupts.

You'll note that with some versions of the Windows operating system, you are able to customize the BIOS (Basic Input/Output System) and change the priorities of interrupts. Though this is not recommended!


Examples:

Hardware interrupts:

- A modem having lost its signal to the Internet interrupts the CPU to warn it of that. (Given all the applications and processes which depend on a connection, this is an urgent thing for the computer to know about.)
- A printer reports that there is a paper jam. (Naturally, it would not be good for the computer to send any more print requests in that situation.)

Software interrupts:

- In the middle of a method in a running application, there is an attempt to access the [arr.length] element of an array, and since that does not exist, trying to work with that element is impossible. It therefore makes sense that the method is interrupted and the flow of control moves to a certain "catch" block of code.

- The number 0 is passed to a method as a parameter, which at some point will be used as the divisor in a mathematical equation. But dividing by 0 is not possible, so a software interrupt will be triggered, and that method will be halted, with flow of control being directed to a certain "catch" block of code.
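Those two software interrupt examples look like this as Java exceptions (the array and the numbers are made up): in both cases the normal flow of the method is interrupted and control jumps to the matching catch block.

public class SoftwareInterruptDemo {
    public static void main(String[] args) {
        int[] arr = {10, 20, 30};

        try {
            System.out.println(arr[arr.length]);   // arr[3] does not exist
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("Interrupted: no element at index " + arr.length);
        }

        try {
            System.out.println(divide(12, 0));     // dividing by zero is impossible
        } catch (ArithmeticException e) {
            System.out.println("Interrupted: " + e.getMessage());
        }
    }

    static int divide(int numerator, int divisor) {
        return numerator / divisor;   // throws ArithmeticException when divisor is 0
    }
}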

When? When there is something urgent which needs the CPU's attention.

Why? If the situation is not handled, big problems will occur, such as crashing of the computer or freezing of an application.

 


Polling
(page 348 of the yellow and red textbook)

**** remember to keep all of this in the context of the Operating System working through the CPU.

Polling is the act of a computer (CPU) checking various devices that are connected to it. There are two general things it can be checking.
--> First of all, a computer (CPU) might "canvass" or "poll" a device to see if it has any request of the CPU or any data to send its way. (Teacher in a classroom analogy: "Question, Johnny? Question, Sally? Question, Tafadzwa?")
--> Secondly, polling can be used to check the readiness of another device, like checking to see if devices on a network are still "Ok". (Wikipedia link.) (Teacher in a classroom analogy: "You OK, Johnny?, You OK, Sally? You OK, Tafadzwa?")

The CPU will poll at regular intervals of its own choosing, and/or when it has the capacity to do so. It is an example of a master/slave relationship, in that only the master (i.e. the CPU) can determine when the exchange happens.

One "request" example of a job that will only be done when nothing else is being processed is if the hard drive needs its index updated. So if there is no other processing going on (i.e. the user is not on the Internet, or working in Word etc.) the CPU will poll the hard drive to see if it wants its index updated (the kind updating of the file structure it will let you know it is doing sometimes when you click on Spotlight). The "slave" hard drive can put the request in any time it wants, but the request will only be honored by the "master" CPU when it has time.

Analogies

One "request" analogy is a teacher checking to see if there are any questions or anything else needed after teaching a lesson.

One "readiness check" analogy is checking the readiness of equipment in a fire station. Checking to see that the gas tank is full, there is water in the tank of the fire truck, the boots and clothes are where they should be and so on. This kind of checking is done whenever there is time to do so.

Actual OS Examples

Checking for data/requests:

- Regular collection of weather data from weather sensors around a city every 30 seconds.
- Uploading of transaction files of ATMs to the bank server in a batch way at the end of the day.

Checking readiness:

- A network server polling the machines on the LAN to make sure everything is Ok regarding their connections.
- The application Remote Desktop checking every once in a while to see if computers are available or not on the network for it.
- Polling a printer to see if it's ready for the next print job, or indeed even the next character.

When? Regularly (anywhere from many times a second, to only once daily for example), as determined by the CPU and OS together, and typically when the CPU is not busy with other tasks.

Why? There are things which need to be monitored and done by the CPU, but not necessarily urgently or in real time as certain processes are progressing.

 

Comparing Polling and Interrupts

Polling comes from the CPU toward other devices, whereas interrupts come into the CPU from other devices or parts of the computer.

Polling is "casual" doing of stuff when there is time, whereas interrupts are stuff that Has To Be Done, Now, if at all possible.

 

 

 

 

-------------------- NOT NEEDED -----------------------

JSR Notes - FORMER CURRICULUM - 6.5.3 Define interrupt and polling.:

Here's a good general point to reiterate in terms of what the IB wants you to know. There are in fact different levels of what they want you to know, and you need to be prepared for each. At a very basic level, they want you to be able to define these things. And they also want you, at a higher level, to be able to explain and outline them. But higher still, they want you to be able to apply them to specific situations. More on the two higher levels later, but for now appreciate that being well versed at those higher levels does not necessarily mean that you've got a good handy definition at the ready, so for 6.5.1, 6.5.2, and 6.5.3, do keep it simple, but accurate.

Interrupt Definition: A signal informing a program that an event has occurred. When a program receives an interrupt signal, it takes a specified action (which can be to ignore the signal). Interrupt signals can cause a program to suspend itself temporarily to service the interrupt.

The one adjustment I’d make to the above definition is that the signal can come from a program or from a piece of hardware, like a printer.

Polling Definition:  There are actually 2, and that’s perfect, since there are two slightly different scenarios, one which is polling from a "master" to a "slave", and one where the polling is from one peer to another peer.  So note that the following two definitions are valid, and are both polling – just different kinds.
(1) Polling is a CAM (channel access method). In a master/slave scenario, the master queries each slave device in turn as to whether it has any data to transmit. If the slave answers yes then the device is permitted to transmit its data. If the slave answers no then the master moves on and polls the next slave device. The process is repeated continuously. Also see contention and token passing. (So the CPU does the poll every once in a while, whereas the device being polled makes the "request" or readies the data as it happens, though with the expectation that the request will not be honored until it is polled.)
(2) Making continuous requests for data from another device. For example, modems that support polling can call another system and request data.

More on the above difference later, but for now the ‘take-away’ point is that where an interrupt is an abrupt demand of the CPU for attention (which can be ignored), with polling it’s rather a polite request for attention that is usually solicited by the CPU.

 

JSR Notes - FORMER CURRICULUM - way beyond what is needed:

The function of the interrupt register is well explained in the text book. But to help you, here's a bit more, by way primarily of clarification.

Basically there are times when it is in everyone's best interest that the operation being executed at any given time be interrupted by some other process that should have priority. This is often because something urgent has happened, which has to be attended to right away. But often enough, it's just something that happens, and when it happens, it should be acknowledged and dealt with. And it can be a hardware interrupt or a software interrupt. Usually the urgent kinds of interrupts are hardware, and the not-so-urgent ones are software.

A hardware interrupt could occur when, for example, a paper mis-feed at the printer has just happened. The computer could put that request for action in a queue and deal with whatever else it's dealing with first. But the sooner the user goes to attend to the printer the better. So the interrupt signal comes into the computer through a specific interrupt "port", and the operating system, recognizing the communication as being an interrupt, goes to the place in ROM where the information about how that specific computer handles that specific interrupt is stored, and executes it.

Software interrupts can be as simple as someone clicking a button in a GUI program. In order to respond to the user in a timely fashion, the CPU puts on the back-burner other things that it has on the go, and it executes the code coming in from the "mouse-down" "event". Certainly there is a hierarchy controlling which interrupt can trump which (rock, paper, scissors, anyone?), but as soon as is reasonably possible, the CPU will acknowledge and respond to the user's click.

In terms of the difference between queuing behavior and interrupt behavior, think of the following analogy. Imagine a school cafeteria where all students actually queue up to be served at lunch. To make this analogy even more bizarre, when teachers come along, they are allowed to go to the front of the line - they can interrupt the queue, because they are more guaranteed to have work that they have to get back to. It is in the best interest for everyone at the school that they not waste their time waiting in lunch queues. And, like the first school I taught at, in Istanbul, it was not only for that reason, but out of respect for age and position. So in that same light, if the Director or other Administrators were to come along, they would be invited to interrupt the queue of teacher "interrupts". And though he never did, if the Prime Minister (or better yet, the Land Mafia boss that ran the school to launder his money...) came along for lunch, they would most certainly have been given top interrupt priority. In a computer, there are very many instructions that queue up to be processed when the CPU is able to do so. But there are other instructions which are more urgent, and so interrupt.

So what's actually held in the interrupt register? You guessed it, an address (or in the case of hardware interrupts, an address offset position - more on that below). With no current interrupt, the interrupt register is null. But when a certain signal is received, an address of (or offset to) the start of the required interrupt instructions is put in the interrupt register.

In the case of a hardware interrupt, it's actually an offset position of the correct interrupt-responding code that is put in the interrupt register. All hardware interrupts are "hard-wired" into the ROM, which is specific to the specific machine. In fact, all the interrupt instructions are found in a certain contiguous space in ROM, of which the starting point is known. So, like the book describes, when a hardware interrupt is triggered, the code to be fetched from ROM will be at: the known starting address + the offset value in the interrupt register.

Meantime, with a software interrupt, it's just a plain old address.

An interrupt can interrupt the processing of an interrupt that was triggered previously, but isn't finished yet. So the Control Unit of the CPU has to keep track of which interrupts have not been finished. It keeps these in a LIFO (Last In - First Out) order, and works its way down through the stack when it's able to.


JSR Notes - FORMER CURRICULUM - 6.5.5 Compare the features of DMA, interrupt systems and polling systems.
Could be the most useful of these three sections from the former curriculum, but will have to be streamlined and edited.

The textbook does as good a job as it can with this assessment statement. DMA really doesn’t fit nicely with the other two; it’s particularly the other two that need to be compared.

Firstly though a little reiteration of DMA, for what it’s worth. It is a special system that is set up expressly for primarily one purpose: backing up. So to compare it to interrupt systems and polling, well it’s for something entirely different for the most part.

So, comparing the two: the one that demands attention (in the case of the interrupt), and the one that politely waits for a request before giving it (in the case of polling)...

Interrupts are to be preferred when the action should happen as soon as possible. Worthy of note at this point is that interrupts have certain priorities (and this is noted in the teaching note above). So just because a certain interrupt is sent from a device or a program, it does not mean that everything comes to a screeching halt until the issue has been resolved. Rather it just means that the request in question is initiated by the device/program. And the reason it is done is that it very well could be something that if left unattended, could screw up other things/processes. And since interrupts have certain set priorities (there are 256 different kinds of software interrupts, for example, each with its own priority level), the CPU will by definition attend to the things first that it should attend to first.

Meantime, the poll is (in the first instance) a request for requests, as it were. It is something initiated at regular intervals by the CPU. So the CPU will poll devices and applications to see if they have any request. The applications, in turn collect their requests in anticipation of the regular check-in by the CPU.

The advantage of the interrupt method is that there is no lag time to when the CPU is to make its decision about what to do or not. And it doesn’t require regular processing by the CPU when there is no problem. So for urgent things that don’t happen too regularly, it works fine. But you can imagine that if it were overused, then it would be a burden. In the analogy of Dr. Bieber in his office, imagine if every minute of every day members of the school community were lined up outside his door demanding his attention.

So the advantage of the poll system is that the CPU can turn its attention to "outside" requests when it's got the time. The only disadvantage of this is that it will often waste its energy requesting the requests, as it were, since oftentimes there will be no requests waiting.

A balance between interrupts for decidedly urgent requests, and polling by the CPU, regularly, is the best overall way to go.

And at this point do recall the second definition of polling; one of continuous requests by one device of another. This would not happen between modem and CPU for example, but it could between modem and Internet Service Provider, while the user is connected to the Internet. The idea here is that the request for data is continuously made, but unlike an interrupt, the asking device does not need to take over CPU control, it is simply asking the same thing again, in the case of this example, for data from the ISP. This sort of polite request is expected by the other device.

Note that you should have a specific external device interrupt example ready, and also a polling example ready, but you can use the ones from 6.5.4 for this. With this assessment statement you should be more focused on the comparison of interrupt and polling, as was done above.