Y13 Unit 5 - Resource Management (HL)

Dealing With Limited Resources

So our library has run into a problem: one processor (our library assistant) is not enough to handle all of the students asking for access to the library. Over years of experimentation, library engineers have come up with increasingly sophisticated ways to speed up the processing of tasks.

Single Threads

We’ll start our trek through the performance-enhancing methods with batch processing. The CPU would take a group of similar jobs during one scheduled block and run them back to back, performing their operations one at a time. Even though only one program ran at a time, it was faster to batch jobs that all needed similar data, because that data only had to be loaded once. We could compare this to a study group going to the library and studying the same material from the same book.
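Here is a minimal sketch of the idea in Python. The job names, data sets, and durations are all made up for illustration; the point is just that jobs needing the same data get grouped and run back to back.

```python
from collections import defaultdict

# Hypothetical jobs: (job name, data it needs, time it takes).
jobs = [
    ("print_report",   "sales_db", 3),
    ("monthly_totals", "sales_db", 2),
    ("payroll",        "hr_db",    4),
]

# Group jobs by the data they need, so each data set is loaded only once.
batches = defaultdict(list)
for name, data, duration in jobs:
    batches[data].append((name, duration))

clock = 0
for data, batch in batches.items():
    print(f"loading {data} once for the whole batch")
    for name, duration in batch:
        clock += duration
        print(f"  ran {name}, clock is now {clock}")
```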

We quickly evolved this into multiprogramming, in which multiple processes are loaded into main memory at once, and the CPU looks for idle time in one process to conduct tasks for another. Idle time most often comes from waiting on input/output, such as user input. So while one process waits for the user to do something, the CPU runs tasks for another process. This differs from batch processing in a few ways. We are still using study groups, but instead of grouping students by the data they need, the library assistant helps different students regardless of what their data might be. The students no longer have to be studying the same content; they are simply grouped together, and the library assistant helps one student while another is waiting for something to happen.
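A minimal sketch of this, assuming we model each process as a Python generator that hands the CPU back whenever it blocks on I/O. The process names and burst patterns are invented for illustration.

```python
from collections import deque

def process(name, bursts):
    # Each burst is (compute_units, blocks_on_io). The generator yields
    # whenever the process blocks, handing the CPU back to the scheduler.
    for compute, blocks_on_io in bursts:
        print(f"{name}: computing for {compute} units")
        if blocks_on_io:
            print(f"{name}: waiting for I/O")
            yield

ready = deque([
    process("editor", [(2, True), (1, True), (3, False)]),
    process("backup", [(5, True), (4, False)]),
])

while ready:
    p = ready.popleft()
    try:
        next(p)          # run until this process blocks on I/O...
        ready.append(p)  # ...then give the CPU to the next one
    except StopIteration:
        pass             # this process has finished
```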

Multiprogramming evolved into multitasking. As the name suggests, instead of running a whole process while another waits, we run small tasks from each process in turn. Instead of helping with the whole homework assignment, the library assistant helps find answers to specific questions in the homework, one after another. Each small task takes so little time that it feels as though multiple processes are running at the same time, when really it’s just one task at a time, each from a different process.
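A minimal sketch of that round-the-table behavior, with made-up process names and tasks: the CPU takes one small task from each process before moving on to the next.

```python
from collections import deque

# Each process is just a queue of small tasks (all invented for illustration).
run_queue = deque([
    ("browser", deque(["fetch page", "parse HTML", "render"])),
    ("music",   deque(["decode frame", "play frame"])),
])

while run_queue:
    name, tasks = run_queue.popleft()
    print(f"{name}: {tasks.popleft()}")   # one small task from this process...
    if tasks:
        run_queue.append((name, tasks))   # ...then move on to the next process
```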

Multi Threads

However, the library engineers came up with a seriously genius idea. What if instead of having one library assistant we had…wait for it…waiiiit for it…still waiting…TWO assistants. 🤯

That’s right. Someone was smart enough to say, “hey let’s just hire another assistant, huh?”

In reality, this was a huge breakthrough, because we had to find a way to fit two CPU cores on one CPU chip, or die. The benefits are obvious: we could now run two processors at once, each doing its own multiprogramming or multitasking, so we could run up to twice as fast as before. This is called multiprocessing.
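A minimal sketch using Python’s standard multiprocessing module, which can spread work across real CPU cores. The worker function and the “books” here are made up for illustration.

```python
from multiprocessing import Pool

def count_words(book):
    # A stand-in CPU task; the "books" below are invented strings.
    return book, len(book.split())

if __name__ == "__main__":
    books = ["the quick brown fox", "to be or not to be", "lorem ipsum dolor"]
    # Two workers ~ two library assistants, each able to run on its own core.
    with Pool(processes=2) as pool:
        for title, words in pool.map(count_words, books):
            print(f"{words} words in: {title!r}")
```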

As we got more advanced, we had another brilliant idea: why don’t we just keep hiring more library assistants (CPU cores)? 2! No, 4! No, 6! No, 8! No, 32!

You can check how many cores your computer has. Most are quad-core, but newer computers have at least 6 or 8. Within each core we also implemented something called a thread, which gives programmers the ability to specify which tasks the system should run concurrently, possibly on different cores. We can imagine a student asking the librarian for permission to use two library assistants at once, specifying which questions each assistant will help with simultaneously. This creates an insanely productive student.
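A minimal sketch of the threading model using Python’s standard threading module; the student and question names are made up. (One caveat: in CPython, the global interpreter lock means threads don’t speed up CPU-bound work, so this shows the programming model rather than a guaranteed speed-up.)

```python
import threading

def answer(student, question):
    # A stand-in task; names and questions are invented for illustration.
    print(f"assistant helping {student} with {question}")

# Two "assistants" working on different questions for the same student.
threads = [
    threading.Thread(target=answer, args=("Ana", "question 1")),
    threading.Thread(target=answer, args=("Ana", "question 2")),
]

for t in threads:
    t.start()   # both assistants begin at (almost) the same time
for t in threads:
    t.join()    # wait for both to finish before moving on
```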

Finally, we also developed multi-access networked systems. Now you no longer need to come to the library: you can connect remotely, ask for processor time, and provide the data needed. Some systems allow many users to use their processors all at once. Madness in the library ensues unless you have an incredibly efficient librarian conducting the orchestra.

Scheduling

As we start to have more processes running on more CPU cores and threads, we need to make sure that everything stays organized. Regardless of whether each CPU core is multiprogramming or multitasking, the operating system must allocate dedicated chunks of time to processes in a fair manner. This means that a schedule will be followed and processes can’t use more than their allotted time. The operating system may implement short-term, medium-term, and long-term scheduling, and it must inform processes of when they can use processor time. There are many kinds of scheduling algorithms, but the most common are:

  1. First-Come, First-Served Scheduling (FCFS) – There is no written schedule; there is a line, and the students who arrive first get processor time until their job is finished. You must wait in line for your turn.
  2. Shortest-Job-First Scheduling (SJF) – As the name implies, the librarian estimates how long each student’s task will take and schedules the shortest jobs to use library-assistant time first. Each process runs until it is finished.
  3. Priority Scheduling – Using a combination of factors such as resource needs, how often the process runs, and how much processor time it requires, the librarian calculates a priority for each job. The highest-priority jobs get the librarian first, and each runs until it is finished.
  4. Round Robin Scheduling – No priority is given; every process gets a fixed slice of time, say 100 milliseconds, and the scheduler lets each process use the processor for that long. It goes in a circle, giving every process in the queue an allotment until all processes have finished their tasks (see the sketch after this list).
  5. Multilevel Queue Scheduling – There are essentially several schedules, each grouping processes by the type of task. Each schedule then uses its own scheduling algorithm from the four above (or some other algorithm!).
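To make the round-robin idea concrete, here is a minimal sketch in Python. The quantum and the process burst times are made up for illustration.

```python
from collections import deque

QUANTUM = 100  # milliseconds per turn; the burst times below are invented

ready = deque([("P1", 250), ("P2", 120), ("P3", 80)])  # (name, ms remaining)

clock = 0
while ready:
    name, remaining = ready.popleft()
    used = min(QUANTUM, remaining)    # run for one quantum, or less if done
    clock += used
    remaining -= used
    print(f"{clock:4} ms: {name} ran for {used} ms")
    if remaining > 0:
        ready.append((name, remaining))   # not finished: back of the line
    else:
        print(f"{clock:4} ms: {name} finished")
```

Notice that the queue itself is in first-come, first-served order; round robin is essentially FCFS plus a time limit on each turn.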