So our library has run into a problem: one processor (our library assistant) is not enough to handle all of the students asking for access to the library. Over years of experimentation, library engineers have come up with increasingly clever ways to speed up the processing of tasks.
We’ll start our trek through the performance-enhancing methods with batch processing. The CPU would take a group of jobs during one scheduled block and run them back to back, performing their operations one at a time. Even though only one program ran at a time, it was faster to group jobs that all needed similar data. We could compare this to a study group going to the library and studying the same material from the same book.
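Under the hood the idea is simple: collect the jobs that need the same data, load that data once, and run the jobs one after another. A minimal Python sketch, where every function and job name is made up for illustration:

```python
# A minimal sketch of batch processing: jobs that need the same "book"
# are collected into one batch, the book is loaded once, and the jobs
# run back to back. All names here are illustrative.

def load_book(title):
    # Stand-in for an expensive load (disk read, tape mount, ...)
    return f"contents of {title}"

def run_batch(book_title, jobs):
    book = load_book(book_title)        # loaded once for the whole batch
    return [job(book) for job in jobs]  # jobs run one at a time

results = run_batch(
    "Intro to Operating Systems",
    [lambda book: f"summary of {book}",
     lambda book: f"word count of {book}"],
)
print(results)
```

The win comes entirely from sharing the expensive load across the batch; the jobs themselves still run strictly one at a time.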
We quickly evolved this into multiprogramming, a scheme in which multiple processes are loaded into main memory, and the CPU looks for idle time in one process to conduct tasks for another. Idle time most often comes from waiting on I/O, such as user input. So while one process waited for the user to do something, the CPU would run tasks for another process. This differs from batch processing in a few ways. We are still using study groups, but instead of grouping students by the material they need, the library assistant helps different students regardless of what the data might be. The students no longer have to be studying the same content; they can be grouped all at once, and the library assistant will help one student while another is waiting for something to happen.
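To make the switching concrete, here is a toy simulation (not real OS code): each "process" is a Python generator whose steps are tagged as compute or I/O, and the loop playing the CPU runs a process until it blocks, then switches to another that is ready. All names and steps are invented.

```python
# Toy simulation of multiprogramming: a process runs until it blocks on
# I/O (e.g. waiting for user input), at which point the CPU switches to
# another ready process instead of sitting idle.

def process(name, steps):
    for kind, label in steps:           # kind is "cpu" or "io"
        yield kind, f"{name}: {label}"

def cpu(procs):
    log = []
    ready = list(procs)
    while ready:
        proc = ready.pop(0)
        for kind, entry in proc:
            log.append(entry)
            if kind == "io":            # blocked: let it wait, run someone else
                ready.append(proc)
                break
        # if the loop ended without blocking, the process has finished
    return log

log = cpu([
    process("A", [("cpu", "compute"), ("io", "wait for input"), ("cpu", "finish")]),
    process("B", [("cpu", "compute"), ("cpu", "finish")]),
])
print(log)
```

Notice that B gets the CPU the moment A starts waiting, and A only resumes after B is done, which is exactly the multiprogramming bargain.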
Multiprogramming evolved into multitasking. As the name might suggest, instead of running a whole process while another waited, we run small tasks from each process. Instead of the library assistant helping with the whole homework assignment, it can help find answers to specific questions in the homework, one after another. Each small task takes very little time, giving the sense that multiple processes are being performed at the same time, when really it’s just one task at a time, each task from a different process.
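The round-robin flavor of this idea can be sketched with Python generators: each "process" yields one small task at a time, and the scheduler runs one task per turn before sending the process to the back of the line (the `homework` and `multitask` names are invented for illustration):

```python
from collections import deque

# Toy round-robin multitasking: the CPU runs exactly one small task per
# process per turn, so every process makes visible progress even though
# only one task ever runs at a time.

def homework(student, questions):
    for q in questions:
        yield f"{student} answers {q}"

def multitask(processes):
    queue = deque(processes)
    log = []
    while queue:
        proc = queue.popleft()
        try:
            log.append(next(proc))      # one small task only
            queue.append(proc)          # back of the line
        except StopIteration:
            pass                        # process finished
    return log

log = multitask([homework("Ana", ["Q1", "Q2"]),
                 homework("Ben", ["Q1", "Q2"])])
print(log)
```

The interleaved output is the whole point: Ana and Ben each appear to be helped "at the same time," even though the assistant only ever handles one question at once.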
However, the library engineers came up with a seriously genius idea. What if instead of having one library assistant we had…wait for it…waiiiit for it…still waiting…TWO assistants. 🤯
That’s right. Someone was smart enough to say, “hey let’s just hire another assistant, huh?”
In reality, this was a huge breakthrough because we had to find a way to place two CPU cores on one CPU chip, or die. The benefits are obvious: we could now run multiple processes at the same time, while still using multiprogramming or multitasking on each core. We could now run at least twice as fast as before. This approach is called multiprocessing.
As we got more advanced we had another brilliant idea: why don’t we just keep hiring more library assistants (CPU cores)? 2! No, 4! No, 6! No, 8! No, 32!
You can check how many cores are in your computer. Most are quad core, but newer computers will have 6 or 8 cores, at least. On top of this, we gave programmers something called threads. A thread lets a single program split its work into several streams of execution that the operating system can schedule across different cores at the same time. We can imagine a student asking the librarian for permission to use two library assistants at once, and specifying which questions each assistant will help with simultaneously. This created an insanely productive student.
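A quick sketch in Python: `os.cpu_count()` reports the number of cores, and the standard `threading` module lets one program hand work to several threads at once (the look-up function and the questions are invented for illustration):

```python
import os
import threading

# How many "library assistants" (cores) does this machine have?
print(f"This machine has {os.cpu_count()} cores")

answers = {}

def look_up(question):
    answers[question] = f"answer to {question}"   # each thread fills its slot

# One student, two assistants: start both threads, then wait for both.
threads = [threading.Thread(target=look_up, args=(q,)) for q in ("Q1", "Q2")]
for t in threads:
    t.start()
for t in threads:
    t.join()          # the student waits for both assistants to finish
print(answers)
```

One caveat worth knowing: in CPython, the global interpreter lock means threads like these only achieve real parallelism for I/O-bound work; for CPU-bound work you would reach for the `multiprocessing` module instead.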
Finally we also developed multi-access networked systems. Now you no longer need to come to the library, you can connect remotely and ask for processor time and provide the data needed. Some systems allow many users to use their processors all at once. Madness in the library ensues unless you have an incredibly efficient librarian conducting the orchestra.
As we start to have more processes running on more CPU cores and threads, we need to make sure that everyone stays organized. Regardless of whether each CPU core is multiprogramming or multitasking, the operating system must allot dedicated chunks of time to processes in a fair manner. This means that a schedule will be followed and processes can’t use more than their allotted time. The operating system may implement short-term, medium-term, and long-term schedulers, and must inform processes of when they can use processor time. There are many kinds of scheduling algorithms, but the most common are: