A computer system stores its data in a hierarchy of storage devices with different capacities, costs, and access times.
The Library Analogy™ lends itself nicely to this. If we think of the contents of books as data in memory:
| Where the books are | Where the data is | Comment |
|---|---|---|
| in front of me | processor's registers | where most of the work is done |
| on my desk | processor's caches | where frequently accessed data is kept |
| on the library shelves | main memory (RAM) | main, active storage for data |
| at the library's supplier (Book Depot) | secondary memory (hard drive) | archival storage |
| amazon.com | networked storage | data out in the world |
So why don't we just connect the processor directly to main memory, or even secondary memory, so it can hold far more information? We CAN, but every access would just take too long. Imagine if, instead of reading from the book in front of us while we're taking notes, we left the books on the shelves and walked back to our desk after reading each fact. We would spend most of our time walking back and forth.
Why can't we just bring the shelf to the table, then? Or bring a table over to the shelf? This creates a ton of problems as soon as more than one person wants books from that shelf: your table blocks everyone else from getting near it, and things become very inefficient.
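The payoff of keeping frequently used data close can be quantified with the standard average memory access time (AMAT) formula. Here is a minimal sketch in Python; the latency numbers are illustrative assumptions (a 1 ns cache "desk" backed by 100 ns RAM "shelves"), not measurements:

```python
def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    """Average memory access time: every access pays the hit time;
    a fraction miss_rate of accesses also pays the penalty of
    fetching from the slower, more distant tier."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# With a 5% miss rate, the average stays close to the fast tier:
print(amat(1.0, 0.05, 100.0))  # 6.0 ns on average
# Without a cache, every access walks to the "shelf":
print(amat(1.0, 1.00, 100.0))  # 101.0 ns
```

This is why a small, fast cache in front of a large, slow memory works so well: as long as most accesses hit the cache, the average access time is dominated by the fast tier.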
We also have to look at possible cost per GB vs. speed:
| Storage Technology | Size (GB) | Cost ($) | $/GB | Access Time |
|---|---|---|---|---|
| DDR4 DRAM (memory) | 16 | $400 | $25.00 | ~12-15 ns |
| SSD flash | 256 | $200 | $0.80 | 0.1 ms |
| Hard drive | 2048 | $100 | $0.05 | 12 ms |
The pattern holds across the hierarchy: the cheaper a storage technology is per gigabyte, the slower it is to access.