Evolution of Early Operating Systems: From Core Memory to Modern Platforms
One of the first widely recognized operating systems was GM-NAA I/O, developed in 1956 for the IBM 704. Another significant early operating system was CTSS (the Compatible Time-Sharing System), developed at MIT in the early 1960s. What follows is a personal look back at the early development and evolution of operating systems.
My First Encounter with an OS
My very first operating system experience was with Honeywell’s Mod 1 OS on the Honeywell 200 series of computers in 1965. This OS required at least 12k of memory and three to six magnetic tape drives. Since then, we have certainly come a long way.
Early Era Operating Systems
Here is a more or less chronological list of early operating systems that I encountered and used:
Honeywell Mod 1 OS (Honeywell 200 Series, 1965)
Honeywell Multics (1968)
Dartmouth Timesharing System (1966)
Dartmouth BASIC (1966)
IBM 1130 Disk Monitor System (1965)
TOPS-10 (1967)
APL/VS (1973)
RSX-15 (1969)
Digital Group Phimon (1970)
TOPS-20/Tenex (1975)
RT-11 (1968)
Perkin Elmer OS/32 (1973)
CP/M (1974)
DOS (1981)
MP/M (1982)
MacOS (1984)
Unix (1971)
VM/CMS (1973)
The First 10 Operating Systems
While I was a student in college around 1967, I was hired as the second shift "operator" of an IBM 1440 used by a small manufacturing company. This computer had only 4096 bytes of "core" memory: tiny life-preserver-shaped magnetic iron cores that were non-volatile and hand-sewn into a matrix by workers in the Far East.
There wasn’t an “operating system” as we understand it today, but there were 96 7-bit bytes of code that lived full-time in main memory, since iron-core memory isn’t volatile. This code loaded your program from a card reader into the remaining 4000 bytes of core memory and then branched into your code. That was the entire OS—your program had to do everything else, such as printing to a printer or accessing a disk drive. Yes, there were disk drives with removable disks: one for inventory, one for billing, one for payroll, and so on. And the disk drives used high-pressure oil to move their read/write heads.
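The resident loader's job can be sketched in modern terms. This is a loose, hypothetical Python simulation (the names, the deck format, and the returned entry address are all invented for illustration), showing the idea of a tiny permanent loader that copies card images into core and then "branches" to the loaded program:

```python
# Hypothetical simulation of a 1440-style resident card loader.
# The real loader was 96 bytes of machine code living in low core;
# here we just model "load cards after the loader, then branch".

CORE_SIZE = 4096
LOADER_SIZE = 96                 # resident loader occupies low core


def boot(cards):
    """Copy 80-column card images into core after the resident loader.

    Returns (core, entry_address); branching into the loaded program
    is simulated by returning the address where execution would begin.
    """
    core = bytearray(CORE_SIZE)
    addr = LOADER_SIZE           # user program starts right after the loader
    for card in cards:           # each card is an 80-byte record
        core[addr:addr + len(card)] = card
        addr += len(card)
    return core, LOADER_SIZE     # "branch" to the first loaded byte


deck = [b"PROGRAM CODE".ljust(80)]      # one 80-column card image
core, entry = boot(deck)
print(entry)                            # 96
print(core[entry:entry + 12].decode())  # PROGRAM CODE
```

Everything past the loader and the loaded deck stays zeroed, which is roughly the situation the author describes: about 4000 bytes left for your program and whatever I/O routines you spliced into it.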
If you wanted to access a disk, printer, or card reader, you had to splice the appropriate pre-compiled code, as punched cards, into your program deck, which itself consisted of the punched cards emitted by an "assembler" program. And that code also took up space in your precious 4000 bytes.
But here is where it gets really weird: the hardware provided 8 bits for each byte, just as today's computers do. On a 1440, however, one of those 8 bits was reserved to define the length of operations such as copy and compare. So your program code and data could only use 7 bits per byte, and you were required to set or clear the 8th bit, known as the "wordmark," in only the proper places, of course, to delimit the length of operations. But this system served a manufacturing company quite well.
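The wordmark idea can be sketched roughly in Python. This is a loose, hypothetical simulation, not a faithful model of actual 1401/1440 addressing (which scanned fields in a particular direction and used BCD character codes); the function names and layout are invented for illustration:

```python
# Hypothetical sketch of 1440-style "wordmark" length encoding.
# Each memory cell holds 7 data bits plus a reserved 8th bit; an
# operation like "move" has no length operand and instead copies
# characters until it reaches a cell whose wordmark bit is set.

WORDMARK = 0x80          # 8th bit reserved as the wordmark
DATA_MASK = 0x7F         # 7 usable data bits per byte


def set_wordmark(memory, addr):
    """Set the wordmark bit on one cell to mark a field boundary."""
    memory[addr] |= WORDMARK


def move_field(memory, src, dst):
    """Copy characters from src to dst until a source wordmark
    ends the field. Returns the number of characters moved."""
    count = 0
    while True:
        cell = memory[src + count]
        # copy only the 7 data bits; leave the destination's wordmark alone
        memory[dst + count] = (memory[dst + count] & WORDMARK) | (cell & DATA_MASK)
        count += 1
        if cell & WORDMARK:  # wordmark on the source cell ends the operation
            break
    return count


# Example: a 5-character field "HELLO" whose length is implied
# solely by the wordmark set on its last byte.
mem = bytearray(64)
for i, ch in enumerate(b"HELLO"):
    mem[i] = ch
set_wordmark(mem, 4)
moved = move_field(mem, 0, 10)
print(moved)                                              # 5
print(bytes(b & DATA_MASK for b in mem[10:15]).decode())  # HELLO
```

Note how the length "5" appears nowhere in the move itself: get a wordmark in the wrong place and the operation runs short or long, which is exactly why the author had to set and clear those bits "in only the proper places."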
The Transition to Modern Computing
When I transitioned to an IBM 360, which had 8 usable bits per byte and encoded the length of operations explicitly in the instruction rather than in "wordmark" bits, my reaction was 'WOW, this sure makes coding EASY!'
Over the years, operating systems have evolved to provide more functionality, increased speed, and better usability. From simple core memory-based systems to the complex, sophisticated systems we have today, the journey of computing has been fascinating and transformative.