Mea Culpa
Author: Andy Hertzfeld
Date: January 1986
Characters: Bruce Horn, Bud Tribble, Jerome Coonen, Bill Atkinson, Andy Hertzfeld, Bill Gates, Burrell Smith
Topics: Hardware Design, Software Design, Technical
Summary: Here are some of our worst mistakes

Almost everyone involved with the design of the original Macintosh is proud of the work that we did on the project, both individually and collectively, but that doesn't mean that we aren't also embarrassed about some of the mistakes that we made. It's worthwhile to consider, if not apologize for, the worst decisions that I was personally responsible for, as well as other major faults in the system software and product as a whole.



The worst blunder that I perpetrated had to do with the memory manager. Bud Tribble adapted the Lisa intra-segment memory manager for the Macintosh (see Hungarian), but we needed to add a few features. One was a "locked" attribute associated with a relocatable memory block that temporarily prevented the block from being moved. Another enhancement was a "purgable" attribute that told the memory manager it could release the block if memory was getting full. The big mistake was where I chose to locate the bits that controlled these attributes.

I decided to put the bits controlling the "locked" and "purgable" attributes in the high-order bits of the master pointer (a pointer to the current address of a memory block), because they weren't being used for anything else. The 68000 had a 24-bit address bus, allowing 16 megabytes of addressable memory, so the high-order 8 bits of an address were not used by the processor. The high bit of a word is also the cheapest one to test, which was another reason I thought it made sense to locate the flags there.
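
To make the trick concrete, here is a minimal sketch in C of how attribute flags can hide in the unused high byte of a 24-bit address. The original Memory Manager was written in 68000 assembly, and the constant names and exact bit positions below are illustrative rather than Apple's:

    /* A sketch (not the actual Memory Manager source) of the flags-in-the-
     * pointer trick.  On the 68000's 24-bit bus, the top 8 bits of a master
     * pointer were ignored by the hardware, so attribute bits could hide
     * there.  Bit positions are illustrative. */

    typedef unsigned long MasterPtr;      /* 32 bits, only the low 24 used as an address */

    #define kLockedBit    0x80000000UL    /* high bit: the cheapest one to test */
    #define kPurgableBit  0x40000000UL
    #define kAddressMask  0x00FFFFFFUL    /* the 24 bits the 68000 actually drove */

    static void *BlockAddress(MasterPtr mp)    { return (void *)(mp & kAddressMask); }
    static int   IsLocked(MasterPtr mp)        { return (mp & kLockedBit) != 0; }
    static MasterPtr LockBlock(MasterPtr mp)   { return mp | kLockedBit; }
    static MasterPtr UnlockBlock(MasterPtr mp) { return mp & ~kLockedBit; }

The trap is visible in BlockAddress: the scheme only works as long as the processor ignores the bits above the mask.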

Of course, it was foolish to count on unused address bits staying unused for very long, and it became a problem when the Macintosh transitioned to the 68020 processor in 1987, with the introduction of the Macintosh II. The 68020 had a full 32-bit address bus, so the memory manager could no longer get away with using the high-order master pointer bits for flags. It wasn't that hard for Jerome Coonen to convert the memory manager to keep the flags in the block header instead of the master pointer (which was where they should have been in the first place), but the practice of manipulating them directly had crept into third-party applications, even though it wasn't supposed to. It took another year or so to identify and eradicate all the transgressions and upgrade the Macintosh software base to be "32-bit clean", so the full address space could be used.
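
For contrast, here is a rough sketch of the 32-bit clean arrangement, with the flags kept in the block header rather than in the pointer itself; the field and function names are hypothetical, not Apple's:

    /* A sketch of the 32-bit clean layout: the attribute flags live in the
     * header that precedes each block, and the master pointer is a plain
     * address that can use all 32 bits. */

    typedef struct BlockHeader {
        unsigned long size;     /* physical size of the block */
        unsigned char flags;    /* locked, purgable, etc. live here now */
        /* ... other bookkeeping ... */
    } BlockHeader;

    #define kLockedFlag   0x01
    #define kPurgableFlag 0x02

    static void LockBlock32(BlockHeader *h)   { h->flags |= kLockedFlag; }
    static void UnlockBlock32(BlockHeader *h) { h->flags &= (unsigned char)~kLockedFlag; }

Applications that went through the official memory manager calls instead of poking the master-pointer bits directly didn't notice the change; the ones that poked the bits were the transgressions that had to be hunted down.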

I paid a more direct price for my second worst mistake, which was using fixed low-memory addresses for toolbox globals. The Apple II kept important system globals in low memory, and the 68000 included a special 'short' addressing mode that made accessing addresses in the first 32K of memory more efficient, which motivated us to use low memory for various globals. While that may have been acceptable for system globals, it was clearly a mistake for the toolbox, since it precluded running more than one application at a time: each application required its own copy of the toolbox globals.

That didn't matter much at first, because with 128K of RAM we barely had enough memory to run a single application at a time. But when the 512K Macintosh was released in September 1984, it started to become an issue. In October 1984, after I left Apple to work on my own, I realized that the problem could be solved by swapping all of the application-dependent low-memory locations whenever you performed a context switch. In a few days, I wrote the core of the Mac's first multi-tasking environment, called Switcher (see Switcher), using the low-memory swapping technique. It kept multiple programs resident in memory at once and switched between them with a nifty scrolling effect. Using low memory the way we did ended up making context switching a few milliseconds slower than it should have been, and made it harder to eventually use a memory management unit, but it didn't turn out to be as devastating as I once thought.
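
The core idea is simple enough to sketch in a few lines of C. The addresses, sizes, and names below are hypothetical; the real Switcher saved and restored a specific set of application-dependent low-memory globals along with the rest of each application's state:

    /* A rough sketch of the low-memory swapping behind Switcher: on a
     * context switch, copy the outgoing application's low-memory globals
     * aside and copy the incoming application's saved copy back in.
     * The base address and size are hypothetical. */

    #include <string.h>

    #define LOWMEM_BASE ((void *)0x0800)   /* start of app-dependent globals (hypothetical) */
    #define LOWMEM_SIZE 0x0400             /* size of that region (hypothetical) */

    typedef struct App {
        unsigned char savedLowMem[LOWMEM_SIZE];  /* this app's private copy of the globals */
        /* ... registers, stack pointer, and other per-app state ... */
    } App;

    static void SwitchTo(App *current, App *next)
    {
        memcpy(current->savedLowMem, LOWMEM_BASE, LOWMEM_SIZE);  /* save the outgoing app */
        memcpy(LOWMEM_BASE, next->savedLowMem, LOWMEM_SIZE);     /* restore the incoming one */
        /* ... then restore registers and resume the incoming application ... */
    }

The copying is what made context switching a few milliseconds slower than it needed to be; with per-application globals the switch would not have had to touch that memory at all.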

We wanted the Macintosh to have a relatively simple system architecture, so it could perform well with limited hardware resources, but perhaps we went a little too far. We decided that we could live without a memory management unit, which was the right decision because of the expense of the associated hardware. But we also decided to eliminate the distinction between user and system code, by running everything in supervisor mode. This empowered applications and simplified the system, but it was a poor choice in the long run, because it made it harder to control the software base as the system evolved.

Even Bill Atkinson made an occasional error. His worst mistake was using signed 16-bit integers as sizes in various QuickDraw data structures like regions and pictures. This limited the maximum size of a region or picture to 32 kilobytes, which became a significant limitation a few years later as memory sizes grew. Bruce Horn's resource manager suffered from a similar problem, using 16-bit offsets that limited the size of resource files unnecessarily.
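
Here is a tiny illustration of the ceiling a signed 16-bit size field imposes; the struct is loosely modeled on a QuickDraw region record rather than copied from it:

    /* A signed 16-bit size field tops out at 32,767 bytes, so any structure
     * that stores its own length this way can never grow past about 32K. */

    #include <stdint.h>

    typedef struct Region16 {
        int16_t rgnSize;       /* size in bytes: maximum value is INT16_MAX (32767) */
        int16_t rgnBBox[4];    /* bounding box: top, left, bottom, right */
        /* ... variable-length region data follows ... */
    } Region16;

    static int FitsInRegion16(long bytes) { return bytes <= INT16_MAX; }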

The biggest problem with the Macintosh hardware was pretty obvious: its limited expandability. But the problem wasn't really technical as much as philosophical. We wanted to eliminate the inevitable complexity that comes with hardware expandability, for both the user and the developer, by having every Macintosh be identical. It was a valid point of view, even somewhat courageous, but not very practical, because things were still changing too fast in the computer industry for it to work, driven by the relentless tides of Moore's Law. Burrell did try to sneak some expandability into the design (see Diagnostic Port), but was only partially successful.

Limited hardware expandability exacerbates other flaws in a design, since it leaves neither you nor third parties the flexibility to easily correct them. One of the biggest mistakes that we made in the first Mac was not providing enough support for a hard disk. Our first file system used a simple data structure that didn't scale well to large drives (in fact, it was suggested to us by Bill Gates in July 1981), and we didn't have a way to get bits in and out of the box at the rates that a hard disk required. In our defense, it was hard for us to consider adding a hard disk to the Macintosh because it was one of the last differentiators from the Lisa, which was more than three times as expensive. But the lack of hardware flexibility made it more difficult for third parties to jump into the breach, although some did anyway.

From a broader perspective, I think that many of our mistakes came from a lack of understanding about exactly what we were doing. We thought that we were making a great product, reincarnating the Apple II for the 1980s, but we were actually creating the first in a long line of compatible computers that would persist for decades, although the latter wouldn't have happened if we hadn't succeeded at the former. Perhaps our design would have given the future more priority over the present if we had understood how long it would last.


8 Comments     
I've been a Mac developer for many years (since the Plus was new, in fact), and it's nice to hear a bit of humility about some of these design decisions, after some pretty arrogant stuff thrown back at us in the late 80s/early 90s, especially regarding the memory manager. I know how it is: you make a design decision based on what you know at the time, and later it's very easy to find you goofed, or at least made it hard to maintain compatibility when you come to fix it later. A group of us Mac developers came up with a solution to the memory manager problem at one time (maybe about 1992), and were quite happy to give the solution to Apple completely free, but DTS told us basically to go away and stop bothering the geniuses there - if what we were proposing could work, they would have already thought of it - that was how we read it, anyway. Maybe there were issues we weren't aware of, but as OS X/Carbon proves, there was always a completely different way to do it without breaking the programming model, even if a recompile was needed. Anyway, ancient history! Instead we were stuck with the tedious fixed-size partitions for another decade. Another area that always frustrated me, though again I understand the design choice, given the very limited memory of the first Mac, was the Menu Manager: everything was stuffed into 8-bit fields, with various things reused according to context. OK, good space use, but very limiting! The Carbon APIs now are really a pretty nice version of those early APIs (though they doubtless use gobs more memory) - I'd be interested to know what the guys who came up with the original ones think of it.
It's not fair to beat yourselves up over most of these things. Considering the mistakes that other OS vendors made (16-bit resource designators, for example), you guys made few mistakes that couldn't be dealt with when it was finally possible to do so. Besides, as app developers, we do much worse stuff with much less justification.
Andy, I recall arguing with you about something similar in Dick Applebaum's "Computers Plus" store around 1986 or so. This was right after the release of the Mac Plus (and before I worked at Apple), and I was asking why it was limited to 4 megabytes of memory when the 68K could address 16 megabytes (of course, some of that 16 megabytes would have to be used for I/O, a screen buffer, and so forth). You challenged me to come up with an application that could actually use more than 4 megabytes; as I recall, I couldn't. BTW, surely there could be some stories about Dick Applebaum and Mark Wozniak's "Computers Plus" chain. I miss them.
The company I'm at bought a Lisa in 1983 under the developer program ($8,560.47 for a Lisa 1, Pascal + Workshop, printer, and parallel card; that would be over $15K in today's dollars). The first of our various Macs arrived in 1984. I was involved with a product developed with Think Pascal, first sold in 1989, and exclusively on the Mac until 1994, when the market seemed to stall. We did a Microsoft Windows version, which sold quickly, and our customer base fast became 90% Windows users.

After working on both systems, it became apparent that the worst thing about the Mac Toolbox was that every time a new feature was added (MultiFinder, zoomable windows, pop-up menus, etc.), it was necessary to go back and add code to the application to support that feature, whereas Windows included "default" handling that tended to automatically support events not explicitly coded for, and therefore allowed apps to "implement" new features added to the O/S even if those features (events) didn't exist when the app was originally written. Think was also the best Pascal development environment I had ever seen, until Delphi came out, which made MacApp, the Think libraries, and similar GUI packages look quite archaic. Of course, things like Microsoft Windows and Delphi had the advantage of belatedly standing on the shoulders of things that the Mac made popular.

Although Macs were initially very innovative in both hardware and software, I think most customers were like me and didn't really care what was inside the box, as long as the software had all those neat Mac features, was easy to use, and did what we wanted. Because the software half of the Mac was what stood out, we thought of Apple as primarily a software company, not a hardware company, and we had a hard time understanding why Apple (or anyone else) seemed to take so long porting the Mac or a Mac-like system to other machines. Having to do both hardware and software was obviously necessary until the Mac was released, but at that point Apple should have realized they had let the cat out of the bag: turning their ideas into commodities would from then on move into the hands of companies that did either hardware or software (i.e., Intel and Microsoft), and would no longer be driven by companies like Apple that tried to do both. Imagine trying to tell Jobs (or anyone else at Apple) in 1984 that Google's 2004 Zeitgeist pie charts would show 91% of their users on Windows vs. the Mac's 4%...

PS. Congratulations on your site and your excellent writing. I couldn't tear myself away, and had to read every damn article (106) the first day I ran into your site.
Around the time we were working on making all our code 32-bit clean for the 68020, there was a folktale going around about a conversation between John McCarthy (the LISP guy) and Andy Hertzfeld:

McCarthy: There is no problem in computer science that can't be solved by a couple more levels of indirection.

Hertzfeld: Or a couple more low-memory globals.
I was in 4th grade in 1984, and I never programmed any of the early Macintoshes. Recently I have been reading a copy of Inside Macintosh that was printed in 1985. The thing I've noticed from the book is that the 1984 Macintosh had an event loop, but it didn't have a callback model; i.e., the application had to look at every event and determine where the mouse hit, whether a control had been hit, whether windows needed to be managed, whether the event needed to be passed to a desk accessory, and so on. Most people will agree that the modern callback model is an improvement. My question is, was a callback type of model ever discussed during the design of the Macintosh system software? Was it considered and rejected? Were there technical limits that would have made it impossible, or was it just never thought of? It seems, with the advantage of hindsight, that a callback model would have been more work for the ROM but in return would have substantially reduced the size of application binaries, which would have been a good thing in the days of limited RAM. I'm saying this with the advantage of history; it's easy to look back on these things. But I think some of the neatest inventions are the ones that were feasible for years before they were thought of - they were just waiting for someone to conceive of them. I wonder if "the GUI callback" falls in that category.
"t's not fair to beat yourselves up over most of these things. Considering the mistakes that other OS vendors made (16-bit resource designators, for example), you guys made few mistakes that couldn't be dealt with when it was finally possible to do so." Yep, the IBM PC folks made even worse mistakes, which was compounded by the 8086 processor. For example, putting hardware interrupts in vectors that was back then marked as "reserved" by Intel, which became a problem when the 80186 and later processors started to use these vectors. Also, though Mac and Windows applications rarely do direct hardware access, even during the time when it was allowed, DOS applications often did.
"Perhaps our design would have given the future more priority over the present if we had understood how long it would last." Well, don't all software designers and architects believe what we are designing is going to last forever? IMHO, the real problem is that under pressures of deadlines, we stop short of more rigorous critical review of our design strategies.