Infinite Mac on a (Virtual) LAN #


I’ve added a Cloudflare Durable Object-based LAN mode to Infinite Mac, allowing networked games of Marathon, Bolo and anything else that works over AppleTalk. To try it out, use any subdomain — the subdomain defines the “zone” where packets are broadcast, and thus which instances are visible to each other.

Remember to aim for the ground when using rocket launchers

Remembering The LAN

Though a computer without internet access can feel like a useless brick nowadays, in the 80s and early 90s that was the default state. But even without the internet, local networking was somewhat common, especially in offices (some solutions were more successful than others, and some were elegant kludges). Classic Macs were a part of this, with AppleTalk arriving only one year after the launch of the Mac. Beyond the office use-cases like file sharing and networked printers, this was used for games, with Bolo and Minotaur (Bungie’s first game) being early examples.

After Infinite Mac was announced, it took less than an hour for someone to ask for (Marathon) network play — I was not the only one with fond memories of Thunderdome. The Basilisk II architecture is quite modular, and adding networking support for a platform is mostly a matter of implementing a few functions called by the virtual Ethernet driver. There was also an existing option to relay packets over UDP, which had pointers for how the special broadcast addresses needed to be handled. Others had used this approach to get the emulator on the internet.

I began by implementing an ether_js.cpp version of the Ethernet functions that sent/received data from the JS side, and a basic framework for passing the packets to/from the worker to the UI process. Initially that used a BroadcastChannel for local testing, but then I added a Cloudflare Durable Object-based transport (somewhat inspired by this Doom port). That turned into a bit of a yak shave because the tooling and recommended setup for Cloudflare Workers had changed since I last updated them. However, it worked! The appeal of this approach is that it should have reasonable latency since the durable object will be created close to the clients that end up using it.
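
A minimal sketch of the relay logic such a transport needs (the names and shapes here are mine, not the actual Durable Object code): AppleTalk's Ethernet broadcast frames fan out to every other instance in the zone, while unicast frames go only to the matching peer.

```typescript
const BROADCAST_MAC = "ff:ff:ff:ff:ff:ff";

interface Peer {
  macAddress: string;
  send(frame: Uint8Array): void;
}

class Zone {
  private peers = new Map<string, Peer>();

  addPeer(peer: Peer): void {
    this.peers.set(peer.macAddress, peer);
  }

  // Returns the MAC addresses the frame was delivered to (handy for testing).
  relay(senderMac: string, destMac: string, frame: Uint8Array): string[] {
    const delivered: string[] = [];
    for (const [mac, peer] of this.peers) {
      if (mac === senderMac) continue; // never echo back to the sender
      if (destMac === BROADCAST_MAC || destMac === mac) {
        peer.send(frame);
        delivered.push(mac);
      }
    }
    return delivered;
  }
}
```

The same routing applies whether the peers are BroadcastChannel subscribers in one browser or WebSocket connections to a Durable Object.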

Infinite Mac network architecture

As I was developing this, I noticed that the emulated Mac would pause for 5 seconds during boot whenever AppleTalk was enabled. I verified this on actual hardware and confirmed that it was not an emulation glitch. I decided to add some logging to understand why this was happening, and was amused to see that GitHub Copilot was capable of generating suggestions for TypeScript code that parsed AARP packets, which is surely not a common combination:

GitHub Copilot suggesting AARP packet logging code

Unfortunately, after some quality time with Inside AppleTalk it turned out that this delay was by design. AppleTalk nodes will self-assign an address, send out broadcast packets with it, and then wait to see if any nodes report conflicts. While this might be fixable by patching the ROM (a technique that Basilisk II makes heavy use of), it’s not something that I’m doing at this time. Because of the delay in booting, AppleTalk is not enabled in the default Infinite Mac instance, only when using a subdomain (to choose a “zone”). This also has the benefit of ensuring that zones do not get too big. Coincidentally wildcard subdomains recently became free on Cloudflare, enabling this approach.
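
For illustration, the address-acquisition dance can be reduced to this sketch (heavily simplified from Inside AppleTalk; the callback names are hypothetical). The real protocol's repeated probe-and-wait rounds are what produce the multi-second boot delay.

```typescript
// A node guesses an address, probes the network for conflicts, and retries
// with a new guess if another node objects. In the real protocol each probe
// round is ~10 broadcasts with waits in between; here isAddressTaken stands
// in for "did anyone answer the probe before the timeout?".
function acquireAddress(
  isAddressTaken: (addr: number) => boolean,
  randomAddr: () => number,
  maxTries = 10
): number {
  for (let i = 0; i < maxTries; i++) {
    const candidate = randomAddr();
    if (!isAddressTaken(candidate)) {
      return candidate; // nobody objected within the timeout
    }
  }
  throw new Error("could not acquire an AppleTalk node address");
}
```

Note that even the success path has to wait out the full probe timeout, which is why the delay is unavoidable without ROM patching.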

I added some basic end-to-end latency tracking and was surprised to see that even when using the BroadcastChannel-based transport it was averaging around 8ms, and sometimes approaching 16ms. These times were suspiciously close to the 60 Hz screen refresh interrupt, and the cause turned out to be my approach of hooking into the input loop — it was only triggered before refreshing the screen. I fixed this by moving input reading to a higher frequency (1,000 Hz) — this both reduced network latency and made mouse input feel smoother.
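
The latency tracking itself can be as simple as stamping each packet with the sender's clock and keeping a running average on the receiving side (a sketch under assumed names, not the exact implementation):

```typescript
interface StampedPacket {
  sendTime: number; // ms; assumes sender and receiver share a clock,
                    // which holds for BroadcastChannel on one machine
  payload: Uint8Array;
}

class LatencyTracker {
  private total = 0;
  private count = 0;

  record(packet: StampedPacket, receiveTime: number): void {
    this.total += receiveTime - packet.sendTime;
    this.count++;
  }

  get averageMs(): number {
    return this.count === 0 ? 0 : this.total / this.count;
  }
}
```

For cross-machine transports the raw deltas include clock skew, so round-trip timing would be needed instead.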

Infinite Mac screenshot showing 1ms ping times
More acceptable ping times

The actual experience when playing Marathon is mixed. The network protocol was designed for LAN play and does not handle the increased latency of the internet well. If playing for more than 15-20 minutes, the game state gets out of sync between players. That being said, it’s satisfying that it works at all.

Infinite Mac 7.5 Weeks Later #

Infinite Mac has been quite a whirlwind. I wasn’t sure if it would reach the Mac (or retro computing) community, but it did, and went beyond that too, including to Ars Technica and Hacker News (twice). The most gratifying thing was seeing (and in some cases hearing from) people who were active Mac developers and community members in the 90s, including Andrew Welch, the author of Shufflepuck Cafe, Jorg Brown, James Thompson, and others. It was also great to see people actually using the apps to make things — there’s more to old OSes than just playing games (though those are great too).

Infinite Mac Requests Graph
Cloudflare Workers are pretty nice for handling traffic spikes

Besides the ego boost, I also got a lot of feedback, and have made a bunch of changes since then. In addition to adding a few more things to the library and making bug fixes, the notable changes are:

  • I added a companion site. Besides making a new base system image, I also changed how the library is stored, keeping it on a separate disk from the OS. This makes uploading and hosting of alternate OSes much easier, since the same library disk image can be used, instead of duplicating ~1GB of data. I probably could have leaned in more on the content hashing by forcibly aligning files to chunk offsets, but that would have been brittle.
  • The HFS file system that I was generating had some malformed data structures, which became more apparent as the number of files grew (and various B*-tree structures overflowed). After spending a lot of quality time with Inside Macintosh: Files I was able to make two fixes to the machfs library and get things working.
  • The file system also lacked a populated desktop database, which meant that double-clicking on files from other apps did not consistently work. Unfortunately that file format was never reverse-engineered, so the easiest way to get it created was to temporarily boot the image, force a desktop DB rebuild, and then persist the results.
  • A lot of CD-ROM games are archived as disk images (especially in the Toast format), so I special-cased the dragging in of those files to directly mount them as disks instead. This unblocks playing games like Myst.

At some point I was reminded of GUI Central, which was my pre-Mscape Software hobby project (it has a brief mention in my first post). It was a site cataloging Mac customizations (think Kaleidoscope schemes, desktop patterns, etc.). Its main gimmick was that it replicated the 1997-era Mac OS UI in the browser (complete with theme/scheme changing). I have in some ways come full circle.

Infinite Mac: An Instant-Booting Quadra in Your Browser #


I’ve extended James Friend’s in-browser Basilisk II port to create a full-featured classic 68K Mac in your browser. You can see it in action right now in any modern browser. For a taste, see also this screencast:


It’s a golden age of emulation. Between increasing CPU power, WebAssembly, and retrocomputing being so popular The New York Times is covering it, it’s never been easier to relive your 80s/90s/2000s nostalgia. Projects like v86 make it easy to run your chosen old operating system in the browser. My heritage being of the classic Mac line, I was curious what the easiest-to-use emulation option was in the modern era. I had earlier experimented with Basilisk II, which worked well enough, but it was rather annoying to set up: gathering a ROM and a boot image, messing with configuration files, and so on. As far as I could tell, that was still the state of the art, at least if you were targeting late-era 68K Mac emulation.

Some research into browser-based alternatives uncovered a few options:

However, none of these setups replicated the true feel of using a computer in the 90s. They’re great for quickly launching a single program and playing around with it, but they don’t have any persistence, any way of getting data in or out, or support for running multiple programs at once. macintosh.js comes closest — it packages James’s Basilisk II port with a large (~600MB) disk image and provides a way of sharing files with the host. However, it’s an Electron app, and it feels wrong to download a ~250MB binary and dedicate a CPU core to running something that was meant to be in a browser.

I wondered what it would take to extend the Basilisk II support to have a macintosh.js-like experience in the browser, and ideally go beyond it.

Streaming Storage and Startup Time

The first thing that I looked into was reducing the time spent downloading the disk image that the emulator uses. There was some low-hanging fruit, like actually compressing it (ideally with Brotli), and dropping some unused data from it. However, it seemed like this goal was fundamentally incompatible with the other goal of putting as much software as possible onto it — the more software there was, the bigger the required disk image.

At this point I switched my approach to downloading pieces of the disk image on demand, instead of all upfront. After some false starts, I settled on an approach where the disk image is broken up into fixed-size content-addressed 256K chunks. Filesystem requests from Emscripten are intercepted, and when they involve a chunk that has not been loaded yet, they are sent off to a service worker, which loads the chunk over the network. Manually chunking (as opposed to using HTTP range requests) allows each chunk to be Brotli-compressed (ranges technically support compression too, but it’s lacking in the real world). Using content addressing makes the large number of identical chunks from the empty portion of the disk map to the same URL. There is also basic prefetching support, so that sequential reads are less likely to be blocked on the network.
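
A sketch of the content-addressing piece (the hash choice and URL layout here are assumptions, not necessarily what the site uses): identical chunks, such as the all-zero ones from the empty portion of the disk, hash to the same digest and therefore the same URL, so they are fetched and cached only once.

```typescript
import { createHash } from "node:crypto";

const CHUNK_SIZE = 256 * 1024;

// Derive a chunk's URL from a digest of its bytes.
function chunkUrl(chunk: Uint8Array): string {
  const digest = createHash("sha256").update(chunk).digest("hex");
  return `/disk-chunks/${digest}.br`;
}

// Split a disk image into fixed-size chunks and map each to its URL.
// Duplicate chunks naturally collapse to the same URL.
function chunkUrlsForDisk(disk: Uint8Array): string[] {
  const urls: string[] = [];
  for (let offset = 0; offset < disk.length; offset += CHUNK_SIZE) {
    urls.push(chunkUrl(disk.subarray(offset, offset + CHUNK_SIZE)));
  }
  return urls;
}
```

The service worker can then treat the chunk store as an immutable, infinitely cacheable keyspace.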

Along with some old-fashioned web optimizations, this makes the emulator show the Mac’s boot screen within a second, and be fully booted in 3 seconds, even with a cold HTTP cache.

Building Disk Images, or Docker 1995-style

I wanted to have a sustainable and repeatable way of building a disk image with lots of Mac software installed. While I could just boot the native version of Basilisk II and manually copy things over, if I made any mistakes, or wanted to repeat the process with a different base OS, I would have to repeat everything, which would be tedious and error-prone. What I effectively wanted was a Dockerfile I could use to build a disk image out of a base OS and a set of programs. Though I didn’t go quite that far, I did end up with something that is quite flexible:

  1. A bare OS image is parsed using machfs (which can read and write the HFS disk format)
  2. Software that’s been preserved by the Internet Archive as disk images can be copied into it, by reading those images with machfs and merging them in
  3. Software that’s available as Stuffit archives or similar is decompressed with the unar and lsar utilities from XADMaster and copied into the image (the Macintosh Garden is a good source for these archives).
  4. Software that’s only available as installers is installed by hand, and then the results of that are extracted into a zip file that can be also copied into the image.

(I later discovered Pimp My Plus, which uses a similar approach, including the use of the machfs library.)

I wanted to have a full-fidelity approach to the disk image creation, so I had to extend both machfs and XADMaster to preserve and copy Finder metadata like icon positions and timestamps. There was definitely some cognitive dissonance in dealing with late 80s structures in Python 3 and TypeScript.

Interacting With The Outside World

Basilisk II supports mounting a directory from the “host” into the Mac (via the ExtFS module). In this case the host is the pseudo-POSIX file system that Emscripten creates, which has an API. It thus seemed possible to handle files being dragged into the emulator by reading them on the browser side and sending the contents over to the worker where the emulator runs, and creating them in a “Downloads” folder. That worked out well, especially once I switched to a custom lazy file implementation and fixed encoding issues.

To get files out, the reverse process can be used, where files in a special “Uploads” folder are watched, and when new ones appear, the contents are sent to the browser (as a single zip file in the case of directories).


While Emscripten has an IDBFS mode where changes to the filesystem are persisted via IndexedDB, it’s not a good fit for the emulator, since it relies on there being an event loop, which is not the case in the emulator worker. Instead I used an approach similar to uploading to send the contents of a third ExtFS “Saved” directory, which can then be persisted using IndexedDB on the browser side.


The emulator using 100% of the CPU seems like a fundamental limitation — it’s simulating another CPU, and there’s always another instruction for it to run. However, Basilisk II works at a slightly higher level: it knows when the Mac is idle (waiting for user input), and allows the host to intercept this and yield execution. I made that work in the browser-based version by using Atomics to wait until either there was user input or a screen refresh was required, which dropped CPU utilization significantly. A previous blog post has more details, including the hoops required to get it working in Safari (which are thankfully not required with Safari 15.2).
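
The idle-wait trick, reduced to its core (index and function names here are illustrative): the worker blocks on Atomics.wait instead of spinning, and wakes either when the page posts input via Atomics.notify or when the timeout for the next screen refresh expires. Unlike browser main threads, Node allows Atomics.wait on its main thread, so this sketch runs outside a browser too.

```typescript
const INPUT_FLAG = 0;

// Called from the worker: block until input arrives or the refresh deadline.
function idleWait(state: Int32Array, timeoutMs: number): "input" | "refresh" {
  // Blocks while state[INPUT_FLAG] is still 0, up to timeoutMs.
  const result = Atomics.wait(state, INPUT_FLAG, 0, timeoutMs);
  return result === "timed-out" ? "refresh" : "input";
}

// Called from the page side: flag pending input and wake the worker.
function postInput(state: Int32Array): void {
  Atomics.store(state, INPUT_FLAG, 1);
  Atomics.notify(state, INPUT_FLAG);
}
```

The state array must be backed by a SharedArrayBuffer so both sides observe the same memory.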

The bulk of the remaining time was spent updating the screen, so I made some optimizations there to do less per-pixel manipulation, avoid some copies altogether, and not send the screen contents when they haven’t changed since the last frame.
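
The last of those optimizations amounts to a dirty check against the previously sent frame; a sketch of the idea (assumed shape, not the actual code):

```typescript
class ScreenSender {
  private lastFrame: Uint8Array | null = null;
  framesSent = 0;

  // Returns true if the frame was actually posted to the page.
  maybeSend(frame: Uint8Array): boolean {
    if (this.lastFrame !== null && this.lastFrame.length === frame.length) {
      let identical = true;
      for (let i = 0; i < frame.length; i++) {
        if (frame[i] !== this.lastFrame[i]) {
          identical = false;
          break;
        }
      }
      if (identical) return false; // nothing changed since the last frame
    }
    this.lastFrame = frame.slice(); // copy, since the emulator reuses the buffer
    this.framesSent++;
    return true;
  }
}
```

When the Mac sits at an idle desktop, nearly every refresh is skipped this way.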

The outcome of all this is that the emulator idles at ~13% of the CPU, which makes it much less disruptive to be left in the background.

Odds and Ends

There were a bunch more polish changes to improve the experience: making it responsive to bigger and smaller screens, handling touch events so that it’s usable on an iPad (though double-taps are still tricky), fixing the scaling to preserve crispness, handling other color modes, better keyboard mapping, and much more.

There is a ton more work to be done, but I figured MARCHintosh was as good a time as any to take a break and share this with the world. Enjoy!

Update: See also the discussion on Ars Technica and Hacker News (take 2). There is also a follow-up blog post with some post-launch details, and another describing the implementation of networking.

A-12 Software Development Parallels #

I recently finished reading From RAINBOW to GUSTO, which describes the development of the A-12 high-speed reconnaissance plane (the predecessor to/basis for the somewhat better-known SR-71 Blackbird). Though a bit different from the software history/memoirs that I've also enjoyed, I did find some parallels.

Early on in the book, when Edwin Land (founder of Polaroid) is asked to put together a team to research ways of improving the US’s intelligence gathering capabilities, there's the mid-century analog of the two-pizza team:

Following Land’s “taxicab rule” — that to be effective a working group had to be small enough to fit in a taxi — there were only five members.

It turns out that cabs in the 1940s had to seat 5 in the back seat – I suppose the modern equivalent would be the "Uber XL rule".

Much later in the book, following the A-1 to A-11 design explorations, there was an excerpt from Kelly Johnson’s diary when full A-12 development had started:

Spending a great deal of time myself going over all aircraft systems, trying to add some simplicity and reliability.

That reminded me of design, architecture and production reviews, and how the simplification of implementations is one of the more important pieces of feedback that can be given. Curious to find more of Johnson's log, I found that another book has an abridged copy. I've OCRed and cleaned it up and put it online: A-12 Log by Kelly Johnson.

It's a snippets-like approximation of the entire A-12 project, and chronicles the highs and lows of the project. I highlighted the parts that particularly resonated with me, whether it was Johnson's healthy ego, delays and complications generated by vendors, project cancelations, bureaucracy and process overhead, or customers changing their minds.

Communicating With a Web Worker Without Yielding To The Event Loop #

I recently came across James Friend’s work on porting the Basilisk II classic Mac emulator to run in the browser. One thing that I liked about his approach is that it uses SharedArrayBuffer to allow the emulator to run in a worker with minimal modifications. This system can also be extended to use Atomics.wait and Atomics.notify to implement idlewait support in the emulator, significantly reducing its CPU use when the system is in the Finder or other applications that are mostly waiting for user input.

James’s work is from 2017, before the Spectre/Meltdown era. Browsers have since disabled SharedArrayBuffer and then brought it back with better safety/isolation. The exception (not surprisingly) is Safari. Though there have been some signs of life in the WebKit repository, it’s unclear when/if support will arrive.

I was hoping to resurrect James’s emulator to run in all modern browsers, but having to support an entirely different code path for Safari (e.g. using Asyncify) did not seem appealing.

At a high level, this diagram shows what the communication paths between the page and the emulator worker are:

Page and worker communication

Sending the output is possible even without SharedArrayBuffer: postMessage can be used even though the worker never yields to the event loop (because the receiving page does). The problem is going in the other direction — how can the worker know about user input (or other commands) if it can’t receive a message event?

I was going through the list of functions available to a worker when I was reminded of importScripts¹. As its documentation says, this synchronously imports (and executes) scripts, thus it does not require yielding to the event loop. The problem then becomes: how can the page generate a script URL that encodes the commands that it wishes to send? My first thought was to have the page construct a Blob and then use URL.createObjectURL to load the script. However, blobs are immutable and the contents (passed into the constructor) are read in eagerly. This means that while it’s possible to send one blob URL to the worker (by telling it what the URL is before it starts its while (true) {...} loop), it’s not possible to tell it about any more (or somehow “chain” scripts together).

After thinking about it more, I wondered if it’s possible to use a service worker to handle the importScripts request. The (emulator) worker could then repeatedly fetch the same URL, and rely on the service worker to populate it with commands (if any). The service worker has a normal event loop, thus it can receive message events without any trouble. This diagram shows how the various pieces are connected:

Page, worker and service worker communication
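
The service-worker half of the scheme can be modeled as a simple queue: commands arrive via "message" events, and each fetch of the polling URL drains the queue into a script body for importScripts to execute (names like handleCommand are hypothetical):

```typescript
class CommandBroker {
  private queue: string[] = [];

  // Wired to the service worker's "message" event from the page.
  onMessage(command: string): void {
    this.queue.push(command);
  }

  // Wired to the service worker's "fetch" handler for the polling URL.
  nextScriptBody(): string {
    const commands = this.queue.splice(0, this.queue.length);
    // The emulator worker defines handleCommand; an empty body is a no-op poll.
    return commands.map((c) => `handleCommand(${JSON.stringify(c)});`).join("\n");
  }
}
```

Because the response is generated fresh on every fetch, the same URL can be imported repeatedly, sidestepping the Blob immutability problem.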

This demo (commit) shows it in action. As you can see, end-to-end latency is not great (1-2ms, depending on how frequently the worker polls for commands), but it does work in all browsers.

I then implemented this approach as a fallback mode for the emulator (commit), and it appears to work surprisingly well (the 1-2ms of latency is OK for keyboard and mouse input). As a bonus, it’s even possible (commit) to use a variant of this approach to implement idlewait support without Atomics, thus reducing the CPU usage even in this fallback mode.

You can try the emulator yourself (you can force the non-SharedArrayBuffer implementation with the use_shared_memory=false query parameter). Input responsiveness is still pretty good, compared with the version (commit) that uses emscripten_set_main_loop and regularly yields to the browser. Of course, it would be ideal if none of these workarounds were necessary — perhaps WWDC 2022 will bring cross-origin isolation to WebKit.

Update on 2022-03-31: Safari 15.2 added support for SharedArrayBuffer and Atomics, thus removing the need for this workaround for recent versions. We didn't have to wait for WWDC 2022 after all.

  1. It later occurred to me that synchronous XMLHttpRequests might be another communication mechanism, but the effect would mostly be the same (the only difference is more flexibility in the output format, e.g. the contents of an ArrayBuffer could be sent over, thus better replicating the SharedArrayBuffer experience).

Archiving Mscape Software on GitHub #

Mscape Software was the “label” that I used in my late teenage years for Mac shareware programs. While having such a fake company was (is?) a surprisingly common thing, it turned into a pretty real side-gig during 1999 to 2003. I spent a lot of my hobby programming time working on Iconographer, an icon editor for the new-at-the-time 32-bit icns icon format introduced with MacOS 8.5 (and extended more with the initial release of Mac OS X). The early entries of this blog describe its initial development in pretty high detail — the deal that I had with my computer class teacher was that I wouldn’t have to do any of the normal coursework as long as I documented my progress.

All of that wound down as I was finishing up college, and I officially decommissioned the site in 2008. I’ve been on a bit of a retro-computing kick lately, partially inspired by listening to some of the oral histories compiled by the Computer History Museum, and I was reminded of this phase of my programming career. Over the years I’ve migrated everything to GitHub, which has turned it into an effective archive of everything open source that I’ve done (it also makes for some good RetroGit emails), but this earliest period was missing.

I didn’t actually use version control at the time, but I did save periodic snapshots of my entire development directories, usually tied to public releases of the program. It’s possible to backdate commits, and thus with the help of a script and some custom tooling to make Git understand resource forks I set about recreating the history. The biggest time sink was coming up with reasonable commit messages — nothing like puzzling over diffs from 23 years ago to understand what the intent was. Luckily by the later stages I had started to keep more detailed release notes, which helped a lot. The result of the archiving efforts is showing up as expected on my profile:

GitHub commits from 1998

I tried to be comprehensive in what is committed, so there is a fair bit of noise with build output and intermediate files from CodeWarrior, manual test data, and the like. The goal was that a determined-enough person (perhaps me in a few more years) would have everything needed to recompile (there are still toolchains for doing Classic Mac development).

It’s been interesting to skim through some of this code with a more modern eye. Everything was much lower-level — the event loop was not something that you could be only vaguely aware of, it was literally a loop in your program (and all other programs). Similarly, you had to initialize everything by hand, do (seemingly magical) incantations to request more master pointers, and make sure to lock (and unlock) your handles. If you want to learn more about Classic Mac Toolbox programming, this pair of blog posts provides more context. Had I been aware of patterns like RAII, there would have been a lot less boilerplate (and crashing).

Speaking of C++ patterns, there are a bunch of cringe-worthy things, especially abuse of (multiple) inheritance. Need to make a class that represents an icon editor? Have it subclass from both an icon class and a document window class. It was nice to see some progression over the years to better encapsulation and data-driven code instead of boilerplate.

Another difference in approach was that there was a much bigger focus on backwards compatibility. clip2cicn and clip2icns both had 68K versions, despite it being 4-5 years since the transition to PowerPC machines began. clip2icns and Iconographer both used home-grown icon manipulation routines (including ones that reverse-engineered the compression format) so that they could run on MacOS 8.1 and earlier, despite the icon format they targeted being 8.5-only. Iconographer only dropped Classic Mac OS support in 2003, more than 2 years after the release of Mac OS X. If I had to guess, I would attribute at least some of that to my not making rational trade-offs: would people that were hanging on to 5-year-old hardware be spending money on an icon editor? But I would also assume that Mac users tended to hang on to their hardware for quite a while, presumably due to the higher cost.

On the business side, Brent Simmons’s recent article on selling apps online in 2003 pretty much describes my approach. I too used Kagi for the storefront and credit card processing, and an automated system that would send out registration codes after purchase. Iconographer ended up selling 3,500 copies (the bulk in 2000-2003), which was pretty nice pocket change for a college student. On a lark I recreated the purchasing flow for 2021 using Stripe and it appears to be even more painless now, so modulo gatekeepers, this would still be a feasible approach today.

Making Git Understand Classic Mac Resource Forks #

For a (still in-progress) digital archiving project I wanted to create a Git repository with some classic Mac OS era software. Such software relies on resource forks, which sadly Git does not support. I looked around to see if others had run into this, and found git-resource-fork-hooks, which is a collection of pre-commit and post-checkout Git hooks that convert resource forks into AppleDouble files, allowing them to be tracked. However, there are two limitations of this approach:

  • The tools that those hooks use (SplitForks and FixupResourceForks) do not work on APFS volumes, only HFS+ ones.
  • The resource fork file that is generated is an opaque binary blob. While it can be stored in a Git repository, it does not lend itself to diffing, which would ruin the “time machine” aspect of the archiving project.

I remembered that there was a textual format for resource forks (.r files) which could be “compiled” with the Rez tool (and resource forks could be turned back into .r files with its DeRez companion). This MacTech article from 1998 has more details on Rez, and even mentions source control as a reason to use it.

I searched for any Git hooks that used Rez and found git-xattr-hooks, which is a more specialized subset that only looks at icns resources (incidentally a resource type I am very familiar with). That seemed like a good starting point; it was mostly a matter of removing the -only flag.

The other benefit of Rez is that it can be given resource definitions in header files, so that it produces even more structured output. Xcode still ships with resource definitions, and they make a big difference. Here’s the output for a DITL (dialog) resource without resource definitions:

$ DeRez file.rsrc
data 'DITL' (128) {
$"0003 0000 0000 0099 002F 00AD 0069 0405" /* .......?./.?.i.. */
$"4865 6C6C 6F00 0000 0000 0099 007F 00AD" /* Hello......?...? */
$"00B9 0405 576F 726C 6400 0000 0000 000C" /* .?..World....... */
$"0056 002C 0076 A002 0080 0000 0000 0032" /* .V.,.v?..?.....2 */
$"0012 008F 00C5 8816 5759 5349 5759 4720" /* ...?.ň.WYSIWYG */
$"6C69 6B65 2069 7427 7320 3139 3931"           /* like it's 1991 */
};

And here it is with the system resource definitions (the combination of parameters that works was found via this commit):

$ DeRez -isysroot `xcrun --sdk macosx --show-sdk-path` file.rsrc Carbon.r
resource 'DITL' (128) {
    {   /* array DITLarray: 4 elements */
        /* [1] */
        {153, 47, 173, 105},
        Button {
            enabled,
            "Hello"
        },
        /* [2] */
        {153, 127, 173, 185},
        Button {
            enabled,
            "World"
        },
        /* [3] */
        {12, 86, 44, 118},
        Icon {
            disabled,
            128
        },
        /* [4] */
        {50, 18, 143, 197},
        StaticText {
            disabled,
            "WYSIWYG like it's 1991"
        }
    }
};
Putting all of this together, I have created git-resource-fork-hooks, a collection of Python scripts that can be used as pre-commit and post-checkout hooks. They end up creating parallel .r files for each file that has a resource fork, and combining them back into the resource fork on checkout. I briefly looked to see if I could use clean and smudge filters to implement this in a more transparent way, but those are only passed the file contents (the data fork), and thus can't read or write to the resource fork.
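
The pre-commit half boils down to deciding which staged files need a parallel .r file; sketched here in TypeScript for illustration (the actual hooks are Python, and the exact command layout is an assumption based on the DeRez invocation shown above):

```typescript
// For every staged file that has a resource fork, emit a DeRez command that
// writes a textual .r file next to it, so Git can track and diff it.
function derezCommands(
  stagedFiles: string[],
  hasResourceFork: (path: string) => boolean,
  sdkPath: string
): string[] {
  return stagedFiles
    .filter(hasResourceFork)
    .map((path) => `DeRez -isysroot ${sdkPath} "${path}" Carbon.r > "${path}.r"`);
}
```

The post-checkout hook does the inverse, running Rez to rebuild the fork from each .r file.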

The repo also includes a couple of sample files with resource forks, and as you can see, the diffs are quite nice, even for graphical resources like icons:

Resource fork diff

I’m guessing that the number of people who would find this tool useful is near zero. On the other hand, Apple keeps shipping the Rez and DeRez tools (and even provided native ARM binaries in Big Sur), thus implying that there is still some value in them, more than two decades after they stopped being a part of Mac development.

An elegant [format], for a more... civilized age.

All of this thinking of resource forks made me a bit nostalgic. It’s pretty incredible to think of what Bruce Horn was able to do with 3K of assembly in 1982. Meanwhile some structured formats that we have today can be so primitive as to not allow Norway or comments. I have a lot of fond memories of using ResEdit to peek around almost every app on my Mac (and cheat by modifying saved tank configs in SpectreVR).

Once I started to develop for the Mac, I appreciated even more things:

  • Being able to use TMPL resources to define your own resource types and then have them be graphically editable.
  • How resources played nicely with the classic Mac OS memory management system - resources were loaded as handles, and thus those that were marked as “purgeable” could be automatically unloaded under memory pressure.
  • Opened resource forks were “chained” which allowed natural overriding of built-in resources (e.g. the standard info/warning/error icons).

While “Show Package Contents” on modern macOS .app bundles has some of the same feel, there’s a lot more fragmentation, and of course there’s nothing like it on iOS without jailbreaking, which is a much higher barrier to entry.