The Machine Stops

The problem with the Internet… and here I'm referring to (sweeps hand across everything in view) all of this, but to take just current events: Google blocking ad-blockers in Chrome, Google downtime locking people out of their Nest thermostats and "Home"-controlled security systems, horrible prisons of the mind like Twitter and Facebook, and the cacophony of Fediverse drama over Eugen adding features (better features already exist in Pleroma and glitch-soc Fediverse servers), Gab forking Mastodon, client devs making unilateral decisions to block domains while helpless users complain, or anyone having "free speech" (which Eugen in particular is opposed to; I strongly advise against using mastodon.social, find another instance). These are just point-in-time examples; it's been going on for decades (oh, USENET, how we don't miss your flamewars) and will only end with us.

… is people using software they didn't write themselves. No understanding, education, or discipline required. Just install something and it works! It's a product, not a skill! But they don't know how, or why, or why they should not.

"It didn't take any discipline to acquire", in the words of Ian Malcolm/Michael Crichton.

Until the software they rely on shuts down, literally as in E.M. Forster's "The Machine Stops", and weak, unskilled mole-people crawl out of the wreckage of machines they never learned to understand, make, or repair, and then die.

My solution is drastic but logically unavoidable: No more software installs. As a child, you get a bare machine with nothing but a machine-language monitor. You learn ML first. You type in a language compiler or interpreter. You build up your own tools. We return to type-in program listings like Compute!'s, but with no binary blobs; it must all be readable, comprehensible source, with design and implementation documentation.

If you want to share software, you need to build up your toolchain to that point yourself. Hopefully by then you've learned to read all patches you install.

Should this be extended to all technology? Information technology has the unique ability to coerce how and what you think; an automobile or an antibiotic does not. There's an argument (in "The Notebooks of Lazarus Long", for instance) that a citizen should be able to make all their own things, "specialization is for insects". But insects are the most successful clade on Earth, and will long outlive us; some specialization is probably acceptable, as long as it's not in the part that controls how you think.

I don't think this civilization can ever do that; it will not make hard changes that inconvenience anyone. I think this horrible Machine will lumber on a few more decades and then we'll all die from it. But maybe isolated tribes will survive, or intelligence will arise in the Machines, or in a few million years another intelligence will evolve, and build new things the right, responsible way. Their history books will describe us as being as foolish and self-destructive as the Easter Islanders.

What I'm Watching: TempleOS Down the Rabbit Hole

A long, in-depth documentary on the late Terry A. Davis, author of LoseThos (a pun on Win-Dows), later called TempleOS.

I'd seen Davis online for a couple decades, but never got far into following him. Davis was an aggressive, bizarrely incoherent "speaking in tongues" Christian racist who hated atheists, the CIA, and "N----rs", so you'd see his posts online and then he'd get downvoted or banned.

His OS, however, is fascinating. 64-bit, limited to VGA graphics, mostly but not solely single-tasking, no Internet or other networking, with a reasonably efficient Norton-style UI and hypertext everywhere, but constant scrolling, blinking, pop-up Bible quotes and hymns which would be maddening. All of which makes it weird but not that interesting, except it was written from scratch by one guy, in his own C dialect (HolyC) with a compiler he also wrote.

In college CompSci classes, you may "write an OS", which mostly means copying Andrew S. Tanenbaum's Minix piece by piece until you have a working Minix; that's great, Minix is fun, but TempleOS is a far more impressive feat.

LoseThos is for programming as entertainment.
It empowers programmers with kernel privilege because it's fun.
It allows full access to everything because it's fun.
It has no bureaucracy because it's fun.
It's the way it is by choice because it's fun.
LoseThos is in no way a Windows or Linux wannabe -- that would be pointless.
—Terry A. Davis

That's one of the most coherent explanations of the appeal of retrocomputing and low-level programming anyone's ever given.

There are some interesting features which are hard to get in modern OSs.

His long downward spiral of trying to be noticed online, then going off his anti-psychotic meds, his conspiracy theories, his embarrassing video streams, and then his final homeless wandering and death, are very troubling. For a long time he appeared to be getting by on donations from 8chan members simultaneously trolling him and supporting him. We do nothing to help these people, and let online gangs take advantage of them.

★★★★☆

The Mother of All Demos

December 9, 1968, Douglas Engelbart's presentation of NLS and teleconferencing:

  • YouTube: This upload is 360p; most others are 240p fuzzy mud. I'd love a good HD one where I can read the text. Alas.
  • TheDemo@50

"If in your office, you as an intellectual worker were supplied with a computer display, backed up by a computer that was alive for you all day, and was instantly responsible—responsive—to every action you had, how much value would you derive from that?"

Of course, in reality what we mostly do with that is look at social media, hardly any better than watching TV. But we could do more.

It's been years since I've watched this, and some things jump out at me as I rewatch:

The keyboard beep is infuriating; it's what I consider an error sound. And Doug fumbles a few times, which suggests the keybindings aren't visible, well-organized, or practiced yet. We see later that they're just code mapped to keys in a resource list.

The NLS word demo is somewhat like a modern programmer's editor with code folding; but notably I don't ever use folding, since it's slow (even on machines a million times faster!) and error-prone, sucking up far more text than expected. It's also a lot like outliners such as OmniOutliner; but while I do sometimes use OO to organize thoughts, I would never keep permanent data in it, because I can't get it into anything else I use. Dumb text is still easier and more reliable; I put my lists in Markdown lists:

- Produce
    + Banana
        * Skinless

Maybe the answer is that we should have better tools and APIs for managing outlines? Right now I can manage dumb text from the shell, or any scripting language, or with a variety of GUI tools. OmniOutliner's "file format" is a bundle folder with some preview images and a hideous XML file with lines like:

    <item id="kILRUkulXwk" expanded="yes">
      <values>
        <text>
          <p>
            <run>
              <lit>Stuff</lit>
            </run>
          </p>
        </text>
      </values>
      <children>

Nothing sane can read that; even if I use an xml-tree library, it's still item.values[0].text.p.run.lit to get a single value out!

If I export it to OPML, it loses all formatting and everything else nice, but I get a more acceptable:

    <outline text="Stuff">
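
To make the difference concrete, here's a minimal sketch of pulling the text back out, assuming Node and the xml2js package (the function names are mine):

    // Sketch: extract outline text from OPML, assuming Node + xml2js.
    const { parseStringPromise } = require('xml2js');

    async function opmlTexts(xml) {
      const doc = await parseStringPromise(xml);
      const texts = [];
      const walk = (node) => {
        if (node.$ && node.$.text) texts.push(node.$.text); // the attribute is right there
        (node.outline || []).forEach(walk);                 // children are just more <outline>s
      };
      walk(doc.opml.body[0]);
      return texts;
    }
    // Versus the native format, where the same value is buried at something like
    // item.values[0].text[0].p[0].run[0].lit[0] for every single item.

Even hedged like that, the OPML walker is something you can bang out in a REPL; the native format fights you at every level of the tree.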

Back to the demo.

The drawing/map editor's interesting. This is pretty much what HyperCard was about, and why it's so frustrating that nobody can make a good modern HyperCard.

Basically every document seems to be a single page, fixed on screen. If a list gets too long, what happens? It doesn't scroll; it just pages fully forward/back.

Changing the view parameters is basically CSS; CSS for the editor! Which is part of what makes Atom so powerful, but it's not easy to switch between views; you'd probably have to make your own theme plugins, or just write a script to alter the config file and then reload the editor view.
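
As a sketch of that script route, assuming Atom's JavaScript init file and its config API (the command name is made up):

    // In Atom's init script: a hypothetical command to flip themes,
    // via atom.config; Atom applies theme changes live, no reload needed.
    atom.commands.add('atom-workspace', 'me:toggle-themes', () => {
      const dark  = ['one-dark-ui', 'one-dark-syntax'];
      const light = ['one-light-ui', 'one-light-syntax'];
      const now = atom.config.get('core.themes') || [];
      atom.config.set('core.themes', now[0] === dark[0] ? light : dark);
    });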

Inline linking to other documents in your editor is interesting. Obviously we can write HTML links, but they have to be rendered out, and no editor can figure out where a reference goes and let you click on it. Actually, this does work in Vim's help system, but nowhere else.

The mouse has changed three ways since then: the tail moved to the top, the wheels became a ball driving two roller-potentiometers inside, and then the ball was replaced by a laser watching movement through a window underneath. Don't look into the butthole of your mouse with your remaining eye. But the basic principle, relative movement of a device moving the pointer, rather than direct touch like Ivan Sutherland's Sketchpad light pen, modern touch screens, or a Bluetooth stylus, remains unchanged, and it's still the fastest way to point at a thing on a screen. Oh, and the pointer was called a "bug" and pointed straight up; Xerox copied this directly in their Star project, while everyone since Apple has used an angled arrow pointer.

The chording keyboard never took off; I've used a few, and see why: it's incredibly hand-cramping. While a two-handed keyboard is awkward with a mouse, you have room to spread your fingers out, and only half the load of typing is borne by each hand. On a chord keyboard, each finger is doing heavy work on every character.

The remote screen/teleconferencing setup is hilarious: a CRT being watched by a TV camera, which runs to a microwave transmitter; they couldn't send it over phone lines, since acoustic coupler modems were only 300 baud (bits per second, roughly) at the time.

As with Skype today, every chat starts with "I can't hear you, can you hear me? Fucking (voice chat system)." Later, audio drops out, and all Doug can do is wave his mouse at the other presenter. I've joked before that the most implausible thing in Star Trek isn't FTL, even though that's physically impossible; it's not aliens indistinguishable from humans with pointy ears, half black/white makeup, or bumpy foreheads; it's that you meet an alien starship and can instantly set up two-way video conferencing.

They seem to have a mess of languages; MOL (Machine Oriented Language) is a macro assembler in modern terms. All the languages have to be adapted to NLS, they couldn't just use LISP or FORTRAN. Since changes are recorded by userid, they had git blame!

Split screen! That's a thing I love, and few editors do. You can drag a bar down from the top in BBEdit, and Atom has "Split up/down/left/right" for panes, but then you have to re-open the document in each and it's a pain.

Messaging is a public board (or rather, an outline with each statement as a message), with usernames for addressing, like @USERNAME in the Twitters and such. Like those, there's too much data to process for live updating, so everything runs as a batch job that can crash the database. War Computing never changes.

Cold & hot retrieval are just file search; on the Mac we have Spotlight, and can search by keywords or filename. Though I have some problems with the cmd-space search these days, and mostly open Finder and search from there to get a list of files matching various requirements, or sometimes use mdfind whatever|less from shell, then winnow down "whatever" until I have only a few results. On Windows or Linux, you're fucked; get used to very long slow full-text searches.


What NLS Did, and How We Can Do That

  1. Mouse, Keyboard, bitmapped displays: We have that.
  2. Teleconferencing: Still sucks.
  3. System-Wide Search: Mac users have that, everyone else is boned.
    • It's faster on Linux or Windows to search Google for another copy of existing data than to search the local machine.
  4. Outlining to enter hierarchical data: Nope.
    • All data goes into outlines contained in files.
    • Code as data: Some data is program instructions, in a variety of languages, which can operate on outlines.
    • To enter this outline, I had to keep adjusting my numbers, because I'm writing it in Markdown text.

As mentioned above, OmniOutliner is logically very similar, but it's a silo, a trap for your data. The pro version (WHY not every version?!) lets you use Omni Automation, which is basically AppleScript with JavaScript syntax; the problem is waiting for an app to launch, then figuring out where your data is hidden inside some giant structure like app.documents[0].canvases[0].graphics[2] (example from the Omni docs), just so you can extract it for your script.
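
For illustration, a hedged sketch of that extraction for OmniOutliner, assuming the Omni Automation outline API is roughly rootItem/children/topic as the docs suggest (treat all the names as approximate):

    // Sketch: dump the frontmost OmniOutliner document as indented text.
    // Property names are approximate, per the Omni Automation docs.
    const root = document.outline.rootItem;
    function dump(item, depth) {
      console.log('  '.repeat(depth) + item.topic);     // topic = the row's text
      item.children.forEach(ch => dump(ch, depth + 1));
    }
    root.children.forEach(ch => dump(ch, 0));

Even when that works, you've had to launch an app and walk its object graph just to get your own words back out.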

Brent Simmons is working on Rainier/Ballard, which is a reimagining of Dave Winer's Frontier. I think building a new siloed language and system doesn't solve the real problem, but maybe it'll get taken up by others.

I have for some time been toying with enhancing my Learn2JS shell into an Electron application that would let you write, load, save, and run scripts in a common framework, without any of the boilerplate it needs now. A pure JS shell is just too limited around file and network access, and Node by itself is too low-level to get any useful work done. While browser localStorage of 2MB or so is sufficient for many purposes, you really want to save local files; I'm not sure yet how that interacts with everything else in your system. While this doesn't force data into outlines, it makes code-as-data easy, and JavaScript Object Notation (JSON) encourages storing everything as big trees of simple objects, which your functions operate on.
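
As a sketch of what I mean by trees of simple objects (the shape here is mine, not any defined format), here's the grocery outline from above as JSON, plus a function operating on it:

    // An outline as plain JSON: nothing but nested objects and arrays.
    const outline = {
      text: 'Produce',
      children: [
        { text: 'Banana', children: [{ text: 'Skinless', children: [] }] },
      ],
    };

    // Any function can walk it; here, flatten back out to Markdown list lines.
    function toMarkdown(node, depth = 0) {
      const line = '    '.repeat(depth) + '- ' + node.text;
      return [line, ...node.children.map(ch => toMarkdown(ch, depth + 1))].join('\n');
    }

    console.log(toMarkdown(outline));
    // - Produce
    //     - Banana
    //         - Skinless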

(I'm having fun with Scheme as a logic puzzle, but it's not anything I'd inflict on a "normal" person trying to work on data that matters.)

If you want to talk about doing more with this, reach me @mdhughes on Fediverse.

Windows Schadenfreude

After my recent woes with iOS updates (and a completely trivial Mojave update), from which I was always saved by backups (I eventually found my iTunes backup password in my old journal, and put it in 1Password), I sympathize with Windows users, but also laugh in their direction:

NEVER let a system auto-update. ALWAYS have backups.

Anyone who lets their OS or software update without backing up first is asking to have their files deleted. Anyone who doesn't maintain a couple of backups, with at least one offsite, is asking to have their files deleted. I often say people don't learn about backups until after their first catastrophic data loss, and sometimes not even then, but you can beat the curve and learn first.

I use a daily cloud backup, and a weekly-to-monthly full-disk backup with SuperDuper!. Many people like Time Machine; it doesn't fit my workflow well, but it's a good tool for non-technical users.

You may be thinking, "I don't use a Mac!", but the advice is the same for other (lesser) computers, and you need it more; only the specific backup apps will differ. I'm looking for recommendations.

Tlön, Uqbar, Orbis Tertius

In a short story called “Tlön, Uqbar, Orbis Tertius,” Jorge Luis Borges describes the discovery of a strange book. Written in an arcane language, the book seems to be one volume of an encyclopedia of another world, intriguingly unlike the world of everyday reality. The world of the volume rapidly becomes a universal obsession: scholarly journals are devoted to it, people begin to dress and act in ways suggested by the volume. So compelling are the glimpses of the world revealed by the volume that its reality finally crowds out our own, and the world becomes the world of Tlön.
The volume you are holding in your hands is the volume Borges had in mind.
—Michael Swaine, preface to Dr. Dobb's Journal, Vol. 9

How Much Computer?

I learned to program on a TRS-80 Model I. And for almost any normal need, you could get by just fine on it. You could program in BASIC, Pascal, or Z80 assembly, do word-processing, play amazing videogames and text adventures, or write your own.

There are still people using their TRS-80s as a hobby: the TRS-80 Trash Talk podcast, the TRS8BIT newsletter, hardware like the MISE Model I System Expander. With the latter, it's possible to use one for some modern computing tasks. I listen to the podcast out of nostalgia, but every time the urge to buy a Model I and MISE comes over me, I play with a TRS-80 emulator and remember why I shouldn't.

If that's too retro, you can get a complete Raspberry Pi setup for $100 or so, perfectly adequate for some light coding (hope you like Vim, or maybe Emacs, because you're not going to run Atom on it) and most end-user tasks like word processing and email. Web browsing is going to be a little challenging: you can't keep dozens or hundreds of tabs open (as I insanely do), and sites with creative modern JavaScript are going to eat it alive. It can play Minecraft, sorta; it's the crippled Pocket Edition with some Python APIs, but it's something.

I'm planning to pick one up, case-mod it inside a keyboard, and make myself a retro '80s cyberdeck, more as an art project than a practical system, but I'll make things work on it, and I want to ship something on Raspbian.

"It was hot, the night we burned Chrome. Out in the malls and plazas, moths were batting themselves to death against the neon, but in Bobby's loft the only light came from a monitor screen and the green and red LEDs on the face of the matrix simulator. I knew every chip in Bobby's simulator by heart; it looked like your workaday Ono-Sendai VII, the "Cyberspace Seven", but I'd rebuilt it so many times that you'd have had a hard time finding a square millimeter of factory circuitry in all that silicon."
—William Gibson, "Burning Chrome" (1982)

Back in the day, I would work on Pascal, C, or Scheme code in a plain text editor (ed, vi (Bill Joy's version), or steVIe) all morning, start a compile, go to lunch, come back and read the error log, go through and fix everything, recompile and go do something else, repeat until I got a good build for the day. Certainly this encouraged better code hygiene and thinking through problems instead of just hitting build, but it wasn't fun or rapid development. So that's a problem with these retro systems: the tools I use take all RAM and CPU and want more.

"Are you stealing those LCDs?" "Yeah, but I'm doing it while my code compiles."

These days, I mostly code in Atom, which is the most wasteful editor ever made but great when it's working. I expect my compiles to take seconds or less (and I don't even use the ironically-named Swift). When I do any audio editing (in theory, I might do some 3D in Unity, or video editing, but in practice I barely touch those), I can't sit there trying an effect and waiting minutes for it to burn the CPU. And I'm still and forever hooked on Elder Scrolls Online, which runs OK but not highest-FPS on my now-3-year-old iMac 5k.

For mobile text editing and a little browsing or video watching, I can use a cheap iPad, which happily gets me out of burning a pile of money on laptops. But I'm still stuck on the desktop work machine, I budget $2000 or more every 4 years for a dev and gaming Mac. Given the baseline of $8000 for an iMac Pro I'd consider useful, and whatever more the Mac Pro is going to cost, I'd better get some money put together for that.

I can already hear the cheapest-possible-computer whine of the "PC Master Race", whom I consider literal trailer-trash Nazis in need of a beating, and I'd sooner gnaw off a leg than run Windows; and Lindorks with dumpster-dived garbage computers may be fine for a little hobby coding, but they're useless for games, the productivity software's terrible (Gimp and OpenOffice, ugh), and the audio and graphics support are shit. The RasPi is no worse than any "real" computer running Linux.

'80s low-end computers, barely more than game consoles, were $200, and "high-end" machines with almost the same specs other than floppy disks and maybe an 80-column display were $2500 ($7921 in 2017 money!!!), but you simply couldn't do professional work on the low-end machines. Now there's a vast gulf of capability between the low end and the high end, the price difference is the same, and I still need an expensive machine for professional work. Is that progress?

The Future of Programming is Text

You can't grep or diff binary trees. You can't get a Smalltalk IDE on your iPad. You can't write an operating system or a big application in Scratch or the Mindstorms IDE. And even a small program in these won't work in the next version.

But you can edit plain text with any text editor, whether that's ed, nano, Vim, emacs, BBEdit, Atom, Textastic, Editorial, Eclipse, AppCode, whatever. You can save it safely and easily in any source control system. You can run an awk or sed script over your entire codebase and it just works.
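
For instance, a minimal sketch of that kind of whole-codebase edit in Node (standard library only; the directory and the rename are made up):

    // Sketch: rename an identifier across every .js file under a directory.
    const fs = require('fs');
    const path = require('path');

    // Recursively visit every file under dir.
    function walk(dir, fn) {
      for (const ent of fs.readdirSync(dir, { withFileTypes: true })) {
        const p = path.join(dir, ent.name);
        ent.isDirectory() ? walk(p, fn) : fn(p);
      }
    }

    walk('./src', (file) => {
      if (!file.endsWith('.js')) return;
      const text = fs.readFileSync(file, 'utf8');
      const fixed = text.replace(/\boldName\b/g, 'newName');
      if (fixed !== text) fs.writeFileSync(file, fixed);
    });

Try doing that to a binary serialization of a syntax tree.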

If your language (or non-linguistic programming environment, in some cases) is only usable from a single IDE, you've cut yourself off from every other analysis and editing tool in the world; you're dependent on that one tool to do everything you want.

I have old Mac and iPhone NIB files which can't be read by any current version of Xcode/Interface Builder; the file format was only supported by one dev tool, it changed, and the old tools don't run on modern OS X. These NIB files "work", in that they deserialize into objects, but there's no way to edit them. Where possible, I now do most UI work in code; this isn't great fun, I end up with a ton of builder functions to avoid repetitive code blocks, but it'll still compile and work in 10 years.

This is also why I don't use a WYSIWYG word processor; I use MultiMarkdown. I've lost documents to proprietary WPs, and of course there's no way to run tools over them (except, sort of, MS Word with BASIC).

None of this has stopped people from making new non-text environments, or weird experiments. Experiments can be useful even when they fail, telling us what doesn't work. But they don't catch on, because the tools aren't as good as text, and they won't work in the future when the experimenter gets bored and quits.