When we talk about transformative technologies in a setting, it might serve to look at one of the areas of the real world that produced a truly transformative effect: information and communication technology (ICT), which gives us access to a staggering amount of information from essentially everywhere, at high speed.

Moore’s Law

The driver of our current advances, Moore’s Law is an observed trend stating that the number of transistors in an integrated circuit doubles every two years (when originally formulated in 1965, it was every year). It’s a nice approximation that unfortunately no longer holds: recent doublings took three years, and labs are already experimenting with atom-sized transistors, meaning we soon won’t be able to go much smaller.

Accordingly, my assumption is that chips don’t scale indefinitely (otherwise the same chip in a hundred years would have ~10^15 times as many transistors). This forces a branching into more distributed computing, with one system consisting of dozens or hundreds of individual processing units communicating with each other. Almost everyone who’s worked with multiprocessing will now run away screaming, but I’ll just posit there’ll be languages and tools that make this fairly simple.
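The ~10^15 figure is just compound doubling; a quick back-of-the-envelope check, assuming a strict two-year doubling period:

```python
# Naive extrapolation of Moore's Law: one doubling every two years.
years = 100
doubling_period = 2
doublings = years // doubling_period  # 50 doublings in a century
growth_factor = 2 ** doublings        # 2^50, about 1.1e15

print(f"{doublings} doublings -> factor of {growth_factor:.2e}")
# → 50 doublings -> factor of 1.13e+15
```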

Redundancy and Computer Systems

A fairly useful technology that has emerged in the last few years is containerization, which gives you what are essentially virtual machines (in that they are somewhat independent of the host system) but fast1. Without going into too many details, this allows extremely flexible and scalable systems: usage spike in your search engine? Just shift resources from the email system to the search engine and start another thousand containers2.
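The resource-shifting described above can be sketched as a trivial proportional scheduler; the service names and all numbers here are invented for illustration, and real orchestrators are of course far more involved:

```python
# Toy scheduler: a fixed pool of compute units is reallocated between
# services in proportion to their current load. Purely illustrative.
def rebalance(pool, load):
    """Split `pool` units of compute proportionally to each service's load."""
    total = sum(load.values())
    return {svc: round(pool * l / total) for svc, l in load.items()}

# Quiet night: email and search share the pool evenly.
print(rebalance(2000, {"search": 50, "email": 50}))   # 1000 units each
# Usage spike on search: containers shift over within moments.
print(rebalance(2000, {"search": 180, "email": 20}))  # 1800 / 200
```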

In game terms, adapting this means you have extremely scalable and flexible systems, distributed over several dozen to several hundred processing nodes. Accordingly, damage will only disable non-essential functionality, with backup nodes started in less time than it would take a human to notice.
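As a toy illustration of that failover behavior (the class, node counts, and spare-pool mechanics are all invented for this sketch, not anything from THS):

```python
import random

# Toy model: a service runs on many interchangeable nodes; losing one
# just triggers an immediate replacement from a pool of spares.
class Cluster:
    def __init__(self, nodes, spares):
        self.active = set(range(nodes))
        self.spares = spares

    def damage(self, node):
        """A node is destroyed; a spare is started in its place."""
        self.active.discard(node)
        if self.spares > 0:
            self.spares -= 1
            self.active.add(max(self.active, default=-1) + 1)

    def capacity(self):
        return len(self.active)

cluster = Cluster(nodes=100, spares=20)
for _ in range(5):
    cluster.damage(random.choice(list(cluster.active)))
print(cluster.capacity())  # still 100: the damage is invisible to users
```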


I’ve looked over the human-computer interfaces again (see here for my original thoughts). While the cost of about $17,000 for a brain implant seemed excessive for general use there, THS itself puts the cost at $4,000. That’s much more affordable (three months of average salary vs. two months; THS assumes a significantly lower income than 4e’s Basic Set).

Accordingly, many people in-setting use what THS calls a Virtual Interface Implant as their primary interface with the technology around them. You cannot directly feed “knowledge” into a brain, so information is usually presented either as a head-up display, full-screen, or as a virtual monitor. The latter means that you can, for example, designate a table surface as a display and get a perspective-corrected image projected onto it. Each of these can be shared, and spacecraft will often feature a shared “group consensus” in the command room or CIC, in which you can configure your own workspace while a central display is shared (and might well look like a 3D hologram map of local space).

Those who don’t have such an implant have to make do with a glasses-based interface, which on the other hand only costs $500. Both the implant and the glasses contain a very basic computer; it’s highly likely that you’ll add a dedicated secondary computer for normal use (smartphone- or notebook-sized, depending on your computational needs).

Input modality differs depending on whether you have an implant or glasses. With an implant, a few commands are available “by thought”; everything else (with glasses, everything) is done using gestures, tracked either via camera or via implants reading muscle signals.

Several (mostly cyberpunk) works posit augmented-reality-based advertising that might at times swamp the interface. I find that fairly unlikely: you can always disable AR, and there’s really no reason to accept AR artifacts from any server just because it’s nearby.


Speaking of communication and authentication, one of the big topics is encryption. THS posits that effective encryption is possible, available, and in use. I tend to agree: even today, the main issue isn’t weak encryption but lack of adoption. Have you ever sent an encrypted email to somebody? Probably not, because people haven’t set it up. With a long time to go, it’s likely that network protocols will be updated to include end-to-end encryption automatically.

Can that be cracked? Quite likely, given enough resources: public-key cryptography only relies on encryption being vastly cheaper than cracking, not on cracking being impossible. At the moment, we can be fairly sure that most encryption is safe enough (barring unknown mathematical advances locked in some basement room at the NSA, or ridiculous mega-computers). Quantum computers can solve some classes of problems far faster than current computers, but there already are implementations of post-quantum encryption methods, and it’s likely they’ll be deployed before quantum computers really become available.
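For a sense of why “safe enough” is a reasonable default, consider brute force against a symmetric key; the key size is real, while the attacker speed is an invented and absurdly generous figure:

```python
# Back-of-the-envelope brute-force time for a 128-bit symmetric key.
# The attacker speed below is an invented, wildly optimistic assumption.
key_bits = 128
keys_per_second = 10 ** 18                   # far beyond any real hardware
seconds_per_year = 365.25 * 24 * 3600

keyspace = 2 ** key_bits                     # ~3.4e38 possible keys
# On average you find the key after searching half the space.
years = keyspace / 2 / keys_per_second / seconds_per_year

print(f"{years:.1e} years")                  # roughly 5e12 years
```

So even this fantasy attacker needs trillions of years; realistic attacks instead target implementation bugs or stolen keys, as discussed below.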

UT, by the way, doesn’t share that assessment: it assumes a macroframe ($1M) cracks one default-encrypted message in about 1-3 hours, or that a quantum mainframe ($200k) reads securely encrypted files in a minute or two. In my opinion, this would immediately crash the economy, since you could easily falsify transaction messages between financial institutions or read confidential transaction information.

Accordingly: encryption is available and can be assumed to be secure for a fairly long time. If you really piss off a major organization with sufficient funding and competence in that area, standard-encrypted messages might be broken on a scale of weeks to months (per message); securely encrypted messages will take years or even decades. Both timespans assume no bugs in the actual implementation of the encryption, of course.

In game terms, this allows secure communication between the players and their contacts (although metadata can still be used to gain insights). At the same time, a contact might give you insight into enemy communications if you manage to find the one message (out of a haystack) that actually matters. In truth, though, stealing a private key (which may be extremely difficult to impossible, but certainly makes an excellent adventure) is the better approach.


In summary, computing technology is still there, still a huge part of everyday life - but it’s not magic, and it didn’t give us gigantic supercomputers able to predict everything. Instead, it’s distributed, connected computing everywhere, with cheap and extremely common encrypted communication.

  1. If that doesn’t sound useful to you, you’re probably not a computer scientist but a sane person. 

  2. It’s a bit more complicated than that.