Artificial Intelligence is one of those holy grails of science fiction.

In our world, depending on one’s philosophical interpretation, the field of AI started centuries ago, though at first only as a philosophical construct. Since the 1950s, with the first practical game AIs, Rosenblatt’s perceptron, a little later ELIZA, and Minsky’s work, we’ve seen two AI winters and approaches ranging from logical inference and symbolic reasoning to connectionism.

Yet despite what many people seem to believe - and despite what feels like a dozen newspaper articles touting AI, mostly along the lines of “scientists built this AI but it was corrupted by X” - we’re not really close to anything that could be called AI[1]. Accordingly, I am quite annoyed by people warning of the dangers of AI or self-driving cars - it’s like worrying about a fusion reactor malfunctioning. But enough ranting[2].

Transformative Possibilities

There are different ways to categorize the AI available in a setting. THS, for example, divides it into NAI, LAI, and SAI (non-sapient, low-sapient, and sapient AI):

  • NAI: They can learn, but lack self-initiative, reasoning ability, empathy, and creativity.
  • LAI: They have self-initiative and empathy, but lack human-level creativity.
  • SAI: Self-aware, and with human-level creativity.

To me, the notion of creativity (especially when comparing AIs to humans) is quite difficult to pin down and define. What is creativity?

Despite this, we can come up with a few ways this might transform the setting. First, my assumption is that AI is software, an assumption shared by both UT and THS. This immediately implies that you can easily back up and copy any AI - it’s just software, after all. This in turn means that AI by itself cannot really be expensive to produce. The hardware to run it on might be (if you need a full-scale datacenter to run an AI, that’s going to cost serious money), but the software should - discounting development costs - cost almost nothing per additional copy. Accordingly, either there is no AI or AI is everywhere.

Additionally, you can at any point back up an AI program (remember: software), which implies a certain recklessness with the virtual life of such an AI. Sure, the individual instance will die, but that doesn’t make a difference to you. Or does it? The BIG QUESTION from THS (the one with the only stickied post on the SJGames THS forum, to avoid derailing other threads) is about the identity of such constructs.

Simplified, the question goes like this: Imagine you were a normal human being, living a normal life. One morning, on your way to work, you’re involved in a car accident. A fatal car accident. Luckily, as you do every night, you made a backup, and you are restored from it. Now, who lives and who died?

For an outside observer - and assuming you can perfectly “load” a stored mind - there’s no way to distinguish between those two instances; from the outside perspective, they are identical.

On the other hand, what about the “you” that was killed? Well, here’s where it gets philosophically difficult. Sure, every experience you had since the last backup will be lost. And, from the perspective of the killed copy, the restore won’t matter: its “experience thread” will be cut, and what you currently perceive as yourself will end. Which is quite unfortunate for the current you.

Ghosts and Mind Backups

This brings us directly to the next potentially transformative technology, directly linked to the AI above: what THS calls Ghosts, i.e. somehow digitizing/uploading the human mind. THS only has destructive uploading (that’s why it’s called Ghosts); Eclipse Phase has completely arbitrary copying, uploading, and downloading.

Both of these take the philosophical issues above and make them far worse, if only because they are applied to humans, and we as humans have a tendency - despite all of the THS legal discussions - to see humans as the things that actually matter.

It becomes even worse if you include copying. What if, today, you decide to copy yourself (Eclipse Phase calls that forking, although I’d rather use branching as a term) and send one copy to work while the other stays at home? Both of these are arguably “you”; they share the same past stream of consciousness. However, they will diverge quickly. Can you reintegrate them? Eclipse Phase goes with “yes”, and includes mechanisms to do so. Its countermeasures are restrictions both legal (at least in some jurisdictions) and psychological (merging risks mental stress), especially for separation times over about four hours.

Now, this actually makes sense to me, because it’s quite similar to what you do in modern version-control systems. Git, for example, has the concept of branches, which - simplifying a bit - can be branched off at any point and merged later on. Whenever you do such a merge, and especially when both branches have changed the same code, a merge conflict might happen, which requires human intervention. I’m quite sure that Eclipse Phase took inspiration from that.
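For readers who don’t live in version control: the mechanic is a three-way merge, which resolves everything both branches agree on and flags the rest. Here’s a minimal toy sketch in Python; the dictionary-of-memories model and all names are my illustration, not anything from Git’s internals or either game:

```python
# Toy three-way merge: two "forks" diverge from a shared base state.
# Non-overlapping changes merge cleanly; overlapping ones conflict
# and need outside intervention - just like a Git merge conflict.

def three_way_merge(base: dict, mine: dict, theirs: dict) -> dict:
    merged, conflicts = {}, []
    for key in base | mine | theirs:  # union of all keys (Python 3.9+)
        b, m, t = base.get(key), mine.get(key), theirs.get(key)
        if m == t:            # both forks agree (or neither changed it)
            merged[key] = m
        elif m == b:          # only "theirs" changed it: take theirs
            merged[key] = t
        elif t == b:          # only "mine" changed it: take mine
            merged[key] = m
        else:                 # both changed it differently: conflict
            conflicts.append(key)
    if conflicts:
        raise ValueError(f"merge conflict on: {conflicts}")
    return merged

base = {"lunch": None, "meeting": None}
home = {"lunch": "leftovers", "meeting": None}      # fork that stayed home
work = {"lunch": None, "meeting": "budget review"}  # fork that went to work

print(three_way_merge(base, home, work))
# {'lunch': 'leftovers', 'meeting': 'budget review'}
```

If both forks had scheduled something different for lunch, the merge would fail and someone - presumably the merged you - would have to resolve the conflict by hand, which is a decent intuition for why Eclipse Phase charges stress for long separations.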

Eclipse Phase goes a step further and includes the farcaster as an optional implant. This is a mostly-magic implant (a single-shot, antimatter-powered quantum interference communicator) which serves to preserve your information by transmitting it off in the event of death.

Summary

In summary: if there’s AI, you’ll be able to copy it, and if you can merge copies back together, there’s no reason not to. And if you can digitize humans, the same applies to them. This makes concepts like death very different from today’s.

Decision Time!

So, what does that mean for the setting? From my point of view, there are two ways we can go here. One is to completely embrace the “digital mind” part: minds can be branched, merged, and copied. They can be edited, too. This applies to both AIs and humans.

The other alternative is to assume that you cannot reliably digitize a human mind and therefore can’t copy or merge it. At the same time, to avoid the AI loophole, AGI has long been promised but never delivered.

I prefer the second one.

Specifically, in-setting, you can (destructively) scan a human brain and then simulate it. The only issue is that the simulation is only about 75% accurate. This might sound fine, but with 90 billion neurons firing dozens of times per second, the simulation diverges radically from the original within seconds. Accordingly, there have been no (known) experiments with humans. Similarly, while AI has made great progress, none of it is Artificial General Intelligence: you can buy a personal assistant or an intelligent drone controller, but nothing yet reaches human-like generalization.
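To put rough numbers on that divergence, here’s a back-of-envelope sketch in Python. The 75% accuracy and 90 billion neurons are the setting’s figures; the exact firing rate and the per-event independence are my simplifying assumptions:

```python
# Back-of-envelope: how fast does a 75%-accurate brain emulation diverge?
# Assumes each firing event is reproduced correctly with probability 0.75,
# independently - a simplification; feedback loops would compound faster.

NEURONS = 90e9          # neurons in the scanned brain (setting's figure)
FIRING_RATE_HZ = 24     # "dozens of times per second" (low-end assumption)
ACCURACY = 0.75         # per-event simulation fidelity (setting's figure)

events_per_second = NEURONS * FIRING_RATE_HZ
errors_per_second = events_per_second * (1 - ACCURACY)

print(f"firing events per second:  {events_per_second:.2e}")  # ~2.16e+12
print(f"mis-simulated events/sec:  {errors_per_second:.2e}")  # ~5.40e+11
```

Half a trillion wrong firing events every second, each feeding back into the state of the network: after a few seconds the simulation has essentially nothing in common with the mind that was scanned.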

What else can you do to “upgrade” a character? We’ll look at that next time.


  1. Although admittedly, one nice definition of AI is “that which computers are not yet able to do” - once you can implement it, it’s no longer magic. 

  2. To a scientist, there’s only one thing worse than working in an unknown field: Working in a field the public knows.