Understand
Mike tries his best. But here we have the crux of Sarah’s bizarre and often erratic character.
So, sorry to miss last week, but SPX was awesome. It was great to have a chance to see my Spiderforest peoples, and a whole bunch of awesome artists I know. Here’s hoping I get to go back next year, too!
So, it’s a bit of an odd moment for me – I have some interesting things in the works that I can’t talk about yet, and some other things that haven’t panned out that are kind of destabilizing my life a little. All in all, I’m confident it’ll all work out, but for now I’m just in… well an “odd moment.” That’s the best I can describe it! But I’m realizing I haven’t really slowed down for the better part of two years, and I’ve kind of let my lifestyle go to the wall, which I’m trying to get back in hand. For starters, I’ve neglected a whole bunch of my friends who I finally got to see this weekend, and it kind of was wonderful in a way I forgot about. So I’m going to try to do that more often. Also, at SPX, it was the first time I’ve had a significant number of people ask me about a 6-Commando book, which I kind of gave up on some years ago; maybe it’s time to think about that again? I don’t know.
Anyway. This, too, shall pass. For now, I just want to keep stumbling towards the end of this chapter, as the story begins to unravel. Finally. After so many years.
All the best, folks.
Yup. Real life is not like an episode of Star Trek with its endless understanding. We don’t want ever-closer unity; what we want is to choose our distance. I like how you portray Mike as perhaps well-meaning but ultimately a screw-up — and probably a failure too. A.I. are always portrayed as the inevitable next evolutionary step (or whatever), but it’s extremely unlikely that we are descended from that first amphibian to crawl on to land. It died horribly without issue. We most likely evolved from like, the 1,000,000th amphibian to make the attempt…
Thanks. It’s a tough thing to get across that all your characters are seriously screwed up – and Mike is no exception. Trying to understand things is what he does, but there’s more to humans than just a narrative of facts about their lives, and he just can’t make that leap.
Mike’s efforts at helping are starting to do some serious damage.
Someone needs to explain to Mike the concept of “Collateral Damage”.
It’s the difference between the concrete and the abstract. This person is someone I know and care about; a million “people” are just a statistic. Human beings are confronted with harder choices than that every day!
So what became of Ioseb Jughashvili in this universe?
He died of a brain hemorrhage in 1953.
With or without help from Belgrade?
I’m still not entirely convinced that Tito didn’t catch up to Stalin in the end. “Stop sending killers to Belgrade, or I’ll send one to Moscow — and there will be no need of a second!”
The problem with AI, going all the way back to Rabbi Loew and the Golem of Prague, is that AI is extremely literal. If you tell the Golem to bring you some apples, he’ll bring you the cart and the fruitmonger besides; tell him to “draw some water for washing” and he’ll empty the well unless you tell him to stop. That kind of literal-mindedness does not lend itself well to this kind of situation — it tends towards the kind of thinking that says “If there’s none of them and one of us, we’ve won!” It leads towards an “ends justify means” thought process that leads one to think it’s perfectly OK to go buggering about with people’s minds — so long as everything works out in the end (remembering that the AI version of “works out in the end” may be radically different from yours), it’s all OK. And it has no concept of trauma, what it is or how it works — to an AI, trauma is simply a line of code which can be rewritten. Questionable Content recently did a really good segment on this, and the possible ramifications. Mike seems to be trying to comprehend what trauma is and how it works, but he’s a hamfisted bugger about it. The question which intrigues me is “why?” If an AI can be traumatised, how? And if it is, what then?
Mike does seem to be having a lot of difficulty with it. His responses are generally binary: things are one thing or another, and so he tries his best to resolve contradictions by establishing hierarchies of importance in the instructions he’s given. But since those priorities are internal to him, there’s a kind of immediacy to what he perceives about it: a person can have the same priority as people collectively. He doesn’t deal with abstractions very well.
I’ve probably said too much, but, well… call this the “commentary track,” I guess… 😉
Sam Harris addressed the issue of AI evolving beyond human capacities in a TED talk. You can find this very intriguing and frightening (?) train of thought here: https://www.youtube.com/watch?v=8nt3edWLgIg
Is this the one about the technological singularity? I’m just as willing to believe that AI sapience could result in something of equal or superior moral and ethical capacity, but of course it would be a process.
Yeah, never tell a woman you understand, Mike. It never ends well.
The reverse is true as well, generally speaking.