A view to the gallery of my mind


Thursday, March 16th, 2017
3:31 pm - Moving on

I expect this to be my last breakup post (about this particular breakup, at least :P).

After having processed all the pains I’ve discussed in previous posts, there was just a final one left, one that’s in a sense the simplest.

It’s that I have tremendous respect and admiration for my ex. She combines a brilliant intelligence, a fiery loyalty to her principles, and a stark determination to get through things no matter what. I’ve rarely encountered such a unique soul, and the pain on my mind was the question of whether I would encounter another again, let alone one who’d be interested in me.

But then I managed to flip the issue around in my head. To just focus on how amazing it is that she ever was interested in me in the first place, and how I’m honestly grateful and humbled that such a beautiful person held me in such high regard. To see the good moments that we had as a piece of validation that I can always remember and hold on to, trusting that if such a person saw something beautiful in me, then she couldn’t have been entirely wrong.

A few days ago I still felt some pain when I saw her name pop up anywhere online. Now I just feel happy to see her writing. Seeing that she’s still herself.

And unexpectedly, I feel some of that gratitude extend to my other former partners as well. Feeling happiness that we ever had any good moments, even if the relationships did not last.

And, if I tap into that feeling, I can extend it even further, to anyone who has ever displayed any liking towards me. Being grateful for that appreciation, for them seeing good things in me.

Thank you, everyone. And thank you again to everyone who has commented on or reacted to my previous breakup posts, for helping me get through this. I’m not going to say that I couldn’t have made it without you, but you people did make it a lot easier.

Thank you.

Originally published at Kaj Sotala. You can comment here or there.


Sunday, March 12th, 2017
2:39 pm - Re-interpreting meanings
After I made my last breakup post, siderea left me with some excellent thoughts about it. While there were a lot of good points, these were the parts that resonated the most. She started by describing the reaction that many people have to her:
 
… a lot of people, male, female and otherwise, fall “in like” with me very quickly, because for a lot of them, I “make” them feel good – put more accurately, the way I comport myself in the world is more comfortable to be around than they usually find themselves feeling. They feel – like you describe feeling for the woman you fell for – safe from humiliation or rejection when self-disclosing to me, like they can be more authentically themselves, which is a delicious feeling.
 
Here’s the first confusion: confusing how they feel with me for how they feel about me. It is one of the commonest human errors to decide that because one feels good with someone that they are good.
 
This is problematic first and most obviously because it’s how serial predators of all sorts groom victims: making the victims feel good so that the victims trust the perp to be good. Not pertinent to your case, except to bear in mind how dangerous an error that can be.
 
Less obvious and more pertinent is how that conflation confuses the one doing the conflating as to how much they actually know about the one they are so judging. The confusion of one’s own good feelings for the goodness of the person one attributes those good feelings to obscures what is often a concomitant fact: one doesn’t actually know much about the person who makes one feel good, except that they make one feel good. […]
 
You say, “What was so special was the almost instant feeling of connection” as if that feeling existed independently of any one specific human to have it. Feelings aren’t facts: that feeling of connection was had by you. It was a feeling you were having. That doesn’t mean there “was a connection” in some objective way. Further, saying that the feeling you had was “of connection” is just a projection of a meaning those feelings had. The words “a feeling of connection” don’t actually have any meaning. They’re a handwave that posits that the feelings – which probably all have names, like “adoration”, “pleasure”, “affection”, “delight”, “surprise” – indicate this hazy concept, “connection”.
 
Some of the main lessons I took away from this comment:
 
Part of my pain was in the feeling that I’d had a unique, almost mystical “connection” with someone, that we’d then lost. But as siderea pointed out, “a connection” doesn’t actually mean anything: it was just how I interpreted the feelings I had in the presence of my ex, as well as the feelings that I thought she had in my presence.
 
Going from “there was a unique and magical connection” to “there was a person who happened to fall into some kind of mental schema of a ‘safe person’ based on relatively superficial information, and thus made me feel safe, and at some moments there seemed to be mutuality in this” changes one’s perspective a lot.
 
For one, I had been feeling like it was a personal failure, telling of some deeper fundamental flaw in me, that I had screwed things up and “ruined” that connection. With the new perspective, it’s more like… Well, there were some moments when those feelings arose and others when they didn’t, and that had more to do with the quirks of our individual psychologies than anything else.
 
And as several people commenting on my last post implied, my side of the “connection” being primarily an emotion that *I* had suggests that recapturing that feeling doesn’t necessarily require finding someone who’s magical and rare and unique in some sense. Rather, it may be much more useful to just work on myself and my own emotions, to make it easier for me to achieve that feeling around people in general. (to use psych terms, this is a major inwards shift of the locus of control)
 
In the few days after reading siderea’s comment, painful memories of various kinds about this relationship kept popping up. It wasn’t very pleasant, but at the same time there was a sense of… my mind pulling up those memories so that it could reinterpret the meaning it had given them, and to then reconsolidate the version of the memory with the updated meaning.
 
Yesterday evening I noticed that I was feeling much less of an urge to go back and “make things right again”, but I still had a compelling need to have my ex think well of me, to fix any respect that might have been lost.
 
I asked myself: why do I feel that this is so important? It made sense to have this desire back when there was still a chance to fix our relationship, but what would fulfilling that desire do now?
 
No answer came back. Instead, the feeling seemed to weaken.
 
This night I had a dream where I was hanging out with my ex, and completely forgetting to think about what she thought of me, just getting absorbed in whatever activity it was that we were doing together.
 
And today I’ve been feeling pretty okay about that whole relationship and breakup thing.

Originally published at Kaj Sotala. You can comment here or there.


Tuesday, March 7th, 2017
1:38 pm - On perceived connections

Writing about this seems to be useful, both for me and some other people, so more on breakup pain:

The fact is that I don’t have very much experience of long relationships, and that I haven’t had many deep friendships either. At this moment I feel like I only have one really deep friendship, and I don’t get to see that person nearly as often as I’d like. I’ve long had a deep feeling of loneliness and being alone.

When I started hanging out with this person… she was unique. Now, of course when you get infatuated with someone new, they always seem unique and perfect and special. But even looking back at it with more objective eyes now, it still feels unique. Even before I’d really developed any strong crush, even when my attitude was still just “I like this person and they seem like there could be some potential”, on our first date there was already something magical.

We shared interests and values, but that’s true for a lot of people. What was so special was the almost instant feeling of connection. I can with confidence say that I have never in my life had any interaction with anyone go that smoothly and pleasantly.

On that first date, there was never a moment of awkwardness or being unsure of what to say; not the slightest feeling of unease. It felt completely, utterly, entirely safe; I confessed to some private things which I had intended to leave until later, because it felt entirely inconceivable to my intuition that she would react badly to them (and she didn’t). Conversation seemed to flow completely smoothly and naturally, the topics moving from sex to religion, from religion to the subjective nature of reality, from there to the academic study of gaming, from there to the probability of two people sharing a birthday.

I’ve never felt such a feeling of understanding and being understood, of everything just… clicking. And if this was just the first date, how deep and rich could our relationship yet become?

That date and several other early interactions were enough that I was ready to move to an unfamiliar town and leave basically my entire existing social circle behind in order to have that on a regular basis. They were enough that, if there had been any other incompatibilities, I would have been ready to put in practically any amount of work to smooth them out.

And I thought that this feeling of already being totally committed to it – despite how little time had passed – and being ready to invest practically anything in it to make it work and maintain that magic smoothness, was mutual.

That mistaken assumption on my part ended up shaping – and damaging – much of our interaction when things started going less well.

By the time the relationship was practically over, I heard her characterize it as “a brief thing of a few months”, not worth putting inordinate amounts of energy into if it looked like things weren’t going very well.

Not that magic, unique thing that I – maybe foolishly – had thought it was.

And now the next pain and fear that I need to process is the fact that it took me 30 years to find a person with whom there seemed to be the potential for such a deep and rich friendship, even if just for an instant. How much longer will it take to find somebody else like that? Let alone someone with whom that feeling of a genuinely unique connection would be mutual?

And is there any reason to assume that the answer to that question isn’t “longer than my remaining lifetime”?

I genuinely don’t know.

Originally published at Kaj Sotala. You can comment here or there.


Sunday, March 5th, 2017
2:23 pm - Letting go – but not *too* much

Dealing with breakup pain, part twenty million:

I mentioned in a previous post that dealing with loss seems to come in stages. Grief is not about grieving over a single thing: rather, there are many different things one has to come to terms with, all tangled up with each other.

The most recent pain I had in the last few days involved repeatedly recalling various good moments we had. It felt unclear to me what it was that I needed to do in order to absorb and integrate this pain: accept the fact that those moments were gone? But that didn’t seem to be it, and besides that was something that I felt I had processed already.

It turned out that it was kind of the opposite.

It was as if previously some part of my mind had come to terms with the fact that I wouldn’t have these kinds of moments with this person again. Now another part was saying something like “these moments were precious to us; and even though we are not going to have them with this person again, we wish to remember how good they were, and make sure that one day we’ll find something similar with some other person”.

The thing that the pain was calling my attention to was, in effect, a reminder not to go too far in accepting my loss. A reminder to keep thinking about the good moments and cherish them, lest I abandon the hope of finding something similar again.

And now that particular pain seems to be gone, the lesson having been learned and its message integrated into the rest of my mind.

Originally published at Kaj Sotala. You can comment here or there.


10:36 am - No, I don’t think we need more existential terror

There’s a popular narrative that goes roughly like this: most of human history has been dangerous and uncertain, and that’s the kind of environment our minds work the best in. The reason why so many people these days are bored and depressed is that we’ve made the world *too* safe; we would actually be healthier and happier if the world were somewhat more dangerous and not so regular and boring.

I think that this narrative is intuitive, convincing, and mostly wrong, though it does have *some* truth to it.

Here’s a comment I wrote in response to an article which was arguing the above narrative, talking about a need for “mild existential terror”:

I think it’s worth distinguishing between two different possibilities: one, that mild existential terror makes us better off by itself. Two, that mild existential terror doesn’t actually contribute to well-being, but our work to protect against it historically did, and it’s us not needing that work anymore that’s the real culprit.

To take as an example one important component of well-being: meaningful relationships (not necessarily romantic). Hunting that tiger required working closely together, and being able to trust others in your hunting party – literally trusting them with your life. This facilitated – forced – the creation of very deep and intense bonds.

In contrast, these days it’s all too easy to drift through life without *needing* to form a close bond with anyone, because there are few existential terrors that we need to protect ourselves against by bonding together. But it’s not the existential terror, by itself, that causes the bonding. Inject some existential terror into the life of someone lonely and all you’ve done is make them even more miserable. Psychological research on people’s well-being finds the number and quality of close relationships to be one of the most important factors in well-being, not the amount of fear in their lives.

People can form bonds even without that terror, even quickly, as with the “fast friends” protocol of just going through a series of increasingly personal questions. Arguably the fast friends protocol, too, evokes a *bit* of fear by making people vulnerable to each other. But this is a mild enough fear that I wouldn’t put it in the same category.

Also, look at children: kids raised in healthy, loving homes, who’ve experienced the least amount of fear in their lives, tend to be pretty happy and content until they start getting thrown into unhealthy social environments (e.g. school), where they start developing worries, reasons for self-censorship, and feelings that they’ll need to conform in order to fit in.

It’s the sudden appearance of existential fear that makes them worse off, not the lack of it.

When I was the most depressed, the problem was never “boredom”. The problem was feeling like I’d never achieve anything I wanted to, like I’d live in constant financial stress, like I’d never have a place where I’d feel I’d belong, like nobody would want me as a romantic partner. Again it was various kinds of existential fear that were hurting me, not the lack of them.

As I’ve started to recover, I’ve grown increasingly convinced that “being bored about life” isn’t really about having too few challenges. If you find things interesting, you’ll always discover new, fascinating challenges. Rather, the problem lies in demanding too much of yourself, thinking that you need to self-censor in order to fit in, feeling ashamed about parts of yourself and wanting to suppress them. All of these cause you to (consciously or subconsciously) suppress your natural urges and your natural motivation to do things, and then you end up bored because you are not letting yourself be interested in any of the things that you are actually, authentically interested in.

That, too, comes from a form of mild existential terror, the terror of not belonging unless you fit the mold X.

See also some interesting discussion on this on Facebook.

Originally published at Kaj Sotala. You can comment here or there.


Thursday, February 23rd, 2017
7:29 pm - On tangled and layered grief
A thing you’d think I’d already have figured out way earlier, but which only became obvious to me after this latest breakup, is that there are stages of grief (other than the anger-denial-etc. ones).
 
A number of times, I’ve felt like I’d already gotten through the pain… Only for it to come up again, with me getting increasingly frustrated – “didn’t I process this already?”
 
Fact is, I think I did. It’s just that the way we talk about grief is a little misleading. Grief is not one big monolithic block that you just “get over” as one; rather there may be a number of different issues that are painful. They are separate but tangled up with each other, and you aren’t truly “over it” until you have processed them *all*.
 
Things that I’ve processed so far are at least:
  • coming to accept that this would never work as the kind of idealized relationship I’d been imagining
  • coming to accept that while things working out as a *different* kind of relationship wouldn’t have been impossible earlier, it’s too late for that now
  • coming to accept that there were some simple mistakes that I made during the relationship that would have been easy to avoid and which could have made a huge difference to how things turned out; but which are pointless to dwell on now
  • coming to accept the loss of all the concrete good moments we had before things went sour, and the loss of that shared hope and excitement for the future that we had (this is the one my mind seems to be focused on working on right now)
As well as a few others that I think I’ve mostly gotten over, but which feel too private to mention.
 
I don’t know whether there will still be more. But it’s comforting to realize that I’m at least making progress, even if it doesn’t always feel like it.

Originally published at Kaj Sotala. You can comment here or there.


Thursday, February 9th, 2017
8:56 pm - Meditation instructions for self-compassion

I really liked, and have gotten a lot out of, the self-compassion advice in the book The Wisdom of No Escape and the Path of Loving-Kindness.

First, on the general attitude and approach:

When people start to meditate or to work with any kind of spiritual discipline, they often think that somehow they’re going to improve, which is a sort of subtle aggression against who they really are. It’s a bit like saying, ‘If I jog, I’ll be a much better person.’ ‘If I could only get a nicer house, I’d be a better person.’ ‘If I could meditate and calm down, I’d be a better person.’ Or the scenario may be that they find fault with others; they might say, ‘If it weren’t for my husband, I’d have a perfect marriage.’ ‘If it weren’t for the fact that my boss and I can’t get on, my job would be just great.’ And ‘If it weren’t for my mind, my meditation would be excellent.’

But loving-kindness – maitri – toward ourselves doesn’t mean getting rid of anything. Maitri means that we can still be crazy after all these years. We can still be angry after all these years. We can still be timid or jealous or full of feelings of unworthiness. The point is not to try to change ourselves. Meditation practice isn’t about trying to throw ourselves away and become something better. It’s about befriending who we are already. The ground of practice is you or me or whoever we are right now, just as we are. That’s the ground, that’s what we study, that’s what we come to know with tremendous curiosity and interest. […]

Sometimes among Buddhists the word ego is used in a derogatory sense, with a different connotation than the Freudian term. As Buddhists, we might say, ‘My ego causes me so many problems.’ Then we might think, ‘Well, then, we’re supposed to get rid of it, right? Then there’d be no problem.’ On the contrary, the idea isn’t to get rid of ego but actually to begin to take an interest in ourselves, to investigate and be inquisitive about ourselves. […]

This is not an improvement plan; it is not a situation in which you try to be better than you are now. If you have a bad temper and you feel that you harm yourself and others, you might think that sitting for a week or a month will make your bad temper go away – you will be that sweet person that you always wanted to be. Never again will a harsh word leave your lily-white lips. The problem is that the desire to change is fundamentally a form of aggression toward yourself. The other problem is that our hangups, unfortunately or fortunately, contain our wealth. Our neurosis and our wisdom are made out of the same material. If you throw out your neurosis, you also throw out your wisdom. Someone who is very angry also has a lot of energy; that energy is what’s so juicy about him or her. That’s the reason people love that person. The idea isn’t to try to get rid of your anger, but to make friends with it, to see it clearly with precision and honesty, and also to see it with gentleness. That means not judging yourself as a bad person, but also not bolstering yourself up by saying, ‘It’s good that I’m this way, it’s right that I’m this way. Other people are terrible, and I’m right to be so angry at them all the time.’ The gentleness involves not repressing the anger but also not acting it out. It is something much softer and more open-hearted than any of that. It involves learning how, once you have fully acknowledged the feeling of anger and the knowledge of who you are and what you do, to let it go. You can let go of the usual pitiful little story line that accompanies anger and begin to see clearly how you keep the whole thing going. So whether it’s anger or craving or jealousy or fear or depression – whatever it might be – the notion is not to try to get rid of it, but to make friends with it. That means getting to know it completely, with some kind of softness, and learning how, once you’ve experienced it fully, to let go.

And then on the specific instructions for self-compassionate meditation:

The technique is, first, to take good posture and, second, to become mindful of your out-breath. This is just your ordinary out-breath, not manipulated or controlled in any way. Be with the breath as it goes out, feel the breath go out, touch the breath as it goes out. Now, this seems simple, but to actually be with that breath and to be there for every breath requires a lot of precision. When you sit down and begin to meditate, the fact that you always come back to that breath brings out the precision, the clarity, and the accuracy of your mind. Just the fact that you always come back to this breath and that you try, in a gentle way, to be as fully with the breath as you can sharpens your mind.

The third part of the technique is that, when you realize that you’ve been thinking, you say to yourself, ‘Thinking.’ Now, that also requires a lot of precision. Even if you wake up as if from a dream and realize that you’ve been thinking, and you immediately go back to the breath and accidentally forget about the labeling, even then you should just pause a little bit and say to yourself, ‘Thinking.’ Use the label, because the label is so precise. Just acknowledge that you’ve been thinking, just that, no more, no less, just ‘thinking.’ Being with the out-breath cultivates the precision of your mind, and when you label, that too brings out the precision of your mind. Your mind becomes more clear and stabilized. As you sit, you might want to be aware of this.

If we emphasized only precision, our meditation might become quite harsh and militant. It might get too goal-oriented. So we also emphasize gentleness. One thing that is very helpful is to cultivate an overall sense of relaxation while you are doing the meditation. I think you’ll notice that as you become more mindful and more aware and awake, you begin to notice that your stomach tends to get very tense and your shoulders tend to get very tight. It helps a lot if you notice this and then purposely relax your stomach, relax your shoulders and your neck. If you find it difficult to relax, just gradually, patiently, gently work with it. […]

The moment when you label your thoughts ‘thinking’ is probably the key place in the technique where you cultivate gentleness, sympathy, and loving-kindness. Rinpoche used to say, ‘Notice your tone of voice when you say “thinking.”’ It might be really harsh, but actually it’s just a euphemism for ‘Drat! You were thinking again, gosh darn it, you dummy.’ You might really be saying, ‘You fool, you absolutely miserable meditator, you’re hopeless.’ But it’s not that at all. All that’s happened is that you’ve noticed. Good for you, you actually noticed! You’ve noticed that mind thinks continuously, and it’s wonderful that you’ve seen that. Having seen it, let the thoughts go. Say, ‘Thinking.’ If you notice that you’re being harsh, say it a second time just to cultivate the feeling that you could say it to yourself with gentleness and kindness, in other words, that you are cultivating a nonjudgmental attitude. You are not criticizing yourself, you are just seeing what is with precision and gentleness, seeing thinking as thinking. That is how this technique cultivates not only precision but also softness, gentleness, a sense of warmth toward oneself. The honesty of precision and the goodheartedness of gentleness are qualities of making friends with yourself. So during this period, along with being as precise as you can, really emphasize the softness. If you find your body tensing, relax it. If you find your mind tensing, relax it. Feel the expansiveness of the breath going out into the space. When thoughts come up, touch them very lightly, like a feather touching a bubble. Let the whole thing be soft and gentle, but at the same time precise. […]

You may have wondered why we are mindful of our out-breath and only our out-breath. Why don’t we pay attention to the out-breath and the in-breath? There are other excellent techniques that instruct the meditator to be mindful of the breath going out and mindful of the breath coming in. That definitely sharpens the mind and brings a sense of one-pointed, continuous mindfulness, with no break in it. But in this meditation technique, we are with the out-breath; there’s no particular instruction about what to do until the next out-breath. Inherent in this technique is the ability to let go at the end of the out-breath, to open at the end of the out-breath, because for a moment there’s actually no instruction about what to do. There’s a possibility of what Rinpoche used to call ‘gap’ at the end of the out-breath: you’re mindful of your breath as it goes out, and then there’s a pause as the breath comes in. It’s as if you … pause. It doesn’t help at all to say, ‘Don’t be mindful of the in-breath’ – that’s like saying, ‘Don’t think of a pink elephant.’ When you’re told not to be mindful of something, it becomes an obsession. Nevertheless, the mindfulness is on the out-breath, and there’s some sense of just waiting for the next out-breath, a sense of no project. One could just let go at the end of the out-breath. Breath goes out and dissolves, and there could be some sense of letting go completely. Nothing to hold on to until the next out-breath.

Even though it’s difficult to do, as you begin to work with mindfulness of the out-breath, then the pause, just waiting, and then mindfulness of the next out-breath, the sense of being able to let go gradually begins to dawn on you. So don’t have any high expectations – just do the technique. As the months and years go by, the way you regard the world will begin to change.

Originally published at Kaj Sotala. You can comment here or there.


Thursday, January 19th, 2017
3:35 pm - On my burnout

I’ve said a lot about depression, self-compassion, and breakup blues.

I haven’t said much about burnout. I have that too. Have had for years, in fact.

This is just the first time that I’ve had a chance to stop and heal.

I did a day of work last week, the first one I’ve done since the end of November. It went well. It felt good. So I thought I would try to get a full week’s worth of work done.

Then I basically crashed again.

Sometimes, your skin feels sensitive and raw. Everything is, if not outright painful, then at least unpleasant to touch.

That’s how I feel today, and on a lot of days. Except that the skin is my mind, and the things that I touch are thoughts about things to be done.

Goals. Obligations. Future calendar entries. But even things like a computer game I was thinking of playing, or a Facebook comment I’m thinking of replying to. Anything that I need to keep track of, touches against that rawness in my mind.

That’s another big part of why I’ve been so focused on self-compassion recently. On being okay with not getting anything done. On taking pleasure from just being present. On enjoying little, ordinary things. Because that’s all I have, on moments like this.

I’m getting better. There are fewer days like this. There are many days when I’m actually happy, enjoying it when I do things.

But I’m still not quite recovered. And I need to be careful not to forget that, lest I push myself so much that I crash again.

Originally published at Kaj Sotala. You can comment here or there.


10:42 am - Self-compassion

Often when we are in pain, what we really want is some validation for the pain.

Not advice. Not someone trying to make that pain go away (because it discomforts them). But someone to tell us that it’s okay to be in pain. That the things that bother us are valid and normal things to feel bad about.

Much of self-compassion seems to be the same. Not trying to stop being in pain. Not trying to change yourself. But giving yourself the validation that we usually look for from the outside. Accepting it as a part of yourself, as something that is alright to feel. Something that you can sympathize with yourself for feeling.

And if you find that you *cannot* accept the pain…

Then you unjudgingly accept that too. That today, this pain is too much for me to bear. You just are with it, without trying to change it.

And if you find that you cannot do that either, and feel bad and guilty for being so bad at this self-compassion thing…

Then you accept that, without trying to change it.

And if you find yourself being kinda okay with being in pain, but still wanting to change it, still wanting to explicitly apply some technique for deeper self-compassion rather than just accepting everything…

Then you accept that, and let yourself do it.

Dealt with in this way, self-compassion oddly starts looking like not really doing anything in particular. After all, you just go about living your life as you always have, not trying to change anything about yourself. Or trying, if that’s what you’re like. Not trying to exert any particular control over your behavior, except when you do.

Yet somehow you end up feeling quite different from normal.

(Except when you don’t, which is also fine.)

Originally published at Kaj Sotala. You can comment here or there.


Wednesday, January 11th, 2017
3:37 pm - Disjunctive AI scenarios: Individual or collective takeoff?

In this post, I examine Magnus Vinding’s argument against traditional “single AI fooms off” scenarios, as outlined in his book “Reflections on Intelligence”. While the argument itself is not novel – similar ones have been made before by Robin Hanson and J Storrs Hall, among others – I found Vinding’s case to be the most eloquently and compellingly put so far.

Vinding’s argument goes basically as follows: when we talk about intelligence, what we actually care about is the ability to achieve goals. For instance, Legg & Hutter collected 70 different definitions for intelligence, and concluded that a summary which captured the essential spirit of most of them was “Intelligence measures an agent’s ability to achieve goals in a wide range of environments”.
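
(As an aside, Legg & Hutter have also turned that informal summary into a formal “universal intelligence” measure, which scores an agent by its expected performance across all computable environments, weighted by each environment’s simplicity; roughly:)

```latex
% Legg & Hutter's universal intelligence measure (rough sketch):
%   \pi            = the agent being measured
%   E              = the set of computable, reward-bearing environments
%   K(\mu)         = the Kolmogorov complexity of environment \mu (simpler environments get more weight)
%   V^{\pi}_{\mu}  = the expected total reward \pi achieves in \mu
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```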

But once we substitute “intelligence” with “the ability to achieve goals”, we notice that we are actually talking about having tools, in several senses of the word:

  • Cognitive tools: our brains develop to have specialized processes for performing various kinds of tasks, such as recognizing faces, recognizing emotions, processing language, etc. Humans have some cognitive tools that are unique to us (such as sophisticated language) while lacking some that other animals have (such as the sophisticated smell processing of a dog).
  • Anatomical tools: not only do our brains carry out specific tasks, we also have an anatomy that supports it. For instance, our vocal cords allow us to produce a considerable variety of sounds to be used together with our language-processing capabilities. On the other hand, we also lack some other anatomical tools, such as the impressive noses of dogs. It is the combination of cognitive and anatomical tools that allows us to achieve a variety of different goals.
  • Physical tools: tools in the most conventional sense of the word, we would not be capable of achieving much unless we had various physical devices that can be used for manipulating the world.
  • Cultural tools: nobody would get very far if they had to derive all of their ideas from scratch. Rather, we acquire most of our language, ideas, and ways of thought that we use from the people around us.
  • Societal tools: an individual’s ability to achieve things has grown enormously as our economy has grown increasingly specialized. No single person could build a laptop, or even a pencil, all by themselves. Yet we have at our disposal tools – computers, web browsers, Internet service providers, online stores, manufacturers, delivery companies – which allow us to almost effortlessly acquire laptops and pencils and then put them into use.

This paragraph from Vinding’s book summarizes much of his argument:

“Human intelligence” is often compared to “chimpanzee intelligence” in a manner that presents the former as being so much more awesome than, and different from, the latter. Yet this is not the case. If we look at individuals in isolation, a human is hardly that much more capable than a chimpanzee. They are both equally unable to read and write on their own, not to mention building computers or flying to the moon. And this is also true if we compare a tribe of, say, thirty humans with a tribe of thirty chimpanzees. Such two tribes rule the Earth about equally little. What really separates humans from chimpanzees, however, is that humans have a much greater capacity for accumulating information, especially through language. And it is this – more precisely, millions of individuals cooperating with this, in itself humble and almost useless, ability – that enables humans to accomplish the things we erroneously identify with individual abilities: communicating with language, doing mathematics, uncovering physical laws, building things, etc. It is essentially this you can do with a human that you cannot do with a chimpanzee: train them to contribute modestly to society. To become a well-connected neuron in the collective human brain. Without the knowledge and tools of previous generations, humans are largely indistinguishable from chimpanzees.

So what are the implications for AI risk?

One of Vinding’s arguments is that “intelligence” has gotten increasingly distributed. Whereas a hunter-gatherer might only have drawn upon the resources of their own tribe, a modern human will enhance their capabilities by tapping into a network of resources that literally spans the entire globe. Thus, it may be misguided to focus on the point when AIs achieve human-level intelligence, for a single individual’s intelligence alone isn’t sufficient for achieving much. Instead, if AIs were to wipe out humanity, they would need to first achieve the level of capability that human society has… but the easiest way of achieving that would be to collaborate with human society and use its resources peacefully, rather than cause damage to it.

A similar argument was previously put forward by J Storrs Hall in his paper Engineering Utopia, which uses a more economic argument. Hall notes that even when a single AI is doing self-improvement (such as by developing better cognitive science models to improve its software), the rest of the economy is also developing better such models. Thus it’s better for the AI to focus on improving at whatever thing it is best at, and keep trading with the rest of the economy to buy the things that the rest of the economy is better at improving.

However, Hall notes that there could still be a hard takeoff, once enough AIs were networked together: AIs that think faster than humans are likely to be able to communicate with each other, and share insights, much faster than they can communicate with humans. The size of the AI economy could grow quite quickly, with Hall suggesting a scenario that goes “from […] 30,000 human equivalents at the start, to approximately 5 billion human equivalents a decade later”.
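
To get a feel for what that scenario implies, here’s a quick back-of-the-envelope calculation (the start and end figures are Hall’s; the implied growth rate is just arithmetic):

```python
# Back-of-the-envelope: what growth rate does Hall's scenario imply?
import math

start, end, years = 30_000, 5_000_000_000, 10   # Hall's figures
annual_factor = (end / start) ** (1 / years)    # multiplicative growth per year
doubling_months = 12 * math.log(2) / math.log(annual_factor)

print(f"growth factor per year: {annual_factor:.2f}x")         # ~3.3x
print(f"implied doubling time:  {doubling_months:.1f} months")  # ~7 months
```

In other words, Hall’s scenario corresponds to the AI economy roughly doubling every seven months for a decade.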

Any individual AI, then, will be most effective as a cooperating element of a community (as is any individual human […]). AI communities, on the other hand, will have the potential to grow into powers rivalling or exceeding the capability of the human race in relatively short order. The actions of communities are effects of the set of ideas they hold, the result of an extremely rapid memetic evolution […]

Real-time human oversight of such AI communities is infeasible. Once a networked AI community was established, a “cultural revolution” could overtake it in minutes on a worldwide scale, even at today’s communication rates. The essence of our quest for a desirable future world, then, both for ourselves and for the AIs, lies in understanding the dynamics of memetic evolution and working out ways to curb its excesses.

Hall suggests that an AI community could rapidly grow to the point where its members were exclusively communicating and trading with each other, humans being too slow to bother with. Suppose that you were a digital mind that thought a thousand times as fast as biological humans. If you wanted a task done, would you rather hire another digital mind to do it, taking what felt to you like an hour – or would you hire a biological human, and have to wait what felt like a month and a half? You’d probably go with your digital friend.
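
The “hour versus a month and a half” comparison is just the speed ratio restated; assuming the thousandfold speedup of the thought experiment:

```python
# To a mind running 1000x faster, one wall-clock hour of waiting on a human
# contractor feels like a thousand subjective hours.
SPEEDUP = 1000
human_task_hours = 1                             # real time the human needs
subjective_days = human_task_hours * SPEEDUP / 24
print(f"{subjective_days:.1f} subjective days")  # ~41.7 days, about a month and a half
```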

One obvious limitation is that this speed advantage would only apply for purely mental tasks. If you needed something manufactured, you might as well order something from the humans.

Vinding’s book could also be read as a general argument suggesting that the amount of distributed intelligence in human society was so large that AIs would still benefit from trade, and would need a large amount of time to learn to do everything themselves. Vinding writes:

… the majority of what humans do in the economy is not written down anywhere and thus not easily copyable. Customs and know-how run the world to an extent that is hard to appreciate – tacit knowledge and routines concerning everything from how to turn the right knobs and handles on an oil rig to how to read the faces of other humans, none of which is written down anywhere. For even on subjects where a lot is written down – such as how to read faces – there are many more things that are not. In much of what we do, we only know how we do, not exactly “what”, and this knowledge is found in the nooks and crannies of our brains and muscles, and in our collective organization as a whole. Most of this unique knowledge cannot possibly be deduced from a few simple principles – it can only be learned through repeated trial and error – which means that any system that wants to expand the economy must work with this enormous set of undocumented, not readily replaceable know-how and customs.

This is a compelling argument, but with recent progress in AI, it feels less compelling than it might have felt a few years back. Vinding mentions reading faces as an example of a domain involving much tacit knowledge, but computers are already outperforming humans at facial recognition and are starting to match humans at recognizing and interpreting emotional expressions, as well as in recognizing rare syndromes from facial patterns. As a more industrial example, DeepMind’s AI technology was recently deployed to optimize power usage at Google’s data centers, for a 15 percent improvement in power usage efficiency. Since relatively small reductions in power use translate to large savings – this change is estimated to save Google hundreds of millions of dollars – these were already highly optimized data centers, which makes the further 15 percent gain all the more notable.

Tacit knowledge is essentially knowledge that is based on pattern recognition, and pattern recognition is rapidly becoming one of AI’s strengths. Currently this still requires massive datasets – Goodfellow et al. (2016, chap 1) note that as a rule of thumb, a deep learning algorithm requires a dataset of at least 10 million labeled examples in order to achieve human-level or better performance. On the other hand, they also note that a large part of the success of deep learning has been because the digitization of society has made such large datasets increasingly available.

It seems likely that the development of better and better AI pattern recognition will drive further investment in collecting larger datasets, which will in turn make it even more profitable to continue investing in better pattern recognition. After DeepMind’s success with improving power efficiency at Google’s data centers, DeepMind’s Demis Hassabis told Bloomberg that “[DeepMind] knows where its AI system lacks information, so it may ask Google to put additional sensors into its data centers to let its software eke out even more efficiency”.

If AI allows efficiency to be increased, then businesses will be rebuilt in such a way as to give AI all the necessary information it needs to run them maximally efficiently – making tacit human knowledge of how things were previously done both unnecessary and obsolete. The items in Amazon’s warehouses are algorithmically organized according to a logic that makes little intuitive sense to humans, with an AI system telling the workers where to go; Foxconn is in the process of fully automating its factories; Uber is seeking to replace human drivers with self-driving cars. We are bound to see this kind of automation penetrate into ever larger parts of the economy over time, which will drive the further deployment of sensors and collection of better datasets in order to enable it. By the time AGI manifests, after several decades of this development, there’s no obvious reason to assume that very much of the tacit knowledge needed for running an economy would necessarily remain locked up in human heads anymore.

To sum things up, this suggests that beyond the classical “one AI fooms to a superintelligence and takes over the world” scenario, there may plausibly exist a scenario where the superintelligences are initially best off trading with humans. As time goes on and the size of the AI community grows, this community may collectively foom off as its members come to trade only with each other and have little use for humans. Depending on how long it takes for the community to grow, this may or may not look any different from traditional foom.

This blog post was written as part of research funded by the Foundational Research Institute.

Originally published at Kaj Sotala. You can comment here or there.


Saturday, January 7th, 2017
1:05 pm - Working on self-compassion: befriending my childhood self

For some reason, I’ve always felt an instinctive dislike towards my childhood self. I generally like kids, but if somebody had magically produced a copy of the person that I was at 5 or 10 and asked me to look after that kid for a while, my automatic reaction would have been “no, I don’t like that kid”.

I’ve also had pretty bad self-esteem for a long, long time. For my tenth birthday, I decided that I didn’t want to get any presents, because I felt like I had done nothing to deserve them. And I didn’t want to get any presents on future birthdays, or on any Christmas, either. (This caused what’s probably one of the oddest child-parent fights that I know of, with my dad being angry about wanting to give me presents and me steadfastly refusing them.)

These two things seemed obviously related.

So today I started exploring that feeling of dislike. Where was it coming from? Why did I have such an aversion regarding my younger self?

Now here’s the thing. I was an only child who frequently spent more time by himself or around adults than he did around other kids. Like all kids, I had a fair share of fights with my parents about stuff like bedtimes and such.

But I never realized that other kids had those same kinds of fights and tantrums too.

I remember having been distinctly shocked when a teacher we had when I was 13-15 made an offhand comment about this happening with younger kids.

I hadn’t known that this was a Kid Thing: I had thought it was a Kaj Thing.

And as a result, I’d felt guilty and bad over each time that I’d been self-centered and emotional in the way kids are. By the time I heard my teacher make that comment, it started to dawn on me on an intellectual level that this was nothing special: but on an emotional level I had already internalized a belief that I was exceptionally ungrateful and undeserving for everything my parents did for me.

Today I went back to those experiences. A few memories in particular stuck out: one of the countless bedtime struggles, as well as an occasion when I’d told my dad over the phone that I didn’t like him. And now, instead of just recalling my behavior in those memories – like on every previous occasion when I had recalled them – I tried to remember my emotional state, and to sympathize with it, and to recall other kids that I’ve seen acting up and who I’ve felt sympathetic towards.

And then there was a shift, and those memories started feeling like instances of a Kid Thing, rather than a uniquely Kaj Thing.

And now if you’d bring me a copy of me as I was at 5 or 10, I’d just like to hug that poor kid and tell him that it’s okay.

Originally published at Kaj Sotala. You can comment here or there.


Monday, December 26th, 2016
10:20 am - On being a triad and a team

For a few months this fall, I was part of a poly triad which ultimately didn’t work out… but the moments when it did work, worked. So well in fact, that I suspect that any relationship with only two people involved will from now on feel somehow lacking to me, no matter how good otherwise.

There were two of us guys involved with one gal, with the guys starting out as strangers to each other. Still, from the start it was clear that everyone wanted everybody to be happy, and was going to act accordingly.

To me, at the best moments, we felt like family. Not just two men who happened to both have a relationship with the same woman, but a cohesive unit doing its best to make sure that everyone in it (as well as the kid from a previous relationship) was as well off as possible. Thinking back to it, I recall moments like:

  • all three brainstorming and looking up stuff about how to make the kid sleep better at night, or to be more willing to sit still while riding a bus
  • one of us reading a book aloud to the two others, all three cuddling together
  • everyone spending several hours carrying some fresh wood together
  • all three sitting together and discussing some conflicts that had come up between two of them, with the third one offering a more neutral outside perspective and acting as a general calming force

It’s hard to describe, but there being three of us brought a very strong sense of extra stability to the relationship. If someone was upset or doing badly, nobody needed to feel like they alone had the primary burden of helping that person out. Whoever needed support, there were two other people to shoulder the effort of providing it. And nobody would hesitate to provide it, as long as they were in a shape to do so.

While it ultimately didn’t work out, that feeling of being a tight-knit family, with a sense of “one for all, all for one”… I’m going to miss that, in any relationship that doesn’t have it. You can get the sense of mutual support with just a single couple, of course; but things like that sense of “we’re both in love with the same person so we’re going to work together to make her happy; and we know that she cares about us both and will be the happiest if both of us are happy, so we’ll also do our best to help each other out whenever we can”… that I don’t think you can really get without having a triad.

Originally published at Kaj Sotala. You can comment here or there.


Tuesday, November 29th, 2016
11:30 am - Suddenly, a taste of freedom

So a few days back, I mentioned that after getting rid of my subconscious idealized assumptions of what a relationship “should” be like, I stopped being so desperate to be in a relationship.

And some time before that, I mentioned that I’d decided to put the whole “saving the world” thing on hold for a few years and focus on taking care of myself first.

As a result, I’ve suddenly found myself having *no* pressing goals that would direct my life. No stress about needing to do something big-impact. No constant loneliness and thinking about how to best impress people.

Just a sudden freedom to do basically anything.

I’m still in the process of disassembling various mental habits that had kept me single-mindedly focused on the twin goals of saving the world and getting into a relationship. But I’m starting to suspect that even more things were defined by those goals than I had realized.

For instance, my self-esteem has usually been pretty bad, probably because I was judging myself and my worth pretty much entirely by how well I did at those two goals. And I didn’t feel like I was doing particularly well at either.

Now I can just… Live a day at a time and not sweat it.

It’s going to take a while to get used to this.

Originally published at Kaj Sotala. You can comment here or there.


Monday, November 28th, 2016
10:08 am - Finding slices of joy

Three weeks ago, I ran across an article called “Google’s former happiness guru developed a three-second brain exercise for finding joy”. Yes, the title is kinda cringe-worthy, but the content is good. Here are the most essential five paragraphs:

Successfully reshaping your mindset, [Chade-Meng Tan] argues, has less to do with hours of therapy and more to do with mental exercises, including one that helps you recognize “thin slices of joy.”

“Right now, I’m a little thirsty, so I will drink a bit of water. And when I do that, I experience a thin slice of joy both in space and time,” he told CBC News. “It’s not like ‘Yay!’” he notes in Joy on Demand. “It’s like, ‘Oh, it’s kind of nice.’”

Usually these events are unremarkable: a bite of food, the sensation of stepping from a hot room to an air-conditioned room, the moment of connection in receiving a text from an old friend. Although they last two or three seconds, the moments add up, and the more you notice joy, the more you will experience joy, Tan argues. “Thin slices of joy occur in life everywhere… and once you start noticing it, something happens, you find it’s always there. Joy becomes something you can count on.” That’s because you’re familiarizing the mind with joy, he explains.

Tan bases this idea on neurological research about how we form habits. Habitual behaviors are controlled by the basal ganglia region of the brain, which also plays a role in the development of memories and emotions. The better we become at something, the easier it becomes to repeat that behavior without much cognitive effort.

Tan’s “thin slice” exercise contains a trigger, a routine, and a reward—the three parts necessary to build a habit. The trigger, he says, is the pleasant moment, the routine is the noticing of it, and the reward is the feeling of joy itself.

Since then, I have been working on implementing its advice, and making it a habit to notice the various “thin slices of joy” in my life.

It was difficult to remember at first, and on occasions when I’m upset for any reason it’s even harder to follow, even if I do remember it. Still, it is gradually becoming a more entrenched habit, with me remembering it and automatically following it more and more often – and feeling better as a result. I’m getting better at noticing the pleasure in sensations like

  • Drinking water.
  • Eating food.
  • Going to the bathroom.
  • Having drops of water fall on my body while in the shower.
  • The physicality of brushing teeth, and the clean feeling in the mouth that follows.
  • Being in the same room as someone and feeling less alone, even if both are doing their own things.
  • Typing on a keyboard and being skilled enough at it to have each finger just magically find the right key without needing to look.

And so on.

Most of these are physical sensations. I would imagine that this would be a lot harder for someone who doesn’t feel comfortable in their body. But for me, a great thing about this is that my body is always with me. Anytime when I’m sitting comfortably – or standing, or lying, or walking comfortably – I can focus my attention on that comfort and get that little bit of joy.

In the article, it said that

“Thin slices of joy occur in life everywhere… and once you start noticing it, something happens, you find it’s always there. Joy becomes something you can count on.” That’s because you’re familiarizing the mind with joy, he explains.

I feel like this is starting to happen to me. Still not reliably, still not always, still easily broken by various emotional upsets.

But I still feel like I’m making definite progress.

Originally published at Kaj Sotala. You can comment here or there.


Saturday, November 26th, 2016
12:41 pm - Relationship realizations

Learning experiences: just broke up with someone recently. Part of the problem was that I had some very strong, specific and idealized expectations of what a relationship “should” be like – expectations which caused a lot of trouble, but which I hadn’t really consciously realized that I had, until now.

Digging up the expectations and beating them into mush with a baseball bat came too late to save this particular relationship, but it seems to have had an unexpected side effect: the thought of being single feels a lot less bad now.

I guess that while I had that idealized vision of “being in a relationship”, my mind was constantly comparing singledom to that vision, finding my current existence to be lacking, and feeling bad as a result. But now that I’ve gone from “being in a relationship means X” to “being in a relationship can mean pretty much anything, depending on the people involved”, there isn’t any single vision to compare my current state against. And with nothing to compare against, there’s also nothing that would make me feel unhappy because I don’t have it currently.

Huh.

Originally published at Kaj Sotala. You can comment here or there.


Friday, September 23rd, 2016
11:54 am - Software for Moral Enhancement

We all have our weak moments. Moments when we know the right thing to do, but are too tired, too afraid, or too frustrated to do it. So we slip up, and do something that we’ll regret.

An algorithm will never slip up in a weak moment. What if we could identify when we are likely to make mistakes, figure out what we’d want to do instead, and then outsource our decisions to a reliable algorithm? In what ways could we use software to make ourselves into better people?

Passive moral enhancement

One way of doing this might be called passive moral enhancement, because it happens even without anyone thinking about it. For example, if you own a self-driving car, you will never feel the temptation to drink and drive. You can drink as much as you want, but your car will always be the one that does the driving, so you will never endanger others by your drinking.

In a sense this is an uninteresting kind of moral enhancement, since there is nothing novel about it. Technological advancement has always changed the options that we have available to us, and made some vices less tempting while making others more tempting.

In another sense, this is a very interesting kind of change, because simply removing the temptation to do bad is a very powerful way to make progress. If you like drinking, it’s a pure win for you to get to drink rather than having to stay sober just because you’re driving. If we could systematically engineer forms of passive moral enhancement into society, everyone would be better off.

Of course, technology doesn’t always reduce the temptation to do bad. It can also open up new, tempting options for vice. We also need to find ways for people to more actively reshape their moral landscape.

A screenshot from the GoodGuide application.

Reshaping the moral landscape

Above is a screenshot from GoodGuide. GoodGuide is an application which rates the health, environmental, and societal impact of different products on a scale from 1 to 10, making it easier to choose sustainable products. This is an existing application, but similar ideas could be taken much further.

Imagine having an application which allowed you to specify what you considered to be an ethical product and what kinds of things you needed or liked. Then it would go online and do your shopping for you, automatically choosing the products that best fit your needs and which were also the most ethical by your criteria.

Or maybe your criteria would act as a filter on a search engine, filtering out any products you considered unethical – thus completely removing the temptation to ever buy them, because you’d never even see them.
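To make this a bit more concrete, here is a minimal sketch of what such a filter might look like, assuming each product comes tagged with an ethics score in the spirit of GoodGuide’s 1-to-10 ratings. The Product class, the catalog, the scores, and the threshold are all made up for illustration; this is a sketch of the idea, not a description of any existing application.

from dataclasses import dataclass

@dataclass
class Product:
    name: str
    price: float
    ethics_score: float  # hypothetical 1-10 rating, in the spirit of GoodGuide

def ethical_search(products, min_score, query=""):
    """Return only products meeting the buyer's own ethical threshold,
    so anything below it is never even seen; rank the rest by score, then price."""
    visible = [p for p in products
               if p.ethics_score >= min_score and query.lower() in p.name.lower()]
    return sorted(visible, key=lambda p: (-p.ethics_score, p.price))

catalog = [
    Product("Fair-trade coffee", 9.0, 8.5),
    Product("Budget coffee", 4.0, 2.5),
    Product("Organic coffee", 7.5, 7.0),
]
print(ethical_search(catalog, min_score=6, query="coffee"))
# "Budget coffee" never shows up in the results, so there is no temptation to buy it.

The interesting design choice is that the threshold is the buyer’s own, set once in a calm moment, rather than something that has to be re-decided in every individual weak moment.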

Would this be enough? Would people be sufficiently motivated to set and use such criteria, just out of the goodness of their hearts?

Probably many would. But it would still be good to also create better incentives for moral behavior.

Software to incentivize moral behavior

This six-way kidney exchange was carried out in 2015 at the California Pacific Medical Center. Sutter Health/California Pacific Medical Center.

Above, you can see a chain of kidney donations created by organ-matching software.

Here’s how it works. Suppose that my mother has failing kidneys, and that I would like to help her by giving her one of my kidneys. Unfortunately, the compatibility between our kidneys is poor despite our close relation. A direct donation from me to her would be unlikely to succeed.

Fortunately, organ-matching software manages to place us in a chain of exchanges. We are offered a deal. If I donate my kidney to Alice, who’s a complete stranger to me, then another stranger will donate their kidney – which happens to be an excellent match – to my mother. And as a condition for Alice getting a new kidney, Alice’s brother agrees to donate his kidney to another person. That person’s mother agrees to donate her kidney to the next person, and that person’s husband agrees to donate his kidney… and so on. In this way, what was originally a single donation can be transformed into a chain of donations.
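To give a rough sense of what such matching software is doing, here is a toy sketch in Python. The names, the blood types, the simplified ABO compatibility rule, and the greedy chain-building are all illustrative assumptions of mine – real organ-matching systems optimize over large pools with far richer medical data.

CAN_GIVE_TO = {  # simplified ABO blood-type compatibility (ignores crossmatching etc.)
    "O": {"O", "A", "B", "AB"},
    "A": {"A", "AB"},
    "B": {"B", "AB"},
    "AB": {"AB"},
}

def compatible(donor_type, patient_type):
    return patient_type in CAN_GIVE_TO[donor_type]

# Each pair: a patient plus a relative who is willing to donate,
# but is incompatible with their own patient.
pairs = [
    {"patient": ("Mother", "A"), "donor": ("Me", "B")},
    {"patient": ("Alice", "B"),  "donor": ("Alice's brother", "A")},
    {"patient": ("Carol", "A"),  "donor": ("Carol's husband", "B")},
]

def build_chain(start_donor, pairs):
    """Greedily extend a chain: each donation frees the recipient's own
    donor to give onward to the next compatible patient in the pool."""
    chain, remaining, donor = [], list(pairs), start_donor
    while True:
        match = next((p for p in remaining
                      if compatible(donor[1], p["patient"][1])), None)
        if match is None:
            return chain
        chain.append((donor[0], match["patient"][0]))
        remaining.remove(match)
        donor = match["donor"]  # this pair's donor now donates to someone else

# A non-directed ("altruistic") donor kicks the chain off.
print(build_chain(("Altruistic stranger", "O"), pairs))
# [('Altruistic stranger', 'Mother'), ('Me', 'Alice'), ("Alice's brother", 'Carol')]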

As a result of this chain, people who would usually have no interest in helping strangers end up doing so, because they want to help their close ones. By setting up the chain, the software has aligned our concern for our loved ones with helping strangers.

The more we can develop ways of incentivizing altruism, the better off society will become.

Is this moral enhancement?

At this point, someone might object to calling these things moral enhancement. Is it really moral enhancement if we are removing temptations and changing incentives so that people do more good? How is that better morality – wouldn’t better morality mean making the right decisions when faced with hard dilemmas, rather than dodging the dilemmas entirely?

My response would be that much of the progress of civilization consists precisely of making it easier to be moral.

I have had the privilege of growing up in a country that is wealthy and safe enough that I have never needed to steal or kill. I have never been placed in a situation where those would have been sensible options, let alone necessary for my survival. And because I’ve had the luck of never needing to do those things, it has been easy for me to internalize that killing people or stealing from them are things that you simply don’t do.

Obviously it’s also possible for someone to decide that stealing and killing are wrong despite growing up in a society where they have to do those things. Yet, living in a safer society means that people don’t have to decide it – they just take it for granted. And societies where people have seen less conflict tend to be safer and have more trust in general.

If we can make it easier for people to act in the right way, then more people will end up behaving in ways that make both themselves and others better off. I’d be happy to call that moral enhancement.

Whatever we decide to call it, we have an opportunity to use technology to make the world a better place.

Let’s get to it.

Originally published at Kaj Sotala. You can comment here or there.


Tuesday, August 16th, 2016
2:21 pm - An appreciation of the Less Wrong Sequences
Ruby Bloom recently posted about the impact that Eliezer Yudkowsky’s Less Wrong Sequences have had on his thinking. I felt compelled to do the same.
 
Several people have explicitly told me that I’m one of the most rational people they know. I can also think of at least one case where I was complimented by someone who was politically “my sworn enemy”, who said something along the lines of “I do grant that *your* arguments for your position are good, it’s just everyone *else* on your side…”, which I take as some evidence of me being able to maintain at least some semblance of sanity even when talking about politics.
 
(Seeing what I’ve written above, I cringe a little, since “I’m so rational” sounds so much like an over-the-top, arrogant boast. I certainly have plenty of my own biases, as does everyone who is human. Imagining yourself to be perfectly rational is a pretty good way of ensuring that you won’t be, so I’d never claim to be exceptional based only on my self-judgment. But this is what several people have explicitly told me, independently of each other, sometimes staking part of their own reputation on it by stating this in public.)
 
However.
 
Before reading the Sequences, I was very definitely *not* that. I was what the Sequences would call “a clever arguer” – someone who was good at coming up with arguments for their own favored position, and didn’t really feel all that compelled to care about the truth.
 
The single biggest impact of the Sequences that I can think of is that before reading them – and Eliezer’s other writings – I didn’t really think that beliefs had to be supported by evidence.
 
Sure, on some level I acknowledged that you can’t just believe *anything* you can find a clever argument for. But I do also remember thinking something like “yeah, I know that everyone thinks that their position is the correct one just because it’s theirs, but at the same time I just *know* that my position is correct just because it’s mine, and everyone else having that certainty for contradictory beliefs doesn’t change that, you know?”.
 
This wasn’t a reductio ad absurdum, it was my genuine position. I had a clear emotional *certainty* of being right about something, a certainty which wasn’t really supported by any evidence and which didn’t need to be. The feeling of certainty was enough by itself; the only thing that mattered was finding the evidence to (selectively) present to others in order to persuade them. Not that it likely would persuade them, since they’d have their own feelings of certainty, similarly blind to most evidence. But they might at least be forced to concede the argument in public.
 
It was the Sequences that first changed that. It was reading them that made me actually realize, on an emotional level, that correct beliefs *actually* required evidence. That this wasn’t just a game of social convention, but a law of the universe as iron-clad as the laws of physics. That if I caught myself arguing for a position with arguments that I knew to be weak, the correct thing to do wasn’t to hope that my opponents wouldn’t spot the weaknesses, but rather to just abandon those weak arguments myself. And then to question whether I even *should* believe that position, having realized that my arguments were weak.
 
I can’t say that the Sequences alone were enough to take me *all* the way to where I am now. But they made me more receptive to other people pointing out when I was biased, or incorrect. More humble, more willing to take differing positions into account. And as people pointed out more problems in my thinking, I gradually learned to correct some of those problems, internalizing the feedback.
 
Again, I don’t want to claim that I’d be entirely rational. That’d just be stupid. But to the extent that I’m more rational than average, it all got started with the Sequences.
 
Ruby wrote:
I was thinking through some challenges and I noticed the sheer density of rationality concepts taught in the Sequences which I was using: “motivated cognition”, “reversed stupidity is not intelligence”, “don’t waste energy on thoughts which won’t have been useful in universes where you win” (possibly not in the Sequences), “condition on all the evidence you have”. These are fundamental concepts, core lessons which shape my thinking constantly. I am a better reasoner, a clearer thinker, and I get closer to the truth because of the Sequences. In my gut, I feel like the version of me who never read the Sequences is epistemically equivalent to a crystal-toting anti-vaxxer (probably not true, but that’s how it feels) who I’d struggle to have a conversation with.
And my mind still boggles that the Sequences were written by a single person. A single person is responsible for so much of how I think, the concepts I employ, how I view the world and try to affect it. If this seems scary, realise that I’d much rather have my thinking shaped by one sane person than a dozen mad ones. In fact, it’s more scary to think that had Eliezer not written the Sequences, I might be that anti-vaxxer equivalent version of me.
I feel very similarly. I have slightly more difficulty pointing to specific concepts from the Sequences that I employ in my daily thinking, because they’ve become so deeply integrated into my thought that I’m no longer explicitly aware of them; but I do remember a period in which they were still in the process of being integrated, and when I explicitly noticed myself using them.
 
Thank you, Eliezer.
 
(There’s a collected and edited version of the Sequences available in ebook form. I would recommend trying to read it one article at a time, one per day: that’s how I originally read the Sequences, one article a day as they were being written. That way, they would gradually seep their way into my thoughts over an extended period of time, letting me apply them in various situations. I wouldn’t expect just binge-reading the book in one go to have the same impact, even though it would likely still be of some use.)

Originally published at Kaj Sotala. You can comment here or there.


Saturday, June 11th, 2016
6:11 pm - Error in Armstrong and Sotala 2012

Katja Grace has analyzed my and Stuart Armstrong’s 2012 paper “How We’re Predicting AI – or Failing To”. She discovered that one of the conclusions, “predictions made by AI experts were indistinguishable from those of non-experts”, is flawed due to “a spreadsheet construction and interpretation error”. In other words, I coded the data in one way, there was a communication error and a misunderstanding about what the data meant, and as a result of that, a flawed conclusion slipped into the paper.

I’m naturally embarrassed that this happened. But Katja only caught the error because we had made our data freely available, allowing her to spot the discrepancy. This is why data sharing is something that science needs more of. Mistakes happen to everyone, and transparency is the only way to have a chance of spotting them.

I regret that we screwed up this bit, but I’m proud that we shared our data and made it possible for someone to catch the mistake.

EDITED TO ADD: Some people have taken this mistake to suggest that the overall conclusion – that AI experts are not good predictors of AI timelines – is flawed. That would overstate the significance of the mistake. While one of the lines of evidence supporting the overall conclusion was flawed, several others are unaffected by this error: expert predictions disagree widely with each other, many past predictions have turned out to be false, and the psychological literature on what’s required for the development of expertise suggests that it should be very hard to develop expertise in this domain. (See the original paper for details.)

(I’ve added a note of this mistake to my list of papers.)

Originally published at Kaj Sotala. You can comment here or there.


Saturday, May 14th, 2016
11:39 am - Smile, You Are On Tumblr.Com

I made a new tumblr blog. It has photos of smiling people! With more to come!

Why? Previously I happened to need pictures of smiles for a personal project. After going through an archive of photos for a while, I realized that looking at all the happy people made me feel really happy and good. So I thought that I might make a habit out of looking at photos of smiling people, and sharing them.

Follow for a regular extra dose of happiness!

Originally published at Kaj Sotala. You can comment here or there.


Wednesday, April 27th, 2016
9:52 am - Decisive Strategic Advantage without a Hard Takeoff (part 1)

A common question when discussing the social implications of AI is whether to expect a soft takeoff or a hard takeoff. In a hard takeoff, an AI will, within a relatively short time, grow to superhuman levels of intelligence and become impossible for mere humans to control.

Essentially, a hard takeoff would allow the AI to achieve a so-called decisive strategic advantage (DSA) – “a level of technological and other advantages sufficient to enable it to achieve complete world domination” (Bostrom 2014) – in a very short time. The main relevance of this is that if a hard takeoff is possible, it becomes much more important to get the AI’s values right on the first try – once the AI has undergone a hard takeoff and achieved a DSA, it is in control, with whatever values we’ve happened to give it.

However, if we wish to find out whether an AI might rapidly acquire a DSA, then the question of “soft takeoff or hard” seems too narrow. A hard takeoff would be sufficient, but not necessary for rapidly acquiring a DSA. The more relevant question would be, which competencies does the AI need to master, and at what level relative to humans, in order to acquire a DSA?

Considering this question in more detail reveals a natural reason for why most previous analyses have focused on a hard takeoff specifically. Plausibly, for the AI to acquire a DSA, its level in some offensive capability must overcome humanity’s defensive capabilities. A hard takeoff presumes that the AI becomes so vastly superior to humans in every respect that this kind of an advantage can be taken for granted.

As an example scenario which does not require a hard takeoff, suppose that an AI achieves a capability at biowarfare offense that overpowers biowarfare defense, as well as achieving moderate logistics and production skills. It releases deadly plagues that decimate human society, then uses legally purchased drone factories to build up its own infrastructure and to take over abandoned human facilities.

There are several interesting points to note in conjunction with this scenario:

Attack may be easier than defense. Bruce Schneier writes that

Attackers generally benefit from new security technologies before defenders do. They have a first-mover advantage. They’re more nimble and adaptable than defensive institutions like police forces. They’re not limited by bureaucracy, laws, or ethics. They can evolve faster. And entropy is on their side — it’s easier to destroy something than it is to prevent, defend against, or recover from that destruction.

For the most part, though, society still wins. The bad guys simply can’t do enough damage to destroy the underlying social system. The question for us is: can society still maintain security as technology becomes more advanced?

A single plague, once it has evolved or been developed, can require multi-million-dollar responses to contain. At the same time, it is trivial to produce if desired, especially using robots that do not need to fear infection. And creating new variants as new vaccines are developed may be quite easy, forcing the creation – and distribution – of yet more vaccines.

Another point that Schneier has made is that in order to keep something protected, the defenders have to succeed every time, whereas the attacker only needs to succeed once. This may be particularly hard if the attacker is capable of developing an attack that nobody has used before, such as with hijacked airplanes being used against major buildings in the 9/11 attacks, or with the various vulnerabilities that the Snowden leaks revealed the NSA to have been using for extensive eavesdropping.

Obtaining a DSA may not require extensive intelligence differences. Debates about takeoff scenarios often center around questions such as whether a self-improving AI would quickly hit diminishing returns, and how much room for improvement there is beyond the human level of intelligence. However, these questions may be irrelevant: especially if attack is easier than defense, only a relatively small edge in some crucial competency (such as biological warfare) may be enough to give the AI a DSA.

Exponential growth in the form of normal economic growth may not have produced astounding “fooms” yet, but it has produced plenty of situations where one attacker has gained a temporary advantage over others.

The less the AI cares about human values, the more destructive it may be. An AI which cares mainly about calculating the digits of pi may be willing to destroy human civilization in order to make sure that a potential threat to it is eliminated, ensuring that it can go on calculating the maximum number of digits unimpeded.

However, an AI which was programmed to maximize something like the “happiness of currently-living humans” may be much less willing to risk substantial human deaths. This would force it to focus on less destructive takeover methods, potentially requiring more sophisticated abilities.

It is worth noting that this only applies to AIs whose values are defined in terms of how they affect currently existing humans. An AI that was only maximizing human happiness in general might be willing to destroy all existing humans, and then recreate large numbers of humans in simulations.

In effect, the AI’s values determine the level of intelligence it needs in order to achieve the kind of DSA that is useful for its purposes.

Any destructive plan requires the ability to rebuild afterwards. It would not be of much use for the AI to destroy all of human civilization, if it was dependent on electricity from human-run power plants, and incapable of building or running its own. Thus, purely offensive capabilities will need to be paired with whatever rebuilding capacities are necessary after the offensive capabilities have been deployed.

This calculation may be upset if the AI believes itself to be in immediate danger of destruction by humans, and believes that its values will still be more likely to be met in a universe where it continues to exist, even if that requires risking a universe where it cannot rebuild after deploying its attack. Thus, being threatened may force the AI’s hand and cause it to launch a potentially destructive offense even when it is uncertain of its capability to rebuild.

The rational level of aggressiveness depends on the extent to which the AI can selectively target human resources. Human nations generally avoid creating engineered pandemics and using them against their enemies, because they know that the pandemic could easily spread back to them. An AI with no biological components might be immune to this consideration, allowing it to deploy biological weapons more freely. On the other hand, the AI might e.g. need electricity, a dependence which humans do not share and which might give them an advantage in some situations.

A way to formulate this is that attacks differ in the extent to which they can be selectively targeted. Traditional firearms only damage the targets they are fired upon, whereas pandemics potentially threaten all the members of a species that they can infect. To the extent that the AI needs to rely on the human economy to produce resources that it needs to survive, attacks threatening the economy also threaten the AI’s resources; these resources are in a sense shared between the AI and humanity, so any attacks which cause indiscriminate damage to those resources are dangerous for both. The more the AI can design attacks which selectively deprive its opponents of resources, the lower its threshold for using them.

This blog post was written as part of research funded by the Foundational Research Institute.

Originally published at Kaj Sotala. You can comment here or there.

