Alicorn (alicorn24 on LJ) was already here when I got here, and will in all likelihood still be here after I leave. Less Wrongers will know her as one of the people with the highest karma on the site. I was kinda surprised when I met her, because somehow I'd gotten the impression that she'd be really shy, but she turned out to be outgoing and extroverted and not shy at all. Her school allowed people to make up their own degrees so long as they also completed some 'real' degree, and as a result she has degrees in both Philosophy and World-Building, which is totally awesome.
She writes serial fiction together with Tethys at elcenia.com, and has her own webcomic (which is cute and which I like) at htht.elcenia.com. Alicorn also makes good food (see her cooking blog) and likes petting people's hair, if they allow it.
Kaj: So, tell our readers, how did you come to be here?
Alicorn: I originally sent an e-mail last fall, asking about the summer, because at the time I expected to be in grad school for the foreseeable future. I didn't get a firm response because there were so many summer applications to sort through and no clear idea of how many spots there were. Then, come the spring semester, I decided I wasn't happy, discerned no school-compatible way to fix that, and asked Anna if I could come out if I were able to leave right away instead of at the end of the school year. After some consideration and discussion, the answer turned out to be "yes"; I withdrew from school, packed up, flew out here, and proved useful enough to be kept around.
Kaj: 'Useful enough to be kept around' leads us pretty naturally to the next question, which is, what are the things that you do around here?
Alicorn: I write Less Wrong posts sometimes, although lately while I have lots of ideas, they aren't gelling properly. I've started doing a lot of outreach, because I love to chat with people, including the people SIAI wants someone to stay in touch with. I've also been doing some human capital development projects, absorbing more content and developing new skills.
Kaj: There are too many potential lines of interr... uhh, interviewing that I could pursue; I have difficulty picking which ones. The outreach thing sounds interesting - do you generally get to talk with people about Singularity-type stuff a lot, or is it more general conversation? What kinds of people do you talk with that count as outreach?
Alicorn: I have a pretty low ratio of Singularity-stuff to general conversation. For one thing, this probably increases my long-term quantity of Singularity conversations: people will be more willing to listen to me pontificate on that sometimes if it's not all I ever talk about! A lot of my contacts are people I was already friends with before I got involved with SIAI - some through Less Wrong, some not. In order to count for outreach at all, they have to have relevant interests, though - I can't include every one of my friends on my list of contacts for this reason.
Kaj: That makes sense. How do people in your experience generally react to Singularity-type stuff when it does come up? And do you actively seek out new contacts?
Alicorn: I do actively seek new contacts, although I prefer not to "cold call" - or rather, "cold e-mail" - I like to know a little about who I'm talking to first. People have surprised me with their reactions to Singularity type stuff. Some people reject it so thoroughly - even if they usually seem willing to listen to what I have to say and think I rarely have stupid ideas! - and others seem to follow everything I present, but don't find it at all motivating. People who don't fall into one of those categories, I've typically met through Less Wrong or the SIAI - so I can't claim to have converted anyone.
Kaj: Alright. You also mentioned that you've been doing human capital development and acquiring new skills. What kinds of skills in particular?
Alicorn: Since I seem to have comparative advantage at luminosity, I've been putting extra effort into verbalizing how I do that - the luminosity sequence wasn't as good as I think it could have been, and I'll probably give the topic another crack on LW in a few months. I attend some of the workshops that people in the house give, which are on all kinds of topics. And of course I read books and articles.
Kaj: Say a few words about luminosity, for those readers of mine who aren't LW regulars?
Alicorn: Luminosity is self-knowledge: the ability to monitor what's going on in your mind, predict what you're going to do next, and find the best ways to change these if you want to.
Kaj: And here's the link to Alicorn's Luminosity sequence, for anyone who's interested. So how do you like living and working here in the house?
Alicorn: I like it a lot! All of the people here are really great. There are some challenges associated with living in a large group, but we navigate them pretty effectively. It's easy to wander around and find an interesting conversation if I have nothing to do, and lots of people to feed my delicious food.
Kaj: Cool. I occasionally have the feeling that the opportunity to talk with all these people so easily gets me distracted and prevents me from getting things done. Do you manage to avoid that?
Alicorn: I'm highly interruptible. While it costs me time to get sidetracked, it doesn't tend to make it much harder to pick up the project that was interrupted later. But if I need not to be disturbed, I can go in my room and close the door - or, for a less heavy-duty solution, put on my headphones.
Kaj: That works. What else. Oh yeah, how did you originally hear about the Singularity and all this stuff?
Alicorn: I was aware of the Singularity as a background idea, but for a long time I considered it a science fiction trope more than something that might actually happen - I assume I picked it up from all the fiction I read. I started taking it seriously after I found Less Wrong, which I discovered via the Overcoming Bias link, after having found OB through StumbleUpon.
Kaj: I thought it'd be something like that. So what are your own thoughts about the Singularity and our posthuman future? Do you think we'll just inevitably end up welcoming our robot overlords, for instance?
Alicorn: Can you rephrase that question, please?
Kaj: Sure. Basically I just meant to ask what your views were on things like the path to the Singularity, the likely timeframes and our chances of making it through intact. For instance, I'd personally be surprised if we didn't have real AI in say fifty years, and I suspect humanity has a pretty low chance of surviving the transition in a way that we'd consider positive (though I'd love to be proven wrong on that, obviously).
Alicorn: Hm. I'm not absolutely convinced we'll encounter a Singularity at all. I think it's entirely possible that there's some bottleneck in how fast technology can progress that we haven't hit yet, which, when it manifests itself, will smooth out all our further advancement and have us moving forward in a distinctly non-Singularityesque way. We could also all die, which would be bad. I'm skeptical that, if the Singularity happens, it will happen in fifty years or less: estimates for when things happen are often pushed back and virtually never pushed forward. The good part is that gives us lots of runway space in which to steer, insofar as we can steer. But I'm very dubious about CEV as a solution to fragility of value, and I think there are far more and deeper differences in human moral beliefs and human preferences than any monolithic solution can address. That doesn't mean we can't *drastically* improve things, though - or at least wind up with something that *I* like!
Kaj: Alright. I think I'm starting to run out of questions... oh yeah. With degrees in philosophy and world-building, you're somewhat different from the average Visiting Fellow, since a lot of people here tend to have more mathy or computer science-ish backgrounds. Do you think that's led to any situations where the difference is clearly noticeable, or is it something that hardly ever comes up? Do you think there should be more diversity of backgrounds here?
Alicorn: I stay out of the really technical discussions, generally. I don't think it comes up too much apart from that. I think cognitive diversity is underrated in general. My post "Epistemic Luck" mentions ideological families within a discipline - I don't find it remotely hard to believe that similar things could happen to entire disciplines, which seems a dangerous thing not to guard against. The Less Wrong/SIAI community is quite homogeneous in more ways than just aptitudes for science and math, and I worry that we're missing some gigantic, obvious failure mode that someone from a different background would spot at once.
Kaj: That does sound worryingly plausible. Alright, so I'm out of real questions, so time to go meta. Is there any question that you'd have liked me to ask you, and if so what and how would you answer it, and no, a meta-response like "how would you like me to answer that question" isn't allowed?
Alicorn: Gosh, I don't know, I'm just so relentlessly fascinating, how can I pick a single one of the arbitrarily large number of things you could have asked me as the best one when any of them would have gloriously entertained your readers?
Kaj: :D Roll a die?
Alicorn: My dice are in the mail from my old apartment and won't be here for a couple of days.
Kaj: Ah well. Well, aside from answers to difficult meta-questions, is there anything else you'd like to say before we're done with the interview?
Alicorn: Um, if anybody wants to be my friend, they should send me an e-mail or IM me. I am very approachable and nobody should find me intimidating at all.
Kaj: Cool. If anyone wants to do that, Alicorn's AIM name is alicorn24, and which e-mail addy should I mention?
Alicorn: email@example.com works fine. But if someone uses gChat, my ID there is firstname.lastname@example.org, and my MSN address is email@example.com :)
Kaj: :) And that's it, I think. Thanks for the interview!
Alicorn: You're welcome!