Weather has scaled back from bitterly cold, past cold, and down to chilly. With adequate layering it's pretty nice to be outside. The huge piles of snow are looking increasingly lonely and misplaced. Mostly working from a coffeeshop today - it's nice to be nearer the fresh air. The conference situation is resolved; I'm preparing/polishing a demo and chewing on a poster design. It's funny how much stuff is best done by background tasks in the mind - with active effort, I might create an ok poster over a brief period of devoted time, but if I just start chewing on it alongside all the other things I'm chewing on, the work gets done with very little direct time or devoted effort, and it gets done more creatively. I find the majority of the design part of programming to be best done that way too - with experience, one can avoid most of the pitfalls of "just do it" by having one's mind play with the possibilities in the background for long enough, so that implementation is usually a short implement/debug cycle. I'd bet being an author is like that too (not that I have direct experience); maybe thinking in general is best not done at a sprint.
On my way here I tried to go to Brueggers, but failed twice because I passed an old dude playing awesome woodwind instruments where SqHill's Panera's porch used to be. Very good music, but totally distracting - I had to rein in my meandering mind entirely to avoid walking right past Brueggers for a third time. I felt bad for the poor employees there who had to listen to elevator music, but maybe it's just as well - people using sharp knives to make custom orders of food might not need music of ultimate distraction.
I'm not sure if I'm glad or not that I didn't run into the old Hare Krishna I sometimes see in SqHill - those conversations tend to be long, and while they're often interesting, I'm also a bit weirded out that he keeps saying that I'm some kind of deeply spiritual being. Maybe he does that to everyone, or maybe it's that when we were talking about the way people should live their lives he liked hearing a perspective that a focus on wealth and possessions is a poor substitute for a focus on knowledge and human relationships. I don't think this really makes me spiritual (although the term is kind of fuzzy) - I don't mind being called spiritual, per se, but I don't like that it seems to be a misperception - one that I feel would lead me to disappoint him were we to talk more directly about philosophical materialism. I guess this kind of thing often comes up for people with an unlikely combination of identities/perspectives. He did invite and strongly encourage me to go to regular meetings of Hare Krishnas in people's homes he's involved with, but I politely demurred - that'd be a clear trip to awkward city given the wide gap in actual perspectives (even given some shared values). I'd generally be delighted to go to places as gaijin, to discuss the differences and similarities from the outside, but actually joining communities of that sort, either muting myself on areas where I disagree or disrupting whatever they normally do, wouldn't feel right. I used to have a JW come by once every two weeks or so on the weekend, and we'd hold polite discussions despite our disagreements - he tossed religious arguments at me that I dismantled, and I talked to him about philosophy and the wide variety of perspectives/philosophies on the nonreligious side of the line. Maybe having that difference provided the oyster's sand.
Still kind of stuck between trying to hop right into grad school, getting another academic job, or maybe even working in industry. I am amazingly good at indecision on very important life-direction things where the data doesn't really point in a single direction! Still, I think not going to Santa Barbara was probably a mistake, and not going to Qatar was probably another one. Maybe that'll help inspire me to break inertia - there's hardly enough land left here to stand on. Moving probably won't help, but there's nothing else left to do.
Interesting thinking about risks and learning - learning from mistakes is an important part of intelligence, but it's undesirable to "learn" from managed risks. In machine learning, there's a lot of interesting literature on this distinction for formally understandable systems (how strong is our model of the world? What's the anticipated cost of collecting enough data to improve it? etc), but in human behaviour our instincts for this are terrible - on the gross scale, maybe undesirable "learning from managed risks" is central to what "knee-jerk legislation" is. Some time back I wrote about the "leadership gene" - we could easily imagine a tribe of people in the past, led by a charismatic person of that sort (maybe with the face of David Cameron, whose policies I dislike but whom I find immensely difficult to dislike); they'd see bold, decisive, stupid leadership that would be emotionally fulfilling in a universe reimagined however is convenient to both emotional realities and whatever seems to be the challenge at hand, accuracy be damned. I suppose unless we really can teach statistics at a young age (I had my first exposure in middle school, in a 5-student class taught by the principal - a step in the right direction, but too timid a step; it's important to get in there before the instinctual framework that statistics replaces can be established), we'll never get most people past the stupid common disagreements to the interesting substantive ones in most fields. The careful rejection of instinct where it fails is one of the principal tasks of civilisation. The person who can see the occasional hiccup in a well-running system and investigate its foundations impartially rather than demand it be torn down - this is the kind of person we should hope to produce in future generations; our feet should be comfortable on abstract ground.
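The formal version of that tradeoff - how much should I pay, in deliberately risky trials, to improve my model of the world? - is easy to sketch. Here's a minimal toy example of my own (the arm payoffs and parameters are invented for illustration, not taken from any particular paper): an epsilon-greedy two-armed bandit, where a fixed fraction of choices are "managed risks" taken purely to gather data.

```python
import random

def epsilon_greedy(true_means, epsilon=0.1, steps=2000, seed=0):
    """Two-armed bandit: estimate each arm's payoff while still earning.

    A fraction epsilon of pulls are deliberate exploration (managed risk);
    the rest exploit the current best estimate.
    """
    rng = random.Random(seed)
    counts = [0] * len(true_means)
    estimates = [0.0] * len(true_means)
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            # Managed risk: pull a random arm just to learn about it.
            arm = rng.randrange(len(true_means))
        else:
            # Exploit: pull the arm our current model says is best.
            arm = max(range(len(true_means)), key=lambda a: estimates[a])
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        # Incremental running mean of observed rewards for this arm.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return estimates, total

# Two arms with true mean payoffs 0.2 and 0.8 (hypothetical numbers).
estimates, total = epsilon_greedy([0.2, 0.8])
```

The point of the sketch is that the "cost of learning" is explicit and tunable here (epsilon), whereas our instincts have no such dial - we over-update on vivid managed risks and under-update on dull ones.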
To second-guess nature, I wonder what kind of an EEA might've produced beings with better basic reasoning skills than humanity - we might daydream about a big Rubik's cube in the sky, but maybe this kind of thing is one of the best arguments for transhumanist technologies - our brains just had to be "good enough" for the fuzzy process of selection to keep us, but now we'd like something better than evolution needed. (I imagine the reason we don't have it now is that the actual benefit of a good statistical model is both heavily modern-worldview-dependent - that is, it requires philosophical materialism/naturalism to be particularly useful - and of such marginal benefit in evolutionary times that it was drowned out by the reasonably large amount of noise present in selection.)
To second-guess history, it'd be interesting to know how societies and histories would've worked out differently had we had this (for starters, I'd bet we'd see fewer casinos, ha ha ha).
As noted, I believe we can beat (with suitable education at the right age) our poor intuitive models out of ourselves, but to the extent that it's biological it'd be interesting to imagine tinkering with brain development so that it's naturally better - so the painful self-conquering (and kvetching of the defeated) that (yes yes, I bet you could predict that I'd bring us here) Freud described in 「Civilisation and its Discontents」 could be mitigated. Then again, if we survive long enough without editing it right out of our biology, maybe it'd be a suitable scar to help us keep some humility in whatever future we build for ourselves.
I think we can understand how people might consider this perspective alien to humanity, and why the dreams of the people (that Hollywood has homed in on) are so hostile to the scientist. The types of classical virtue that people have striven for since long before religions began to do the same (and then eventually claimed exclusive use of the idea) have always been hard - for secularists to reclaim that book in order to add new entries to it (yes, I do claim that some of this belongs in the field of virtue - it involves significant self-conquering, affects character, etc) might feel like an outrage to those who have managed to be the only ones talking about it for so long. There's a lot more to say on this front, but it's easily inferred from things I've probably said before about the costs of virtue.
For those who love playing with definitions: could we have a flavour of transhumanism based entirely on education and notions of virtue, or is that just philosophy (or, for the particularly snarky and historically blind, dystopia)?
I try not to be generally excitable about Google things - apart from their HR people (as well as a former boss (and maybe friendish) of mine) jerking me around, and their being a mostly advert-supported company, I'm nervous at how much gets built on top of their technologies, and I'm not sure how we'd replace it all if they decided to close everything tomorrow. There's all sorts of cool stuff people do with Google Maps, for example - are there public alternatives to all that sat data they licence from the various companies? Do we need a company like Google to sponsor basics like this, or is a decentralisation possible that would let us break their (so far mostly unexercised) options to restrict or ad-embellish things? Relying on them as heavily as we have is dangerous. That said, it's pretty cool that they now have bike trails in Google Maps and let people get separate walking and biking directions to places (thanks to jwz for pointing it out). Although entering that mode takes one to San Francisco, it seems to know plenty of Pittsburgh trails too.
Just to top off this scatterbrained post (think of this as a cherry?), the quote of the day:
- "Superior pilots use their superior judgment to avoid needing to exercise their superior skill"