Today, I spent the first half of the day at the Brain Imaging Research Centre, initially to get some software, but then discussing software and the like with some of the researchers there. They're all really friendly, and I seem to get along well with them. To some degree, the BIRC is my second workplace -- when I'm running a study, I often spend more time down there than in my office. I met another researcher who works for someone who might be described as my boss's rival, and who's going to be wrapping up a very large study with him at the end of the month. I took her resume to hand to my boss for a somewhat similar study that we might be making very active soon -- she has been running child subjects in the MRI, which is apparently not the easiest thing to do. I wonder if I'll ever have child subjects.
Today I also put in the final paperwork to order a $10,000 RAID. One of the challenges of MRI research is dealing with data retention. In days of experiments past, it was possible and easy to keep all the data for years, on paper in drawers. Reanalysis of that data by other researchers is common fare. In the MRI experiments I run, things are very different. There's behavioural data that's collected by the experiment software that takes about 300k -- that's not particularly big. There's then structural data for the subject, which takes about 50 megs. This data stores the structure of the head, and for subjects that request it, I can do neat 3d reconstructions of their face using it. 50M is not too big, but it adds up over the number of subjects I have. Then there's functional data, which takes about 730 megs per subject. A typical MRI study has around 20 subjects. So, storing the raw data is around 800M per subject, or 16 gigs per experiment. This is just the raw data -- we then, as we start to analyse the data, multiply the size of each subject directory by 3, and then as we start group analysis, tack on another 60G for the transformed data. So far, having done a number of analyses on that data, my experiment goes up to 111 Gigabytes of data. We have a number of such experiments, and so our disk usage is incredible. It remains to be seen how neuropsychology researchers (and their sysadmins) across the world manage this. In 20 years, will we still have all this data?
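For the curious, the arithmetic above can be sketched as a quick back-of-the-envelope script. All the figures are the rough estimates from this entry, so the result lands near, but not exactly at, the 111G my experiment actually occupies:

```python
# Back-of-the-envelope MRI storage estimate, using the rough
# per-subject figures from this entry (all values approximate).
behavioural_mb = 0.3      # ~300k of behavioural data per subject
structural_mb = 50        # structural scan of the head
functional_mb = 730       # functional scan data
subjects = 20             # a typical study size

raw_per_subject_mb = behavioural_mb + structural_mb + functional_mb
raw_per_study_gb = raw_per_subject_mb * subjects / 1024

# Analysis roughly triples each subject directory, and group
# analysis tacks on another ~60 GB of transformed data.
analysed_per_study_gb = raw_per_study_gb * 3
total_gb = analysed_per_study_gb + 60

print(f"raw per subject: {raw_per_subject_mb:.0f} MB")
print(f"raw per study:   {raw_per_study_gb:.1f} GB")
print(f"with analyses:   {total_gb:.0f} GB")
```

Multiply that by however many experiments a lab runs and it's easy to see why the RAID purchase was an easy sell.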
On the way back from the BIRC, I had an amusing memory of when I learned to snap my fingers, and realized that I haven't snapped my fingers for years. The knowledge, hard won (like whistling or riding a bike), is still there, but now my fingers hurt on one of my hands when I do it. It's kind of strange thinking about this. I also remember that, in retrospect, I was a pretty strange kid. One of the things about most of my closer friends is that we often had some pretty adult conversations. I realize in retrospect that there wasn't a single, "adult" level on which we were all communicating -- there were some conversations, like on philosophy, where with some of my friends we were talking like college kids, and other areas where people said things that were considerably over my head. I look back, and think about how many stupid misunderstandings or times in the dark I had because I or they weren't on the same level, and feel both amused and embarrassed. I had some experiences and opportunities that people outside the group didn't have, and conversely no doubt missed out on a lot of things the normal kids had. My use of conversational implicature and (what is likely based on a markov chain or perhaps something more sophisticated) conversation following is still a bit off compared to most people's, but I think I've gotten much better at fitting in when/where I need to.
This is pretty weird.