Friday 6 February 2009

Learned Helplessness in Computing?

I know I should be revising, seeing that my Atomic and Laser Physics exam is mere hours away, but I ended up on another Wikipedia trek and came across the article on learned helplessness. Reading through it, I found I could draw many connections with the currently depressing state of computing, which I'd attribute to the complexity and proprietariness of software.

Learned helplessness is a much-studied psychological phenomenon in which a subject gives up trying to change their situation. One example cited involves three groups of dogs: group A is a control group, whose dogs are put into harnesses and left alone for the duration of the experiment; groups B and C are put into harnesses but are also given unpleasant electric shocks. Each group B dog has a lever in front of it which does nothing when activated, whereas each group C dog has a lever which turns off the shocks to that dog and to one of the group B dogs. The dogs in group C learn that the lever turns off their shocks, and they use it whenever they start to get shocked. Group B dogs, however, learn that their lever does nothing, whilst their shocks seem to stop randomly (remember, each B dog is paired to a C dog's lever, so the B dogs don't know why their shocks stop).

After this stage of the experiment all of the dogs move on to part two, where they are unharnessed in a pen divided in two by a small partition. The half of the floor with a dog on it is electrified, whilst the half without is not. Dogs from groups A and C hop over the partition, away from the electricity and thus away from the pain. They don't know that the other side isn't electrified, but they have a go and find that it's not. The dogs from group B, however, just lie down on the electrified floor and whimper as they are repeatedly shocked. They could hop over the partition, but they don't bother trying. These dogs become depressed.

The conclusion of the experiment is that a sense of control is very important. Dogs in group B and group C got exactly the same shocks (since both were controlled by group C's levers), but only group B got depressed. Essentially, they learned that nothing they did would stop the electricity; it just stopped randomly. They then applied this knowledge to the second situation and took the shocks, rather than trying the new possibility of jumping over the divide.

This can be seen in people too: some parents end up neglecting their babies because they 'learn' that the child doesn't stop crying whether they give it attention or not, and so they ignore it, thinking they are helpless to stop its cries.

The psychological explanation for this is that the depressed subjects, in an attempt to rationalise the seemingly random lack of control, think of it as inevitable ("Babies cry"), blame themselves ("I'm making it cry") and think of it as pervasive ("I'm a bad parent"). This learned helplessness digs a psychological hole which is notoriously difficult to break out of, and it even causes feedback loops: for example, a neglected child will cry more and have more problems than the child of an attentive parent, thus reinforcing the "I'm a bad parent" and "I'm making it cry" beliefs. In fact, even knowledge of learned helplessness can make things worse, since it can act as a confirmation of the helplessness ("You've told yourself that you're helpless when you're actually not." "See? I TOLD you I was a bad parent!") and others can end up blaming the condition for things rather than the person ("It's not your fault that your baby's ill, you've learned to be helpless at looking after it." "Yes, you should probably take it away, since I'm too learned-helpless to look after it.")

So, aside from knowing more being awesome, how does this apply to anything I'm interested in? Well, I couldn't stop comparing the explanations with computing. The dominant computing platform these days is Microsoft Windows which, although all software has bugs, seems to be full of them. A lot of these bugs are user-interface related, where the action required to achieve the desired task is non-obvious, or a seemingly obvious action produces an unexpected result (which includes 'crashes', where a program disappears without the user telling it to). Anyone more involved in software development would view these as bugs which should be reported and fixed, but less technical users (who are the vast majority) frequently view such things as inevitable ("Computers crash"), as their fault ("I made it crash") and as pervasive ("I'm bad with computers"). Just look at the currently running adverts for the Which? PC Guide: a bunch of regular people saying how their computers keep messing up, followed by the offer of a guide to show them how it's all their fault because they're doing it wrong.

Since I write software, I would say that the Which? PC Guide is a complete hack: it's fixing something in the wrong place. A broken piece of software should not be fixed by telling each and every user how to work around the broken bits; the software itself should be fixed so that nobody ever experiences those issues again. However, since it's proprietary software, nobody other than Microsoft is allowed to fix it (although there are numerous other hacks to work around the broken bits, some of which have created entire industries, such as firewalls and anti-virus/spyware/adware programs).

The majority of computer users, however, do not think like me, since I am a group C dog: I know how to fix things. In fact, in human experiments into learned helplessness, it was found that people subjected to an annoying and distracting noise could concentrate better and solve problems more quickly if they had a button which would turn the noise off than if they didn't, EVEN WHEN THE BUTTON WASN'T PRESSED. So on a Free Software system, where I know that it is possible for me to fix something if I truly want to, I don't get depressed; on a proprietary system, however, I frequently get annoyed, angry, irritated, etc. when the software behaves in undesirable ways.

For example, clicking a link that says "Download this program" in Idiot Exploiter 8 doesn't download the program; it just gives a subtle message under the toolbar that Internet Explorer has "protected" me by preventing the program from downloading, and that I should click it to change that. Clicking it presents a menu with the option to download the program (how is this any different to the previous promise of a download?), which when clicked brings up a box asking if I want to save the program or run it, so I click run, and when it's downloaded I get a warning saying that programs can do stuff to the computer, do I want to run it? I click run again (how is this any different to the previous promise of running the program?) and Windows pops up a message saying that the program is doing stuff, do I want to allow it to continue? I press continue and FINALLY get to the "first step" of the installer.

On Debian I could give a similar example: I can't get the Gdebi package installer to work, which means I have to save packages and install them by hand with "dpkg -i package_filename.deb". That can leave me with a broken setup if the newly installed package depends on other things, so I then need to pull in the missing dependencies with "apt-get -f install" and press "y" to confirm it (the whole workaround is sketched below). This may seem annoying, but I know that if I wanted to fix it badly enough then I could, and I would even be encouraged to do so (after all, Gdebi works perfectly well in Ubuntu).
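
For anyone curious, the whole dance boils down to something like this (a rough sketch; the package filename is just a placeholder, and you'll need to be root or use sudo):

    # Install a downloaded .deb directly with dpkg. If the package depends
    # on things which aren't installed yet, dpkg will leave it unconfigured
    # and complain about unmet dependencies.
    dpkg -i package_filename.deb

    # apt-get's -f ("fix broken") mode then fetches whatever is missing and
    # finishes configuring the half-installed package.
    apt-get -f install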

Whenever my wireless card messes up on Debian, on the other hand, I get incredibly frustrated and annoyed, and often need to walk away from my laptop and have a break, since I feel completely powerless over it. The wireless firmware I use is proprietary, since Broadcom don't tell anyone the language that their wifi chips speak (although clean-room reverse engineering of a Free Software replacement in Italy seems to be showing some promise). So even though I'm running completely Free Software applications on a Free Software kernel with Free Software drivers (in my case Linux), and can look at the code at any time to see what it's doing and possibly fix any problems, when it comes to my wireless card the disconnects are seemingly random, as I have no way of inspecting the firmware since it is proprietary. I therefore feel helpless to stop it disconnecting, and can't remedy the situation in any way other than disabling the wifi, unloading the driver, reloading the driver, enabling the wifi and trying to reconnect. If that doesn't work then all I can do is try it again. In fact, I've even written a little script which does all of that whenever I run "restart-wireless" (something like the sketch below). It's so bad that the developers of NetworkManager, the (currently) best network control system on Linux, do the same thing. If NetworkManager's running and I get disconnected then I see the wireless network icon disappear and the wifi LED turn off. After a few seconds the wifi LED comes back on, the wifi icon comes back and it tries to connect. If it doesn't work then it happens again. It's depressing.
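
The script is nothing clever; simplified, it amounts to something like this (the driver module and interface names here are placeholders, so substitute whatever your hardware uses):

    #!/bin/sh
    # Kick the wireless card by reloading its driver and bringing the
    # interface back up. "b43" and "wlan0" are just examples.
    ifconfig wlan0 down    # take the interface down
    modprobe -r b43        # unload the wireless driver
    sleep 2
    modprobe b43           # load it again
    ifconfig wlan0 up      # bring the interface back up
    # ...then NetworkManager (or iwconfig/dhclient by hand) can reconnect.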

So that's one reason I think computing is in the sorry state that it is: people are being conditioned to think that "computers crash" (which conveniently keeps the cost of quality control down), that they aren't "using them properly" (which conveniently keeps the cost of having good designers down) and that they're destined to always be "clueless with everything computer-related" (which conveniently keeps people upgrading things they don't need to on the advice of the 'experts' selling the upgrades). This comes from the proprietary software world; with Free Software, the volunteers, users, developers, companies and organisations which make and sell it actively encourage all users to "hop the partition" and 'scratch their own itches', since that results in less work and more Free software for those doing the encouraging. Whether it was intentional or not is debatable (never assign to malice that which can be explained by (in?)competence).

This unfortunately means that people like my Mum have some kind of internal off switch which is activated by the word "computer", so that when something like the broadband package they are paying for comes up, a sentence like "This one has a limit on how much we can do per month, so they'll charge more if we go over it, but this one doesn't" is met with a response such as "Well you know I don't understand these things" (the exact same sentence used for every attempt at explaining something, no matter how basic). It makes me *REALLY* frustrated when people don't bother to apply mental skills which even five year olds possess, simply because they know computers are involved. Discuss the exact same thing with phone contracts, or even the price of meat per kilo, and they'll readily discuss the merits of each option and even go into the small print; but with computers they've learned to be helpless, and thus think they have no control over anything related, and feel much more comfortable being extorted by monthly bills twice the size they need to be, with the value calculations worked out by someone else, than they do with having to confront some computer-related thinking for a few minutes.

Another big cause of computer-helplessness is a genuine problem with computing today, Free Software or not. Empirical evidence does say that just the presence of control, whether or not it is used, is the important bit (like my access to the source code for everything I use), but it's still a chore to actually make use of that control.

As an example, a few years ago the Nautilus file manager changed so that icons got bounding boxes. Before the change, clicking in a transparent corner of a circular icon selected nothing; after the change, the circular icon would be selected because I'd clicked within its bounding box. This is a good thing usability-wise, but I was rather annoyed with the way it interfered with my specific setup. I had cut out images and assigned them as icons to the various folders in my Home folder, stretched them rather large and arranged them manually so that they filled the Nautilus window without overlapping, so that clicking the visible parts would select an icon, whilst clicking on a transparent part would 'fall through' and select one visible below. I was very proud of this, and it had taken quite a while to do. Then, after an update, all of the icons got bounding boxes, clicks in transparent areas no longer fell through, and selecting and double-clicking things became unusable. I had to make all of the icons small again and arrange them in a grid, destroying the previous awesomeness.

A few months ago I took it upon myself to bring back the no-bounding-box Nautilus as a well-buried option, so I got the source code to the most recent version of Nautilus, looked through the version control history to find out when the bounding boxes were added (I think this is where it changed: http://svn.gnome.org/viewvc/nautilus?view=revision&revision=9123 ) and replaced that section in the latest code with the old code, and it worked (roughly the recipe sketched below). However, this took a few days, since I've done very little C programming and have never used GObject with C before, and I didn't even have to write any code (it was just copypasta). If I wanted to fix every bug I find, it would take an intractable amount of time, even though I can fix any single bug I want to.
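
The digging itself was pretty mundane and went something like this (a rough sketch: the checkout URL is a guess based on the viewvc link above, and in practice I copied the old code across by hand rather than reverse-applying a patch, but it's the same idea):

    # Check out the Nautilus source from GNOME's Subversion repository.
    svn checkout http://svn.gnome.org/svn/nautilus/trunk nautilus
    cd nautilus

    # Trawl the history for the commit which introduced the bounding boxes.
    svn log | less

    # Pull out exactly what that revision changed...
    svn diff -c 9123 > bounding-boxes.diff

    # ...and apply it in reverse to restore the old click-through behaviour.
    patch -p0 -R < bounding-boxes.diff

Then it's just a matter of rebuilding Nautilus and seeing whether clicks in the transparent corners fall through again.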

There looks to be some promising stuff going on to rectify this at the Viewpoints Research Institute, an organisation funded by the US government with awesome Computer Scientists like Alan Kay. One of their aims is to "reinvent" computing, which basically involves making (yet another) computer system, but one which is as understandable (and hence small) as possible. They're aiming for a complete, working system in under 20,000 lines of code (for comparison, Windows has around 40,000,000), and they already have some nice tools. Their "Combined Object Lambda Architecture" programming system aims to be written in itself and to be able to compile down as far as FPGAs (i.e. rewiring the microchips themselves to represent the program), while OMeta, which is also written in itself, allows very compact and easy-to-understand programming language implementations (for example, they have an almost-complete Javascript interpreter, missing only "try"/"catch" and "with", which is just 177 lines of code). This lets COLA-based implementations of other languages make up their system, with new languages so easy to define that each part can be written in a tailor-made language, even redefined in places where that makes things more comprehensible.

Hopefully having more understandable and approachable code will mean it is easier to find and fix bugs, so that nobody has to experience them for long. It might also help to reduce the number of people who teach themselves to be helpless at computing. As for the ones who are already learned-helpless, it will take a lot of effort on their part to break out of it, and that won't be helped by proprietary companies trying to dress up their shit code as some kind of magical snake oil which cannot be obtained anywhere else or written by mere mortals (something GNU set out to disprove by reimplementing UNIX, and has done a pretty fine job of), nor by the media splashing binary all over the place whenever the internals of computers are mentioned.

OK, I think I should carry on revising now, as I've gone off on a bit of a rant, but damn it my blog's still not boring or crap! :P