Tuesday, 26 January 2010
Diet Python and some ranting
Long time no post (in fact, I haven't posted at all since starting my fourth year at Sheffield Uni!). I keep writing big ranty things, meaning to post them, but haven't got around to it yet. Ah well, maybe soon (exams at the moment though :( )
As part of my biannual exam revision procrastination I've started to rejig my Diet Python language. The idea behind Diet Python is pretty simple: take all of the syntactic sugar out of Python. Thus Diet Python is just a subset of Python, which is just as powerful as the complete language. Diet Python isn't meant to be programmed in, however; it exists to simplify Python programs so that they can be handled more easily by automated tools. For example, in Python the following two lines are equivalent:
a + b
a.__add__(b)
So the Diet Python equivalent is this:
a.__add__(b)
It is thus completely compatible with Python, but there is no need to bother with "+" (or the associated nodes in the Abstract Syntax Tree). The same applies to other Python syntax such as:
my_list[index]
my_list.__getitem__(index)
In Diet Python we can get rid of the "[]" subscription notation without losing the ability to grab elements from lists (or from any other object which supports the "[]" syntax, since that is implemented via the __getitem__ method). Thus any Python implementation (CPython, Jython, PyPy, IronPython, etc.) is also a Diet Python implementation (plus some extra stuff that Diet Python won't use). More interestingly, if we implement Diet Python then we've actually implemented the whole of Python in terms of features, just not its syntax; the missing syntax is easily handled by a translator which turns Python's nice, sugary syntax into Diet Python's awkward, canonical syntax, and that is exactly what I've written.
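To give a flavour of what such a translator does, here is a rough sketch of the same desugaring written against the modern "ast" module (Python 3.9+, purely for illustration; the real translator pattern-matches the Python 2 "compiler" module's trees via PyMeta and handles far more than these two cases):

import ast

class Desugar(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)                      # desugar children first
        if isinstance(node.op, ast.Add):
            # a + b  ==>  a.__add__(b)
            return ast.Call(func=ast.Attribute(value=node.left, attr='__add__', ctx=ast.Load()),
                            args=[node.right], keywords=[])
        return node

    def visit_Subscript(self, node):
        self.generic_visit(node)
        if isinstance(node.ctx, ast.Load):
            # my_list[index]  ==>  my_list.__getitem__(index)  (simple subscripts only)
            return ast.Call(func=ast.Attribute(value=node.value, attr='__getitem__', ctx=ast.Load()),
                            args=[node.slice], keywords=[])
        return node

tree = ast.fix_missing_locations(Desugar().visit(ast.parse("x = a + b + my_list[index]")))
print(ast.unparse(tree))    # x = a.__add__(b).__add__(my_list.__getitem__(index))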
Diet Python originally started as a simple test case for my Python pretty-printer (or "decompiler"), which turns a Python Abstract Syntax Tree, produced by Python 2.x's built-in "compiler" module, into valid Python code which implements the AST's functionality. In other words: compile some Python into an AST, stick that into the decompiler to get some Python code, then compile that into an AST, and the two ASTs should be the same (as long as every transformation is reversible, is reversed, doesn't lose information and is done naively, that is ;) ).
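That round trip is easy to sketch; here the modern ast module's parse/unparse stand in for the compiler module and the PyMeta decompiler, so this is only an illustration of the idea, not the project's code:

import ast

source = "a + b"
tree_1 = ast.parse(source)
decompiled = ast.unparse(tree_1)              # stand-in for the PyMeta decompiler
tree_2 = ast.parse(decompiled)
assert ast.dump(tree_1) == ast.dump(tree_2)   # no information lost on the way round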
The decompiler itself was an experiment to get used to the PyMeta pattern matching framework (which I've since used in a University project to test my code), and since PyMeta, as an implementation of Alessandro Warth's OMeta, should be nicely extensible via subclassing, I wanted both an experiment in PyMeta and an experiment in extending my experiment in PyMeta to really get to grips with it.
Unfortunately subclassing in PyMeta has proven difficult, which might be a bug in the implementation (I'll have to check up on that). Building a pattern matcher in PyMeta, for example one which decompiles Python ASTs, should let anyone derive a similar pattern matcher from it quite easily through Python's object system. For example, if you want to get rid of every "print" statement in some code, you take the decompiler (which is a Python class object), write down only the grammar rules which differ from the original (in this example every Print and Printnl node should be translated into '', ie. an empty string, and thus no code), then tell the decompiler to make a grammar out of your rules, and it will give you a new Python class implementing a pattern matcher which uses the decompiler's rules plus your new ones (where the new ones override the decompiler's in case of conflicts).
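In code, the intended workflow looks roughly like this (a sketch assuming PyMeta's OMeta.makeGrammar classmethod; the rules shown are toy stand-ins rather than the decompiler's real grammar):

from pymeta.grammar import OMeta

decompiler_rules = """
thing ::= <anything>:n => repr(n)
"""
Decompiler = OMeta.makeGrammar(decompiler_rules, globals())

# A derived matcher: only the overridden rule is written down, everything
# else is inherited from Decompiler. Here "thing" now produces no code.
stripper_rules = """
thing ::= <anything>:n => ''
"""
PrintStripper = Decompiler.makeGrammar(stripper_rules, globals())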
This is all well and good, however the REALLY cool thing about OMeta, and thus PyMeta, is that their operation, ie. turning written rules into parsers for those rules, is itself written in (O/Py)Meta (which is why they are Meta). Thus it is possible to take OMeta and, by writing some OMeta rules, change the way that OMeta works; we could call the result OMeta'. Now OMeta' can be changed by writing rules in either OMeta or OMeta', to produce another pattern matcher which we can call OMeta'', and so on. This, however, doesn't seem to work in PyMeta, despite trying multiple ways and looking through the source code (which is written in OMeta) over and over again. Sad face :(
Ah well, this limitation has resulted in a bit of hackiness when it comes to the Python decompiler and Diet Python translators. Firstly, PyMeta has no syntax for comments, which is annoying. It should be simple to subclass PyMeta to make a PyMeta' which supports comments, but since I can't subclass PyMeta without losing its bootstrapping, I'm stuck with using Python to remove comments before passing the rules to PyMeta. The second hack is that doing tree operations requires recursion. Whilst PyMeta has recursion built in, it's not available in the most suitable way for my experiments. Once again, subclassing PyMeta should solve this, but I can't, so I've had to monkey-patch the AST nodes (ie. pollute their namespaces with extra functions and attributes) and then call these from inside the grammar. The result is that every node instantiates its own pattern matcher on itself, which happens recursively down the tree.
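Concretely, the two hacks look something like this (a sketch, not the project's actual code; the "Translator" grammar and its "node" rule are made-up stand-ins, and this is Python 2 since the compiler module doesn't exist in Python 3):

from compiler import ast                  # Python 2's AST node classes
from pymeta.grammar import OMeta

def strip_comments(grammar_text):
    # PyMeta's rule syntax has no comments, so strip '#' lines with plain
    # Python before the rules ever reach makeGrammar.
    return '\n'.join(line.split('#', 1)[0] for line in grammar_text.split('\n'))

rules = strip_comments("""
node ::= <anything>:n => str(n)    # the real rules do far more than str()
""")
Translator = OMeta.makeGrammar(rules, globals())

def _trans(self):
    # Run a fresh matcher over just this node; the grammar calls .trans() on
    # child nodes, so the recursion walks its way down the tree.
    return Translator([self]).apply('node')

ast.Node.trans = _trans    # fine for Node, an ordinary Python class...
# ...but strings, numbers, None etc. are built-in C types and refuse this.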
Unfortunately the "type" system of Python 2.x rears its ugly head here, where historical implementation decisions have left Python with 2 object hierarchies (which, I believe, was one of the main motivations for making Python 3). The object system which is the most familiar, since it's used in Python code, has the class "object" as the eventual ancestor of everything, such that every class is a subclass of object, or a subclass of a subclass of object, etc. This would be a "pure" object system, except that "everything" isn't quite everything. Many core pieces of Python, for example text strings, are not descendents of "object" at all, and are not subclasses of anything, or indeed classes. Instead they are "types", where each "type" seems to be isolated from everything else, written by hand in C, utterly inextensible, cannot be subclassed, and basically brings to mind those nightmarish things that Java programmers call "basic types" (*shudders*, *washes mouth out with soap*). Since they are in their own little statically-compiled-C world there is no way to monkey patch them with the required functions and attributes, so that every string, number, None and probably more require custom code in the pattern matchers. Shit. This also brings with it that great friend of everybody who loves to waste time known as combinatorial explosion. In other words, instead of doing a substitution like:
apply_recursively ::= <anything>:a => a.recurse()
("apply_recursively" is defined as taking anything and calling it "a", then outputting the value of running "a.recurse()")
We have to do something like:
apply_recursively ::= <anything>:a ?(not issubclass(a.__class__, Node)) => a
| <anything>:a => a.recurse()
("apply_recursively" is defined as taking anything and calling it "a", as long as it is not descended from "Node", and outputting it's value, or else taking anything and calling it "a" and outputting the value of "a.recurse()")
And of course, since this is our friend combinatorial explosion, we cannot write:
apply_recursively ::= <anything>:a1 <anything>:a2 <anything>:a3 <anything>:a4 => a1.recurse() + a2.recurse() + a3.recurse() + a4.recurse()
Oh no, if there's the chance that any of those might be "types" (*winces*) then we are forced to write instead:
apply_recursively ::= <anything>:a1 <anything>:a2 <anything>:a3 <anything>:a4 ?(not (issubclass(a1.__class__, Node) or issubclass(a2.__class__, Node) or issubclass(a3.__class__, Node) or issubclass(a4.__class__, Node))) => a1 + a2 + a3 + a4
| <anything>:a1 <anything>:a2 <anything>:a3 <anything>:a4 ?(not (issubclass(a1.__class__, Node) or issubclass(a2.__class__, Node) or issubclass(a3.__class__, Node)) and issubclass(a4.__class__, Node)) => a1 + a2 + a3 + a4.recurse()
| <anything>:a1 <anything>:a2 <anything>:a3 <anything>:a4 ?(not (issubclass(a1.__class__, Node) or issubclass(a2.__class__, Node) or issubclass(a4.__class__, Node)) and issubclass(a3.__class__, Node)) => a1 + a2 + a3.recurse() + a4
| <anything>:a1 <anything>:a2 <anything>:a3 <anything>:a4 ?(not (issubclass(a1.__class__, Node) or issubclass(a2.__class__, Node)) and issubclass(a3.__class__, Node) and issubclass(a4.__class__, Node)) => a1 + a2 + a3.recurse() + a4.recurse()
| <anything>:a1 <anything>:a2 <anything>:a3 <anything>:a4 ?(not (issubclass(a1.__class__, Node) or issubclass(a4.__class__, Node) or issubclass(a3.__class__, Node)) and issubclass(a2.__class__, Node)) => a1 + a2.recurse() + a3 + a4
| <anything>:a1 <anything>:a2 <anything>:a3 <anything>:a4 ?(not (issubclass(a1.__class__, Node) or issubclass(a3.__class__, Node)) and issubclass(a2.__class__, Node) and issubclass(a4.__class__, Node)) => a1 + a2.recurse() + a3 + a4.recurse()
| <anything>:a1 <anything>:a2 <anything>:a3 <anything>:a4 ?(not (issubclass(a1.__class__, Node) or issubclass(a4.__class__, Node)) and issubclass(a3.__class__, Node) and issubclass(a2.__class__, Node)) => a1 + a2.recurse() + a3.recurse() + a4
| <anything>:a1 <anything>:a2 <anything>:a3 <anything>:a4 ?(not (issubclass(a1.__class__, Node)) and issubclass(a2.__class__, Node) and issubclass(a3.__class__, Node) and issubclass(a4.__class__, Node)) => a1 + a2.recurse() + a3.recurse() + a4.recurse()
| <anything>:a1 <anything>:a2 <anything>:a3 <anything>:a4 ?(not (issubclass(a4.__class__, Node) or issubclass(a2.__class__, Node) or issubclass(a3.__class__, Node)) and issubclass(a1.__class__, Node)) => a1.recurse() + a2 + a3 + a4
| <anything>:a1 <anything>:a2 <anything>:a3 <anything>:a4 ?(not (issubclass(a3.__class__, Node) or issubclass(a2.__class__, Node)) and issubclass(a1.__class__, Node) and issubclass(a4.__class__, Node)) => a1.recurse() + a2 + a3 + a4.recurse()
| <anything>:a1 <anything>:a2 <anything>:a3 <anything>:a4 ?(not (issubclass(a4.__class__, Node) or issubclass(a2.__class__, Node)) and issubclass(a3.__class__, Node) and issubclass(a1.__class__, Node)) => a1.recurse() + a2 + a3.recurse() + a4
| <anything>:a1 <anything>:a2 <anything>:a3 <anything>:a4 ?(not (issubclass(a2.__class__, Node)) and issubclass(a1.__class__, Node) and issubclass(a3.__class__, Node) and issubclass(a4.__class__, Node)) => a1.recurse() + a2 + a3.recurse() + a4.recurse()
| <anything>:a1 <anything>:a2 <anything>:a3 <anything>:a4 ?(issubclass(a1.__class__, Node) and issubclass(a2.__class__, Node) and not (issubclass(a3.__class__, Node) or issubclass(a4.__class__, Node))) => a1.recurse() + a2.recurse() + a3 + a4
| <anything>:a1 <anything>:a2 <anything>:a3 <anything>:a4 ?(not (issubclass(a3.__class__, Node)) and issubclass(a2.__class__, Node) and issubclass(a1.__class__, Node) and issubclass(a4.__class__, Node)) => a1.recurse() + a2.recurse() + a3 + a4.recurse()
| <anything>:a1 <anything>:a2 <anything>:a3 <anything>:a4 ?(issubclass(a1.__class__, Node) and issubclass(a2.__class__, Node) and issubclass(a3.__class__, Node) and not (issubclass(a4.__class__, Node))) => a1.recurse() + a2.recurse() + a3.recurse() + a4
| <anything>:a1 <anything>:a2 <anything>:a3 <anything>:a4 ?(issubclass(a1.__class__, Node) or issubclass(a2.__class__, Node) or issubclass(a3.__class__, Node) or issubclass(a4.__class__, Node)) => a1.recurse() + a2.recurse() + a3.recurse() + a4.recurse()
Which, even if you've never programmed before, should look like a bloody stupid hoop to have to jump through.
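The root cause is easy to demonstrate in a couple of lines of Python 2: the compiler module's Node class happily accepts new attributes, while the built-in types slam the door, which is why every leaf value needs its own guard in the grammar above. (The "recurse" function here is just a stand-in for the real matcher call.)

from compiler import ast     # Python 2 only

def recurse(self):
    return self              # stand-in for "run the matcher on this node"

ast.Node.recurse = recurse   # works: Node is an ordinary Python class
try:
    str.recurse = recurse    # built-in C type: no monkey-patching allowed
except TypeError as error:
    print(error)             # can't set attributes of built-in/extension type 'str'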
So, there's a little insight into how even high-level, meta, abstract things can be hindered by ancient, low-level implementation artifacts, and possibly an insight into the angry posts I was making to Identi.ca whilst writing this stuff six months ago ;)
My code, as always, is on Gitorious, and now that I've turned the Diet Python translator into a tree transform it should be much easier to strip away more and more layers of unnecessary Python (and thus pave the way for some interesting programming experiments!)
Thursday, 21 January 2010
mv/((1-(v/c)^2)^(1/2)) or, in non-relativistic terms, momentum
I know this will probably start a flame war if widely read (which it probably won't be, since this is just a braindump page) but I want to say it anyway: I think the Gnome desktop is in a bit of a pickle, and the problem I think it faces is one that many people have to deal with when programming, namely stopping too early.
I'll talk you through one of my first programs as an example; it was for an assignment in Object Oriented Design in Java, which was to make a very simple pinball game. There was some code given as a starting point, but it was terrible (it was essentially a C program shoehorned into compiling as Java, thus it didn't follow an Object Oriented Design and, whilst syntactically correct, wasn't really written in Java at all), though most of the class (probably all of it) used it as a starting point. Being very new to programming, especially in such a verbose and overly-strict language as Java, I couldn't really follow the given code: I could see the API and get it to do the things I told it, but I couldn't understand all of what it was doing, and amongst those bits which I did understand I saw some obvious hacks. As a Physicist and perfectionist-who-never-ends-up-finishing-anything I didn't want to use incorrect code, and especially didn't want to rely on someone else's proof without understanding it (Curry-Howard and all that). Thus I did the first thing that anyone who's ever done any Maths would do and tried to work it out myself to see if my solution matched.
The result was a program of which I was quite proud: rather than being a hack to shift graphics around the screen (as the provided code had been), I'd started from what I knew and made a basic Physics simulation with graphical output (originally based on AWT, then switched to Swing). The code was spread out amongst many classes, there were no hacks in the object system (although there were a few in the main method to set things up, but that was just for testing), it was very intuitive, it was the largest programming project I'd tackled thus far (although now I'd consider it to be the size of a moderate experiment) and I had it running pretty early into the assignment.
It now needed some recognisable objects from a pinball table (like flippers) and more game-like control. However, rather than writing these I just played around inside it. Constantly. It was fun to throw things around, generate hundreds of objects and watch them bounce around off each other, but most of all it was fun to know that I'd made it. In the end it got marked at 70%, since all my playing stopped me from actually making any kind of pinball game (I talked through the code, but when running it just presented a bunch of balls bouncing off "pins", which were just fixed balls).
Now, what's this got to do with Gnome? Well, Gnome works really well. It's unobtrusive, stable, intuitive and does "what people want". Reading through Planet Gnome turns up some design bugs (the keyring dialogue asking for a password without indicating what wants the keyring or why, which would let a trojan ask for the keyring without the user having any indication that they're giving access to malware rather than NetworkManager or Evolution or Empathy or whatever), a few posts on Zeitgeist, translations, some compatibility improvements and a call for more contributions to the 2.29 features list. Other than that it's musings about transport, architecture and low-level distro tweaking (nothing to do with Gnome; things like boot performance). It seems that there are three levels of Gnome developer (and by developer I mean anybody involved in improving Gnome):
1) The it does "what I want" developers, who may maintain things but make very gradual changes (like layout changes, adding translations, etc.). To these, Gnome is an excellent platform which chugs away unobtrusively for them in the background and allows them to get on with life; their itches are the "papercut" annoyances that are small, but happen a lot (something like an info dialogue which doesn't have a "don't show this again" option).
2) The inspired developers, whose itches are to push the platform in new directions. They work hard to do this, but as one-person-armies they can only do well on one thing at a time (for example UPnP support).
3) The ignored developers, whose itches to make sweeping changes never really get scratched due to the inertia of userbase familiarity and developer apathy born of the adequate nature of the platform as-is. These include the Gnome mobile efforts, the online integration attempts and so on. For those wanting an example of inertia, take Nautilus's spatial mode: it is rejected in favour of the old one-window browser mode by many users, simply because the browser mode was there first.
Now, I am not trying to crap on the project; far from it. I'm offering my observations of the development slowdown over the past 6 or 7 years I've been using Gnome and following its developers. I'm not in any way trying to undermine the efforts of those three groups I've mentioned; I am trying to point out the obstacles that each face and how they could be overcome.
Many of those developers in groups 2 and 3 could benefit from Gnome 3 stirring things up a bit to break the stagnation. The reason I put quotes around does "what I/people want" is because, in my opinion, there's no such thing. Recall Henry Ford's famous remark that if he'd given people what they wanted then he would've given them a faster horse. Nobody knows that they want something unless they try it for a while and integrate it into how they go about things. I'm frequently met with the does "what I want" argument with regards to Windows. If the current situation in Gnome is unfortunate then that lumbering beast known as Windows XP is the armageddon! Admittedly, Windows Vista is awful: those few of its features which are any good don't make up for the cost in performance, and many of the features deemed "good" by others aren't features at all, they're just attempts at shifting responsibility for brokenness onto users (those damned popups!). However, since there was an explosion in computer ownership during the lifetime of XP (2001-2007), for many people XP is a fundamental fact about what a computer is, in the same way that a toaster is not a toaster without a spinny dial for toasting time. Of course there are those who lived through Windows 95, 98, ME (lol), 2000 and XP, and who saw Vista as the next rung on the steady upgrade ladder it was intended to be, but viewed it as not worth the wait compared to the more timely releases of the past; no doubt the growing use of computers made its drawbacks all the more annoying.
The appearance of KDE 4 stirred things up in KDE land, but there was such a wholesale developer switch due to the underlying library improvements that the users have been dragged kicking and screaming into a more flexible system, which is a must for supporting both the type 1 people and the types 2 and 3. Plus any annoyance generated has either been blogs-full-of-ads whoring or else has been reacted to by the KDE developers and sorted, or at least steps have been taken so that they become sorted some time in the future ;)
The alternative to the kicking and screaming approach is to fork. Forking is usually bad news, but unfortunately there seem to be quite a few projects aiming to address deficiencies in Gnome, without actually fixing what they perceive to be wrong with Gnome. For example XFCE tries to use fewer resources, whilst LXDE tries to use even fewer resources. There are loads of custom file managers, panels, window managers, widget systems etc. which each do an OK job, but don't play nicely with each other and so don't quite combine to become teh awesomes as they rightfully should.
Doing things the "right" way, or leaving many opportunities open for others to reject what you've made in favour of something else, is a much harder thing to accomplish than just shipping a vertical, one-size-doesn't-fit-any stack. Lots of standards are supported in Gnome, but a lot of work is happening outside the project and being shoehorned into the defaults later because it happens to use Glib.
I hope the introspection initiative gets the ball rolling for Gnome again, since it's lowering the barrier to entry and allowing customisation through abstractions and introspection, which can guide enthusiastic minds into tackling the itches they face without being put off by archaic languages like C.