Wednesday, 27 August 2008
C00l
Suspend actually seems like a nice feature. I'd never tried it before, but I'm guessing it'll be useful when I need to carry my laptop around in my backpack for a while, but not for long enough to warrant turning it off. I don't know if it worked straight away, or if it was down to one of the tweaks I tried in order to get hibernation working (which still doesn't).
Also, why do so many people get food from John's Van? It looks and smells like cat sick. I don't see the appeal.
Note to self
If ALSA screws up again, make sure to check for hidden configuration files in your Home folder as well as the system defaults. Comment the entire lot out, since the autodetection is smarter than whoever wrote those files.
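For future reference, a quick Python sketch along these lines will show which of the usual suspects actually exist (the list of paths is just the common locations I know about, so add any others):

import os

# The usual places ALSA configuration hides.
candidates = [
    os.path.expanduser("~/.asoundrc"),
    os.path.expanduser("~/.asoundrc.asoundconf"),
    "/etc/asound.conf",
]
for path in candidates:
    if os.path.exists(path):
        print("Found (probably worth commenting out): " + path)
    else:
        print("Not present: " + path)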
Finally, Ayreon are back :D
Resits Over
Now it's in the hands of the Gods (by which I mean, of course, the Scientists). There's nothing I can do to change the outcome now, but I at least hope that nobody seriously pisses them off before they get marking.
COM166 wasn't too bad. From what I could tell it was actually the same paper as the first time, since those papers had to be left behind in the exam. The PHY203 paper could be taken away after the original exam, so they set a different one for the resit. Weirdly, I seemed better at the statistical mechanics (Boltzmann distributions, partition functions and the like, though sadly not so much the statistical weighting of macrostates and its relation to entropy :( yes, I know the formula, I just couldn't articulate its relation to microstate counting very well) than at some of the thermodynamics, even though I left the statistical revision very late. Meh, it was probably just fresher in my mind.
Anyway, yes. They're gone now and I'm confident I've passed, but obviously we live in a Universe whose determinism is a matter of debate. I may have done slightly better had I not had the BRAT game music playing over and over in my head on the way into the exam, along with a subconscious yet rather annoying repetition of "WOOOOOHOHOHOOOOOAAA!" about halfway through. Yes, my head is weird. m8.
Now I can carry on with exactly what I was doing before, but without feeling so guilty about wasting time :P </joking>
Two Wrongs Do Not Make A Right
Recently an Open Source iPhone application called OpenClip was made, which allowed copy and paste to work between applications. Well, it seems Apple didn't care for it, and in their latest firmware update they've stopped it working. To me that just means a bit of I-told-you-so gloating, since it's exactly what you'd expect when you take on a computer that's largely controlled, and for all intents and purposes owned, by the manufacturer.
In somewhat related news (perhaps this is Karma, and no, not the OpenSuse one), the iPhone adverts which Apple have been spreading liberally all over the place have been labelled "misleading" by the Advertising Standards Authority. Now, although this causes yet another I-told-you-so moment, and I'm obviously teh awesome at modestness, I'm going to tell you what it's about. The adverts are apparently misleading because they claim "all the parts of the internet are on the iPhone", but since Flash and Java aren't installed that claim is wrong and (to quote the BBC) "Some pages on the iPhone won't look like they do at home".
Now, I'm sure a few people will remember my furious claims that the Internet and the Web are not the same thing. Well, this confusion is right at the heart of the issue. I'll say right away that I think this decision is wrong and that the iPhone adverts are not misleading in the way that has been claimed. I'll detail why it is wrong below:
Teh Internets
The Internet is a super-global computer network. A computer network is defined (as I can attest from both my formal and informal studies in this area) as a way for machines to communicate, but specifically describes a general network (so the 'phone network isn't a computer network, since it passes analogue audio signals around, whilst most computer networks pass packets of binary data which are, in a quantum mechanical sense, general, as they can mean anything). An internetwork (an internet) is a network of networks, where devices called routers translate between the different physical technologies and virtual addressing systems of each network (for instance, I have a router in my house which translates between IEEE 802.11g wireless networking, Ethernet, USB networking and my ADSL line). "The Internet" is a slang name for the internetwork which reaches around the globe, into the Earth and out into space, and which is more commonly referred to by its correct title, "teh Internets". If you want a diagram showing the "net" then this'll have to do:
As you can see, each vertex (joint) is a computer, and each edge is a network connection of some kind. Apple have claimed that their iPhone (connected to the 3G network operator on the left, at about 8 o'clock) has all parts of the Internet on the 'phone. Now, since the Internet is by definition more than just one 'phone, this obviously refers to the fact that the 'phone can access the entire Internet (insomuch as any device can, taking into account people using encryption and various server outages around the world). The Internet is a real, physical thing that you can see if you look at the wires coming out of your house. The iPhone can access the entire Internet just as much as a desktop computer can.
Now, the actual data that gets passed around on the Internet can be literally anything, since it is by definition a general network. However, throwing random bits at computers isn't particularly useful if the other end doesn't know what you're trying to say. This is where protocols come in. There are LOADS of protocols, and they stack up on top of each other at various layers of the architecture. Examples are IP (the Internet Protocol), FTP (File Transfer Protocol), XMPP (eXtensible Messaging and Presence Protocol), SMTP (Simple Mail Transfer Protocol) and, of course, HTTP (HyperText Transfer Protocol).
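To make the layering concrete, here's a rough Python sketch which speaks HTTP by hand over a plain TCP/IP socket (example.com is just a stand-in host); the socket library deals with the lower layers, and the bytes written on top of it are the HTTP part:

import socket

# TCP and IP do the carrying (the lower layers of the stack)...
with socket.create_connection(("example.com", 80)) as sock:
    # ...while HTTP is just an agreed-upon format for the bytes sent on top.
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    sock.sendall(request.encode("ascii"))
    response = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:  # the server has closed the connection
            break
        response += chunk

print(response.split(b"\r\n")[0].decode())  # the status line, e.g. "HTTP/1.1 200 OK"

Everything below that handful of HTTP lines (the TCP handshake, IP routing, Ethernet frames and so on) is handled for us; HTTP is just one more layer of agreed meaning stacked on top.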
The Web
HTTP is all about transferring "hypertext". This is like plaintext (which is actually just a bunch of binary; anything can be seen as a type of plaintext) except that it uses SGML markup to specify links (called "hrefs"). These links allow readers of the hypertext document to be transported to another hypertext document living at the address given in the href. I'll draw you another diagram below:
This is a little harder to understand since there are two kinds of node: the pages (in green) and the links, technically called anchors (in red). Plus, the edges are directed (i.e. there are arrows). Just to clarify, there are no arrows pointing TO links; all arrows should end at a green page, but I'm not too good at drawing with a mouse :P
What's happening is that you're at a certain page in your browser (in green) when you click one of the links (in red). Your browser shoots off down the arrow coming from that link until it gets to the green page at the end of it. If there are links in that page you can click those, but if not then you either need to go Back along the last edge your browser went down or enter a location manually. This is the Web, and you can't see it because it's not real; it's just a visualisation of a data structure. The green web pages mostly live on computers, which are nodes in the first picture, but the Internet and the Web are completely different things.
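If you prefer data structures to diagrams, here's a tiny Python model of the same idea (the page names are made up purely for illustration): pages are nodes, anchors are directed edges, and browsing is just walking the graph.

# A toy web: each page maps to the pages its anchors point at.
web = {
    "home.html":  ["about.html", "blog.html"],
    "about.html": ["home.html"],
    "blog.html":  ["post1.html", "home.html"],
    "post1.html": [],  # a dead end: hit Back or type an address
}

def browse(start, clicks):
    """Follow a sequence of clicks (indices into each page's list of links)."""
    page, history = start, [start]
    for choice in clicks:
        links = web[page]
        if not links:
            break  # no outgoing anchors, nowhere to go
        page = links[choice]
        history.append(page)
    return history

print(browse("home.html", [1, 0]))  # ['home.html', 'blog.html', 'post1.html']

Each of those page names would, in reality, live on some machine in the first diagram, but the graph itself is the Web.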
Web pages, as I've said, are hypertext, written in the HyperText Markup Language (HTML), which is just a special structuring of text. HTML is designed to allow arbitrary non-HTML stuff to be stuck inside, and the standards say, and I quote:
13.3.1 Rules for rendering objects
A user agent must interpret an OBJECT element according to the following precedence rules:
- The user agent must first try to render the object. It should not render the element's contents, but it must examine them in case the element contains any direct children that are PARAM elements (see object initialization) or MAP elements (see client-side image maps).
- If the user agent is not able to render the object for whatever reason (configured not to, lack of resources, wrong architecture, etc.), it must try to render its contents.
Authors should not include content in OBJECT elements that appear in the HEAD element.
Flash and Java applets are examples of content embedded using OBJECT elements. The standards say that the user agent (browser) should try to render OBJECT elements which aren't inside the HEAD element. If it can't, then it should try to render the contents instead (much like the "alt" text which is rendered if an image doesn't load). The iPhone's WebKit browser is completely following the rules here.

So, in essence, what I am saying is this:
1) The "Internet" is the physical network which carries data. The iPhone has a complete connection to the Internet, as they claim.
2) The Web, which seems to be the point being argued, is NOT the Internet. Apple have not claimed that everything on the Web will work on the iPhone; they've said that you can get everything that is on the Internet.
3) Flash and Java are not HTML and thus not even IN the Web. They are external in nature, even if they appear in the middle of an HTML document. For an analogy, open a text editor window and drag it over this web browser window. Is the text editor now part of the Web? Is it bollocks. Flash and Java can communicate over HTTP if directed to do so, but they are not part of the Web; they are interpreted programs which can be accessed over the Internet.
4) Flash and Java have been incredibly proprietary until very very recently. When the iPhone was in development it was completely reasonable to not include these proprietary technologies, especially if they weren't available for Darwin on an ARM (which was an internal Apple build environment until the iPhone came out anyway). Now that Free Software Flash players and Java environments are getting mature it may be a different story, since the rules are different with Free Software (Apple would still be in control, rather than Sun or Adobe).
5) For fuck's sake! The *BEST* part about the Web is that it degrades gracefully. If a browser doesn't support some feature which it encounters then it just ignores it and carries on, which is exactly what the iPhone is doing (there's a little sketch of the idea after this list). The real irony is that WebKit, the browser engine used on the iPhone, is one of the most standards-compliant, correctly implemented engines there is! The majority of desktop machines are stuck on Internet Explorer, so the comparison to "at home" (i.e. on a desktop or laptop, most probably running IE) is actually scary.
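Here's that graceful degradation, and the OBJECT precedence rules quoted above, as a toy Python sketch (using the standard html.parser module; the set of "supported" types is invented for illustration). Render the object if you can, otherwise render its contents:

from html.parser import HTMLParser

SUPPORTED_TYPES = {"image/png", "image/jpeg"}  # what this toy "browser" can render itself

class ObjectFallbackParser(HTMLParser):
    """Render an OBJECT if possible, otherwise fall back to its contents."""
    def __init__(self):
        super().__init__()
        self.skip_depth = 0  # > 0 while inside an <object> we *can* render
        self.output = []
    def handle_starttag(self, tag, attrs):
        if tag == "object":
            obj_type = dict(attrs).get("type", "")
            if self.skip_depth or obj_type in SUPPORTED_TYPES:
                self.skip_depth += 1
                self.output.append("[rendered object: %s]" % obj_type)
            # Otherwise: unsupported, so do nothing and let the contents render.
    def handle_endtag(self, tag):
        if tag == "object" and self.skip_depth:
            self.skip_depth -= 1
    def handle_data(self, data):
        if not self.skip_depth and data.strip():
            self.output.append(data.strip())

page = ('<p>Some text. '
        '<object type="application/x-shockwave-flash">Flash not available, sorry.</object> '
        '<object type="image/png">You should never see this fallback.</object></p>')
parser = ObjectFallbackParser()
parser.feed(page)
print(parser.output)
# ['Some text.', 'Flash not available, sorry.', '[rendered object: image/png]']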
OK rant over. For now.
Tuesday, 26 August 2008
An Option I Would Like To See In Every Application
It runs like ass on my graphics card + driver combination (and even pops up a message saying that my setup is unsupported), but at least it's running! That's OpenMetaverse Viewer, previously known as the SecondLife Viewer. Linden Labs are trying to make SecondLife completely Free Software; their client has been for a while and their servers are completely Debian-based, but due to a) proprietary third-party code which they're not allowed to liberate and must replace, and b) their desire for every server to be able to communicate with the others (and thus not fracture the users into camps), it is taking a while.
Some particularly zealot-like Free Software people have made their own server called OpenSim, so, to try and get rid of trademarks, company names and so on, the Free Software SecondLife viewer is now called OpenMetaverse Viewer and can connect to both SecondLife and OpenSim servers.
Since Ubuntu was being a bit of a pain I've gone back to Debian (unstable, to be precise), but if you want to install omvviewer (found here) then you'll need some packages from testing, which can be found here. Specifically you'll need "xulrunner", "libxul-common", "libxul0d", "libnss3-0d" and "libmozjs0d", which are all to do with Mozilla's XULRunner (the thing Firefox is made with), since that can apparently be embedded in the virtual worlds.
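If you want to check which of those are already installed before hunting down the .debs, a throwaway Python script along these lines does the job (it just shells out to dpkg-query, so Debian and friends only):

import subprocess

# The XULRunner-related packages omvviewer wants, as listed above.
packages = ["xulrunner", "libxul-common", "libxul0d", "libnss3-0d", "libmozjs0d"]
for pkg in packages:
    result = subprocess.run(
        ["dpkg-query", "-W", "-f=${Status}", pkg],
        capture_output=True, text=True,
    )
    installed = result.returncode == 0 and "install ok installed" in result.stdout
    print("%-15s %s" % (pkg, "installed" if installed else "missing"))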
Sunday, 24 August 2008
On Computer Animation
Animation is an optical illusion brought about by rapidly changing images and the brain's natural appreciation of Heisenberg's Uncertainty Principle.
Saturday, 23 August 2008
QEdje Is Working! :D
There are quite a few Free Software libraries out there for making GUIs (graphical user interfaces). The most famous are probably GTK+ (started by the GIMP project and used by Gnome, XFCE and LXDE) and QT (started by Trolltech and used by KDE); however, there are quite a few more, such as GNUStep, Motif (used by CDE), FLTK (used by EDE) and EFL (used by Enlightenment).
EFL, the Enlightenment Foundation Libraries, are particularly interesting. They are incredibly lightweight, running quite happily on a mobile phone, yet allow all sorts of animation (as in, proper 2D animation rather than just moving/spinning/stretching things) and are completely themeable. The way this works (from what I can find out) is that every EFL program uses the Evas canvas ("canvas" being the name given to a widget which allows arbitrary drawing on top of it, rather than imposing some kind of structure), and then Etk and EWL draw on top (the canvas is created implicitly by Etk and EWL, even if you don't make one explicitly). This is the opposite of most toolkits, like GTK+ for example, where the widgets are drawn in the window (which is usually divided up into a rigid structure) and canvases are implemented as widgets.
A nice feature of the EFL is called Edje. Edje allows an application to be written without worrying about the GUI, instead just requiring that an external file be loaded. These external files describe the interface, and are entirely declarative (that is, they say "I want a button" rather than "this is how to draw a button"). Think of it like the HTML of a web page, with Edje being the web browser which draws it (actually, that would be a more appropriate description of XUL, but I can't get my head around XUL's seemingly overcomplicated use of CSS, JavaScript and XML :( ).
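To show what I mean by declarative, here's a little Python illustration (this is not real Edje/EDC syntax, just the idea): the interface is described as data, and some engine, playing the role of Edje, decides how to realise it.

# A declarative description: *what* the interface contains, not how to draw it.
# All of the names here are made up for illustration.
interface = {
    "title": "My Player",
    "parts": [
        {"type": "label",  "name": "track", "text": "Now playing: ..."},
        {"type": "button", "name": "play",  "text": "Play"},
        {"type": "button", "name": "stop",  "text": "Stop"},
    ],
}

def render(description):
    """Stand-in for the Edje role: turn the description into concrete widgets.
    Here it just prints, but it could equally build Evas, GTK+ or QT objects."""
    print("== %s ==" % description["title"])
    for part in description["parts"]:
        print("[%s] %s" % (part["type"], part["text"]))

render(interface)

The application only ever refers to parts by name ("play", "stop" and so on), so the whole description, and with it the look and behaviour of the interface, can be swapped for a different one without touching the application code.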
Edje files are compiled into compressed archives (using EET) which act like incredibly far-reaching themes. This means that a theme doesn't just contain pretty pictures to use as buttons, or the programming to draw the right lines at the right locations; it actually contains the entire user interface. To continue the web page analogy, if Gmail or Facebook used an analogous system then, instead of merely being able to change the theming via CSS (which may have to be specifically forced from the browser preferences, because web-app people suck balls), you could actually use a completely different webpage to interact with the underlying application (no more "New look!" announcements, since anybody could use any look they wanted all of the time).
Now to address the title of this post ;) As I've described, Edje is a declarative system. An awesome feature of this is that Edje itself can be completely replaced without the user even noticing, since the point is to say "I want a button" and not care about how it gets done. Well, the developers of Canola looked at moving to QT, since it offers more features than EFL and is more widely developed, developed for and installed. However, they found that Edje was too awesome to leave behind, so they ported it to QT and called it QEdje! What's particularly nice about QEdje is that a) the canvas used instead of Evas, called QZion, is rather abstract in itself, so that different QT systems can be used to do the work (e.g. direct drawing to the screen with the QPainter backend, more abstract layout with the QGraphicsView backend, or 3D-accelerated drawing with the KGameCanvas backend, depending on the environment it is being used in), and b) the huge wealth of QT widgets can be used in the program (this is pretty powerful considering that, as well as buttons, labels and tickboxes, QT also has a whole web browser, Open Document Format compatible rich text areas and an abstracted audio/video multimedia engine, which so far uses GStreamer (Rhythmbox, Totem, etc.), Xine (GXine, xine-ui), VLC or MPlayer, QuickTime (QuickTime Player, etc.) on Mac, and DirectShow (Windows Media Player, Media Player Classic, etc.) on Windows).
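That QZion idea, where the same drawing code can sit on top of whichever backend suits the environment, boils down to a pattern like this (a made-up Python sketch of the pattern, not the actual QZion API):

class Backend:
    """The abstract canvas: everything above this only talks to these methods."""
    def draw_rect(self, x, y, w, h):
        raise NotImplementedError

class SoftwareBackend(Backend):
    # Stand-in for something like direct QPainter drawing.
    def draw_rect(self, x, y, w, h):
        print("software: rect at (%d, %d) size %dx%d" % (x, y, w, h))

class AcceleratedBackend(Backend):
    # Stand-in for an accelerated canvas such as KGameCanvas.
    def draw_rect(self, x, y, w, h):
        print("accelerated: rect at (%d, %d) size %dx%d" % (x, y, w, h))

def draw_button(canvas, x, y):
    """The 'QEdje' layer: it neither knows nor cares which backend it has."""
    canvas.draw_rect(x, y, 80, 24)

for backend in (SoftwareBackend(), AcceleratedBackend()):
    draw_button(backend, 10, 10)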
After a little wrangling I've got it to compile and the examples are working. This means I can have a play about, so I'll report on my findings :)
Friday, 22 August 2008
Thursday, 21 August 2008
Tuesday, 19 August 2008
Saturday, 16 August 2008
A lengthy list of practical problems with shipping non-free software
This has been written since Ubuntu Brainstorm doesn't allow comments over a certain length; I want to cover this in as much depth as I can, and I have a tendency to ramble :P
This is about non-free software (ie. proprietary, the cost doesn't matter) and the practical problems which would be encountered if it were shipped by default in a distribution. This does not go into the Free Software vs. proprietary software political arguments; it is all about the undeniable problems that come with non-free software.
The main software for discussion is codecs, the software which can encode and decode different types of file. Codecs are simply algorithms; they are nothing more. They contain no GUI, no player, nothing except the algorithm. An algorithm (like an MP3 decoder) is just a set of steps to follow. As an example, an algorithm to read a book would be something like:
Pick up book
Rotate so that front cover is facing you and the top is at the top
Lift the front cover at the edge furthest from the spine, being careful not to lift any pages
Turn the front cover 180 degrees until it is in the same plane as the pages yet on the opposite side of the spine
If there is any text on the inside of the page beneath the front cover:
- Find the end of the top line closest to the spine
- Read until the end closest to the edge of the book is reached
- If there is any more text on the page then move to the line below and repeat
Turn a single page from the right-hand-side to the left-hand-side of the spine in the same way as the front cover
If there is any text on the left page:
- Find the end of the top line closest to the edge of the book
- Read until the end closest to the spine is reached
- If there is any more text on the page then move to the line below and repeat
Repeat the steps for the previous page, then for this page, on every subsequent page until the end of the book is reached
Turn the back cover of the book onto the rest of the book
Put down the book
That's obviously a very crude algorithm as it doesn't take into account right-to-left languages, footnotes, etc., but it is a valid implementation of a book decoder. The copyright on the above text belongs to me, since I have written it, and will remain so for probably well over a hundred years. However, somebody can look at a book and make their own algorithm to read it without ever knowing that I've even written this blog post. My copyrights will not be infringed, since their algorithm cannot be a derivative of my work as they haven't even seen it.
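Just to show that the steps really are independent of any particular implementation, here's the same crude book decoder sketched in C. The Book and Page types are invented purely for the example; any other way of writing down the same steps would be just as valid.

#include <stdio.h>

/* These types are invented just for the example */
typedef struct {
    const char **lines;   /* the lines of text on this page */
    int line_count;
} Page;

typedef struct {
    Page *pages;
    int page_count;
} Book;

void decode_book(const Book *book)
{
    /* "Pick up book", "rotate it", "lift the front cover" are physical steps,
       so here we simply walk through every page in order */
    for (int p = 0; p < book->page_count; p++) {
        /* "Find the top line ... read to the end ... move to the line below
           and repeat" */
        for (int l = 0; l < book->pages[p].line_count; l++)
            puts(book->pages[p].lines[l]);
    }
    /* "Turn the back cover onto the rest of the book and put down the book" */
}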
This is how LAME (LAME Ain't an MP3 Encoder) was written. The developers didn't look at how anyone else had decoded the MP3 format; they didn't need to, and thus no copyright is infringed.
However, software patents allow me to patent the working of my algorithm, as long as it is novel and non-obvious to someone skilled in the field where it applies (the rigour for such things is often debatable though!).
The problem with patents on software is that it's very easy for someone to make software, but they could be treading all over other people's patents without ever knowing it!
Wednesday, 13 August 2008
Some ramblings about graphics, compilers and CPUs
This is probably very wrong, but whatever, it's *my* blog :P
I am following the work going on with Gallium3D as much as possible, since this seems like it should offer some pretty cool features when it becomes stable. It seems (from what I understand) to be an abstraction layer over the graphics card. Until now, drivers have been written for graphics cards to implement certain things like OpenGL, etc., and anything which is not supported relies on (much slower) software rendering (like Mesa for OpenGL); if that's not available, it just doesn't work.
Since the drivers are responsible for providing a lot of things as well as the basic hardware access, this results in a lot of duplicate work, with slightly different ways of doing the same things in each driver (for instance, every driver needs to implement a memory manager; the Intel driver's memory manager is called GEM and is Free Software and available for other drivers to use, but it's written in a very Intel-specific way and is therefore useless to other drivers). There is work going on to make a generic memory manager for the kernel and a generic way for graphics drivers to access the hardware (called the Direct Rendering Infrastructure, working towards version 2), but that still leaves rather a lot of common ground in each driver.
Gallium3D is a very generic abstraction layer that is designed to sit in between the drivers and the actual programs and libraries being used. Gallium3D is implemented to work in a very similar way to the actual silicon inside current graphics cards, thus writing a driver to sit between the card and Gallium3D is pretty straightforward because not much has to be translated from one representation to another (and DRI can still be used to do a lot of the background work). This makes Gallium3D act rather like a software graphics card, but doesn't cause too much slowdown since a) it is so similar to real hardware and b) it uses the Low Level Virtual Machine (LLVM) system to gain Just-In-Time compilation and optimisation for its code*. Since Gallium3D is software it can be run on any hardware with a suitable driver (which, as I said, is pretty painless to make), so libraries like OpenGL can be written to talk to Gallium3D and they will run on whatever hardware Gallium3D talks to (with software like Mesa handling anything that the hardware can't). This means that writing a graphics library is a lot easier (since you only have to write it for Gallium3D's state tracker, everything else comes free after that) and thus more than the basic OpenGL-and-that's-it can be used.
As an example, WINE can run Windows programs on Linux, BSD, MacOS, etc. (and even on Windows :P ). However, a lot of Windows programs and especially games use Microsoft's DirectX system for 3D graphics. Normally the WINE team writes replacements for Microsoft's libraries which perform the same job, thus allowing Windows programs to run, but writing their own DirectX implementation would be a huge job on its own. Making a different one for every graphics card driver out there would be even worse, and relying on the driver's developers to do it like they currently do for OpenGL would make the drivers EVEN MORE bloated and complicated than they already are.
Thus the WINE team are writing a DirectX implementation, but instead of working with the graphics card they are writing it to use OpenGL (since OpenGL is already in the current drivers). This is pretty inefficient, however: DirectX operations need to be translated to OpenGL, then the OpenGL needs to be translated to however the graphics card works, then run. Since games are generally very resource intensive, not to mention that 3D is one of the hardest things for a computer to do anyway, it's far from ideal. With Gallium3D, however, the OpenGL translation can be skipped entirely and DirectX can be implemented straight to Gallium3D, which then gets sent very efficiently to the graphics card whilst still only needing to be written for one system. Likewise other libraries can be accelerated through a graphics card without fear of some users not having a driver capable of it. Application developers could even write a custom library for their application and count on it being accelerated regardless of the card it runs on (for example, a proprietary program cannot count on drivers containing code to handle its custom, proprietary library, since it can't be included by its very nature; it can, however, count on Gallium3D being installed, and thus can include one implementation which will be accelerated regardless of the driver).
* JIT compilation is pretty cool. There are historically two types of program: one is compiled (like C), which means the program is first turned from source code into machine code, and then this machine code runs when you tell it to. The second is interpreted (like Python), which means that instead of running directly on the computer, it is run inside another program called a "virtual machine" (which is usually compiled, although it can itself be interpreted, as long as at some point there is a compiled virtual machine talking to the real computer). Programs are more flexible than computer processors, which means that interpreted programming languages can have many nice features added and can usually be written more simply than compiled languages. Compiled programs usually run faster than interpreted programs, since they are run directly; there is no virtual machine acting as a middle-man.
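Just to show what that middle-man actually looks like, here's a toy virtual machine written in C. The "program" is nothing more than data that the loop steps through and acts on; the opcodes are completely made up for this example and don't correspond to any real bytecode format.

#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

/* The interpreter: a compiled program whose job is to read another
   "program" (just an array of numbers) and do what it says */
void run(const int *code)
{
    int stack[64], sp = 0, pc = 0;
    for (;;) {
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++];         break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]);    break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    run(program);   /* prints 5 */
    return 0;
}

Every single step of the interpreted program goes through that switch statement, which is exactly the overhead that compiling (ahead of time or just in time) gets rid of.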
Just-In-Time compilation, however, is like a mixture between the two. The source code is NOT compiled into a runnable program ahead of time, making it different from compiled languages. However, there is no need for a virtual machine, thus making it not an interpreted language either. What happens is very similar to an interpreted language, but instead of the virtual machine looking at what to do next and then telling the computer, in a JIT language the next bit of code gets compiled just before it needs to run and is then given to the real computer just like a compiled program.
At first glance it would seem like this might be faster than interpreting a language (again, no middle-man) but slightly slower than a compiled program (since with a compiled program all of the compiling is already done before you run the program). However, JIT-compiled languages can actually be FASTER than compiled programs!
This is because when a program is compiled then that is it, the finished thing; that code will be sent to the computer. Compiled programs must therefore be able to handle anything they are given, so a banking program must be able to handle everything from fractions of pennies up to billions and perhaps trillions of pounds. Such large numbers take up a lot of memory, and moving them around and performing calculations on them can take time. Compiled programs have to use a lot of memory for everything and accept these slow-downs just in case they are given a huge number. This means handling £1 as something like £0000000000001, just so it can deal with a trillion pounds if so commanded. A JIT-compiled program, however, knows how much it is dealing with, thus £1 can be stored as simply £1, whilst a thousand billion pounds can still be dealt with if it comes along by storing it as £1000000000000. This means JIT programs can use less memory for storing values, the calculations they do can be quicker as they don't need to deal with unneeded digits, and the program can speed up greatly due to fewer cache misses**.
Another advantage of JIT compilation is that unneeded calculations can be skipped. For example, a program may need to add deposits to an account and take away withdrawals. In a precompiled program this will always take at least two calculations, either add the deposits to the total then take the withdrawals from the total, or take the withdrawals away from the deposits and add the result onto the total. In a JIT-compiled program this can be made more efficient, since if the withdrawal amount is zero then the compiler can skip one of the calculations, and likewise if the deposits are zero. If both are zero then both calculations can be skipped, and if they are both the same then the calculations can also be skipped. For instance, compare the following precompiled pseudo-code, where it has to work for any values, and the JIT pseudo-code which already knows the values since it is compiled whilst the program is running:
Precompiled:
add(total, deposit)
subtract(total, withdrawal)
OR
add(total, subtract(deposit, withdrawal))
JIT:
add(total, 15)
subtract(total, 15)
OR
add(total, subtract(15, 15))
In either form, the JIT compiler can skip the calculations entirely, since it knows the two values cancel out each other's effects.
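In C terms the difference amounts to something like this (purely illustrative, since a real JIT works on machine code rather than spitting out C source):

/* A precompiled function has to work for any values it might be given: */
long update_balance(long total, long deposit, long withdrawal)
{
    return total + deposit - withdrawal;   /* always performs both operations */
}

/* A JIT compiler that sees, at run time, that deposit and withdrawal are both
   15 can emit the equivalent of this instead: */
long update_balance_when_both_are_15(long total)
{
    return total;   /* the two operations cancel out, so nothing needs doing */
}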
JIT compilation is becoming more and more common, with JIT compilers being written for Java and Smalltalk, for example. There is even JIT support in EUAE (the E(nhanced/xtended/ggplant) U(NIX/biquitous) Amiga Emulator, the exact naming of which is open to interpretation). An emulator translates programs written for one computer into something which will run on a different type of computer. In EUAE's case this means running code written for the chips in Amiga computers (such as the Motorola 68020) on different chips like the Intel 386. This used to be done by treating the Amiga programs like an interpreted language, with the emulator acting as the virtual machine. With the JIT engine, however, the Amiga programs can be run directly on the different chip, with the "compilation" actually being a translation from one chip's instructions to another's.
A very promising project currently being worked on is the Low Level Virtual Machine (LLVM). Traditional compilers, like the GNU Compiler Collection, work by translating the given language into a common, internal language which the compiler can work with. Various optimisations can then be done on this internal representation before translating it into machine code for whatever computer is requested. LLVM, however, is slightly more clever. It performs the same translation and optimisation, but it is not only able to translate this internal representation into machine code, it also has a virtual machine capable of interpreting it and is even able to JIT compile it, depending on the options chosen by the user. This means that, for example, C code, which is the classic example of a compiled language, can be fed into LLVM and made to run in a virtual machine or be JIT compiled. The same goes for Ada, Smalltalk and other languages which LLVM can handle. This means that LLVM could potentially (when bugs are fixed and GCC-specific assumptions in some programs are handled) make almost the whole of a computer system compile Just-In-Time and be optimised on-the-fly (not quite everything though, since something needs to start up the JIT compiler :P ). LLVM could even optimise itself by compiling itself. Likewise an entire system could be run in a virtual machine without the need for the likes of KVM or VirtualBox. The future looks pretty interesting!
** Computer processors can be as fast as you care to make them, but that doesn't matter if you can't give them things to do at the same rate. The processor contains "registers" which hold the numbers it's dealing with; however, these registers can only contain one thing at a time, and thus their values have to keep changing for each operation. The place where numbers can actually be stored over a long time is the RAM (Random Access Memory); the processor's registers can therefore get their next values from any point in the RAM and dump the results of calculations just performed into any other point in the RAM. However, this is a huge slowdown, so processors have a cache which is able to store a few things at a time, and the registers are connected to the cache rather than the RAM. This is a lot faster, since frequently used values can be stored in the cache and accessed really quickly (these are called "hits"). However, if something needed is not yet in the cache then it has to be taken from the RAM (a "miss"), which a) takes time and b) means kicking something else out of the cache to make room for the new value.
There are various algorithms for trying to keep cache misses as low as possible, but an obvious way is to use small numbers. In a really simplified (and decimal rather than binary) example, let's say a cache can store ten digits; then I can store, for example, 5, 16, 3, 78, 4, 67 and 1 all at once, since that is 10 digits in total. That can't really be made any more efficient, so if a different number is needed (a miss occurs) then at least one of them has to get replaced. If a program is being overly cautious about the size of the numbers it is expecting, then it might use the cache inefficiently. For example, say the program is anticipating values of a few hundred, it might want to store the number 547, which would require three places in the cache. If the number it actually deals with turns out to be five, then it will store 005 in the cache, wasting two spaces, replacing numbers it doesn't need to and thus increasing the likelihood of a miss. If there is a miss later and one or two of those three digits are chosen to be replaced then the whole 005 gets kicked out rather than just the wasteful zeroes, meaning that any future calculations needing that 5 will result in a miss and the whole 005 will have to be brought in again from the RAM, causing the same inefficiencies again.
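For a real-world (binary rather than decimal) version of that idea, here's a tiny C program comparing how many values of different sizes fit into a single cache line. The 64-byte line size is an assumption (it varies between processors), but the ratio is the point:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* 64 bytes is a common cache line size, but it's an assumption here;
       it varies from processor to processor */
    const size_t line = 64;

    printf("8-bit values per cache line:  %zu\n", line / sizeof(uint8_t));  /* 64 */
    printf("64-bit values per cache line: %zu\n", line / sizeof(uint64_t)); /*  8 */
    return 0;
}

So if a program stores small values in needlessly large types, it drags eight times fewer useful values into the cache with every miss.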
OK, back to revision :P
I am following the work going on with Gallium3D as much as possible, since this seems like it should offer some pretty cool features when it becomes stable. It seems (from what I understand) to be an abstraction layer over the graphics card. Until now drivers are written for graphics cards which implement certain things like OpenGL, etc. and anything which is not supported relies on (much slower) software rendering (like Mesa for OpenGL), or if that's not available it just doesn't work.
Since the drivers are responsible for providing a lot of things as well as the basic hardware access this results in a lot of duplicate work, with slightly different ways of doing the same things for each driver (for instance every driver needs to implement a memory manager. The Intel driver's memory manager is called GEM and is Free Software and available for other drivers to use, but it's written in a very Intel-specific way and is therefore useless to other drivers). There is work going on to make a generic memory manager for the kernel and a generic way for graphics drivers to access the hardware (called the Direct Rendering Infrastructure, working towards version 2), but it still leaves rather a lot of common ground in each driver.
Gallium3D is a very generic abstraction layer that is designed to sit in between the drivers and the actual programs and libraries being used. Gallium3D is implemented to work in a very similar way to the actual silicon inside current graphics cards, thus writing a driver to sit between the card and Gallium3D is pretty straighforward because not much has to be translated from one representation to another (and DRI can still be used to do a lot of the background work). This makes Gallium3D act rather like a software graphics card, but doesn't cause too much slowdown since a) it is so similar to real hardware and b) it uses the Low Level Virtual Machine (LLVM) system to gain Just In Time compilation and optimisation for its code*. Since Gallium3D is software it can be run on any hardware with a suitable driver (which, as I said, is pretty painless to make), so libraries like OpenGL can be written to talk to Gallium3D and they will run on whatever hardware Gallium3D talks to (with software like Mesa running anything that the hardware can't handle). This means that writing a graphics library is a lot easier (since you only have to write it for Gallium3D's state tracker, eerything else comes free after that) and thus more than the basic OpenGL-and-that's-it can be used.
As an example, WINE can run Windows programs on Linux, BSD, MacOS, etc. (and even on Windows :P ). However, a lot of Windows programs and especially games use Microsoft's DirectX system for 3D graphics. Normally the WINE team writes replacements for Microsoft's libraries which perform the same job, thus allowing Windows programs to run, but writing their own DirectX implementation would be a huge job on its own. Making a different one for every graphics card driver out there would be even worse, and relying on the driver's developers to do it like they currently do for OpenGL would make the drivers EVEN MORE bloated and complicated than they already are.
Thus the WINE team are writing a DirectX implementation, but instead of working with the graphics card they are writing it to use OpenGL (since OpenGL is already in the current drivers). This is pretty inefficient, however, since DirectX operations need to be translated to OpenGL, then the OpenGL needs to be translated to however the graphics card works, then run, and since games are generally very resource intensive, not to mention that 3D is one of the hardest things for a computer to do anyway, it's far from ideal. With Gallium3D, however, the OpenGL translation can be skipped entirely and DirectX can be implemented straight to Gallium3D, which then gets sent very efficiently to the graphics card whilst still only needing to be written for one system. Likewise other libraries can be accelerated through a graphics card without fear of some users not having a driver capable of it. Application developers could even write a custom library for their application and count on it being accelerated regardless of the card it runs on (for example, a proprietary program cannot count on drivers containing code to handle their custom, proprietary library since it can't be included by its very nature. It can, however, count on Gallium3D being installed and thus can include one implementation which will be accelerated regardless of the driver).
* JIT compilation is pretty cool. There are historically 2 types of program: one is compiled (like C) which means the program is first turned from source code into machine code, and then this machine code runs when you tell it to. The second is interpreted (like Python), which means that instead of running directly on the computer, it is run instead inside another program called a "virtual machine" (which is usually compiled, although it can itself be interpreted, as long as at some point there is a compiled virtual machine talking to the real computer). Programs are more flexible than computer processors, which means that interpreted programming languages can have many nice features added and usually be written more simply than compiled languages. Compiled programs usually run faster than interpreted programs, since they are run directly, there is no virtual machine acting as a middle-man.
Just-In-Time compilation, however, is like a mixture between the two. The source code is NOT compiled into a runnable program, making it different to compiled languages. However, there is no need for a virtual machine, thus making it not an interpreted language. What happens is very siimlar to an interpreted language, but instead of the virtual machine looking at what to do next then telling the computer, in a JIT language the next bit of code gets compiled just before it needs to run and is then given to the real computer just like a compiled program.
At first glance it would seem like this might be faster than interpreting a language (again, no middle-man) but slightly slower than a compiled program (since with a compiled program all of the compiling is already done before you run the program). However, JIT-compiled languages can actually be FASTER than compiled programs!
This is because when a program is compiled then that is it, the finished thing, that code will be sent to the computer. Compiled programs must therefore be able to handle anything they are given, so a banking program must be able to handle everything from fractions of pennies up to billions and perhaps trillions of pounds. Such large numbers take up a lot of memory, and moving them around and performing calculations on them can take time. Compiled programs have to use a lot of memory for everything and accept these slow-downs just in case they are given a huge number. This means handling £1 as something like £0000000000001, just so it can deal with a trillion pounds if so commanded. A JIT-compiled program, however, knows how much it is dealing with, thus £1 can be stored as simply £1, whilst a thousand billion pounds can still be dealt with if it comes along by storing it as £1000000000000. This means JIT programs can use less memory for storing values, the calculations they do can be quicker as they don't need to deal with unneeded digits, and the program can speed up greatly due to less cache misses**.
Another advantage of JIT compilation is that unneeded calculations can be skipped. For example, a program may need to add deposits to an account and take away withdrawals. In a precompiled program this will always take at least two calculations, either add the deposits to the total then take the withdrawals from the total, or take the withdrawals away from the deposits and add the result onto the total. In a JIT-compiled program this can be made more efficient, since if the withdrawal amount is zero then the compiler can skip one of the calculations, and likewise if the deposits are zero. If both are zero then both calculations can be skipped, and if they are both the same then the calculations can also be skipped. For instance, compare the following precompiled pseudo-code, where it has to work for any values, and the JIT pseudo-code which already knows the values since it is compiled whilst the program is running:
Precompiled:
add(total, deposit)
subtract(total, withdrawal)
OR
add(total, subtract(deposit, withdrawal))
JIT:
add(total, 15)
subtract(total, 15)
OR
add(total, subtract(15, 15))
The JIT compiler can skip either of those, since it knows they cancel out each other's effects.
JIT compilation is becoming more and more common, with JIT compilers being written for Java and Smalltalk, for example. There is even JIT support in EUAE (the E(nhanced/xtended/ggplant) U(NIX/biquitous) Amiga Emulator, the exact naming of which is open to interpretation). An emulator translates programs written for one computer into something which will run on a different type of computer. In EUAE's case this means running code written for the chips in Amiga computers (such as the Motorola 68020) on different chips like the Intel 386. This used to be done by treating the Amiga programs like an interpreted language, with the emulator acting as the virtual machine. With the JIT engine, however, the Amiga programs can be run directly on the different chip, with the "compilation" actually being a translation from one chips instructions to another's.
A very promising project currently being worked on is the Low Level Virtual Machine (LLVM). Traditional compilers, like the GNU Compiler Collection, work by translating the given language into a common, internal language which the compiler can work with. Various optimisations can then be done on this internal representation before translating it into machine code for whatever computer is requested. LLVM, however, is slightly more clever. It performs the same translation and optimisation, but is not only able to translate this internal representation into machine code, it also has a virtual machine capable of interpreting it and is even able to JIT compile it, depending on the options chosen by the user. This means that, for example, C code, which is the classic example of a compiled language, can be fed into LLVM and made to run in a virtual machine or be JIT compiled. The same goes for Ada, Smalltalk and other languages which LLVM can handle. This means that LLVM could potentially (when bugs are fixed and GCC-specific assumptions in some programs are handled) make almost the whole of a computer system compile Just-In-Time and be optimised on-the-fly (not quite everything though, since something needs to start up the JIT compiler :P ). LLVM could even optimise itself by compiling itself. Likewise an entire system could be run in a virtual machine without the need for the likes of KVM or Virtual Box. The future looks pretty interesting!
** Computer processors can be as fast as you care to make them, but that doesn't matter if you can't give them things to do at the same rate. The processor contains "registers" which contain the numbers it's dealing with, however these registers can only contain one thing at a time and thus their values have to keep changing for each operation. The place where numbers can actually be stored over a long time is the RAM (Random Access Memory), the processor's registers can therefore get their next values from any point in the RAM and dump the results of calculations just performed into any other point in the RAM. However, this is a huge slowdown, so processors have a cache which is able to store a few things at a time, and the registers are connected to the cache rather than the RAM. This is a lot faster, since frequently used values can be stored in the cache and accessed really quickly (these are called "hits").However, if something needed is not yet in the cache then it has to be taken from the RAM (a "miss") which a) takes time and b) means kicking something else out of the cache to make room for the new value.
There are various algorithms for trying to keep cache misses as low as possible, but an obvious way is to use small numbers. In a really simplified (and decimal rather than binary) example, let's say a cache can store ten digits, then I can store for example 5, 16, 3, 78, 4, 67 and 1 all at once, since that is 10 digits in total. That can't really be made any more efficient, so if a different number is needed (a miss occurs) then at least one of them has to get replaced. If a program is being overly cautious about the size of the numbers it is expecting, then it might use the cache inefficiently. For example, say the program is anticipating values of a few hundred, it might want to store the number 547 which would require three places in the cache. If the number it actually deals with turns out to be five, then it will store 005 in the cache, wasting two spaces, replacing numbers it doesn't need to and thus increasing the likelyhood of a miss. If there is a miss later and one or two of those three digits are chosen to be replaced then the whole 005 gets kicked out rather than just the wasteful zeroes, meaning that any future calculations needing that 5 will result in a miss and the whole 005 will have to be brought in again from the RAM, causing the same inefficiencies again.
OK, back to revision :P
This is probably very wrong, but whatever, it's *my* blog :P
I am following the work going on with Gallium3D as much as possible, since this seems like it should offer some pretty cool features when it becomes stable. It seems (from what I understand) to be an abstraction layer over the graphics card. Until now drivers are written for graphics cards which implement certain things like OpenGL, etc. and anything which is not supported relies on (much slower) software rendering (like Mesa for OpenGL), or if that's not available it just doesn't work.
Since the drivers are responsible for providing a lot of things as well as the basic hardware access this results in a lot of duplicate work, with slightly different ways of doing the same things for each driver (for instance every driver needs to implement a memory manager. The Intel driver's memory manager is called GEM and is Free Software and available for other drivers to use, but it's written in a very Intel-specific way and is therefore useless to other drivers). There is work going on to make a generic memory manager for the kernel and a generic way for graphics drivers to access the hardware (called the Direct Rendering Infrastructure, working towards version 2), but it still leaves rather a lot of common ground in each driver.
Gallium3D is a very generic abstraction layer that is designed to sit in between the drivers and the actual programs and libraries being used. Gallium3D is implemented to work in a very similar way to the actual silicon inside current graphics cards, thus writing a driver to sit between the card and Gallium3D is pretty straighforward because not much has to be translated from one representation to another (and DRI can still be used to do a lot of the background work). This makes Gallium3D act rather like a software graphics card, but doesn't cause too much slowdown since a) it is so similar to real hardware and b) it uses the Low Level Virtual Machine (LLVM) system to gain Just In Time compilation and optimisation for its code*. Since Gallium3D is software it can be run on any hardware with a suitable driver (which, as I said, is pretty painless to make), so libraries like OpenGL can be written to talk to Gallium3D and they will run on whatever hardware Gallium3D talks to (with software like Mesa running anything that the hardware can't handle). This means that writing a graphics library is a lot easier (since you only have to write it for Gallium3D's state tracker, eerything else comes free after that) and thus more than the basic OpenGL-and-that's-it can be used.
As an example, WINE can run Windows programs on Linux, BSD, MacOS, etc. (and even on Windows :P ). However, a lot of Windows programs and especially games use Microsoft's DirectX system for 3D graphics. Normally the WINE team writes replacements for Microsoft's libraries which perform the same job, thus allowing Windows programs to run, but writing their own DirectX implementation would be a huge job on its own. Making a different one for every graphics card driver out there would be even worse, and relying on the driver's developers to do it like they currently do for OpenGL would make the drivers EVEN MORE bloated and complicated than they already are.
Thus the WINE team are writing a DirectX implementation, but instead of working with the graphics card they are writing it to use OpenGL (since OpenGL is already in the current drivers). This is pretty inefficient, however, since DirectX operations need to be translated to OpenGL, then the OpenGL needs to be translated to however the graphics card works, then run, and since games are generally very resource intensive, not to mention that 3D is one of the hardest things for a computer to do anyway, it's far from ideal. With Gallium3D, however, the OpenGL translation can be skipped entirely and DirectX can be implemented straight to Gallium3D, which then gets sent very efficiently to the graphics card whilst still only needing to be written for one system. Likewise other libraries can be accelerated through a graphics card without fear of some users not having a driver capable of it. Application developers could even write a custom library for their application and count on it being accelerated regardless of the card it runs on (for example, a proprietary program cannot count on drivers containing code to handle their custom, proprietary library since it can't be included by its very nature. It can, however, count on Gallium3D being installed and thus can include one implementation which will be accelerated regardless of the driver).
* JIT compilation is pretty cool. There are historically 2 types of program: one is compiled (like C) which means the program is first turned from source code into machine code, and then this machine code runs when you tell it to. The second is interpreted (like Python), which means that instead of running directly on the computer, it is run instead inside another program called a "virtual machine" (which is usually compiled, although it can itself be interpreted, as long as at some point there is a compiled virtual machine talking to the real computer). Programs are more flexible than computer processors, which means that interpreted programming languages can have many nice features added and usually be written more simply than compiled languages. Compiled programs usually run faster than interpreted programs, since they are run directly, there is no virtual machine acting as a middle-man.
Just-In-Time compilation, however, is like a mixture between the two. The source code is NOT compiled into a runnable program, making it different to compiled languages. However, there is no need for a virtual machine, thus making it not an interpreted language. What happens is very siimlar to an interpreted language, but instead of the virtual machine looking at what to do next then telling the computer, in a JIT language the next bit of code gets compiled just before it needs to run and is then given to the real computer just like a compiled program.
At first glance it would seem like this might be faster than interpreting a language (again, no middle-man) but slightly slower than a compiled program (since with a compiled program all of the compiling is already done before you run the program). However, JIT-compiled languages can actually be FASTER than compiled programs!
This is because when a program is compiled then that is it, the finished thing, that code will be sent to the computer. Compiled programs must therefore be able to handle anything they are given, so a banking program must be able to handle everything from fractions of pennies up to billions and perhaps trillions of pounds. Such large numbers take up a lot of memory, and moving them around and performing calculations on them can take time. Compiled programs have to use a lot of memory for everything and accept these slow-downs just in case they are given a huge number. This means handling £1 as something like £0000000000001, just so it can deal with a trillion pounds if so commanded. A JIT-compiled program, however, knows how much it is dealing with, thus £1 can be stored as simply £1, whilst a thousand billion pounds can still be dealt with if it comes along by storing it as £1000000000000. This means JIT programs can use less memory for storing values, the calculations they do can be quicker as they don't need to deal with unneeded digits, and the program can speed up greatly due to less cache misses**.
Another advantage of JIT compilation is that unneeded calculations can be skipped. For example, a program may need to add deposits to an account and take away withdrawals. In a precompiled program this will always take at least two calculations, either add the deposits to the total then take the withdrawals from the total, or take the withdrawals away from the deposits and add the result onto the total. In a JIT-compiled program this can be made more efficient, since if the withdrawal amount is zero then the compiler can skip one of the calculations, and likewise if the deposits are zero. If both are zero then both calculations can be skipped, and if they are both the same then the calculations can also be skipped. For instance, compare the following precompiled pseudo-code, where it has to work for any values, and the JIT pseudo-code which already knows the values since it is compiled whilst the program is running:
Precompiled:
add(total, deposit)
subtract(total, withdrawal)
OR
add(total, subtract(deposit, withdrawal))
JIT:
add(total, 15)
subtract(total, 15)
OR
add(total, subtract(15, 15))
The JIT compiler can skip either of those, since it knows they cancel out each other's effects.
JIT compilation is becoming more and more common, with JIT compilers being written for Java and Smalltalk, for example. There is even JIT support in EUAE (the E(nhanced/xtended/ggplant) U(NIX/biquitous) Amiga Emulator, the exact naming of which is open to interpretation). An emulator translates programs written for one computer into something which will run on a different type of computer. In EUAE's case this means running code written for the chips in Amiga computers (such as the Motorola 68020) on different chips like the Intel 386. This used to be done by treating the Amiga programs like an interpreted language, with the emulator acting as the virtual machine. With the JIT engine, however, the Amiga programs can be run directly on the different chip, with the "compilation" actually being a translation from one chips instructions to another's.
A very promising project currently being worked on is the Low Level Virtual Machine (LLVM). Traditional compilers, like the GNU Compiler Collection, work by translating the given language into a common, internal language which the compiler can work with. Various optimisations can then be done on this internal representation before translating it into machine code for whatever computer is requested. LLVM, however, is slightly more clever. It performs the same translation and optimisation, but is not only able to translate this internal representation into machine code, it also has a virtual machine capable of interpreting it and is even able to JIT compile it, depending on the options chosen by the user. This means that, for example, C code, which is the classic example of a compiled language, can be fed into LLVM and made to run in a virtual machine or be JIT compiled. The same goes for Ada, Smalltalk and other languages which LLVM can handle. This means that LLVM could potentially (when bugs are fixed and GCC-specific assumptions in some programs are handled) make almost the whole of a computer system compile Just-In-Time and be optimised on-the-fly (not quite everything though, since something needs to start up the JIT compiler :P ). LLVM could even optimise itself by compiling itself. Likewise an entire system could be run in a virtual machine without the need for the likes of KVM or Virtual Box. The future looks pretty interesting!
** Computer processors can be as fast as you care to make them, but that doesn't matter if you can't give them things to do at the same rate. The processor contains "registers" which contain the numbers it's dealing with, however these registers can only contain one thing at a time and thus their values have to keep changing for each operation. The place where numbers can actually be stored over a long time is the RAM (Random Access Memory), the processor's registers can therefore get their next values from any point in the RAM and dump the results of calculations just performed into any other point in the RAM. However, this is a huge slowdown, so processors have a cache which is able to store a few things at a time, and the registers are connected to the cache rather than the RAM. This is a lot faster, since frequently used values can be stored in the cache and accessed really quickly (these are called "hits").However, if something needed is not yet in the cache then it has to be taken from the RAM (a "miss") which a) takes time and b) means kicking something else out of the cache to make room for the new value.
There are various algorithms for trying to keep cache misses as low as possible, but an obvious way is to use small numbers. In a really simplified (and decimal rather than binary) example, let's say a cache can store ten digits, then I can store for example 5, 16, 3, 78, 4, 67 and 1 all at once, since that is 10 digits in total. That can't really be made any more efficient, so if a different number is needed (a miss occurs) then at least one of them has to get replaced. If a program is being overly cautious about the size of the numbers it is expecting, then it might use the cache inefficiently. For example, say the program is anticipating values of a few hundred, it might want to store the number 547 which would require three places in the cache. If the number it actually deals with turns out to be five, then it will store 005 in the cache, wasting two spaces, replacing numbers it doesn't need to and thus increasing the likelyhood of a miss. If there is a miss later and one or two of those three digits are chosen to be replaced then the whole 005 gets kicked out rather than just the wasteful zeroes, meaning that any future calculations needing that 5 will result in a miss and the whole 005 will have to be brought in again from the RAM, causing the same inefficiencies again.
OK, back to revision :P
I am following the work going on with Gallium3D as much as possible, since this seems like it should offer some pretty cool features when it becomes stable. It seems (from what I understand) to be an abstraction layer over the graphics card. Until now drivers are written for graphics cards which implement certain things like OpenGL, etc. and anything which is not supported relies on (much slower) software rendering (like Mesa for OpenGL), or if that's not available it just doesn't work.
Since the drivers are responsible for providing a lot of things as well as the basic hardware access this results in a lot of duplicate work, with slightly different ways of doing the same things for each driver (for instance every driver needs to implement a memory manager. The Intel driver's memory manager is called GEM and is Free Software and available for other drivers to use, but it's written in a very Intel-specific way and is therefore useless to other drivers). There is work going on to make a generic memory manager for the kernel and a generic way for graphics drivers to access the hardware (called the Direct Rendering Infrastructure, working towards version 2), but it still leaves rather a lot of common ground in each driver.
Gallium3D is a very generic abstraction layer that is designed to sit in between the drivers and the actual programs and libraries being used. Gallium3D is implemented to work in a very similar way to the actual silicon inside current graphics cards, thus writing a driver to sit between the card and Gallium3D is pretty straighforward because not much has to be translated from one representation to another (and DRI can still be used to do a lot of the background work). This makes Gallium3D act rather like a software graphics card, but doesn't cause too much slowdown since a) it is so similar to real hardware and b) it uses the Low Level Virtual Machine (LLVM) system to gain Just In Time compilation and optimisation for its code*. Since Gallium3D is software it can be run on any hardware with a suitable driver (which, as I said, is pretty painless to make), so libraries like OpenGL can be written to talk to Gallium3D and they will run on whatever hardware Gallium3D talks to (with software like Mesa running anything that the hardware can't handle). This means that writing a graphics library is a lot easier (since you only have to write it for Gallium3D's state tracker, eerything else comes free after that) and thus more than the basic OpenGL-and-that's-it can be used.
As an example, WINE can run Windows programs on Linux, BSD, MacOS, etc. (and even on Windows :P ). However, a lot of Windows programs and especially games use Microsoft's DirectX system for 3D graphics. Normally the WINE team writes replacements for Microsoft's libraries which perform the same job, thus allowing Windows programs to run, but writing their own DirectX implementation would be a huge job on its own. Making a different one for every graphics card driver out there would be even worse, and relying on the driver's developers to do it like they currently do for OpenGL would make the drivers EVEN MORE bloated and complicated than they already are.
Thus the WINE team are writing a DirectX implementation, but instead of working with the graphics card they are writing it to use OpenGL (since OpenGL is already in the current drivers). This is pretty inefficient, however, since DirectX operations need to be translated to OpenGL, then the OpenGL needs to be translated to however the graphics card works, then run, and since games are generally very resource intensive, not to mention that 3D is one of the hardest things for a computer to do anyway, it's far from ideal. With Gallium3D, however, the OpenGL translation can be skipped entirely and DirectX can be implemented straight to Gallium3D, which then gets sent very efficiently to the graphics card whilst still only needing to be written for one system. Likewise other libraries can be accelerated through a graphics card without fear of some users not having a driver capable of it. Application developers could even write a custom library for their application and count on it being accelerated regardless of the card it runs on (for example, a proprietary program cannot count on drivers containing code to handle their custom, proprietary library since it can't be included by its very nature. It can, however, count on Gallium3D being installed and thus can include one implementation which will be accelerated regardless of the driver).
* JIT compilation is pretty cool. There are historically two types of program: one is compiled (like C), which means the program is first turned from source code into machine code, and then this machine code runs when you tell it to. The second is interpreted (like Python), which means that instead of running directly on the computer, it is run inside another program called a "virtual machine" (which is usually compiled, although it can itself be interpreted, as long as at some point there is a compiled virtual machine talking to the real computer). Programs are more flexible than computer processors, which means that interpreted languages can have many nice features added and can usually be written more simply than compiled languages. Compiled programs usually run faster than interpreted ones, since they run directly: there is no virtual machine acting as a middle-man.
Just-In-Time compilation, however, is like a mixture of the two. The source code is NOT compiled ahead of time into a runnable program, making it different to compiled languages. However, there is no need for a virtual machine either, so it is not an interpreted language. What happens is very similar to an interpreted language, but instead of the virtual machine looking at what to do next and then telling the computer, in a JIT language the next bit of code gets compiled just before it needs to run and is then given to the real computer just like a compiled program.
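To make that a bit more concrete, here's a toy sketch in Python (purely my own illustration, nothing like how a real JIT is built): the "internal representation" is just nested tuples, interpreting means walking them directly, and "JIT compiling" means turning them into a real Python function at the moment it's needed:

def interpret(node):
    # Walk the representation directly, like a virtual machine would.
    if node[0] == "num":
        return node[1]
    if node[0] == "add":
        return interpret(node[1]) + interpret(node[2])
    raise ValueError("unknown operation: " + str(node[0]))

def emit(node):
    # Translate the same representation into Python source text.
    if node[0] == "num":
        return str(node[1])
    if node[0] == "add":
        return "(" + emit(node[1]) + " + " + emit(node[2]) + ")"
    raise ValueError("unknown operation: " + str(node[0]))

def jit(node):
    # "Compile" just before running: generate real code and hand that over.
    return eval("lambda: " + emit(node))

program = ("add", ("num", 2), ("num", 3))
print(interpret(program))  # 5, worked out by the interpreter middle-man
print(jit(program)())      # 5, worked out by freshly generated code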
At first glance it would seem like this might be faster than interpreting a language (again, no middle-man) but slightly slower than a compiled program (since with a compiled program all of the compiling is already done before you run the program). However, JIT-compiled languages can actually be FASTER than compiled programs!
This is because when a program is compiled, that is it: the finished code is what will be sent to the computer. Compiled programs must therefore be able to handle anything they are given, so a banking program must be able to handle everything from fractions of pennies up to billions and perhaps trillions of pounds. Such large numbers take up a lot of memory, and moving them around and performing calculations on them takes time. Compiled programs have to use a lot of memory for everything and accept these slow-downs just in case they are given a huge number. This means handling £1 as something like £0000000000001, just so it can deal with a trillion pounds if so commanded. A JIT-compiled program, however, knows how much it is dealing with, so £1 can be stored as simply £1, whilst a thousand billion pounds can still be dealt with if it comes along by storing it as £1000000000000. This means JIT programs can use less memory for storing values, the calculations they do can be quicker as they don't need to deal with unneeded digits, and the program can speed up greatly due to fewer cache misses**.
Another advantage of JIT compilation is that unneeded calculations can be skipped. For example, a program may need to add deposits to an account and take away withdrawals. In a precompiled program this will always take at least two calculations, either add the deposits to the total then take the withdrawals from the total, or take the withdrawals away from the deposits and add the result onto the total. In a JIT-compiled program this can be made more efficient, since if the withdrawal amount is zero then the compiler can skip one of the calculations, and likewise if the deposits are zero. If both are zero then both calculations can be skipped, and if they are both the same then the calculations can also be skipped. For instance, compare the following precompiled pseudo-code, where it has to work for any values, and the JIT pseudo-code which already knows the values since it is compiled whilst the program is running:
Precompiled:
    add(total, deposit)
    subtract(total, withdrawal)
OR
    add(total, subtract(deposit, withdrawal))
JIT:
    add(total, 15)
    subtract(total, 15)
OR
    add(total, subtract(15, 15))
The JIT compiler can skip either of those, since it knows they cancel out each other's effects.
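Here's a toy Python sketch of that specialisation idea (again just an illustration, not how a real JIT compiler is structured): a "compiler" which is handed the actual values while the program runs, and so can return a function with the pointless work already removed:

def compile_update(deposit, withdrawal):
    # We know the actual values at "compile" time, so fold and skip work.
    if deposit == withdrawal:
        return lambda total: total                  # the two cancel out entirely
    if withdrawal == 0:
        return lambda total: total + deposit        # only the deposit matters
    if deposit == 0:
        return lambda total: total - withdrawal     # only the withdrawal matters
    net = deposit - withdrawal                      # fold the two constants once
    return lambda total: total + net

update = compile_update(15, 15)   # generated while the program is running
print(update(100))                # 100 -- no additions or subtractions performed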
JIT compilation is becoming more and more common, with JIT compilers being written for Java and Smalltalk, for example. There is even JIT support in EUAE (the E(nhanced/xtended/ggplant) U(NIX/biquitous) Amiga Emulator, the exact naming of which is open to interpretation). An emulator translates programs written for one computer into something which will run on a different type of computer. In EUAE's case this means running code written for the chips in Amiga computers (such as the Motorola 68020) on different chips like the Intel 386. This used to be done by treating the Amiga programs like an interpreted language, with the emulator acting as the virtual machine. With the JIT engine, however, the Amiga programs can be run directly on the different chip, with the "compilation" actually being a translation from one chip's instructions to another's.
A very promising project currently being worked on is the Low Level Virtual Machine (LLVM). Traditional compilers, like the GNU Compiler Collection, work by translating the given language into a common, internal language which the compiler can work with. Various optimisations can then be done on this internal representation before translating it into machine code for whatever computer is requested. LLVM, however, is slightly more clever. It performs the same translation and optimisation, but is not only able to translate this internal representation into machine code, it also has a virtual machine capable of interpreting it and is even able to JIT compile it, depending on the options chosen by the user. This means that, for example, C code, which is the classic example of a compiled language, can be fed into LLVM and made to run in a virtual machine or be JIT compiled. The same goes for Ada, Smalltalk and other languages which LLVM can handle. This means that LLVM could potentially (when bugs are fixed and GCC-specific assumptions in some programs are handled) make almost the whole of a computer system compile Just-In-Time and be optimised on-the-fly (not quite everything though, since something needs to start up the JIT compiler :P ). LLVM could even optimise itself by compiling itself. Likewise an entire system could be run in a virtual machine without the need for the likes of KVM or Virtual Box. The future looks pretty interesting!
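As a rough sketch of that idea (assuming the llvmlite Python bindings to LLVM are available; this is just an illustration, not something LLVM itself ships), the internal representation can be built up from Python and then JIT-compiled and called straight away:

from ctypes import CFUNCTYPE, c_int
import llvmlite.ir as ir
import llvmlite.binding as llvm

# Build LLVM's internal representation for a trivial function: add(a, b) = a + b.
i32 = ir.IntType(32)
module = ir.Module(name="demo")
func = ir.Function(module, ir.FunctionType(i32, [i32, i32]), name="add")
builder = ir.IRBuilder(func.append_basic_block(name="entry"))
builder.ret(builder.add(func.args[0], func.args[1]))

# JIT-compile that representation for whatever machine this happens to run on.
llvm.initialize()
llvm.initialize_native_target()
llvm.initialize_native_asmprinter()
target_machine = llvm.Target.from_default_triple().create_target_machine()
engine = llvm.create_mcjit_compiler(llvm.parse_assembly(str(module)), target_machine)
engine.finalize_object()

add = CFUNCTYPE(c_int, c_int, c_int)(engine.get_function_address("add"))
print(add(2, 3))  # 5, via machine code generated just in time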
** Computer processors can be as fast as you care to make them, but that doesn't matter if you can't give them things to do at the same rate. The processor contains "registers" which hold the numbers it's dealing with; however, these registers can only contain one thing at a time, and thus their values have to keep changing for each operation. The place where numbers can actually be stored over a long time is the RAM (Random Access Memory): the processor's registers can get their next values from any point in the RAM and dump the results of calculations just performed into any other point in the RAM. However, this is a huge slowdown, so processors have a cache which is able to store a few things at a time, and the registers are connected to the cache rather than the RAM. This is a lot faster, since frequently used values can be stored in the cache and accessed really quickly (these are called "hits"). However, if something needed is not yet in the cache then it has to be fetched from the RAM (a "miss"), which a) takes time and b) means kicking something else out of the cache to make room for the new value.
There are various algorithms for trying to keep cache misses as low as possible, but an obvious way is to use small numbers. In a really simplified (and decimal rather than binary) example, let's say a cache can store ten digits; then I can store, for example, 5, 16, 3, 78, 4, 67 and 1 all at once, since that is 10 digits in total. That can't really be made any more efficient, so if a different number is needed (a miss occurs) then at least one of them has to be replaced. If a program is being overly cautious about the size of the numbers it is expecting, then it might use the cache inefficiently. For example, say the program is anticipating values of a few hundred: it might want to store the number 547, which would require three places in the cache. If the number it actually deals with turns out to be five, then it will store 005 in the cache, wasting two spaces, replacing numbers it doesn't need to and thus increasing the likelihood of a miss. If there is a miss later and one or two of those three digits are chosen to be replaced, then the whole 005 gets kicked out rather than just the wasteful zeroes, meaning that any future calculation needing that 5 will result in a miss and the whole 005 will have to be brought in again from the RAM, causing the same inefficiencies again.
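To see the effect, here's a deliberately silly decimal "cache" in Python (real caches are binary and work on fixed-size lines, so treat this purely as an illustration of the counting argument above):

class DigitCache:
    def __init__(self, capacity=10):
        self.capacity = capacity
        self.slots = []               # most recently used values at the end
        self.hits = 0
        self.misses = 0

    def fetch(self, value, width=None):
        text = str(value) if width is None else str(value).zfill(width)
        if text in self.slots:
            self.hits += 1
            self.slots.remove(text)   # move it back to the "recent" end
        else:
            self.misses += 1          # pretend we had to go all the way to RAM
            while sum(len(s) for s in self.slots) + len(text) > self.capacity:
                self.slots.pop(0)     # evict the least recently used value
        self.slots.append(text)

workload = [5, 16, 3, 5, 78, 5, 16, 4, 5, 67, 16, 5]
tight, padded = DigitCache(), DigitCache()
for n in workload:
    tight.fetch(n)             # store 5 as "5"
    padded.fetch(n, width=3)   # store 5 as "005", wasting cache space
print(tight.misses, padded.misses)   # the padded version misses more often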
OK, back to revision :P
Some ramblings about graphics, compilers and CPUs
Monday, 11 August 2008
PubSub Browser
The x60br library, which I mentioned previously, was created primarily for this thing. It is a kind of PubSub browser, showing the nodes and their relationships.
Working on my Liferea/Akregator-style reader has come to a stop for the moment, since I need some nodes to actually test with, so I thought I'd copy the browser idea for managing nodes (which I can then subscribe to, publish to, read things from, etc.). A couple of days of hacking have given me this:
Looks pretty basic, but during its creation I've given my library a rather hacky Object Oriented interface*. Anyway, the "browsing" part is working. The Jabber account to use is currently hard-coded into the application as "test1@localhost", but this suffices for my testing purposes.
To use it, the address of a PubSub-capable server is given (in this case "pubsub.localhost"**) and this server is queried for nodes to add to the tree. When the replies come in they are given to the program as a list of Nodes. These Nodes are checked (rather inefficiently) against the Nodes already displayed in the tree to see a) if they are the same Node (in which case they are discarded) and b) if any listed Node is their parent. Then the remaining Nodes are added to the tree.
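The merging step is roughly like this sketch (all the names here are made up for illustration; the real code stores its rows in Kiwi's ObjectTree rather than a plain list):

class FakeTree:
    # Stand-in for the real tree widget: just records (parent, node) pairs.
    def __init__(self):
        self.rows = []
    def append(self, parent, node):
        self.rows.append((parent, node))

class FakeNode:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

def merge_replies(tree, server, known, replies):
    for node in replies:
        if any(node.name == shown.name for shown in known):
            continue                      # same Node already displayed: discard it
        parent = server                   # by default hang it off the server itself
        for shown in known:
            if node.parent == shown.name:
                parent = shown            # its collection is already in the tree
                break
        tree.append(parent, node)
        known.append(node)

tree, known = FakeTree(), []
merge_replies(tree, "pubsub.localhost", known,
              [FakeNode("/home"), FakeNode("/home/localhost", parent="/home")])
print([(getattr(p, "name", p), n.name) for p, n in tree.rows])
# [('pubsub.localhost', '/home'), ('/home', '/home/localhost')]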
As you can see in the image, I've queried the server pubsub.localhost and it contains 2 collection nodes***, "/pubsub" and "/home", which each contain a leaf node, "/pubsub/nodes" and "/home/localhost" respectively. UPDATE: Just to clarify a little, PubSub nodes can be anything; they don't have to follow a directory-style slash system, and these nodes are simply there by default in ejabberd to give some kind of structure to the nodes. This is handled by a plugin to the server, which can be replaced, but since the ejabberd developers know a hell of a lot more about this than me I'm going with their implementation :D This means that "/home/localhost" could actually be in "/nodes" rather than "/home", or it could be in both or neither. I can make a node "heh" and put it anywhere. Nodes, as far as I know, need to be unique to the server, so I can't have two nodes "blog" on the same server, regardless of which collections I want them in. However, nodes can also have a name, so I can make a node "/home/localhost/chris/blog" and give it the name "blog", another node "/home/localhost/joanne/blog" and call that "blog" too, another called "/home/localhost/harriet" which is also named "blog", and so on. These can all be in the same collections if I want, or not. This flexibility is good, but it does mean that I'm going to see what other people are doing before working out a structure to use (for example, should a blog be a collection, with each post being a collection containing a leaf node with the post and a leaf node with the comments? Maybe tags should be collections which contain leaf nodes of applicable posts, etc.)
The main drawback in the current browser application is that the known nodes are queried one after another, meaning that leaf nodes in multiple collections won't work yet (since it would get as far as the first collection, notice that the node is already in the list and discard it before the second collection is reached).
Next on the agenda is adding icons to see which rows are leaves, which are collections and which are servers. Then I'll stick on some add and remove buttons, and possibly look into drag 'n' drop reordering.
UPDATE: Seems adding the icons is a bit of a bother. Since the correct GTK way of making tree views and lists is VERY confusing and involved (a TreeView is needed which shows a TreeModel which contains Columns which contain CellRenderers which draw whatever GObject type is put in the cell of the column of the model of the view. Very flexible, but also very over the top!) I am using the ObjectTree from Kiwi, which is built on PyGTK's tree system but is much easier to use. The problem is that, to draw an icon alongside the text, I need to give the Column two CellRenderers. ObjectTree guesses which CellRenderer to use based on what I give it and just gets on with it, which meant I spent a while trying to reimplement some of these guessing methods in a subclass of ObjectTree.
Thankfully, however, in the latest version of Kiwi this functionality has been added by simply making two columns (one for the icon, one for the text) and giving the first Column to the second when it is created. This version of Kiwi isn't in Ubuntu Hardy, however, so I got the one from Intrepid and installed it (it's written in Python, thus shouldn't care about libc differences and such). I've since upgraded my whole system to Intrepid, but that's a different matter.
Anyway, turns out that there are problems with this way of doing things too, although I'm not sure if it's due to a bug in Kiwi as columns-in-columns is so new, or a fault of mine (which I can't really check through Google since this functionality hasn't been around long enough to let other people make the same mistakes as me). I can get the icons to appear, but I can't get them to refresh when I tell the ObjectTree to refresh. Since the nodes' type is discovered through a request to the server and I am using the ObjectTree itself as my data model, the nodes must be added to the tree before their type is known. When the reply comes in with the node's type then I can update the icon to reflect this type, however the updated icon is never used even after a refresh. This means I need to know the type when I add it to the ObjectTree, which would result in more headaches since I'd either get some nodes unable to find their parent (since the parent hasn't been added yet as it is still awaiting its type), or I would have to make a completely separate storage model and then make sure that the contents of that storage model are kept in sync with the ObjectTree. I really hope it's a bug in Kiwi, since then a) the easy way *is* the correct way, just that a bug is stopping it working, and b) I probably won't have to fix the bug :P
* The objects I've needed to make, besides the original PubSubClient, are Node and Server. Server only contains a string called name which stores the address (this is mainly so it can be added to the tree and for type comparison purposes at the moment). Node can contain name (its node), jid (its JabberID, or at least the JabberID of the server it lives on), server (the Server it is on), type, which is either "leaf" or "collection", and parent, which is a Server for top-level nodes or a Node if it is in a collection. Nodes also have some functions, but these functions must be passed a PubSubClient through which to send messages. Since the PubSubClient contains methods for everything defined in XEP-0060, all the Node's functions do is call the appropriate method of the PubSubClient handed to them.
This is a pretty poor level of Object Orientation, but it can be smartened up over time, and at least applications no longer need to deal with XML (which wasn't that different from the XML stanzas coming in from xmpppy in the first place!)
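In rough outline the objects look something like the following sketch (this is just the shape described above, not the library's exact code, and the method name is a placeholder):

class Server:
    def __init__(self, name):
        self.name = name            # the server's address, e.g. "pubsub.localhost"

class Node:
    def __init__(self, name, jid, server, type=None, parent=None):
        self.name = name            # the node itself
        self.jid = jid              # JabberID of the server it lives on
        self.server = server        # the Server it is on
        self.type = type            # "leaf" or "collection", once known
        self.parent = parent        # a Server for top-level nodes, else a Node

    def get_items(self, client, callback):
        # Everything is delegated to the PubSubClient passed in, which is
        # what actually implements the XEP-0060 operations.
        client.get_items(self.server.name, self.name, callback)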
** Due to having no domain name and having a router in between it and the Internets, my local Jabber server (ejabberd) can't talk to outside servers at the moment. I'm treating this as a blessing at the moment though, since it means my tests can only screw up my server (which can easily be reinstalled since it contains nothing of importance). I can log in to warbo@jabber.org, chriswarbo@gmail.com or pha06cw@sheffield.ac.uk if I want to access servers over the real Internets.
*** In PubSub there are 2 kinds of node. Leaf nodes can have things published to them (blogs, listened to music tracks, etc.) but they cannot contain any other nodes. Collection nodes are the opposite, they can contain other nodes (collection or leaf) but cannot have things published to them. Leaf nodes can be thought of as files whilst collection nodes can be thought of as folders, with the main difference being that leaf nodes can be in any number of collections at once.
UPDATE: PS: I may add this to Gitorious or something when it works, since there's no reason to deny people its use just because my library isn't finished yet. The browser can live with a copy of the library which works for the tasks required and thus doesn't need updates, whilst all of the breakage and rewriting and feature development can go on in the main version of the library.
PubSub Browser
Friday, 8 August 2008
LXDE is actually teh awesome
Looking to speed up your PC? Want to have as small a system as possible to keep on a USB drive? Looking to play around with your desktop? Give LXDE a try: it is the Lightweight X11 Desktop Environment.
It is built using GTK, the same toolkit used to build GNOME and XFCE, but it is a lot faster than those two. The developers were after the most lightweight desktop system that is still usable by today's standards.
The first thing to do was strip away all of the fancy-yet-unneeded cruft, leaving essential things like a toolkit, window manager, desktop with icons, file manager, panel with menus, terminal and some way of configuring it all. Everything else is either not needed, or can be dragged in from elsewhere.
Next was to see which existing pieces were already good. GTK was chosen as a decent toolkit since it includes accessibility features, right-to-left language support and the like, which are essential for many users. Window managers follow standards, and can thus be mixed and matched with each other, so any will suffice; since there are already a lot of efficiency-minded window managers out there, this problem went away by simply using them (Openbox by default, but any can be used). For a GTK-based file manager PCMan was chosen, as it is rather minimal and very fast (since it only scans a folder if something has changed, otherwise it uses what it remembers from the last time it was there).
They have written their own panel, appearance configuration tool, terminal and desktop, plus some niceties like a network monitor and CPU monitor, all with efficiency as the main goal.
This results in a VERY snappy and lightweight system. Combine this with lightweight applications like the Midori browser (although Dillo is even smaller, but not as nice), the Sylpheed email reader, etc. and you have a completely capable desktop, perfect for older machines, drives with restricted space, or just computers you actually want to USE rather than having to wait for everything.
PS: Remember when I wet myself over the Aurora GTK theme? Well it seems I've been in KDE4 land a bit too long since there are some awesome themes banging around.
PPS: OMFG! This guy is a God!
LXDE is actually teh awesome
Thursday, 7 August 2008
PubSub Ahoy!
I've just found out about x60br through an obscure link trek from a blog post in my feed reader. x60br is an XMPP PubSub library for Python, similar to the one I'm writing. This is teh awesome, since it means I can check what I'm doing against someone else, and possibly look into using their implementation in some areas.
The main difference between the two (apart from the API, since mine needs to actually work before I can give it a proper API :P ) is that they handle the asynchronous nature of XMPP differently.
In order to understand this you'll need to know a little about synchronous vs. asynchronous programming, which I've touched on before. The bits of programs which do the most work are functions. The purpose of a function is to define how to do something so that you can do it later on, for instance saying "this is how to get the items from a PubSub node" then later, when you want to get the items at a node, you can just say "get the items here" rather than having to recite every step again and again.
In the example I just gave it could be done in two ways, synchronously or asynchronously. If it were done synchronously your program would say something like:
def get_items(server, node):
    items = []
    ask_for_items(server, node)
    reply = None
    while reply is None:
        process_replies()
    for item in reply:
        items.append(item)
    return items

my_blog_posts = get_items("pubsub.jabber.org", "seriously-this-is-not-worth-reading")
The first part DEFINES how to get the items at a node (nothing is run, it is simply a definition of what getting items means). When the last line is reached the program will request the items from the server "pubsub.jabber.org", then will wait for replies until it gets one, and finally put every item it finds into my_blog_posts. Then the next line is run. This is a REALLY bad way of doing things in any kind of interactive, GUI-based program, since the whole program will appear to freeze whilst the reply is being waited for.
The way around this is to make the function asynchronous. This is more complicated than the above way of doing things, but it means that your program can get on with doing other stuff (like updating progress bars, for example) whilst the routers, ISPs and servers go about sending messages to and fro so you can get the items. The way I have implemented this in my library works a little like the following:
def get_items(server, node, completed_function):
    stanza_id = ask_for_items(server, node)
    assign_handler(stanza_id, completed_function)

def assign_blog_posts(items):
    my_blog_posts = items

get_items("pubsub.jabber.org", "seriously-this-is-not-worth-reading", assign_blog_posts)
This is in a simplified form (since namespaces aren't dealt with, some functions aren't defined and the reply handling mechanism isn't shown) but it gets the point across. What happens is that the request for items is still sent, but instead of waiting around for a reply before doing anything else, the program just gets on with things. When a reply IS received, the function assign_blog_posts is run, which does whatever needs to be done with the items. This is called a handler function (since it handles the replies).
x60br does things in a slightly different way. It uses the following technique:
def get_items(server, node):
    items = ask_for_items(server, node)
    return items

my_blog_posts = get_items("pubsub.jabber.org", "seriously-this-is-not-worth-reading")
This looks the same as the synchronous way, and it is, except that it uses some clever behind-the-scenes stuff in functions like ask_for_items which makes my_blog_posts get assigned straight away, letting the program carry on; but instead of assigning the actual items to my_blog_posts it essentially assigns a dummy. When the replies are received, any instances of this dummy are changed to the actual items. This is done using the famous Twisted framework for Python.
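For comparison, here's a minimal sketch of that IOU style using Twisted's Deferred (assuming Twisted is installed; the surrounding names are made up and this isn't x60br's actual code):

from itertools import count
from twisted.internet.defer import Deferred

pending = {}          # stanza id -> the Deferred (the "IOU") for its reply
ids = count()

def send_request(server, node):
    # Stand-in for actually sending an XMPP stanza; just hands back an id.
    return next(ids)

def get_items(server, node):
    d = Deferred()    # handed back immediately, before any reply exists
    pending[send_request(server, node)] = d
    return d

def on_reply(stanza_id, items):
    pending.pop(stanza_id).callback(items)   # pay off the IOU: callbacks fire now

def show(items):
    print("got %d items" % len(items))

d = get_items("pubsub.jabber.org", "seriously-this-is-not-worth-reading")
d.addCallback(show)
on_reply(0, ["<item/>", "<item/>"])          # pretend the server has replied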
These are two ways of approaching the same problem (one triggers a response, one hands around an IOU), so I'll have a go at using x60br (although Twisted really seems more web servery, rather than desktop applicationy). I'll use x60br's API when modelling my own library's though, at least as a guide.
I could really do with a replacement 'phone
Mine has pretty much packed in, due to a couple of failures.
Firstly, the battery capacity is now incredibly low, which is to be expected after around a year or more. I could get a replacement battery, like I did for my C35i, but on that phone it made no difference (maybe a software fault, assuming the replacement battery really was the same).
Secondly, the mechanical joystick at the top of the keypad is completely FUBAR, which makes attempting to use the phone a very infuriating experience. I knew this would happen eventually, since moving parts should be avoided whenever possible, and I even remarked as much when I noticed that Harriet's old phone (the model brought out after mine) had replaced the joystick with a set of buttons, obviously for reliability's sake.
Also, there are some general annoyances with the phone, like the bloody awful proprietary connectors on the bottom, meaning that I need an adaptor to use standard headphones and/or a microphone, while the power cable, when inserted, perpetually rides the infinitesimal border between connected and not. Plus the software is full of annoyances, like my inability to remove things I never use, and the incredibly tiny storage space allocated to saved SMS messages, which forces me to purge the phone of them about every three weeks while several megabytes of onboard storage sit unused.
Thus I want a new phone, and I want a phone that runs Free Software by default. The tricky thing is deciding which one. The options, as far as I can tell, are the following:
Something from Motorola (e.g. Razr V2). Advantages: Should be relatively cheap, since they are mass-market phones. Disadvantages: Locked down and pretty much unchangeable, which rather defeats the purpose of choosing a hackable phone.
Something running Android: Advantages: Should be widely supported and have a wealth of applications and developers. No specific OS, since the whole thing runs on Java, which could make alternatives to the underlying Linux a possibility. Disadvantages: Nothing is currently available, as far as I can tell. Java-only might drive me crazy after a while.
Nokia Internet Tablet: Advantages: Large touch-screen and full keyboard. HUGE resolution. Wealth of developers and applications. Familiar technology (Python, Qt, GTK, etc.) available. Disadvantages: As far as I can tell they can't connect to GSM/EDGE/HSDPA/3G/etc., which makes them pretty useless as a phone.
OpenMoko: Advantages: Completely hackable. Familiar technologies. Large resolution. Touchscreen. Disadvantages: Not completely ready software-wise (although it can be upgraded as time goes on, and calling works as far as I can tell). Possibly not as much developer and application support? Battery life might be an issue until the power management is implemented.
Something Symbian: Advantages: Phone-targeted, i.e. built for the job. Should be lots of application and developer support. Disadvantages: Not all Free Software yet; Nokia plans on completing the liberation by around April, I think.
MobLin, LinMob, etc.: Disadvantages: I can't actually see anything I can buy.
So, from those options I think I'm going to go with an OpenMoko Freerunner. I'll probably use part of my loan when it comes through, so I'm looking at getting one next month if possible.
Any thoughts from anyone at all who might possibly be reading this, all of whom I could probably count on one hand?