Double-Plus Good
I’ve been following the JDNC project recently and along the way mentioned that it might be nice to combine the authorization dialogs for WebStart applications that have multiple jars signed by different certs into a single dialog, to make it more user friendly. Mark Davidson, who seems to work at Sun (I assume on the Swing team), promptly responded that he’d spoken to a WebStart engineer about it, logged an RFE for me, and said the WebStart guy was going to talk to the security guys about it during the week. That’s of course no guarantee that it will actually be implemented anytime soon (or at all), but it’s nice to at least know that the message got through. Particularly impressive is the fact that I wasn’t even trying to get a message through, just spouting ideas off the top of my head. It’s generally been very good to see the Sun engineers working to involve the community on JDNC. I had initially thought that JDNC would be a “token” open source project where Sun employees still control everything but make the source code available. I’m very glad to say that I was wrong about that: there is definitely a strong effort being made to get people involved, and already a number of knowledgeable folks from outside are getting involved and providing some very good advice.
Linux’s Curse (Again)
The story so far:
- Preston Gralla commented
- I commented
- Brian McCallister commented
- I commented again
- Brian McCallister commented again

At least I think that’s how it went. Firstly, Brian was right to call me on my use of Cygwin to bring UNIX capabilities to Windows. It’s not in the default install, it’s not at all obvious, and 99% of Windows users will never even hear about it. As Brian says, “if you don’t use it, you don’t learn it”. So if we concede that the command line is a killer attraction, then Linux has its big advantage over Windows. That’s where I think Brian and I may disagree. First let me start by saying that the command line is great: it’s an incredibly powerful tool with a lot of really great advantages. It does, however, require a lot of learning, and it’s not one-off learning either. The command line requires you to constantly learn – every new task requires research to find out which command does what you want. Then you have to constantly remember all these different commands so that you can use them when you need them. Nothing is obvious, nothing is intuitive. Everything is powerful.
This is a paradigm thing – the drive to ubiquitize computers required them to have an interface comparable to that of a toaster. Now that they are ubiquitous, let’s bring back the idea of a powerful interface. Please. I agree that we need to make user interfaces more powerful and let people do more with their computers, but don’t throw the baby out with the bath water. People are no more capable of learning interfaces today than they were 15 years ago. The ubiquity of the GUI does not make it easier for people to learn the command line; in fact it makes it harder, because of the unlearning required. Having a command line available with powerful tools is great for advanced users who want to get into that, but it’s still not an option for the vast majority of computer users, because they will never get enough benefit out of it to justify the learning cost. Furthermore, the learning cost will be excessively high for casual users, because they will continually forget which commands are available to them. So how do we reconcile the two goals of having a simple interface and providing full power to advanced users? Most people will suggest creating two interfaces, one for novices and one for advanced users (the novice interface is usually called a “wizard”). This is outright bad user interface design. Jef Raskin provides the best argument I’ve seen on why in The Humane Interface, but sadly I don’t have a copy at hand to give an exact reference. Essentially, the argument is that instead of users having to learn one interface, they must now learn two to be able to use the software. When they first start using the program they are a novice and learn to use the wizard interface. Then they become more familiar with the program, consider themselves advanced, and switch to the advanced interface. Unfortunately, once they change interfaces they are no longer advanced users – they are completely new to the interface and are in fact beginners.
All the learning they did with the beginner interface is worthless and they have to start from scratch with the advanced interface. Worse still, the advanced interface will almost certainly have been designed on the assumption that “these are advanced users, they’ll work it out”, and is thus much more difficult to learn than it should be. The other big problem with having two interface modes is the amount of extra developer time required to build them; that time could have been better spent making the advanced interface easier to learn. How does this relate to the GUI vs. command-line debate? Firstly, it shows a weakness in interfaces like Linux, where you can do a lot with the GUI but quite often have to switch to the command line, as well as a weakness in Windows, where you can do a lot with the command line but often have to switch to the GUI. It’s also a weakness of OS X both ways (some things are GUI only, some things are command line only). More importantly, it explains why we can’t expect people to learn a command-line interface now any more than we could when computers first got started. So how do we make things more powerful while keeping the baby firmly in the bathtub? The first thing I’d point to is AppleScript, which is an awesomely cool way to bring some of the power of the command line to the GUI. The ability to pipe one program into another is realized through AppleScript and in fact extended well beyond what command-line pipes can do. AppleScript is shell scripting for the GUI. AppleScript is difficult to learn and the language is awful, but these are implementation details – the idea itself is still sound. The biggest problem with the AppleScript concept, though, is that you effectively always have to write a shell script, which involves firing up the script editor. Too slow. What if we mixed the concepts of the GUI and the command line together?
Most of the time you’re in the GUI just like normal, because it’s easy to use and, for the most common computing tasks, it’s the most efficient way to do things (how many sighted people surf the web exclusively from lynx?). When you need the power of the command line, though, you hit a key combination and a command line pops up that lets you write AppleScript snippets (though in a more intuitive language than AppleScript). Oddly enough, HyperCard contains pretty much this exact interface. If you hit Apple-M in a HyperCard stack, the message box pops up and you can enter any command you like to control that stack, another stack, or even execute an AppleScript. One key thing here, though: it’s not a terminal window that pops up, it’s a floating window that by default operates on the current application. So if I’m in Microsoft Word typing away and I think to myself, “I need to insert all the jpg images of my charts into this appendix”, today I would have to click “Insert->Image…->From File…->chart.jpg” however many times, but with the built-in command prompt I’d just hit the magical key combination to bring it up and then type “insert /Users/aj/Documents/charts/*.jpg” and let Word do the rest. Note that insert would be an AppleScript command defined by Word, and tab completion is a necessity. Similarly, if I wanted to attach a zip archive of a particular folder to an email, I’d bring up the command prompt with a keystroke and enter something like “attach `zip /Users/aj/Documents/emailDocs`” or better “zip /Users/aj/Documents/emailDocs and attach it”, which is much more HyperCard-like. That scheme combines the power of the command line with the simplicity of the GUI. Coming back to Brian’s comments though:
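Purely as a sketch of the palette idea described above – none of this is a real Word or HyperCard API, and the `CommandPalette` class and `insert` action are invented names – the core of such a prompt is just “parse one line, expand wildcards like a shell would, dispatch to whatever actions the current application registered”:

```python
import glob
import shlex

class CommandPalette:
    """A hypothetical pop-up command prompt for a GUI application."""

    def __init__(self):
        self.actions = {}  # verb -> callable taking a list of file paths

    def register(self, verb, handler):
        # An application (e.g. a word processor) registers its verbs here.
        self.actions[verb] = handler

    def run(self, command_line):
        # Split the line the way a shell would ("insert /path/*.jpg").
        verb, *args = shlex.split(command_line)
        # Expand wildcards so "*.jpg" becomes a sorted list of real files;
        # arguments that match nothing are passed through literally.
        paths = []
        for arg in args:
            matches = glob.glob(arg)
            paths.extend(sorted(matches) if matches else [arg])
        return self.actions[verb](paths)
```

A word processor would then register an `insert` handler that drops each file into the document, and the user types one line instead of clicking through the insert dialog once per image.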
Re: Linux’s Curse
Brian McCallister comments on my earlier comments on Preston Gralla’s comments on Linux on the desktop. By and large I agree with Brian: the UNIX command line is a sensationally powerful thing which provides awesome flexibility and power for those who wish to learn it. The downside is that it’s awful trying to learn it. I spend a lot of time at a bash command line and I still couldn’t tell you off the top of my head what Brian’s examples do. They’re simple and straightforward to him because he uses those tools every day; I don’t, so they’re very foreign and require learning (I use different command-line programs). GUIs have a major learnability advantage because the options are (or should be) visible to the user. More importantly, in the context of Linux on the desktop the power of the command line disappears to a very large degree. OS X is a very good example of this, because it is UNIX on the desktop and you find that most people don’t use the command line much, if at all. Mostly that’s because they can’t be bothered learning it and because they typically don’t have a need for the power it provides. The key point, though – and it was the main point of my arguments – is summed up well by Brian’s comment:
Linux’s Curse
Preston Gralla’s comments on how Linux didn’t impress him too much really got me thinking. Preston didn’t bash Linux or try to argue that Linux was inferior to Windows – he just pointed out that he can already do everything he wants to on Windows and doesn’t have any problems with it, so why change? That’s Linux’s big problem. Its biggest feature has always been stability and security. In other words, its biggest feature is that it doesn’t have Windows’ bugs. There’s a curse in depending on being better than your competitors by having fewer bugs, though – eventually your competitor fixes their bugs. Let’s assume for a moment that Linux is perfect software: it has no bugs, no security flaws, never has and never will. This is clearly not the case, but let’s work with the best-case scenario for Linux. Windows starts out as barely usable because of all the bugs in it. There’s barely a single piece of functionality that isn’t affected by bugs, and users constantly have to keep in mind how to work around them while they use the system. Again, this has never been the case, but it is the best-case scenario for anyone competing with Windows. Now, what happens when the next version of Linux comes out? It has no bugs. Great! What happens when the next release of Windows comes out? It has fewer bugs. Most applications become more stable as they mature, even if they add more features at the same time, because each release typically adds a few features (which are probably buggy) and fixes a bunch of bugs in existing features. Windows certainly has become more stable over time, though there was that ME release which might have been a step backwards… So here’s the curse of Linux: even in the best of worlds, Linux’s biggest feature is eroding out from underneath it and will continue to do so until it is negligible. You simply can’t survive forever on a product that is better only because it has fewer bugs – the competition will always catch up.
You have to add features to differentiate yourself, and they have to be innovative – really, truly, oh-my-gosh-that’s-awesome, I-never-would-have-thought-of-that innovative. Linux doesn’t have that kind of innovation from anything I’ve seen. Linux was created as a clone of UNIX, and a Linux command line still looks and acts pretty much like every other UNIX out there. It has the same basic set of commands, similar programming APIs, and so on. Fortunately, desktop users don’t care about any of that. Unfortunately, Linux’s desktop environment doesn’t show any real innovation either. If I were to describe my Windows desktop when it first boots up I’d say something like:
The Rumours Of XML’s Death Have Been Greatly Exaggerated
Mark Pilgrim posts an interesting article entitled XML on the Web Has Failed, and he’s right to some degree. Character sets remain a huge mess on the internet, but I think he’s pinning the failure on the wrong technology. It’s not XML that’s failed, but RFC 3023, which specifies a set of rules for determining the XML charset when XML is combined with HTTP. The reason RFC 3023 fails is that no one likes the way it works and it’s just not implemented anywhere. The one part of the specification that causes problems is what to do when XML is transferred over HTTP with a Content-Type of text/xml and no charset parameter. The reason that rule is so screwed up (it says to ignore the charset declared in the XML file) is that a bunch of proxies translate the character encoding without knowing anything about the content being transferred (except that it’s text/something). So what’s the solution? Put some common sense back into the mix: if there’s no charset in the HTTP headers but a charset is declared in the XML file, use the charset from the XML file, then fix the proxies that are destroying content – they’re probably destroying a lot of HTML files as well, since they wouldn’t pay attention to the content type specified in a meta tag. Claiming that XML has failed is throwing the baby out with the bath water; the problem is just that there are some stupid proxies doing things that, while currently allowed, are pretty obviously going to destroy content at least some of the time. So I propose a very simple solution to this problem. Add one new rule to the HTTP spec:
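To make the common-sense rule concrete, here’s a rough sketch of the charset-selection logic it implies – my own code, not anything from RFC 3023 or Mark’s article, and it deliberately drops RFC 3023’s “ignore the XML declaration for text/xml” behaviour:

```python
import re

# Matches an encoding pseudo-attribute in an XML declaration, e.g.
# <?xml version="1.0" encoding="ISO-8859-1"?>
XML_DECL_ENCODING = re.compile(rb'^<\?xml[^>]*encoding=["\']([A-Za-z0-9._-]+)["\']')

def pick_charset(content_type, body):
    """Choose a charset for an XML document fetched over HTTP.

    content_type: the HTTP Content-Type header value (a str).
    body: the raw response bytes.
    """
    # 1. An explicit charset parameter on the HTTP header always wins.
    match = re.search(r'charset=([^;\s]+)', content_type, re.I)
    if match:
        return match.group(1).strip('"').lower()
    # 2. Otherwise, believe the encoding declared in the XML file itself
    #    (this is the proposed rule; RFC 3023 says to ignore it for text/xml).
    match = XML_DECL_ENCODING.match(body)
    if match:
        return match.group(1).decode('ascii').lower()
    # 3. No declaration anywhere: fall back to the XML default, UTF-8.
    return 'utf-8'
```

Any proxy that transcodes the body without updating (or adding) the charset parameter breaks step 2, which is why the proxies have to be fixed as well.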
Haiku
During the week I wrote some documentation to help people write XSLTs that work really well with EditLive! for XML. Generally any given XSLT will work, but there are some techniques you can use to make it work better with the augmentation process we use to add editable fields into your XSLT’s output. Our official documentation writer took exception to one sentence about the best place to put “action buttons” in an XSLT. Action buttons are clickable things, a cross between a hyperlink and a button, which perform operations like adding another element or attribute or moving things up and down in the document. The sentence was apparently too confusing but contained a subtle but important point which couldn’t just be removed. I still think the clearest way to phrase the sentence was as a haiku: “most cases intuition used button works”. Sadly, they won’t let me put that in the docs. So now the challenge goes out – what is the subtle but important point that the haiku so eloquently reveals? Maybe if enough people get it I’ll be allowed to put it in….
US Arrest Rates
Justin Erenkrantz comments on his day at the ball game, and it reminded me of just how arrest-happy the US police are. Police seem to be managed at a local level in the US, so my experience with the San Francisco area police may not apply US-wide. I’ve never seen so many people getting arrested in such a short time. For that matter, I don’t think I’d ever seen someone get arrested in real life before my trip to the US. While I was over there I was seeing about two people a day getting handcuffed and carted away. Maybe I was just spending too much time with the wrong crowd.
New Modem
Well, after being so excited about getting my ADSL connection, my ADSL modem decided to crap out. Every so often it would just lock up hard – all the lights off except the power light – and stay like that even when power cycled. So yesterday I ordered a Billion 7100S, which arrived today. It seems to be working, after a slight bit of worry that it wasn’t getting an IP (it turned out I’d turned the DHCP client off instead of the DHCP server). So now I’m happily *and* reliably connected to the interweb thingy again. Yay for that.
How To Get An Orinoco Wireless Card Working under OS X
Wireless Driver will definitely work on OS X 10.3 (Panther) and also claims to work on 10.2 (Jaguar) and 10.1, so it should cover all your needs. Simply run the installer, then either kextload the extension or just reboot, specify the network to connect to in the new “Wireless Config” control panel, and configure the new “Ethernet Interface” that will appear in the Network control panel.
ADSL Has Arrived
We finally have ADSL at our new place. The world has resumed revolving at normal speed.
The World Just Ended
When did this happen? Google doing image ads? This doesn’t bode well for the future of mankind. Fortunately:
“We currently have no plans to show image ads on Google.com.” But that “currently” still concerns me.