Which is worse: disgraced Navy SEAL Shane Wolf (Vin Diesel) is handed a new assignment, protecting the five Plummer kids from enemies of their recently deceased father; or Tommy Lee Jones stars as a Texas Ranger who must protect five cheerleaders who have witnessed a murder? I think The Pacifier is victorious on the grounds of worst title.
Your Linguistic Profile:
45% General American English
25% Dixie
20% Yankee
5% Upper Midwestern
0% Midwestern
I think some people were confused about what I meant by "structural" changes. I'm talking about things on the scale of whether apps involve windows, dialogs, and menus (and thus whether there's a window manager, and which widgets are found in the GUI toolkit). Another example might be removing files from the user model (including the file manager and having open/save in applications). These are just examples; they aren't intended to be good ideas. Some of the stuff here I would count as GNOME 3 material, but a lot of it is feasible in GNOME 2 (and some of it isn't even that interesting, it's just code cleanup).
Which structural changes are potentially interesting enough to justify GNOME 3? I don't know. Maybe there aren't any. This is something people would have to experiment with.
My point is as much to discourage a flawed GNOME 3 as it is to encourage an interesting one. I think some people are wanting total freedom from backward compatibility, dogfooding, ABI stability, freezes, etc. without adequate cause for said freedom. Just about anything you can imagine as a tweak or evolution of the current desktop is going to be feasible in GNOME 2.x. GNOME 3 should have strong cause in real user-centered innovation; it should not be "we haven't broken ABI in a while, let's go for it! woohoo!" because that approach is driven by technical wanking, not by the goal of making a better desktop.
Unless someone starts some experiments and scopes out some interesting new directions, I think we'll be in the 2.x series forever.
BTW, if we did do a GNOME 3 in the sense that I've described it, I bet the 2.x branch would continue indefinitely alongside; 2.x would be much more suitable for technical workstation kinds of applications, for example.
Keep in mind, there are some pretty interesting things to do on 2.x, especially if you think about writing apps rather than modifying the core platform.
My take on the GNOME 3 discussion is that we have multiple goals at the moment which are truly in conflict. On the one hand, we have the fairly traditional desktop of today, with a growing userbase, and it's important not to break it. Most of the employed-to-work-on-GNOME developers are focused on this. On the other hand, innovation is necessary to move forward, and there's a lot we could do. But we don't want to drag today's users through broken intermediate states and make them wait 3 years for bugfixes to come out.
Look at how much screaming we get for relatively tiny changes to today's desktop such as the file selector, focus stealing prevention, or whatever. Even with these small things, until the details are ironed out (adding typeahead to the file selector, adding flashing taskbar buttons to the focus stealing prevention) it's pretty disruptive.
There's also a reality that the GNOME 2 desktop, as shipped by almost everyone, is the GNOME environment plus Firefox plus OpenOffice.org plus Evolution or Thunderbird; which means that to take a particular UI direction or add a platform feature, you have to convince all of those projects to change... and they are incentivized to keep the platform as similar to Windows in basic structure as possible. The value of platform features is to create commonality between apps, so if apps don't use the platform, platform features are useless.
So the forces of existing userbase, the easiest-to-reach future userbase, cross-platform applications, and funded development efforts are strongly pulling GNOME 2 toward conservatism.
I think GNOME 3 should be a fork for that reason. There's another reason it should be a fork; I've been using a petrified wood analogy. The process for forming petrified wood is that an organic structure is replaced by minerals. The result is a rock, with the same shape and structure as a piece of wood.
If you think about the transition from GNOME 1 to GNOME 2, as with petrification we took the structure of GNOME 1 (session manager plus window manager plus file manager plus control center plus panel) and recoded it to be a different material, but the same shape. GNOME 2 is in an important sense "the same thing" as GNOME 1, though it's certainly much more robust and polished.
This structural similarity is fractal in nature, so at the whole-network level we have the structure of thick clients plus mail/calendar server plus directory server, for example, while inside the toolkit we have the basic premises shared by GTK+, Qt, and Swing.
Some people seem to think of GNOME 3 as "when we get to break ABI" - I consider this a dumb goal, and I'm not sure breaking ABI even gets you anywhere. GNOME 3 may be more about creating different ABIs. My view is that GNOME 3 is interesting vs. GNOME 2 if and only if we are changing the structure, i.e. what the desktop really consists of and how it works on a macro level; GNOME 3 can't possibly be worth it if we're just substituting minerals for lignin and cellulose but getting the same structure in the end. It's worth it if we're building a different kind of tree.
Something that makes GNOME 3 harder: you can't change this macro level in interesting ways without breaking all the apps. (I state this with confidence because apps are the only interesting purpose of a desktop, ergo failing to change the apps means you've failed to change anything interesting.) Which doesn't mean you can't run the old apps, but it means that today's apps would be sort of freakishly out-of-place on a GNOME 3 desktop that had changed shape. In other words old apps would run in a "compatibility mode" and would be visibly distinct from new apps.
I don't know if we should do GNOME 3 today, or who should do it; but I would caution that the purpose should be real value: structural change, rather than rewriting purely to get new construction materials. And I think this makes a temporary (but probably multi-year) fork almost mandatory. Trying to do GNOME 3 "in place" will destabilize GNOME 2 while preventing the structural changes that would have value.
Adding to my previous post a bit, regarding the 6-month cycle: I don't think going from 6 to 9 months is at all useful for enabling GNOME 3. GNOME 3 is a bigger effort than that. We should pick the release cycle to benefit GNOME 2, not GNOME 3. And I think GNOME 2 is running pretty smoothly (well, there are some issues but I would not blame the release dates). Larger changes can be done in 12 months rather than 6 by just skipping a cycle for a particular component. We've done this in a number of cases. 9 months is long enough to start causing the problems we were trying to avoid when we created the time-based release approach. Even if we don't think enough cool stuff happens in 6 months, I'd argue that changing to 7 is better than changing to 9.
Ah geez, again I foolishly fail to remember that phrasing things a certain way results in Slashdot articles which inevitably have misleading headlines and summaries. For the record, my point is not that we should do a GNOME 3 (especially right now), and it definitely isn't that I personally intend to do a GNOME 3. It's that if someone did a GNOME 3, the right way to do it is to create a fairly long-lived branch (aka fork) of the project while continuing the GNOME 2.x series on a 6-month cycle in the meantime. I'm responding to other people's blogs here, rather than proposing something.
And for the record I don't think conservatism in GNOME 2 is bad, it's just different. The important point is to recognize that you can't do two things in one branch. Doing it all in one place results in both breaking the crap out of current users, and failing to innovate or do interesting things. So you split them apart. This is also lower-risk; if the innovation fails, then you just drop the branch.
Aaron, there is no standards committee or standards body at freedesktop.org. freedesktop.org right now is the equivalent of Sourceforge, essentially, except that projects have to be desktop-related.
I fully agree with outreach and that is what the successful projects have done. e.g. for fontconfig, Keith Packard went and talked to GTK+, Qt, Mozilla and even provided patches to all three of those IIRC. Writing the patches taught him what would be workable.
If the D-Conf developers don't do outreach then they are taking the risk that their project won't be adopted. There's no risk to us (us being GNOME and KDE). Waldo and I are going out of our way a bit to give them feedback whether they ask or not, but they still have to get things approved by GNOME and KDE as a whole, and if their ambitions are beyond GNOME and KDE, they have to get Mozilla and OO.org and Samba and whoever else they care about as well.
Precisely because freedesktop.org is not a standards committee, though, there's no freedesktop.org policy to outreach or not outreach. And nobody to say which config system to use. If someone else has E-Conf and F-Conf and Q-Conf then they can all post to the list, and they can all have some CVS space (once they have some actual code and look plausible), and they can fight it out until one of them successfully gets adopted by the major projects. Or maybe the major projects will choose something not hosted on freedesktop.org at all, or stick with their current stuff, and that's fine too.
You may ask "what's this about a freedesktop.org platform"; here is my original suggestion on that and I think this is my latest comment. Let me quote from that last one:
The "stack" concept is useful but NOT for technology pimping. A goal of the stack should be to show which technology is *well-cooked* - and part of the definition of "well-cooked" is "widely de facto shown to be useful." So it doesn't make sense to use the stack to "push" or promote technologies, since "already accepted" should be a gating principle.
While some people may have other ideas, I will again reiterate that KDE and GNOME can veto anything by simply not going along with it. That's the core reason I don't understand any paranoia here.
In an earlier post I mentioned the book On Intelligence; its author, Jeff Hawkins, has now started a company called Numenta.
This is what I wanted to do when I was in school - take a reverse-engineering pragmatic approach to figuring out how to copy human intelligence. I couldn't figure out how to do it in an academic context though. Most people seemed to be doing research where they presupposed questionable-sounding theories (the cognitive science "mind as a bunch of processing modules" or AI "mind as Prolog"/"mind as logic engine" nonsense). After presupposing everything interesting they would design experiments to test little nitpicky aspects of it. The day-to-day life of a grad student looked sort of horrible to be honest.
So I figured software development is a lot more fun day-to-day, and even if working on intelligence is interesting, it'd be preferable to come back to it in some other way later in life.
I'm pretty interested in this Numenta thing then. My guess is that we've decided AI is more star-trek-impossible than it is because of all the people who've approached it in the wrong way. If Numenta gets even a basic, highly limited version of their concept working, a whole class of applications that were computationally intractable will become feasible.
The reason I think Hawkins is more likely to be right than some of the past train-wreck AI attempts is that his theories have a hope of explaining actual conversations, creativity, culture, etc. I studied anthropology/linguistics/psychology and it always struck me that the psychology theories of mind had no chance of explaining the world as documented by anthropology.
My favorite reading in school was a monster called The improvisational performance of "culture" in realtime discursive practice, notable for its awe-inspiringly complex and pretentious writing style (starting with the title). Once parsed it turns out to contain a theory of why and how people talk. It spells out the theory by explaining a single conversation and why that conversation happened as it did.
If you take this one conversation as an example, the general direction of Hawkins's AI thinking has a hope in hell of replicating the conversation, and the general direction of a lot of other AI thinking has no hope.
The other thing I like about Hawkins is that he doesn't seem to be religious. A lot of people studying intelligence seem to have gone into it with an emotional investment in the value of math/logic/computers.
Of course, I never knew that much about all this, and anything I did know is now almost a decade out of date... plus I've forgotten most of it. So don't invest in Numenta on my advice. But I'm hoping it will turn out to be interesting.
Aaron, I don't really get your post on some level. I feel I'm missing something. The whole argument I've been repeatedly making on xdg list is that GNOME will not adopt a "D-Conf" unless it's actually better than GConf, and of course Waldo says the same with respect to KConfig.
For the record, I don't think the current round of D-Conf discussion will go anywhere, because a number of the people involved are known to be more about sending mail than writing software, and some other people involved seem very reluctant to listen to the GConf/KConfig experiences enough to genuinely improve on the existing systems. This means that the remaining people who might do something useful are probably confused or drowned out.
If people go off and write something called D-Conf that doesn't take into account the KConfig and GConf lessons, then we won't use it. Pretty simple. If they want to try to get it right though, then let them. What's to be afraid of?
For my part, I'm just documenting here the "lessons of GConf" in the hope that sooner or later someone will use this information wisely.
BTW, Aaron you say:
there are two good kinds of standards: ones that document successes that already exist and were, really, already "standard" before being canonized as such, and ones that are such great technologies that people run to adopt them.
I agree wholeheartedly and that is how freedesktop.org should work (and how it has worked in successful cases such as fontconfig and Cairo and EWMH). Anybody can host their experimental project on freedesktop.org, and discuss their own crazy ideas on the mailing lists, but there is no standards committee. The purpose of freedesktop.org is to allow people to see what's already de facto, and give people a place to try to develop great technologies with input from the desktop projects. Some (most?) of the attempts will fail and that's fine.
At least, that's my view of things. Other people are free to think whatever they want. I'm not paranoid about it because the major desktop projects pretty obviously have a veto on any proposed "common" system or standard in any area, so what people think doesn't really matter, only what the core desktop developers think will make a difference.
I guess the main conclusion is, don't confuse "people who post to a mailing list" with "the actual developers" - surely anyone who's worked on an open source project understands this...
I've been trying to get "D-Conf" discussions on the rails; some people don't seem to understand that this is about creating a certain user experience for admins, programmers, and end users. The requirements flow from that, not from a bunch of technology modules or libraries that are available.
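To make the programmer part of that experience concrete, here's roughly what it looks like with GConf today: read a value, and get notified when anyone (your app, another instance, an admin tool) changes it behind your back. This is just an illustrative sketch; the app name and key are made up:

```c
#include <gconf/gconf-client.h>

/* Called whenever /apps/myapp/greeting changes, no matter who
 * changed it (this app, another instance, an admin tool). */
static void
greeting_changed (GConfClient *client, guint cnxn_id,
                  GConfEntry *entry, gpointer user_data)
{
  GConfValue *value = gconf_entry_get_value (entry);
  if (value != NULL && value->type == GCONF_VALUE_STRING)
    g_print ("greeting is now: %s\n", gconf_value_get_string (value));
}

int
main (int argc, char **argv)
{
  GConfClient *client;
  GMainLoop *loop;
  gchar *greeting;

  g_type_init ();

  client = gconf_client_get_default ();

  /* Tell the daemon we're interested in this directory,
   * then watch one key inside it. */
  gconf_client_add_dir (client, "/apps/myapp",
                        GCONF_CLIENT_PRELOAD_NONE, NULL);
  gconf_client_notify_add (client, "/apps/myapp/greeting",
                           greeting_changed, NULL, NULL, NULL);

  greeting = gconf_client_get_string (client,
                                      "/apps/myapp/greeting", NULL);
  g_print ("initial greeting: %s\n", greeting ? greeting : "(unset)");
  g_free (greeting);

  /* Notifications arrive via the main loop. */
  loop = g_main_loop_new (NULL, FALSE);
  g_main_loop_run (loop);

  return 0;
}
```

Whatever a D-Conf looks like internally, it has to at least match this from the app programmer's chair, and do the equivalent for the admin and end user sides.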
I was recently reading a report on VoIP technology consisting of a discussion between various industry participants. Most of them kept trying to predict and talk about the future by extrapolating trends, guessing at upcoming legal changes, finding out about business deals in the works, and combining technical buzzwords; in that mode they would tend to think "Skype got lots of users because it was P2P" (which reminded me of this "architecture astronauts" article).
One of the participants in the conversation had it right, though. He repeatedly argued that Skype had its userbase because it let you make free calls, it worked most of the time, and the calls had adequate quality. In other words it was a useful product that solved people's problems, and shockingly, that resulted in people using it.
Check out this thing (disclaimer: I've never tried it, maybe it rocks, but I'm going to criticize their web site anyway). Their marketing slogan in the page banner is "Blending: VoIP, IM, Presence, and Social Networking"; since this web page has a "Geek Zone" link, presumably the main page is supposed to be the non-geek end user zone. How isolated from reality do you have to be to think "Blending: VoIP, IM, Presence, and Social Networking" will make someone buy a product? Exercise: call someone outside the tech industry and ask them if they would like to blend VoIP, IM, Presence, and Social Networking today, and how much they would be willing to pay to do so.
My guess is that someone got too caught up in BS about "convergence." Maybe "convergence" is such a popular idea because anyone can become a futurist this way without thinking too much. You just take a couple of technologies and WHAT IF WE COMBINED THEM???? IT WOULD BE AWESOME!!!!
To be fair, if that web site is really marketing to VCs or other tech industry people, maybe their slogan is a good one.
And I don't mean to say that combining technologies always sucks. Lots of people use IM and phone calls in conjunction, for example, and surely you could create IM/phone integration that was useful. But just sticking both technologies in one product doesn't have any magic benefit. You have to get down to the details of exactly how they really work together to create an improved user experience. What does the software do? It doesn't matter which market it's in or which technologies it uses.
Take down the photo, it burns...
Ross, maybe just fixing this bug would be better than a tray icon; the main intended application of the "urgency hint" is exactly the "you have new IM" type of thing.
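With that in place the application side boils down to one call; here's a sketch assuming GTK+ 2.8's gtk_window_set_urgency_hint() entry point (the callback names are made up):

```c
#include <gtk/gtk.h>

/* Hypothetical "new IM arrived" handler: set the urgency hint and
 * let the window manager decide how to present it (flashing
 * taskbar button, etc.). */
static void
on_new_message (GtkWindow *chat_window)
{
  gtk_window_set_urgency_hint (chat_window, TRUE);
}

/* Clear the hint once the user actually looks at the window;
 * connect with:
 *   g_signal_connect (window, "focus-in-event",
 *                     G_CALLBACK (on_focus_in), NULL);
 */
static gboolean
on_focus_in (GtkWidget *widget, GdkEventFocus *event, gpointer data)
{
  gtk_window_set_urgency_hint (GTK_WINDOW (widget), FALSE);
  return FALSE; /* let the event propagate normally */
}
```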
Plus you get to use my horrible "make a GtkWidget flash in a theme-friendly way" hack and terrify your neighbors ;-)
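If you're curious, the gist of that hack, simplified and from memory rather than copied from the real code: alternate the widget between colors the theme already defines, so nothing is hardcoded and the flash stays legible under any theme.

```c
#include <gtk/gtk.h>

/* Flash a widget by alternating between the theme's normal colors
 * and its selected-state colors. Simplified sketch: the static
 * flag means this handles one widget only; real code would carry
 * per-widget state. */
static gboolean
flash_tick (gpointer data)
{
  GtkWidget *widget = GTK_WIDGET (data);
  static gboolean flashed = FALSE;

  if (flashed)
    {
      /* Passing NULL undoes the modification, restoring
       * whatever the theme says. */
      gtk_widget_modify_bg (widget, GTK_STATE_NORMAL, NULL);
      gtk_widget_modify_fg (widget, GTK_STATE_NORMAL, NULL);
    }
  else
    {
      GtkStyle *style = gtk_widget_get_style (widget);
      gtk_widget_modify_bg (widget, GTK_STATE_NORMAL,
                            &style->bg[GTK_STATE_SELECTED]);
      gtk_widget_modify_fg (widget, GTK_STATE_NORMAL,
                            &style->fg[GTK_STATE_SELECTED]);
    }

  flashed = !flashed;
  return TRUE; /* keep flashing until the timeout is removed */
}

/* Start it with e.g.: g_timeout_add (500, flash_tick, widget); */
```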
People are always using my name as a 31337 nick; most recently there's someone posting comments on LWN with the login "havoc" (I am "hp"). Unfortunately these are often not comments I agree with, and people attribute them to me. Please don't attribute them to me. For all I know the same thing happens on Slashdot or other sites where I don't usually read the comments.
If you have a name like "John" or "Bob" then presumably people assume there are lots of people named that, but for "Havoc" many assume I'm the only one. Other than the X-Men character and the GI Joe vehicle of course.
Notably, both of those are really lame. Havok is one of the lamest X-Men, and as that URL says about the vehicle, "Most don't like the Havoc because it's silly. I agree, it is."