Gave a short talk about the desktop at the Red Hat World Tour event near MIT yesterday; lots of people showed up for the talks, including Richard Stallman.
Since people are asking about the AFL again, here is the story. The idea of the license is to be an X/BSD-type license written in proper best-practice legalese (see below). There's a patent clause right now that was well-intentioned but kind of broken; a revision to address the patent concern is supposed to be in the works, and we're waiting for the new version to relicense D-BUS under it.
Here is some explanatory text that once came with the license; it's also in dbus/COPYING:
The following is intended to describe the essential differences between the Academic Free License (AFL) version 1.0 and other open source licenses:

The Academic Free License is similar to the BSD, MIT, UoI/NCSA and Apache licenses in many respects but it is intended to solve a few problems with those licenses.

* The AFL is written so as to make it clear what software is being licensed (by the inclusion of a statement following the copyright notice in the software). This way, the license functions better than a template license. The BSD, MIT and UoI/NCSA licenses apply to unidentified software.

* The AFL contains a complete copyright grant to the software. The BSD and Apache licenses are vague and incomplete in that respect.

* The AFL contains a complete patent grant to the software. The BSD, MIT, UoI/NCSA and Apache licenses rely on an implied patent license and contain no explicit patent grant.

* The AFL makes it clear that no trademark rights are granted to the licensor's trademarks. The Apache license contains such a provision, but the BSD, MIT and UoI/NCSA licenses do not.

* The AFL includes the warranty by the licensor that it either owns the copyright or that it is distributing the software under a license. None of the other licenses contain that warranty. All other warranties are disclaimed, as is the case for the other licenses.

* The AFL is itself copyrighted (with the right granted to copy and distribute without modification). This ensures that the owner of the copyright to the license will control changes. The Apache license contains a copyright notice, but the BSD, MIT and UoI/NCSA licenses do not.
As for the (l)ongoing language debate, I think we need to allow people to use Java, C#, or any other language with a big enough community around it to ensure that bindings are kept up to date and follow the development of the rest of the desktop.
I agree with that, but you're answering a different question: "what languages can you write third-party apps in?" My question is: what technologies and languages do we use to implement the desktop core itself? We have to make a decision here, even if the decision is only "stick with C and C++."
Havoc, you are skipping over the fact that a viable compromise for the community is not a viable compromise for some products, which is why you see some companies picking a particular technology, as I described at length below.
To be clear, I don't mind if companies unilaterally choose a technology to use in their own company-specific products - that is expected. The question I'm asking is about what we do with the open source desktop stack. What can we use today in GNOME, Evolution, Mozilla, OpenOffice.org, and so forth?
You guys have Mono on hold in Evolution to avoid fragmenting this core desktop stack, which is absolutely the right thing to do. What I don't want to see though is language/component-related progress on hold forever, if there's something that is viable today. Is there some way Mono can become viable upstream in GNOME? If not, let's move on to another option for the desktop core.
Let's move from "on hold" to "starting the process of deciding," though who knows how long it will take once we start.
To me step one is to enumerate the options and then see which are realistically viable. At that point we have a primarily technical decision between the options that remain.
Here's a concrete question: how do you want to proceed with getting Mono "off hold" in the Evolution project? What is the plan for getting the necessary buy-in?
I agree that cross-platform apps are useful as a matter of sound modular engineering, migration path, and getting a lot more users and developers working on the apps. But my end goal is definitely to get people using a 100% open source solution - i.e. Linux.
I'm not convinced the "Linux" vs. "Open Source" desktop strongly affects the issue of what languages to use in the core desktop; because we can certainly run C++, Java, Python, and many other languages in a Longhorn environment, and access the Longhorn APIs either on the backend of a cross-platform toolkit such as XUL, or directly in a framework similar to AbiWord's. Both of those are possible no matter what language we choose.
It does matter how we write our cross-platform apps. One way is to simply reimplement Windows - say that WINE and .NET are our platforms. The problem is that you can never be better than Windows.
The other way is to have our own platform which is our native platform and can have Linux-specific advantages in manageability, security, usability, and other aspects. And then application authors can choose whether they want to be optimized for both Linux and Windows (AbiWord-style or Epiphany/Camino-style); or whether they want to be slightly out of place on each but write their code only once, using some cross-platform API.
Miguel summarizes my argument as "we need to pick Java to avoid fragmentation"; I'd put it more generally: "we need to pick a viable compromise." Once again, here's my attempt to summarize the options people have brought up.
Why do we "need to pick"? Because until we pick something, we are picking "Stick with C/C++" by default without really acknowledging it; and so in fact we have been holding off on technologies such as UNO and XPCOM that could improve C/C++, on the grounds that we might in principle move to a managed language instead. If Java and Mono are clearly not going to get consensus, let's seriously consider PyGTK and XPCOM.
i.e. right now we're just waiting indefinitely for I'm not sure what - is there something we can go ahead and do?
Regarding cross-platform development: I think we need an open source abstraction layer. Using Microsoft APIs directly isn't the way to do this. Longhorn does not force you to write the entire app in C#, it forces you to invoke the CLR-based APIs at some level; which can be done indirectly, in the "Windows backend" of a portable platform.
I'm trying to imagine the target demographic for this product.
Been getting more and more email on the subject of Java, Mono, C++, etc.; not clear I can ever reply to all the points raised ;-) I still don't think anyone has addressed my #1 worry however, as I tried to point out in my last blog post: forking and fragmentation are the outcome if current trends continue. Debating which course of action is better in an absolute sense is pretty pointless unless that debate results in a direction with sufficiently broad support.
Put very bluntly, do we think Sun will ship a desktop with Mono no matter how long we discuss and advocate it? Though less clear-cut, do we think IBM (for example) will? Or from the other side, do you think GNOME or Mozilla or for that matter Red Hat or Debian will accept a proprietary JDK dependency, no matter how long it's discussed? I just don't see these positions changing. And if we then push on without consensus, we are going to have a hell of a fiasco - at least if you judge our success in part by how many users we have.
The Linux desktop has maybe 2% market share; it's not time yet to start shedding allies.
Here's a question: would we rather wait indefinitely to get consensus and legal clarity on the most controversial courses of action, or go ahead and move forward on a compromise course of action? Are we looking to ensure the perfect solution with indefinite delay, or can we find a 90%-as-good solution we can use right away?
Some quick summary thoughts on technical and legal issues, though again, as I keep repeating, we have a higher-order problem: rightness in absolute terms counts for nothing without real-life consensus.
Technical: I agree that C# has some incremental improvements over Java. But the two systems are essentially similar, and when you compare them to C instead of to each other, they look much more alike than different. Cross-language in Java can certainly be done (cf. Jython for example) and can be improved with existing technology such as CNI or UNO. There's no reason it's hard to do a P/Invoke equivalent for Java, either. This stuff could even go in the Java spec eventually.
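For what it's worth, here's the flavor of thing a P/Invoke equivalent provides, sketched with Python's ctypes rather than Java - purely an illustration of the declarative-native-binding concept, not a proposal for what a Java version would look like; the libc call is arbitrary:

```python
import ctypes
import ctypes.util

# Load the C library and declare the signature of a native function,
# P/Invoke-style: the binding is declarative, with no hand-written
# glue code compiled per function (contrast with classic JNI stubs).
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"hello"))  # → 5
```

The point isn't this particular API; it's that nothing about the concept is specific to the CLR, so a Java runtime could offer the same convenience.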
Legal: This topic is huge, but here's a quick thought. We're talking as if Mono and Java have entirely distinct sets of patents that apply to them; but the technologies are quite similar. It's possible (though I'm speculating) that the patents overlap heavily. (Though the overlap isn't quite symmetrical, since Java predates C#.) And for that matter, many large companies probably happen to have patents that apply. At some point "Stick with C/C++" looks pretty attractive.
Another angle on the law: a problem with legal issues is that none of us are lawyers, and generally it's not possible to post or republish legal advice if we receive it. Because the legal matters are a complex risk analysis, not black and white, what it comes down to for me is whether those who have something to lose, and have gotten legal advice, are willing to ship a particular piece of software.
As a number of mails I've received argue, we should remember that sticking with C++ or Python or the like is an option. We don't have to buy into anything more - and may be unable to, because legal experts will have final say.
At the same time, what really interests me is the possibility of getting enough consensus today to start using a managed language. Right now the community seems to be in "wait and see" mode while companies are in multiple, incompatible "full speed ahead" modes.
Lots of good replies to my post on higher-level language support in the desktop, including other blogs, private mail, and a lively discussion in the halls and internal lists at Red Hat. I want to try to summarize some of the feedback and answer the details, but not enough time tonight. I do want to make a high level point though.
If you go back and read my original post, I referenced legal and technical issues and gave brief opinions, but I did not get into all the details or try to resolve them exhaustively.
To me the technology gives us a spectrum of options - some status quo, some 25% or 75% or 90% improvements. But technology isn't the only factor.
Similarly, the legal issues rule out some options (such as non-open-source software), and indicate lesser or greater levels of risk. Miguel is right to point out that any significant program probably violates someone's claimed patents, for example. However, the law is not black and white - one patent is not the same as hundreds, all patent owners don't pose equal risk, some patents have clearer prior art, and judges and juries look at the clarity and intent of any infringement. So on the legal front as on the technical front, we have a spectrum from status quo to various degrees of increased risk.
On a third dimension there are strategic issues, both for the open source desktop community as a whole, and for each company.
To me the problem we face is more than stacking up Java vs. Mono vs. C/C++ on legal and technical and strategic dimensions and picking one. Here is the big issue: forking and fragmentation are the outcome if current trends continue. Novell has started coding desktop components in Mono and Sun has started coding stuff that requires the proprietary JDK.
At present, many Linux desktop backers are also strong Java backers and would refuse to ship anything based on a .NET clone; Sun obviously, but also I would guess IBM, Oracle, and so forth. From the other side, for many desktop components a proprietary JDK dependency is either illegal (due to the GPL) or simply counter to the founding principles of the project.
Slow down and consider for a while what it means for the desktop projects to force this issue and allow "platform wars" to be played out with those projects as battleground. Instead, we need to find a compromise or course of action that most interest groups can at least conceivably buy into.
I tried to propose one such compromise, that is, using the subset of the Java specification available in open source form. But it was probably distracting to mix this opinion with simply posing the problem. What I'd like to see is enumeration of the possible compromises, and a serious look at whether the compromises genuinely get broad enough support. ".NET clone" and "proprietary Java" to me are both obvious nonstarters and all but guarantee desktop Linux forks.
Many of the replies I received suggested alternate compromises. I'll try to summarize some possibilities raised so far.
As I understand it nobody is advocating C# plus .NET platform, as this is widely understood to be legally and strategically out of the question.
It's unclear that everything I've enumerated here is viable. We don't know yet whether C# in any form (even just the ECMA core) can get broad acceptance. We don't know for sure that Java can either. The only thing we know for sure to be viable is the status quo: "Stick with C/C++". I would rather accept that than see the current trend toward fragmentation. However, I do think it would be better if we can find a way to move forward.
The reason I like Open source Java subset: it's a subset of Mono (due to IKVM), a subset of the proprietary JDKs, and a subset of gcj. It can run on any of these implementations. Because of this, there should be no new legal and strategic concerns introduced if a desktop project uses Open source Java subset: everyone is already shipping a VM that supports it. Moreover, if in the future we decide we can start to use C#, this strategy would not preclude it; we switch to the C# and IKVM compromise at that time.
Java offers enough technical advantages to be worthwhile, and has the nice property that it pulls together the desktop and the server side of Linux.
BTW, regarding panic and urgency: I am immediately worried about the fragmentation issue, which seems to be in progress already, and longer-term but seriously interested in the technical competitiveness of open source platforms. Callum says similar worries were expressed about Visual Basic - but IIRC Visual Basic has the largest market share of any language by far... so perhaps the worries were pretty valid. Whatever the real urgency, I don't see how delaying a discussion of these issues is going to make them any clearer or easier.
I decided life was too boring, we need to finally discuss Mono, Java, and the Linux desktop. Here are my thoughts, hopefully a productive start.
Mike Loukides missed the point of my last post. The virtues of Swing aren't relevant.
Right now you can install Red Hat and Sun JDS and you get the same major components by default: GTK+, GNOME, OpenOffice.org, Evolution, Mozilla. Plus the same choices for some of the smaller apps. Ximian was also in sync with this, though Novell has not yet taken a position.
This has a couple of implications. One, ISVs and customers have a single standard client side platform and application suite. Two, these companies are using the power of the open source model to pool resources and compete with Microsoft on the desktop.
Often, apps are part of the platform; in Evolution, for example, the address book and the PDA interfaces. The browser and office suite are more obviously and extensively platforms, but the email app is as well. Something like Looking Glass simply is platform; it isn't an app at all.
Every vendor has to add value, and that's fine. However, doing so by replacing the major platform components has negative consequences for both the vendor and the overall effort to unseat Windows. It's even more problematic if the replacement requires proprietary components owned by one vendor, because none of their competitors will be willing to use those components, assuring fragmentation.
So far I've been talking pragmatics or "open source," but there's a "free software" or principled angle as well. Most large projects such as GNOME and Mozilla would never consider a dependency on a proprietary library. This is why no interesting part of the Java Desktop System today uses Java: there's no GPL-compatible Java implementation, and thus no wide usage of Java in the open source desktop community.
Net effect: to use Java in their Java Desktop System, Sun has to rewrite code wholesale and maintain it themselves. The traditional open source community simply won't accept proprietary Java dependencies.
I support using Java throughout the open source desktop; we need to move to a high-level programming environment. I'd push hard to start writing almost all new code in Java if we could and gradually refactor the apps to a high-level language without feature regressions. But we need an open source Java implementation to make this possible.
Sun is in a hard position. 1) Keep proprietary ownership of Java or 2) make Java open source and thus a standard feature of the Linux desktop. If they choose 1), then they have a second choice: a) Don't use Java in the core Linux desktop or b) reimplement much of the core Linux desktop in Sun-specific ways.
1)b) is especially harsh due to the GPL; you can't refactor Evolution to gradually migrate it to Java, because the GPL forbids linking it to the proprietary JDK. So they have to start Glow from scratch. Similarly, if they want to use Java in GNOME, they will have to replace each GNOME component wholesale.
Don't get me wrong, Sun has done a lot for the open source desktop and GNOME in particular, and I love the developers involved. But as an empirical matter, I don't think their approach to Java and their approach to the desktop are compatible, and when I saw Glow I felt it was the most dramatic demonstration of that so far.
Maybe everyone but me already saw this, but Sun rewrites Evolution in Java Swing. Yay, an "open source" mail/calendar client with a dependency on a proprietary JDK. Yay, let's rewrite a couple million lines of code and fragment the Linux desktop platform; clearly the way to beat Microsoft.
In other news, OO.org's dependencies on the proprietary JDK keep increasing, which forces the codebase shipped by Red Hat, Debian, and other companies who won't rely on a JDK license from Sun to diverge more and more from the mainline. Not to mention the StarOffice vs. OpenOffice.org delta.
Let's not talk about forking the window system on a fundamental level - I'm curious how they plan to use this, because GTK+ and GNOME sure as hell aren't taking patches to use a proprietary window system. Oh, rewrite everything in Swing! ;-) Or fork the GTK+ API?
Of course, this is probably good for everyone else; Sun has to fund proprietary-size development teams while everyone else benefits from the open source model. Sun JDS - your nonstandard proprietary desktop solution. Buy it today.
How does Mikael watch 40 Buffy episodes in a week? They were showing the whole series on TV a while back, 1 per day, and we tried to keep up. Somewhere in season 2 we gave up in despair at the backlog on videotape. ;-) Ah, the days before Tivo.
GNOME Office feels like a huge missed opportunity to me. OpenOffice.org is clearly the best choice today, but it's far from perfect. Still, GNOME Office isn't even competing.
To compete, I'm convinced the AbiWord and Gnumeric projects have to merge; it's that simple. An office suite is a single project. And then those developers should see the presentation app as part of their charter. Sure, Gnumeric and AbiWord are architected differently and take different approaches to portability, but why not suck it up for now and iterate them toward convergence over time?
There are real paying customers who recognize that the Gnumeric spreadsheet engine is nicer than OpenOffice.org's spreadsheet. Jody has done a fantastic job. But a spreadsheet is too huge a piece of software to support and develop two of, and Gnumeric is a nonstarter because it's a spreadsheet only. It has to be the suite.
Sometimes AbiWord does a better job on Word import than OpenOffice.org, among other advantages. But again, same problem. A presentation app is on the roadmap, granted, but let's add the spreadsheet. Most companies don't choose a word processor; they choose an office suite.
Another hard but worthwhile decision would be to use the OO.org file formats. If GNOME Office decided to compete for real, it would take a couple of years to become viable. In that time people will have collected lots of OO.org files, and a migration path will be a requirement - one dramatically simplified by keeping the same format. One could even imagine using big chunks of OO.org code via the UNO framework.
An interesting thought would be to pursue Gobe-style home-user-focused UI enhancements, to further emphasize the usability wins over OpenOffice.org.
In any case, I just don't see the hard decisions and bold roadmap directions. Which is fair; the hackers working on this have the right to do what they find interesting and valuable, and I'm not working on it. But I can't help feeling disappointed by the lack of alternatives in the office suite category, so I thought I'd whine a little bit.
I might add, simply using files for user data - documents, address book, calendar - is totally plausible to me. Make these files non-dotfiles in the home directory. Maybe make them a single file with a MIME type, so you can just drag and drop your calendar or address book between systems.
Or use dotfiles. e.g. metacity uses dotfiles to save sessions.
Files lack change notification and make it harder to write configuration UI such as a generic management tool or the control center. They make it hard to do lockdown in a generalized way and support a central data store for an entire network. But this stuff doesn't really matter if the file is genuinely data, rather than preferences. The home directory is where user data lives.
Evolution guys bringing up complex data structures again, here's an old mail on the subject. Check my "list of things I was planning to get around to" in that mail, ah the follies of youth...
Storing "structs" (or dicts/hashes) as little XML strings could be a totally sane underengineered solution, but it surely needs some convenience API around it; perhaps standardize the XML format and add a GCONF_TYPE_DICT_STRING rather than using GCONF_TYPE_STRING ambiguously.
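A minimal sketch of the underengineered version, in Python rather than C for brevity - the XML shape and function names here are invented for illustration, not a proposed standard:

```python
import xml.etree.ElementTree as ET

def dict_to_xml_string(d):
    # Serialize a flat string->string dict to a small XML document,
    # suitable for stuffing into a single GCONF_TYPE_STRING value.
    root = ET.Element("dict")
    for key, value in sorted(d.items()):
        entry = ET.SubElement(root, "entry", name=key)
        entry.text = value
    return ET.tostring(root, encoding="unicode")

def xml_string_to_dict(s):
    # The inverse: parse the XML string back into a dict.
    root = ET.fromstring(s)
    return {e.get("name"): e.text or "" for e in root.findall("entry")}

prefs = {"font": "Sans 10", "theme": "Bluecurve"}
assert xml_string_to_dict(dict_to_xml_string(prefs)) == prefs
```

The convenience API would just be these two functions hidden behind the gconf get/set calls, so apps never touch the XML by hand.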
Increasingly clear that gconf needs a lot of work, notes here for example. But I'm not sure how to get this done, I'm afraid that a total rewrite would just make different mistakes instead of creating the Right Thing. Not that I've ever rewritten anything...
For the average admin, the primary mistakes in gconf (other than multiple-login design bugs) seem to be things that are confusing. Here are some of those and possible solutions.
1. The XML file format is confusing: the format itself is ugly, it uses a filesystem hierarchy instead of a single file, the filenames start with "%" to avoid clashing with key names, and the files historically lacked whitespace... either a database or a single huge XML file would have been less strange; though neither would be as robust as the current setup, it would be more obvious how things worked.
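For readers who have never looked inside ~/.gconf: each directory in the key hierarchy is a real filesystem directory containing a %gconf.xml file, roughly of this shape (reconstructed from memory, so details may vary by version):

```
<?xml version="1.0"?>
<gconf>
  <entry name="use_custom_font" mtime="1066000000" type="bool" value="false"/>
  <entry name="monospace_font_name" mtime="1066000000" type="string">
    <stringvalue>Monospace 10</stringvalue>
  </entry>
</gconf>
```

So the key /apps/gnome-terminal/profiles/Default/use_custom_font lives in the file ~/.gconf/apps/gnome-terminal/profiles/Default/%gconf.xml - which is exactly the sort of thing that confuses a new admin.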
2. The fact that "schema objects" (GCONF_TYPE_SCHEMA) are "installed" into the config database itself confuses the hell out of people; it means there are two things called "schema" (.schemas files and GCONF_TYPE_SCHEMA) and two things called "defaults" (the default in the schema and the default in /etc/gconf/gconf.xml.defaults). Making this worse, RPM spec files change one default and admins are supposed to change the other default. Piling on to the confusion, schemas aren't exactly the same thing they are in LDAP terminology.
The solution I'd propose here is to yank schemas out of gconfd and the config database entirely. Make them just XML files; apps load the ones they care about at runtime and read the defaults themselves, while gconf-editor just loads them all. So we'd have only .schemas files, with no schemas stored in the database.
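A rough sketch of how the app-side default lookup could work under this proposal - the schema XML below is a simplified, hypothetical version of the real .schemas format, just to show the flow:

```python
import xml.etree.ElementTree as ET

def load_schema_defaults(schema_xml):
    # Parse a (hypothetical, simplified) .schemas-style document and
    # return a key -> default-value map. Under the proposal, apps do
    # this at runtime instead of gconfd "installing" schema objects
    # into the config database.
    root = ET.fromstring(schema_xml)
    return {s.findtext("applyto"): s.findtext("default")
            for s in root.iter("schema")}

def get_with_default(settings, key, defaults):
    # App-side lookup: the user's setting wins; otherwise fall back
    # to the schema default read from the file.
    return settings.get(key, defaults.get(key))

schemas = """<gconfschemafile>
  <schema>
    <applyto>/apps/example/font</applyto>
    <default>Sans 10</default>
  </schema>
</gconfschemafile>"""

defaults = load_schema_defaults(schemas)
assert get_with_default({}, "/apps/example/font", defaults) == "Sans 10"
```

The point is that "schema" then means exactly one thing (a file the app reads), and the database holds only actual settings.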
That does still leave people trying to edit the .schemas files to change defaults, instead of changing gconf.xml.defaults. Not sure what to do about that. Maybe it's just not a problem.
3. Apps that have "profiles" add another layer of complexity that's sort of a pain in the ass. See gnome-terminal (or the panel which has the complexity but not the user-visible profiles feature).
4. The panel has its own ad hoc complexity, as described here and here. I believe the panel could avoid this, by just copying the current setup to new screens, or by creating a config source with ~/.gconf removed and copying that. Or by using gconf_engine_get_default_from_schema() or similar. Unfortunately the panel is the number one thing any admin wants to configure, so its extra complexity is most people's impression of gconf itself.
5. gconf-editor doesn't expose the right way of thinking about the system, because it simply edits the current user's configuration. Ideally, we'd have a much more admin-task-based UI that could do things such as "dump the current panel configuration to a file", "create a kickstart script to load said file", or at least "edit the systemwide default or mandatory settings."
Anyhow... aside from all these gripes, I believe the basic idea of gconf has been proven useful in practice. A process-transparent model-view approach to settings is the right architecture, with genuine advantages. But after a couple of years of remaining more or less unchanged, some iteration and evolution toward a better implementation seems long overdue.
One thing that has been fixed up recently is that the panel and other apps in principle handle locked-down keys. Maybe it's almost reasonable these days to log in with ~/.gconf removed from one's gconf path, and start filing bugs about stuff that breaks...
Though that raises another topic, should frequently-changing transient state such as current window positions be in gconf ;-) Consider an LDAP backend, where LDAP is totally unscalable for writes and reasonably scalable for reads. And of course this transient state should get saved even on a fully locked-down desktop. So should removing ~/.gconf even be counted as reasonable? The fuzzy split between data, preferences, and transient state comes back to haunt us.