Today several people pointed out the new Dell Studio Hybrid, which looks pretty interesting.
(I'd seen the Mario video before, but I wasn't clever enough to notice it's the perfect metaphor for software development...)
If you're assembling a system from parts, it looks better to get a somewhat larger case with a case fan rather than a tiny case with a CPU fan. A maybe-working example of this could be:
Total hypothetical cost: $750
This has case fans, but with the slow CPU and notebook drive it may be possible to turn them off or dial them way down. The fanless power supply is expensive; the Nexus quiet power supply fan in my development system is practically inaudible, so perhaps that is a better option, saving $70 or so and bringing the total cost to only $680. A power supply fan could also reduce the need to run the case fans. Scrounging for stuff on sale could no doubt save a few more dollars.
These built-from-parts systems always run over budget when it turns out some part doesn't work with some other part.
An expensive version could have instead:
Total extra cost: +$255, so the "deluxe" approach is $1005.
Though larger and uglier, this system would be roughly comparable in cost to the Mac Mini, despite much better hardware specs. However, if a next-generation Mac Mini ever came out, I'd expect it to have specs and cost similar to the above. It might even use the X4500HD chipset.
Looks like nobody sells the X4500HD Intel motherboards until late August, so I'm not going to run out and buy anything. Maybe better options will come along in the meantime.
Suggestions people have sent in:
Most likely you could take another $200-300 off the price by playing with the tradeoffs and shopping around for the best prices.
Still looking for my perfect TV computer. Thanks to everyone for suggestions so far, I really appreciate it. Several people expressed interest in hearing any answers, so I'll keep posting what I learn.
Ideas suggested so far:
None of these are quite there for me yet. I do want an actual computer, not an appliance/"thin-client"; I don't have another computer free to be the server, and none of the appliances support all the video sites and formats we use. I'd also like some future safety. I don't even own any Blu-Ray DVDs right now, but this thing should last a few years, and who knows what new video services will come out over the next 5 years. (We canceled cable in favor of "a la carte" purchases online or in DVD form, so we don't need cable cards, tuners, or DVR functionality.)
Right now we're attaching my laptop to the TV and using the clever diNovo Mini as a remote. But I want to free up the laptop. (That's the bottom-line objective here: get my laptop back.)
Any type of fan is probably a showstopper. I was almost willing to live with the Mac Mini fan since it is somewhat quiet, but the Mac Mini has a slow GPU in it. While a very quiet fan might be OK, my experience with supposedly quiet fans is that they aren't quiet enough, especially under CPU load, and especially after a year or two of wear. For my developer workstation I bought all the special quiet parts, and it's indeed very quiet while I type this, but the fan spins up when the CPU is under load... for example while watching video.
In short, fans are risky.
One thing I haven't investigated: just get a laptop. But I bet that's expensive-ish.
There are many good options if cost is not a factor, such as Hush, A-Tech Fabrication, mCubed, or Niveus. These are all great, except they are very expensive. I'm willing to pay extra for something good, but a thousand dollars extra is too much.
I believe there's probably a solution involving the Intel DG45FC or a similar board, a slow/cheap/low-power CPU (perhaps a single-core Celeron - that could be slower than necessary even), a fanless case, a fanless CPU heatsink of some kind, and a quiet/low-power 2.5-inch notebook drive.
But, there's no way to know whether a given pile of parts will snap together and not catch on fire.
No doubt if I wait long enough there will be something prebuilt available. I am tempted to try the Intel board though. Maybe I can cut a hole in the case for the heatsink ;-)
I want a computer with the following specs to connect to a TV:
Seems like an obvious product, but I haven't found it yet. Anyone have a link? Email me and I'll post an update if I find something.
Chris Blizzard said it struck him at GUADEC that there were two very different initiatives, desktop and mobile, happening in the GNOME community.
Miguel's post on GTK+ 3.0 reminded me of Chris's comment.
In the mobile world GTK+ is not quite ideal, and GTK+-based platforms are in the early-to-mid stages, not the highly mature stage. I bet GTK+ needs a lot of evolution to be what it could be in non-desktop contexts.
For the Linux desktop, GTK+ should be kept stable and mature, while for new opportunities, perhaps not.
In a post last month I argued that evolution was appropriate for the Linux desktop, while revolution was appropriate for new ideas and categories of product. But what if GTK+ is part of both?
Two new directions people are working on: 1) GTK+ as an improved cross-platform desktop toolkit, and 2) GNOME technologies refactored into a mobile platform.
Thought: in the GTK+ 3.0 discussion, discuss how GTK+ can address both these new directions and the traditional Linux desktop, without screwing anybody.
Thought #2: as with any project, GTK+ will be driven by whoever is doing the work. And as with most big projects (Mozilla, Linux, GTK+ itself historically), clusters of developers with funding will be able to do a lot of work. Most of the work today seems to be funded by the non-Linux-desktop developers (who have some significant-to-them revenue stream attached to GTK+), not by the desktop Linux distributions (who can't connect GTK+ work to revenue in a compelling way). If the Linux desktop developers can't find revenue, and can't muster significant volunteer resources, they are going to have less and less influence - that's how open source works.
GTK+ needs the ability to evolve, which means a 3.0. Another way to say it: new toolkits should not have to start from scratch, they should be based on existing GTK+ code. Because 3.0 would be parallel-installable, it's a new toolkit in the GTK+ tradition, but not the same library. It does not replace 2.x in an operating system install.
3.0 replaces 2.x to the extent that the people working on 2.x stop working on it, and shift to 3.x. If there's nobody who wants to keep working on 2.x beyond a certain point, it's tough to argue 2.x is important beyond that point. 2.x could live forever if someone were interested in doing the work.
In open source, "important enough to me that I'll work on it or pay someone to work on it" matters. "Important enough to complain about" does not matter much.
All approaches to GTK+ 3.0 have their downsides. The downside of "break ABI without adding features" for me is that an ABI break opportunity is "wasted."
Sealing struct fields and removing deprecated stuff doesn't much increase the list of features that can be implemented without breaking ABI. Most ABI breaks are semantic, not a technicality about a struct field. Example: whether you have gtk_widget_get_window() or ((GtkWidget*)widget)->window, you still can't get rid of the GdkWindow, nor can you move GdkWindow creation out of realize().
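To make that concrete, here's a hedged sketch (the Widget type and function names are invented for illustration, not real GTK+ code): whether the field is public or wrapped in an accessor, the semantic contract is the same.

```c
#include <stddef.h>

/* A hypothetical mini-widget (not real GTK+ code) to illustrate the
   point: sealing a field behind an accessor changes how callers spell
   the access, not what the API promises. */
typedef struct {
    void *window;  /* stands in for GtkWidget's GdkWindow pointer */
} Widget;

/* Accessor-style access, as with gtk_widget_get_window(). */
void *widget_get_window(Widget *w)
{
    return w->window;
}

/* Either spelling still commits the toolkit to a per-widget window
   existing after realize(); getting rid of the window entirely is a
   semantic ABI break that sealing the field does not buy you. */
```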
Most features requiring an ABI break would end up in GTK+ 4.0, which is a long way off. So the 3.0 ABI break will be "wasted" in that sense.
I was kind of hoping for some things like killing all non-toplevel X windows, or killing X11-centric GDK API in 3.0 ... instead of waiting for 4.0. That's the part of the GTK+ 3.0 proposal that makes me a little disappointed.
However: the sealed struct fields and deprecated-stuff-removal would make it simpler to work on ABI-breaking features in a branch, and simpler to maintain an ABI-incompatible device-specific or platform-specific branch, I imagine. It would also be simpler to automatically find all uses of a given field or behavior in a set of apps, in order to update them. Maybe it's ideal to have a bunch of "4.0 candidate" feature branches over the next few years, eventually merging them to ship 4.0 in five years.
I'm skeptical, as many others are, of claims that "cleaning up code" or "removing deprecated stuff" are ends in themselves... sometimes code cleanup is important, because the code is still in active use, and it becomes impossible to make it correct or understand it anymore. But the deprecated GTK+ widgets aren't like that; they are just sitting there untouched and are a cosmetic problem at worst. They don't significantly affect anyone who isn't using them, that I know of.
It bugs me when people act like deleting deprecated stuff is an end in itself, without discussing what it enables. Even if deprecated stuff is using up more than a few percentage points of maintainer time (is it? I have trouble buying that), if the problem with it is maintenance time, maybe the way to address that is to find some way to pile another maintainer on, instead of disrupting all app developers.
Besides, the worst deprecated stuff was all kept out of GTK+ ... there's a bunch of free deprecated-stuff-ectomy still possible by finishing Project Ridley.
I'm happy people are trying some new directions such as mobile and GTK-as-true-cross-platform-toolkit. Ability to break ABI will be needed for that work at some point.
To be clear:
Here's to progress!
API design rule: error codes alone are not a reasonable way to report errors.
You need a mechanism that can return a helpful string. Like exceptions in most languages, or GError in C. You can return an error code also, if appropriate.
A helpful string does NOT mean some generic string form of the error code, as with strerror().
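Here's a minimal C sketch of the difference (describe_open_failure() is a made-up helper, not a real API): strerror() can only stringify the code, while the caller is the one that knows the useful context, such as which file was involved.

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper, not a real API: builds a message that includes
   the context only the caller knows (the path), rather than only the
   generic strerror() text, which reads the same for every missing
   file in the program. */
void describe_open_failure(const char *path, char *buf, size_t len)
{
    snprintf(buf, len, "could not open '%s': %s", path, strerror(errno));
}

/* Usage sketch:
 *   FILE *f = fopen("/no/such/settings.ini", "r");
 *   if (f == NULL) {
 *       char msg[256];
 *       describe_open_failure("/no/such/settings.ini", msg, sizeof msg);
 *       // msg now names the file that failed, versus bare
 *       // strerror(errno), which never says which file.
 *   }
 */
```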
Here is an example from XPCOM (have to pick on someone, and this is what I tripped on today):
Error: [Exception... "'Component does not have requested interface' when calling method: [nsIFactory::createInstance]" nsresult: "0x80004002 (NS_NOINTERFACE)" location: "<unknown>" data: no]

Wouldn't it be helpful to know which interface was requested, deep inside the app someplace? I vote yes. But XPCOM doesn't have exceptions with messages, it has error codes. (The least helpful error code, of course, is NS_ERROR_FAILURE. NS_NOINTERFACE is a little bit helpful.)
Any number of APIs manage to screw this up. X11 protocol: guilty. SQLite: guilty. Cairo: guilty. OpenGL: guilty. I'm sure there are dozens more.
All of the above leave programmers reading docs and tracing code to figure out mysterious error codes, for errors that would have been obvious given some more context in a string. In C, the function you use to set/report/throw an error should take a format string like printf(), so it's easy to add context.
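As a sketch of that, using nothing beyond the C standard library (the AppError type and app_error_set() names are invented here, loosely modeled on GLib's GError and g_set_error()):

```c
#include <errno.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* AppError and app_error_set() are invented for illustration; this is
   a sketch of the GError-style pattern, not a real library API. */
typedef struct {
    int  code;             /* for programmatic checks */
    char message[256];     /* for humans: carries context */
} AppError;

/* printf()-style setter, so call sites can cheaply add context. */
void app_error_set(AppError *err, int code, const char *fmt, ...)
{
    va_list args;
    if (err == NULL)
        return;            /* caller doesn't care about details */
    err->code = code;
    va_start(args, fmt);
    vsnprintf(err->message, sizeof err->message, fmt, args);
    va_end(args);
}

/* Example caller: reports *which* file failed, not just a code. */
int load_config(const char *path, AppError *err)
{
    FILE *f = fopen(path, "r");
    if (f == NULL) {
        app_error_set(err, 1, "failed to open config file '%s': %s",
                      path, strerror(errno));
        return -1;
    }
    fclose(f);
    return 0;
}
```

With g_set_error() in GLib the shape is the same: an error domain/code for programmatic checks, plus a formatted message for humans.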
Unless you're writing something very low-level and performance-critical, like a system call, error codes alone are almost always wrong. If an API is simple and well-specified enough, error codes can be sufficiently clear. But none of X11, Cairo, XPCOM, SQLite, or OpenGL qualifies as simple enough.
Error codes have one other nasty problem: people start to overload existing codes to mean many different things, because they're reluctant to add new codes or because new codes would break ABI. This is visible in UNIX system calls, X11, and of course any API with a generic "FAILED" code.