Gilles Vandenoostende

Hi, I'm Gilles Vandenoostende - designer, illustrator and digital busybody with a love of language, based in Ghent, Belgium.

Archive for the ‘Articles’ Category

Servo and Blink

Yesterday Mozilla (and Samsung) announced they are starting work on a brand new web-rendering engine called Servo: a replacement for Gecko, which has arguably become a little long in the tooth and is being pummeled mercilessly by Webkit in the marketplace. I’m excited for them: sometimes reinventing the wheel is exactly what’s needed in software, when legacy cruft becomes a hindrance to moving stuff forward.

I’d love to see Servo disrupt the current era of Webkit dominance.

Meanwhile, the Google Chrome team announced they’re also working on a new engine, called Blink. There are two ways to look at this move:

  1. Webkit, like Gecko, has been around for a long time and that legacy might in fact be acting as a detriment to further innovation. So it’s a good thing Google is making a new engine, even though Webkit is the best rendering engine currently out there.
  2. Or, you can take the cynical perspective: Google is only interested in advancing its own interests, and deeply embroiled in a massive war with Apple over who gets to dominate the mobile web. It’s not necessarily interested in advancing the web, but only in getting a proprietary leg up on its competitors.

That second possibility could end up being very bad for the web indeed – if we learnt our lessons from what Microsoft did back when it was in a position of dominance, that is. Since I no longer particularly trust Google to not “be evil”, I’m inclined to take the second stance on this issue. Google’s in a position of power with Chrome, and I don’t trust them to put the interests of an open & consistent web before their own.

Thoughts on FiftyThree's Paper

I came across an interesting article on how Paper’s new color mixer works:

In the new version of Paper released last week, you mix colors with your fingers, like it’s paint–only somehow more beautiful. This one magical feature burned a year of development time, resurrected the work of two dead German scientists, and got Apple’s attention.

It’s a good article, so go read it. I bought the color-mixer in-app purchase last week and have had some opportunities to play around with it. Here are my thoughts on it, and on Paper as a whole.

I’m not sure how I feel about Paper and its new color mixer. I love the app for sketching, and I felt that the originally limited (and fixed) color palette was an interesting creative constraint: you worked with what you had. Some people felt the same way and described it as the Instagram for people who draw, which I feel is apt: even those of us with little to no art skills could make something aesthetically pleasing with it, in the same way that Instagram’s filters can make crappy cell-phone pictures look better than they ought to, simply by virtue of their pleasantly analogue feel.

But now that they’ve added a color mixer, the app is climbing out of the toy-box and inching into more professional territory. It suggests you should be able to use every color in existence, yet I feel as though I’m constantly struggling against the app’s skeuomorphic design. It feels like it’s holding me back, unlike purely digital painting apps like Brushes 3 or Autodesk’s Sketchbook Pro, which come much closer to giving you the power usually reserved for professional tools like Photoshop.

One thing that irks me is that with certain tools, Paper decides whether a color should multiply when blending (i.e. darken whatever’s underneath) or be opaque (i.e. cover up what’s underneath) in a very digital, on/off way. For instance, if your mix is 49% white and 51% color, it multiplies, but at 51%–49% it suddenly covers. Since you have no precise control over this mix, it’s easy to mess up.

I know it’s emulating traditional painting in this regard, but in real life you have some control over the paint/thinner ratio (the analog equivalent of opacity), which Paper doesn’t give you any real control over (but then Paper also gives you the ability to undo, so there’s that).

That said, I still love playing around with it (and I really should post some more of my drawings). If you think of it as a sketchbook (and give yourself permission to mess up) it’s great, but at the end of the day it’s still a toy app, whereas apps like Brushes 3 could be used to make finished artwork that I could drop into Photoshop and use in production. I think that’s a shame when you consider how great the rest of the app feels to use.

I wish Photoshop also had a similar sketchbook mode so I wouldn’t have to mess with the file system when I just want to do a quick speed-painting, sketch or finger-exercise.

So Obama won I guess

I’ve always taken the Bill Hicks approach when viewing politics in America:

“I’ll show you politics in America. Here it is, right here. ‘I think the puppet on the right shares my beliefs.’ ‘I think the puppet on the left is more to my liking.’ ‘Hey, wait a minute, there’s one guy holding out both puppets!’” – Bill Hicks

I’d love for Obama to prove me wrong in the next 4 years and actually change a few things, but for now I’m just glad someone who believes in magic underwear isn’t allowed near the nuclear launch codes.

Zuckerberg: Facebook's greatest mobile mistake was betting on HTML5

TechCrunch:

Today, Mark Zuckerberg revealed that Facebook’s mobile strategy relied too much on HTML5, rather than native applications.

Not only was this a big mistake with mobile, but Zuckerberg says that its biggest mistake period was the focus on HTML5. This is the first time that the Facebook CEO has openly admitted this, but things are looking good for the new iOS native app. According to Zuckerberg, people are consuming twice as many feed stories since the update to the new iOS app, which is great.

Blaming HTML5* for your crummy native app is like blaming a hammer for your inability to do brain-surgery with it.

The old Facebook iOS app (which used HTML5) was slow, yes. But Tumblr’s iOS app feels buttery smooth and native, and it’s built on HTML5 too. HTML5, like any other programming language, can be slow when your code isn’t optimized, and even a casual glance at some of Facebook’s front-end code with a web inspector reveals the massive amounts of overhead their code carries.

It’s a well-known fact that adding a single Facebook “like”-button to any HTML page adds dozens of additional file-requests, each of them incrementally slowing down the entire site (in terms of both initial loading and rendering). If the bloat in these small things is any indication of the rest of their web-codebase, then it’s no wonder HTML5 didn’t work out for them.

Web-development is a science. Any good front-end developer worth his salt has performance on his mind the entire time. “Do I use this Javascript library, which saves me a few hours of work but adds another file-request and functionality I don’t need, or do I write a custom solution in a few lines of code, since that’s all the app really needs?” is a common dilemma. “Can I achieve this graphical effect with CSS, or do I load an image?” is another. Developing a website is the sum total of all these decisions. Get too many of them wrong, and your site or application slows to a crawl. In my experience, people coming from native or back-end development backgrounds can be completely oblivious to this part of the job. They’re simply not used to thinking about these constraints.
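To make that second dilemma concrete, here’s a hypothetical sketch (the class name and colors are mine, not from anyone’s actual codebase): a button whose rounded corners, gradient and shadows would once have required sliced-up background images can be drawn entirely in CSS, saving a couple of file-requests on every page load.

```css
/* A button style that would otherwise be cut up into images:
   rounded corners, a gradient and shadows, all in pure CSS. */
.button {
  padding: 0.5em 1.5em;
  color: #fff;
  border-radius: 4px;
  background: #2d7bd4;                                   /* plain fallback color   */
  background: -webkit-linear-gradient(#4a95e8, #2d7bd4); /* Webkit's prefixed form */
  background: linear-gradient(#4a95e8, #2d7bd4);         /* unprefixed form last   */
  box-shadow: 0 1px 3px rgba(0, 0, 0, 0.4);
  text-shadow: 0 -1px 0 rgba(0, 0, 0, 0.3);
}
```

Zero extra requests, and the style tweaks in seconds instead of a round-trip to Photoshop.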

Using HTML5 for native development is arguably a cost- and time-saving measure, nothing more. If you hire the right kind of front-end developers, like Tumblr did, you might pull off something that’s damn close to native, but if you get it wrong you end up with something that’s slow and annoying. Facebook clearly didn’t have the right kind of people on their app-dev team to pull that off, so going native is their best bet.

Doesn’t mean there’s anything wrong with HTML5 though.

* For the pedants among us: mentally replace every mention of “HTML5” in this article with “HTML, CSS and Javascript”.

On Windows

Feels like it’s 1999 again:

EU regulators investigating Microsoft’s Windows 8

(Reuters) – EU antitrust regulators are investigating whether Microsoft blocks computer makers from installing rival web browsers on its upcoming Windows 8 operating system following complaints from several companies.

It’s easy to draw parallels here: why are the antitrust regulators going after Microsoft when Apple has been doing the same thing on iOS for years? I think it just comes down to the fact that they’re still insisting on calling their new OS “Windows”. In fact, I think all of Microsoft’s recent woes can be attributed to their bone-headed clinging to the Windows brand.

Think about it. If they had called their ARM-based tablet OS “XBox Touch”, for instance, instead of Windows RT, no one would’ve batted an eye at their third-party browser policy. After all, no one is suing Nintendo or Sony because you can’t install your own software on their game-consoles. That’s why Apple’s been getting away with the same thing: iOS devices are more akin to game consoles than to general-purpose computers.

But if you call it Windows RT, people naturally assume it’s full-blown Windows: the OS you can install anything you want on. For MS to suddenly switch their OS to Apple’s walled-garden model, yet keep calling it by the same name, generates false expectations. “What do you mean I can’t install [insert 3rd party app here]? It’s freaking Windows!”

The Windows brand is harming Microsoft’s other ventures as well: their Windows phones seem to be about as popular as cancer among normal people, despite getting mostly favourable reviews*. It’s hard to imagine the XBox would’ve been as big a success as it is if they’d called it the Windows Box instead. Windows just isn’t cool.

Over the decades, “Windows” has become synonymous with bugs, crashes and usability problems. Never mind the fact that the more recent releases of their OS are a lot more stable, they still have a reputation that isn’t going to go away any time soon. “Windows always crashes” has become a meme, and not just among the tech-savvy.

I once heard a rumor that Microsoft CEO Steve Ballmer has a tattoo of the Windows logo somewhere on his body. That would go a long way towards explaining why Microsoft is so stubbornly holding on to such a poisonous brand. So, Mr. Ballmer, here’s my advice: laser that tattoo off and hire this guy to invent a new name for you. Or it just might be too late already.

 

* Although I do feel most reviewers are cutting Microsoft huge amounts of slack – as a software ecosystem Windows Phone has a lot of catching up to do with both iOS and Android.

The Mobile Context

There are two schools of thought when it comes to mobile webdesign: one is responsive webdesign (RWD), which aims to serve the exact same HTML (and thus content) to all users, but adapts the layout to suit each visitor’s device. The other approach is to build a completely standalone mobile site, with a different layout and different content, and to redirect users using server-side device detection.

Defenders of the second approach often cite “The Mobile Context” as their reason for doing this: they argue that people browsing from a phone are likely to be on the move and thus need to be able to get to the most important information ASAP. They might be on a slow data-connection, have a cheap smartphone with a slow CPU and limited memory, or might just be high-powered business people for whom time is money and who have little patience.

Let’s take a website for a restaurant as an example. A mobile user is most likely looking for the address, the telephone number for reservations and the menu, so we put all that info right on the homepage. Great! Now please explain to me why mobile users deserve such an efficient experience, while people on a desktop browser supposedly prefer to look at a 2MB stock-photo of a girl eating salad and dig through a complicated navigation-structure instead. People like efficiency no matter what device they happen to be using. But marketing and unsubstantiated assumptions can quickly get in the way of correctly divining what people actually want from your site.

Many assumptions about people’s device capabilities are also flawed. Just because someone’s using a laptop doesn’t mean they’re on a broadband connection – they might be tethered to their 3G smartphone or using free Wifi in a coffeeshop along with two dozen other hipsters. Likewise, I might be using my phone to read something in the comfort of my own home and capacious wireless network – restricting me to a cut-down mobile site makes no sense here either.

The point is, you can’t predict users’ intentions and needs based on their device. We need to build websites that are streamlined and efficient for all users. That’s why I find mobile-first responsive webdesign such a great design methodology: it forces you to prioritize your content from the get-go, so that you end up with a unified experience where every user gets a good website.
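Here’s a rough sketch of what mobile-first looks like in a stylesheet (the class names, breakpoints and image are hypothetical, not taken from any real site): the base rules are the lean single-column layout every device gets, and wider screens opt in to more via min-width media queries.

```css
/* Base styles: the streamlined, single-column layout.
   This is what phones render – nothing to strip away later. */
.menu,
.contact { width: 100%; }

/* Hypothetical breakpoint: tablets and up get two columns */
@media (min-width: 600px) {
  .menu    { float: left;  width: 60%; }
  .contact { float: right; width: 35%; }
}

/* Only wide desktop screens ever request the big hero photo */
@media (min-width: 1000px) {
  .hero { background-image: url(girl-eating-salad.jpg); }
}
```

The prioritizing happens in the base rules: whatever earns a place there is, by definition, your most important content.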

Another likely reason why most big companies opt for a separate mobile site alongside their existing site, instead of one responsive site, is that it’s quicker to slap together a cut-down mobile version than to do a complete redesign. And that’s fine – a full redesign is a long and hard process, and few businesses can afford to wait that long while the competition might be stealing their mobile customers. But just remember that a separate mobile site is a bandaid – a temporary stopgap – and not a long-term solution.

Sooner or later, you’ll have to bite the responsive bullet.

A proper linked list

Instead of adding a bunch of arbitrary unicode symbols next to your hyperlinks, take a look at how I run my blog:

Have your titles link to the content

If you’ll notice, either on this blog or in my feed (yeah, you like that, don’t you): whenever I post a link to another site or article, I put the link right on the title of the post. You can’t miss it. Clicking it takes you away from my site and towards whatever I decided to share. I specifically modified my WordPress theme and its RSS feed to support this non-standard behaviour, which I first saw on the mother of all linked-list blogs (whose author, I think, even coined the term): Daring Fireball.

Readers shouldn’t have to scour my poorly written copy to find what I’m blathering on about. In fact, they needn’t even read whatever I wrote if they don’t want to. It’s just a juicy headline that leads straight to the content. Everything else is just icing on the cake.

Conventional wisdom (an oxymoron if I’ve ever heard one) would hold that this is bad — what about my pageviews? Hogwash. If you want pageviews, create your own original content. Getting pageviews from linking to other people’s work is a happy coincidence, but it shouldn’t be a business-model.

Don’t overquote or rewrite

When I’m linking to something, I never quote more than one or two paragraphs. I also rarely quote from anything other than the article’s intro, except when I want to comment on something specific. I want people to read the whole thing in its original form – if I quoted all the best bits on my site, what would be left for anyone to read?

And rewriting, which is the entire business model of sites like the Huffington Post or the Gawker network – well, that’s just wrong. I won’t go so far as to say it’s stealing, but I know I wouldn’t be comfortable doing it.

Why have a linked list at all?

I used to hate the idea of a blog that just links to stuff. I assumed I needed to write huge articles of original content to be a blogger – which is part of the reason why the first 5 years of this blog’s existence saw fewer than 10 posts. Through reading other blogs (like those joined in the great Read & Trust network), I learnt the value of just linking to and, perhaps more importantly, commenting on stuff.

In fact, what started me off on maintaining this personal linked list is this post by Matt Gemmell about why he disabled comments on his blog:

[…] I want to make it clear that this isn’t a means to discourage conversation; indeed, I hope the opposite is true. If you read something here, and want to reply, please do one of the following, in order of preference:

  1. Write a response on your own blog. Considered, long-form follow-ups by an identifiable, accountable person are the ultimate form of feedback and discussion. I’d love to read what you have to say. Let me know about it via email or a tweet […] [Emphasis mine]

Why contribute your insights to the bottom half of the internet when you can make an identifiable comment on your own terms for which you can take 100% ownership?

I consider this blog to be like a portfolio of my mind. It’s an insight both into my thoughts and opinions, and those of others that I agree (or disagree) with. Someone reading this blog should have a pretty good idea what I do, how I do it, and what I stand for. That’s the reason I’m keeping a blog. What’s yours?

My iPad predictions

About a month ago, during a bout of boredom, I decided to engage in some impromptu Apple punditry on Twitter. Just for shits and giggles, let’s see how far off the mark I was.

First tweets:

Predictions: iPad 3 will look exactly like the iPad 2, but with Retina display and A6 chip. 32GB entry model. 3G standard for all models.

Keeping the same form factor will be necessary to keep up with massive demand, as the factories won’t have to change their infrastructure.

A6 and Retina: A6 is a no-brainer, but retina is a guess, though logical. I doubt they’ll create a new category for that (iPad Pro)

I don’t see what point of such a resolution on that small a display would have for pros. So Retina becomes standard.

Same chassis: half check. The design is pretty much identical, but it’s slightly deeper[1], probably because of an even larger battery to enable 4G without sacrificing uptime.

Retina: check! There were some rumors at the time that they might spin off the retina version iPad into a new Pro category, but I wasn’t buying any of it.

A6 chip: they branded it A5X, but they might as well have called it an A6. So check.

Next prediction:

32GB entry-model: [some of – ed] those rich-media iBooks textbooks weigh in at 1GB+ a pop. 16GB just won’t cut it anymore.

32GB entry model: wrong. Looks like Apple is betting on streaming & cloud-storage all the way. Is local storage going the way of optical?

3G standard: wrong again!

Final prediction:

Camera: don’t expect much changes there. Maybe 1080p video capture but that’s it.

1080p video: Check! The camera is still slightly less advanced than the one in the latest iPhone, but it seems they went beyond what I thought they would do.

So, final score: 3.5/6 – guess I shouldn’t quit my day job just yet. Still, at least I’m not stupid enough to bet against this being a huge success.

 

[1] The new iPad is 9.4 mm deep, whereas the iPad 2 was 8.8 mm. The new one is even 51 g heavier – I’m guessing that’s all battery, but I think the retina display might also be responsible.

Get Things Done

I’ve been using Things as my to-do manager for over 18 months now. I bought both iOS versions (iPhone & iPad) and the Mac app and really went all-in with it. My whole life is in there now, from my lists of ideas, to my projects and even my grocery-list! Using its repeating tasks feature I’ve been building good habits (like drawing and blogging every day) and generally I’m more on top of stuff than I was 18 months ago. Aesthetics- and design-wise, it’s a truly lovely suite of applications, as evidenced by the design awards they’ve won.

But if you asked me now if I would recommend it to anyone, I’d have to think long and hard about it.

Y’see, for any GTD application to be truly useful in this day and age, it has to be multi-platform. Ideas and inspiration can strike at any time of day, whether I’m sitting behind my desk or on the bus. So it makes sense to have your database available on as many devices as possible. And that’s where, for me, Things fails – hard.

Things currently uses Bonjour and your local wi-fi network to sync your iOS devices’ databases with Things for Mac. It’s slow. Very slow. And it’s flaky: with no real “force sync” button, it’s normal to spend a lot of time launching, re-launching, force-quitting and launching again and again until your iDevice finally “finds” your Mac and starts the sync.

It’s so frustrating that I haven’t even touched the iPad version of Things for over a year now – it just wasn’t worth the effort to keep in sync. Right now, I’m basically living off the iPhone version and just use the desktop as my backup. I find that a waste.

But Cultured Code have been teasing us with the promise of “cloud syncing” for over a year now. In fact, their first public communication regarding cloud-sync dates back to December 2010. Since then, they haven’t gotten further than a private beta – which has recently gone public.

The beta

I’ve tried the private beta. It worked well. But I haven’t been able to use it much, because they still haven’t figured out a way to migrate your existing Things database to the cloud. So your only option is to ditch (or manually transfer) your entire database – all your projects, areas, tags, all your ideas – just so you can begin using a beta version.

I’m sorry, but that’s just not good enough! A simple database exporter/importer is the type of thing they give to CS students as homework – not something that should stump a team of programmers for over 14 months.

In the same timeframe, Apple announced and released iCloud, and lots of people have either updated their existing apps, or even built completely new apps, that leverage it. But Cultured Code just kept working at their own proprietary solution.

I’ve thought about switching. Wunderlist has cloud-syncing and cross-platform support straight out of the box. And it’s free – but I won’t switch to it because it lacks repeating tasks[1], and it doesn’t offer a Things importer either. I’ve also looked at Omnifocus (which has had cloud-syncing since 2008), but I’m reluctant to buy into it, because it’s even more expensive than Things, which wasn’t exactly cheap to begin with.

So we’ve been waiting. And waiting. And Cultured Code keeps teasing us with betas that, to me, aren’t really betas[2].

Ben Brooks made a bet with someone on Twitter that we’d see Textmate 2[3] before Things cloud-sync, and won, sort of.

Eternal support

But I’m not unsympathetic. I understand the troubles a small independent company can have trying to support software indefinitely, for free. And I know Cultured Code aren’t incompetent: they managed to build a version of Things for the iPad in the time between its announcement and launch[4], so we know they can work quickly when they want to.

Which is why it’s all the more infuriating to see them take so long on something so seemingly straightforward. They write about the difficulties of keeping stuff in sync, preventing merge conflicts and scaling to thousands of users, and I get all that, but I can also see that people have been doing this successfully for years now. Surely those problems can be considered solved by now? Or are they just reinventing the wheel?

So look, Cultured Code, if it’s about money – I wouldn’t mind paying for an upgrade to cloud sync. You’ve got plenty of mechanisms at your disposal to make that work, like in-app purchases. Hell, I’ll even buy the Mac App Store version of Things[5] to replace my current version if I have to.

But just. Get. It. Done! … Or at least give us a concrete deadline when you’re planning to finally launch it, so we can decide for ourselves if it’s still worth the wait. Don’t keep us endlessly waiting for a future that – for all we know – might not come.

</rant>

 

[1] It is a planned feature though.
[2] Beta software generally means it’s feature-complete, but can still contain bugs and/or performance issues.
[3] Textmate 2 is like the Duke Nukem Forever of the Mac world, and they’ve released a public alpha before Things got a public beta.
[4] The iPad was released on April 3rd, Things for iPad came out on April 1st.
[5] I bought my version from Cultured Code themselves.

Internet Explorer and the prefix-drama

It’s no secret that Internet Explorer is losing market-share to other, better browsers everywhere. And I applaud this, mostly out of spite at Microsoft for screwing us over for almost 10 years with IE6. I also believe Microsoft is at its best in markets where it isn’t the leader[1], so I’m glad they’re losing ground in the browser-racket.

But now they’re lamenting the fact that a lot of webdesigners are optimizing their work only for Webkit browsers. Apparently, this bothers them so much that they’re planning to modify their browser to support hitherto Webkit-only CSS3 by abusing the vendor-prefix convention[2]. If you’re not a web-developer, you might not understand why this is bad, but it is, trust me.
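For the non-web-developers, a hypothetical example of the habit Microsoft is complaining about (the class name is mine): experimental CSS features ship behind a vendor prefix, and lazy stylesheets only ever write the Webkit one, while the considerate version spells out every vendor’s prefix and ends with the unprefixed form so the eventual standard wins.

```css
/* The lazy version, as too many tutorials write it –
   IE, Firefox and Opera silently ignore the rule: */
.card { -webkit-transform: rotate(3deg); }

/* The considerate version: every vendor's prefix,
   with the unprefixed property last: */
.card {
  -webkit-transform: rotate(3deg);
     -moz-transform: rotate(3deg);
      -ms-transform: rotate(3deg);
       -o-transform: rotate(3deg);
          transform: rotate(3deg);
}
```

Microsoft’s proposed “fix” is to make IE honor `-webkit-` rules it was never meant to see, which is exactly why this is dangerous.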

I can understand the browser vendors though. It seems like a moment doesn’t go by without some new CSS technique being published that was obviously built for Webkit first, and for others second. That’s gotta hurt, especially when you’ve got so many highly skilled programmers working to make your browser engine just as good, only to see their hard work ignored by the developer-community[3].

But is potentially breaking the internet the answer to winning developer hearts and minds? No!

So I decided to investigate. I myself am probably one of the people Microsoft aims to convert, since I use Google Chrome as my primary browser, both for personal use and testing my work[4]. So I asked myself: what would it take for me to switch from Chrome to IE as my primary development browser?

And aside from Chrome just being faster, nicer-looking, having good development tools built in, always being up to date and generally being a better experience, there’s one elephant in the room that Microsoft is totally ignoring: Chrome is cross-platform.

I can run Chrome just as well on my MacBook as on my Windows PC, and there are no major rendering differences between the two versions, aside from font-rendering, which is tied to the host OS anyway. Even Apple’s Safari is cross-platform in this way! But Internet Explorer is Windows-only. Highly inconvenient for the many of us developers who switched to Macs years ago.

Personally, I sacrifice dozens of gigabytes of hard-drive space to a plethora of virtual Windows machines, just to be able to run Internet Explorer on my Mac so I can test my work. It’s that or buying separate, dedicated testing PCs. A big source of friction for just testing a website!

So here’s my advice to Microsoft: You want more web-designer support? Release a Mac version of Internet Explorer, this time one that’s 100% identical to the Windows version[5]. I shouldn’t have to run 4 Virtual Machines on one laptop just to test websites.

If you can do that Microsoft, I’ll start using the -ms-* prefix when possible, okay?

 

[1] Just look at the XBox, Windows Phone 7, or even the Metro interface: they’re all good products designed to penetrate areas where Microsoft is an underdog. Contrast that with how horrible Windows and Office are, products they still have a quasi-monopoly in, and you can see my point.
[2] Mozilla and Opera are also threatening to do this, but this post is all about Microsoft.
[3] Microsoft even resorts to paying developers to make web-apps to showcase IE’s power.
[4] Naturally, I do test my work in other browsers, and – apparently unlike a lot of people – I do take the effort to write my CSS3 for all supporting browsers. I also use Elements.less to save myself from having to write all those prefixes by hand.
[5] I’m talking about the rendering engine, the UI should be native to the host OS, naturally.
