Gilles Vandenoostende

Hi, I'm Gilles Vandenoostende - designer, illustrator and digital busybody with a love of language, based in Ghent, Belgium.

Archive for the ‘Articles’ Category

Double Fine breaks all Kickstarter records

Wow. When I linked to this Kickstarter yesterday I had no idea it would turn out anything like this. In just 8 hours, gamers from around the world pitched in and got the game to its $400,000 target. But they didn’t stop there: donations kept coming, and at the time of writing the total sits at a whopping $1,262,086 – over three times what was asked. And there’s still 32 days left to go. Double Fine have already said that the extra funds will go towards releasing the game in multiple languages, on multiple platforms, and generally just making it better than they had hoped would be possible. I wish them all the best of luck, and as a backer I’m especially looking forward to the documentary they’re planning to make about the game as it’s being developed.

If they can pull this off, it could revolutionize the games industry. The way things work today, you have to pitch your game to the major publishers, and if you convince them it has the potential to make back its investment and more, they’ll fund you. The results are predictable: most games today are either sequels, reboots, remakes, re-imaginings or otherwise derivative of successful games that have come before. Crowd-funding has just proven itself to be a valid alternative to the traditional publisher system, provided you’ve got the talent and fan-base to pull it off.

And it might also be the perfect answer to all the piracy concerns plaguing the industry – think about it: people have paid money here for a game that doesn’t even exist yet. Even if no one buys the game other than the backers (and I doubt that, seeing the massive enthusiasm the game has received), they won’t lose any money or jobs over it.

Between this, the runaway success of self-funded games like Minecraft and the rise of digital distribution (Steam, the App Stores, etc…), it seems like there’s never been a greater time to be an indie games developer.

On Vendor Prefixes

Remy Sharp weighs in on the same issue as the post I linked to earlier:

We do like vendor prefixes. They give us access to bleeding edge CSS properties, and make our sites look cool. But there’s a serious problem: non-webkit vendors are giving serious consideration to implementing the -webkit prefix to a number of CSS properties.

This is bat shit crazy, but it’s where the web has arrived. This is one developer’s opinion, but you need to voice your opinion now, and if you’re in agreement that this is madness, you need to act now. Make your voice heard, blog about it, tweet about it: make a lot of noise.

The entire point of vendor prefixes was to allow browsers to implement experimental features without breaking things. Since browsers are clever enough to ignore any CSS they don’t understand, it’s a clean, effective and safe way for front-end developers to focus on delivering modern, cutting-edge sites without compromising the site’s content or functionality for people using other browsers.

Provided the HTML of your site is 100% standards compliant, you can feel safe adding certain effects using vendor prefixes, since they could never break browsers that do not support them*. It’s the embodiment of the progressive enhancement principle! You also get perfectly granular control over your styles: should there be any cross-browser issues, prefixes make it easy to give each browser a different value. Compare and contrast that with all the havoc the box model caused because one browser interpreted the standard a little differently from the rest…
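To illustrate, here’s a minimal sketch of the pattern (the .card class and the shadow values are just made up for the example):

    /* A box-shadow with vendor prefixes. Browsers ignore any declaration
       they don't recognize, so nothing here can break anything. */
    .card {
      -webkit-box-shadow: 0 1px 3px rgba(0, 0, 0, 0.5); /* older WebKit */
      -moz-box-shadow: 0 1px 3px rgba(0, 0, 0, 0.5);    /* older Gecko */
      box-shadow: 0 1px 3px rgba(0, 0, 0, 0.5);         /* the standard property,
                                                           listed last so it wins
                                                           once a browser ships it */
    }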

By considering support for -webkit prefixes, Mozilla, Opera and Microsoft risk breaking the web. What if Internet Explorer’s implementation of -webkit-box-shadow causes problems, for example? Then we couldn’t use that feature anywhere anymore! Our options would be reduced to either:

  1. Sniffing the user’s browser or device to serve separate styles to each – BAD
  2. Figuring out new CSS Hacks, and turning our beautiful code into something resembling a Regular Expression – BAD
  3. Going back to only using stuff that’s 100% standard and universal – BORING

Not good, then. So what can be done to prevent this?

  1. We web developers have to take more care to implement all prefixes, not just the most popular ones, provided it makes sense for the project.
  2. The CSS Working Group could speed things up and just standardize the stable parts of the spec. What’s the stable part? How about the part browser vendors are willing to implement, even at the cost of their own pride?


* And no one really cares about CSS validation, now do they?

Frictionless sharing

Continuing my rash of Facebook posts, here’s an interesting article I’m making a conscious effort to share with you guys:

The Friction in Frictionless Sharing

[…] Instead of requiring the user to confirm every single article they choose to share, just give them a one-time dialog that enables them to share everything down the road.

That’s a lot less work for the user, right?

Well, no, not really. Because in the past the user only had to decide whether to share something they just read, but now they have to think about every single article before they even read it. If I read this article, then everyone will know I read it, and do I really want people to know I read it?

That creates more friction, not less.

– Nick Bradbury: The Friction in Frictionless Sharing

He makes a good point, but my objection to Facebook’s ‘frictionless’ sharing is more basic than that: if you’re sharing everything, isn’t that the same as just sharing nothing at all?

And just who out there is really interested in every single thing someone reads? That kind of all-inclusive information might seem useful to an FBI profiler trying to identify a serial killer, or to creepy stalkers* trying to learn your habits, but to your friends? I mean, I love my friends, but I couldn’t give a damn about 99% of what they read online, and I’m sure none of them care about 99% of the stuff I read either. I might care about the 1% they think is actually interesting, and one way of knowing that is if they made a conscious effort to share it.

Any effort to streamline sharing only serves to lower the standards for that which gets shared.


* Also known as advertisers.

UI Patterns for scrolling

Some design patterns that I find go well together with scrolling:

  1. Dynamically Fixed UI – like the sidebar on this very blog, for instance. Once a user scrolls past the top-most point of a certain UI element (like a menu or a table header), said element becomes ‘sticky’ and stays fixed on screen. Now that mobile devices are beginning to support CSS’ position:fixed style, this could become a more common sight on sites.
  2. Animated Anchors – i.e. those links you click on that scroll the page for you. Really handy, because they not only take users to wherever they wanted to go, but at the same time also educate them on the layout of your site, and their position within it.
  3. Infinite Scroll – i.e. that AJAX trick of loading in fresh content at the bottom of the page whenever a user scrolls past a certain point. For immersion, this cannot be beaten: if done right, users can just lose themselves in the content before realizing how long they’ve been reading the same page.

    The downside with this is that when a user clicks a link on the page, and then hits the back button, they will be thrown back to the top of the page. I can imagine a suitable workaround to this involving a just-in-time history push-state using History.js, so that when the user returns, they’re only taken back to the last page-fragment that was loaded – see the sketch below this list.
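Here’s a rough sketch of that idea using the native history.replaceState API (History.js wraps the same calls with fallbacks for older browsers). The ?page= URL scheme, the #posts container and the endpoint are all hypothetical stand-ins:

    // Whenever infinite scroll appends a fragment, record it in the address
    // bar WITHOUT adding a history entry (replaceState, not pushState), so
    // the back button returns the user to the last fragment they saw.
    var nextPage = 2;
    var loading = false;

    function onFragmentLoaded(pageNumber) {
      // Hypothetical URL scheme: the server must be able to render
      // '?page=N' directly when the user navigates back to it.
      history.replaceState({ page: pageNumber }, '', '?page=' + pageNumber);
    }

    // Simplified trigger: load the next fragment near the bottom of the page.
    window.addEventListener('scroll', function () {
      var nearBottom = window.scrollY + window.innerHeight >
                       document.documentElement.scrollHeight - 500;
      if (!nearBottom || loading) return;
      loading = true;
      fetch('?page=' + nextPage)                        // hypothetical endpoint
        .then(function (res) { return res.text(); })
        .then(function (html) {
          document.querySelector('#posts').insertAdjacentHTML('beforeend', html);
          onFragmentLoaded(nextPage);
          nextPage += 1;
          loading = false;
        });
    });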

The rising popularity of tablets and other touch-screen devices will only serve to increase the need for good scrolling user experiences.

Anti-Counterfeiting Trade Agreement

» EFF: ACTA Signed by 8 of 11 Countries – Now What?

While we’ve all been collectively fixated on SOPA, something even more insidious – ACTA – has for all intents and purposes already been passed. Only the EU, Mexico and Switzerland have yet to sign, but that is only a matter of time.

Now, with a name like that, it’s easy to think it’s a well-meaning and innocuous bit of legislation. Don’t let that fool you.

And unlike the Americans, who still have something remotely resembling a direct democracy, and can do something concrete like write their congressman, we in the EU have no such options, as the European government is not democratically elected and thus cannot be held accountable.

We’re already living in 1984. Our only hope is that the politicians responsible for selling us out did so out of incompetence rather than malice.

Welcome to my blog, 2.0

So, I’ve finally gotten round to updating this blog’s theme to closely match my main site’s look & feel. I hope you like it, and as always, should you run into any bugs or annoyances, let me know.

Now, I might be flattering myself by assuming you may have noticed this blog has become more active as of late. I’ve been trying to get into the habit of posting at least one update every day, and so far it’s working out, I think. Partly because I stopped worrying so much.

The power of the internet is that if someone has already made the point you wanted to make, there’s no reason to regurgitate the entire discourse again – just link to it, stupid. Which is what I’m doing, mostly.

Some of you might recognize this approach as being the same as John Gruber’s, and you’d be right. Daring Fireball happens to be one of my favourite blogs, and I’m not about to hide my influences. And looking around, there’s a lot of other blogs that seem to follow the same model of being part “original programming”, and part “linked list”, to borrow Gruber’s parlance. And it’s not hard to see the appeal of said model.

Think of it like commenting. I figured that instead of posting some anonymous comment on someone else’s blog, only for it to be moderated, altered, ignored or deleted, why not take complete ownership of my opinion and post it on my own blog?

I’ll still try to write original articles. But if I should come across another blog during my research that makes the point just as well as or even better than I ever could, I’ll just link to it instead. And in the meantime, I’ll be getting a lot of practice, and hopefully I’ll become a much better writer than I am now.

GoDaddy – or how people working on the internet should know how it works

Go Daddy, a big US domain-name registrar, signed their own death warrant by supporting SOPA:

On October 31st, 2011, TechDirt[18] and the Domains[19] both published articles stating that Christine Jones, the general counsel and corporate secretary from GoDaddy.com submitted a piece to Politico stating the company’s support of SOPA, then called the E-Parasite Bill. TechDirt countered the statement by posting screen shots of how GoDaddy itself encourages people to violate SOPA by suggesting domain names that would infringe on other established sites’ copyright and name trademarks. The day before the bill was set to be heard in the House, November 15th, GoDaddy filed an official statement[20] breaking down exactly why they were supporting SOPA, claiming that “there is no question that we need these added tools to counteract illegal foreign sites that are falling outside the jurisdiction of U.S. law enforcement.”

(knowyourmeme.com)

Of course, the online community at large didn’t like this. On December 22nd, Reddit user selfprodigy started a thread entitled: GoDaddy supports SOPA, I’m transferring 51 domains & suggesting a move your domain day.

Sensing trouble, Go Daddy issued a PR statement the next day withdrawing their support of SOPA with the usual cushy, smushy, PR non-language happy-talk:

Looks to Internet Community & Fellow Tech Leaders to Develop Legislation We All Support

SCOTTSDALE, Ariz. (Dec. 23, 2011) – Go Daddy is no longer supporting SOPA, the “Stop Online Piracy Act” currently working its way through U.S. Congress.

“Fighting online piracy is of the utmost importance, which is why Go Daddy has been working to help craft revisions to this legislation – but we can clearly do better,” Warren Adelman, Go Daddy’s newly appointed CEO, said. “It’s very important that all Internet stakeholders work together on this. Getting it right is worth the wait. Go Daddy will support it when and if the Internet community supports it.”

(Go Daddy PR Statement)

But it was too little, too late. By even contemplating supporting SOPA, they had already betrayed their own ignorance of how the internet works – not a good thing. And even though they’ve changed their stance now, why should anyone believe they’d never do something stupid like this again? It seems I’m not alone in this reasoning: following this PR broadcast (and it was a broadcast – no comments were allowed on Go Daddy’s blog), the registrar has already lost thousands upon thousands of domain names.

And in a final act of desperation, they’re now attempting to stall transfers by returning incomplete WHOIS information.

Stay classy, Go Daddy.

The problem with CSS pre-processors

Miller Medeiros:

I will try to explain the most common problems I see every time someone shows a code sample or I see a project written using any of these languages/pre-processors.

I’ve documented my love for LESS on this very blog many times, but it’s good to see someone pointing out the potential dangers of using pre-processors: excessive code duplication and the performance hit associated with over-specific selectors[1]. Be sure to read the full article first, then come back if you want.

Done? Good. Now I’ll ask you: are these problems with pre-processors, or with how people are using them?

If your only programming experience lies with CSS or HTML, then by all means do yourself a favor and stay away from pre-compilers. I know from experience how easy it is to get carried away with a new language. If you’re not careful you could quickly end up shooting yourself in the foot by overcomplicating things.

But if you have a good grasp of plain CSS, you know some object-oriented programming and you understand how your LESS/SASS code will compile to CSS… Why the hell not? You should never see potential abuses by some as a reason to dismiss an entire concept for everyone. Pre-compilers don’t write bad CSS, bad programmers do!
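To make the pitfall concrete, here’s a small LESS sketch (the class names are made up): nesting that mirrors your markup one-to-one compiles into exactly the kind of over-specific selectors Medeiros warns about, while a flatter version gives the same result with leaner output.

    // Tempting, but over-nested: mirroring the markup one-to-one.
    .sidebar {
      ul {
        li {
          a { color: #c00; }
        }
      }
    }
    // Compiles to the over-specific selector:
    //   .sidebar ul li a { color: #c00; }

    // Flatter LESS, same visual result, easier to re-use:
    .sidebar-link { color: #c00; }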


[1] Unless you’re writing the next Google, Facebook or some other mammoth web-app, I don’t think over-specificity matters all that much for performance – unless you’re desperate to eke out a few precious milliseconds, of course. It has its effects, certainly, but I see it more as a problem hindering efficient code re-use than as a performance problem. If you find it helps you organize your code so you can more easily find your way, I think it’s a fair trade-off: after all, CPUs will only become faster, while programmers’ brains are pretty much stuck where they are now. You can always optimize and refactor afterwards, if need be.

Hashbangs, still

And while we’re still on the topic of the new Twitter: why does the website still use hashbangs? All the elements are in place to replace their hash-based deep-linking with the HTML5 pushState interface (after all, Twitter’s URLs seem to work just fine without hashbangs[1]), and programming a fallback for older browsers that don’t support it is easy, since they already have one in place.
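For what it’s worth, the switch is roughly this small. A rough sketch, where the data-internal attribute and the loadContent() helper are hypothetical stand-ins for Twitter’s own routing code:

    // Intercept internal links: use pushState where available, fall back to
    // the old #! scheme everywhere else.
    document.addEventListener('click', function (e) {
      var link = e.target.closest('a[data-internal]'); // hypothetical marker
      if (!link) return;
      e.preventDefault();
      var path = link.getAttribute('href');            // e.g. '/gillesv'
      if (history.pushState) {
        history.pushState({ path: path }, '', path);   // clean URL, no reload
        loadContent(path);                             // hypothetical loader, below
      } else {
        location.hash = '#!' + path;                   // hashbang fallback
      }
    });

    // Hypothetical stand-in for Twitter's own AJAX page loader.
    function loadContent(path) {
      fetch(path)
        .then(function (res) { return res.text(); })
        .then(function (html) {
          document.querySelector('#page').innerHTML = html; // hypothetical container
        });
    }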

It’s a mystery.


[1] twitter.com/gillesv and twitter.com/#!/gillesv both work, as do twitter.com/gillesv/status/145119200678129666 and twitter.com/#!/gillesv/status/145119200678129666
