Google recently started experimenting with hiding the URL in Chrome and replacing it with an “origin chip”, which seems a bit scary. Jake Archibald thinks this is a great idea, and so does Nicholas Zakas. They’re both pretty smart, so there must be something in this.
I’m late to the party here intentionally, because it gives me the luxury of rounding up what everyone else who is cleverer than me has already said. There seem to be two major arguments in favour:
- Security: it helps to thwart phishing by making it harder to be fooled into thinking you’re on a site that you’re not (because the browser will show just the hostname, rather than the full URL). See Jake’s post for this argument.
- User experience: URLs suck anyway. They’re just routing information, and normal people / non-geeks don’t care about routing information. In fact many users never use them, preferring to find every web page via search. Nicholas’s post makes this argument pretty well.
Amongst the detractors, we have Jeremy Keith, who addresses the security argument by concluding that:
> Technically, decapitation seems to solve a headache.
I strongly agree with the sentiment here. Let’s not forget that the web community has form for ‘solving’ problems and then realising that the solution is itself a larger problem than the one it solved.
So, does the origin chip achieve its aims without introducing any new problems? No. Is it the best solution to these problems? Again, no. In summary:
- It doesn’t actually solve the security problem (people will still fall for phishing)
- There are better solutions to both the phishing problem and the UX problem
- There are negative side effects:
  - Developers stop caring about making their URLs semantic
  - Users find it harder to disambiguate websites with similar names (the “Facebook login” problem – more info below)
  - The UI creates false assumptions about the way it works
If the problem is that people get duped into thinking they’re on a site that they’re not, here’s a better solution that I think will save more people than the origin chip: when a user visits a page that is not in their history and submits a form containing a password field, display a browser-generated page that shows the hostname and any security information or advisories associated with it, lists the information being submitted, warns about phishing, and asks the user to confirm that they want to proceed.
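As a rough sketch of that check, the core decision is tiny. This is purely illustrative – the interface and function names below are invented for this post, not any real browser API:

```typescript
// Hypothetical sketch only — FormSubmission and shouldWarn are
// illustrative names, not a real browser API.
interface FormSubmission {
  origin: string;            // e.g. "https://accounts.example.com"
  hasPasswordField: boolean; // does the form contain an <input type="password">?
}

// Decide whether the browser should interpose a warning page
// before letting the submission through.
function shouldWarn(submission: FormSubmission, visited: Set<string>): boolean {
  // Warn only when the form carries a password AND the origin is one
  // the user has never visited before — familiar sites stay friction-free.
  return submission.hasPasswordField && !visited.has(submission.origin);
}

const visited = new Set(["https://www.facebook.com"]);

// A lookalike origin seen for the first time, asking for a password: warn.
shouldWarn({ origin: "https://faceb00k-login.example", hasPasswordField: true }, visited); // true

// The site the user actually frequents: no interruption.
shouldWarn({ origin: "https://www.facebook.com", hasPasswordField: true }, visited); // false
```

Because the warning only fires for never-before-seen origins asking for a password, it stays rare enough that users have a reason to actually read it.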
In the early days of the web, Internet Explorer actually had something like that, displayed whenever you submitted anything over HTTP:
But since we submit things over HTTP all the time, everyone ticked the ‘don’t display this again’ box, and the warning was eventually removed from later versions of the browser. Doing this in a smarter way would produce far fewer false positives and give the user a reason to read and consider the warning.
Here’s an illustration, which also showcases my amazing design skills:
I’d also argue that warning about phishing at the form-submit stage is far more likely to catch the user’s attention than badging the URL, no matter how flashy you make the badge.
Since solutions like this exist, it’s not surprising that the conspiracy brigade are busy accusing Google of finding excuses to promote search over using URLs.
Let’s look at the user experience issues. Nicholas Zakas says people don’t want routing information. But it’s a myth that normal users don’t understand routing and disambiguation. After all, there aren’t many people who would struggle to put an address on a letter. And they know to order their address lines from most to least granular, because they recognise that there’s a hierarchy to the data structure of a geographic address.
On the web, there have been lots of efforts to simplify routing – AOL keywords, Facebook and Twitter usernames, and advertisers whose SEO enables them to put “Search online for X” on the poster rather than the URL. But all of these mechanisms are either run by and for the benefit of a single corporation, or involve highly competitive fights over ownership of an ambiguous common term in a single global namespace.
We need good, robust, inclusive, scalable routing mechanisms, and URLs are that. What users need is a way of using them that doesn’t require remembering or typing all the routing info. It’s important that when I say I want “Amazon” I get http://www.amazon.com (even when I’m travelling in the US), but equally important that I can easily decide to go to the other one if I want to.
Email is the same. There must be thousands of John Smiths in the world, but I probably only know two or three at most. If I type an email to “John Smith”, I shouldn’t need to enter routing info if I’m writing to the one I correspond with most frequently, but I need to understand the routing so that if I actually want a different one, I can identify the difference and pick the right recipient:
The changes to Chrome are a poor user experience for this reason – they obscure the routing information, which encourages users to ignore it and developers to stop bothering with semantic URLs. Finally, it even creates a confusing search experience: if I click in the omnibox while on the BBC news site, I get:
This strongly suggests I will be searching the BBC news site when I start typing. It’s a common UI pattern used all over the web. Here’s Amazon doing it for their product search:
The problem is that in Chrome’s case the ‘origin chip’ vanishes as soon as you press a key, leaving you with a global search and forcing you to realise that your instinctive understanding of how the UI works was wrong. Horrible.
Jake points out that a ReadWriteWeb article caused confusion by briefly becoming the top Google result for ‘Facebook login’, showing that people don’t want to use URLs, but rather find things by searching. To me, all that says is that search is a popular way to find stuff (I think we knew that), and perhaps Facebook should consider including the word ‘login’ in the title of its login page. It doesn’t make the URL any less important as the canonical identifier of the right page. And if you hide URLs away, surely the Facebook problem gets worse, because future search anomalies like this one will confuse even more users than the ReadWriteWeb article did four years ago.
In summary, this change is bad because it’s a poor solution to problems where better solutions exist, and it creates new problems that we don’t currently have. It is however fantastic that these kinds of experiments are done in public, and Google deserves nothing but mountains of credit and admiration for that.
Other browser vendors would do well to follow this great example.