There’s been news about Chrome freezing their User-Agent string (and all other major browsers are on board). That means they’ll still have a User-Agent (UA) string (it comes across in headers and is available in JavaScript as navigator.userAgent). By freezing it, it will become less useful over time for detecting the browser, platform, and version, although the quoted reason for doing it is more about privacy and stopping fingerprinting than developer concerns.
In the front-end world, the general advice is: you shouldn’t be doing UA sniffing. The main problem is that so many sites get it wrong, and the changes they make based on that information end up hurting more than they help. The general advice for avoiding it: test based on the reality of what you are trying to do instead.
Are you trying to test if a browser supports a particular feature? Then test for that feature, rather than the abstracted idea of a particular browser that is supposed to support that feature.
In JavaScript, sometimes features are very easy to test because you test for the presence of their APIs:
if (navigator.geolocation) {
  navigator.geolocation.getCurrentPosition(showPosition);
} else {
  console.warn("Geolocation not supported");
}
In CSS, we have a native mechanism via @supports:
@supports (display: grid) {
  .main {
    display: grid;
  }
}
That is exposed in JavaScript via an API that returns a boolean answer:
CSS.supports("display: flex");
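You could gate a JavaScript enhancement on that same boolean. A minimal sketch (the has-grid class name is an assumption, not anything standard):

```javascript
// Sketch: toggle a class based on CSS.supports. The typeof guards
// keep this safe in environments without the CSS object.
const hasGrid =
  typeof CSS !== "undefined" &&
  typeof CSS.supports === "function" &&
  CSS.supports("display", "grid"); // two-argument form: property, value

if (hasGrid) {
  document.documentElement.classList.add("has-grid");
}
```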
Not everything on the web platform is this easy to test, but it’s generally possible without doing UA sniffing. If you’re in a difficult position, it’s always worth checking to see if Modernizr has a test for it, which is kinda the gold standard of feature testing; chances are it has been battle-tested and has dealt with edge cases in ways you might not foresee. If you actually use the library, it gives you clean logical breaks:
if (Modernizr.requestanimationframe) {
  // supported
} else {
  // not supported
}
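Under the hood, a test like that is usually just a presence check, sometimes covering old vendor prefixes too. A hand-rolled sketch in that spirit (not Modernizr’s actual source):

```javascript
// A Modernizr-style feature test: does this environment provide
// requestAnimationFrame, under its standard or vendor-prefixed names?
function supportsRAF(globalObj) {
  return !!(
    globalObj.requestAnimationFrame ||
    globalObj.webkitRequestAnimationFrame ||
    globalObj.mozRequestAnimationFrame
  );
}

// In a browser you would call supportsRAF(window).
```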
What if you just really need to know the browser type, platform, and version? Well, apparently that information is still possible to get, via a new thing called User-Agent Client Hints (UA-CH).
Wanna know the platform? The server asks for it (via an Accept-CH response header) and, theoretically, the browser sends that information back in a Sec-CH-Platform header on subsequent requests. You have to essentially ask for it, which is apparently enough to prevent the problematic privacy fingerprinting stuff. It appears there is a Sec-CH-Mobile header for mobile too, which is a little curious. Who is deciding what a “mobile” device is? What decisions are we expected to make with that?
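For what it’s worth, the JavaScript face of UA-CH later shipped in Chromium as navigator.userAgentData. A hedged sketch of poking at it (guarded, since most environments don’t have it):

```javascript
// Sketch: reading UA-CH values from JavaScript. navigator.userAgentData
// is Chromium-only, so everything here is guarded.
const uaData =
  typeof navigator !== "undefined" ? navigator.userAgentData : undefined;

if (uaData) {
  console.log(uaData.mobile); // the boolean behind the mobile hint
  uaData
    .getHighEntropyValues(["platform"]) // you have to ask, like with the headers
    .then((values) => console.log(values.platform));
} else {
  console.log("UA-CH JavaScript API not available here");
}
```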
Knowing the browser, platform, and version at the server level is often desirable as well (sending different code in different situations), just as much as it is client-side, but without the benefit of being able to run tests. Presumably, the frozen UA strings will remain useful long enough that server-side situations can port over to using UA-CH.
Professionally, I’ve been hands-on with the mobile web space and seen it develop for more than 15 years, and I know that many websites, big and small, rely on device detection based on the User-Agent header. From Google’s perspective it may seem easy to switch to the alternative UA-CH, but this is where the team pushing this change doesn’t understand the impact: functionality based on device detection is critical and widespread, and not only in front-end code. Huge software systems with backend code rely on device detection, as do entire infrastructure stacks.
In my most major codebase, we do a smidge of server-side UA detection. We use a Rails gem called Browser that exposes UA-derived info in a nice API. I can write:
if browser.safari?
  # Safari-specific behavior goes here
end
We also expose information from that gem on the client side so it can be used there as well. There are only a handful of instances of usage across front and back ends, none of which look like they would be particularly difficult to handle some other way.
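One hypothetical shape for that server-to-client handoff is the JSON-in-the-page pattern; the element id and flag names below are made up for illustration, not from the actual codebase:

```javascript
// Hypothetical sketch: the server (which ran the Browser gem) embeds
// its UA flags in the page as JSON, and the client parses them back.
function readBrowserFlags(json) {
  try {
    return JSON.parse(json) || {};
  } catch (e) {
    return {}; // treat bad or missing input as "no flags"
  }
}

// The page might carry:
//   <script id="browser-flags" type="application/json">{"safari":true}</script>
// and the client would call:
//   readBrowserFlags(document.getElementById("browser-flags").textContent)
```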
In the past it’s been kinda tricky to relay front-end information back to the server in such a way that it’s useful on the first page load (since the UA string doesn’t carry stuff like viewport size). I remember some pretty fancy dancing I’ve done where I loaded up a skeleton page that executed a tiny bit of JavaScript to do things like measure the viewport width and screen size, then set a cookie and force-refreshed the page. If the cookie was present, the server had what it needed and didn’t load the skeleton page at all on those requests.
Tricky stuff, but then the server has information about the viewport width, which is useful for things like sending small-screen assets (e.g. different HTML), which was otherwise impossible.
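That dance boils down to something like this sketch (the vw cookie name and helper function are illustrative; dependencies are passed in rather than touched globally):

```javascript
// Sketch of the cookie-and-reload trick: on the first visit, stash the
// viewport width in a cookie and force-refresh so the server sees it
// on the very next request.
function ensureViewportCookie(doc, loc, width) {
  if (doc.cookie.includes("vw=")) {
    return false; // the server already has what it needs
  }
  doc.cookie = `vw=${width}; path=/`;
  loc.reload(); // refresh so the cookie rides along on the next request
  return true;
}

// In the browser: ensureViewportCookie(document, location, window.innerWidth)
```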
I mention that because UA-CH stuff is not to be confused with regular ol’ Client Hints. We’re supposed to be able to configure our servers to send an Accept-CH header and then have our client-side code whitelist stuff to send back, like:
<meta http-equiv="Accept-CH" content="DPR, Viewport-Width">
That means a server can have information from the client about these things on subsequent page loads. That’s a nice API, but Firefox and Safari don’t support it. I wonder if it will get a bump if both of those browsers are signaling interest in UA-CH because of this frozen UA string stuff.
Relying on the capabilities of the browser is of course the best way to go, if it’s a matter of capabilities. Remember the old IE6 bug that doubled margins in float situations? It’s a bug. You can’t query the supposed support of floats or margins to fix it. You have to know whether it’s IE6 in front of you (or use other techniques, sure, but that’s not the point).
Same case with elements that overflow when you use transform: translateX() and trigger horizontal scrollbars in IE11. Another case: <input type="number" /> is rendered differently depending on the browser, and it’s not purely about capabilities; you can’t test for that. You also have differences of interpretation of the same situation, so support is officially there, but behaviors differ.
When everything is perfect and standards are respected, fair enough, but that’s just “deciding” that there will be no bugs, or at least no way to address certain bugs. A bit presumptuous, if you ask me.
Indeed! I’m glad Edge went with Chromium.
I hope Safari does the same; flexbox and grid are painfully bad there.
Diversity can only be good if standards are properly implemented.
Pretty funny coming from Google considering they have a history of purposefully providing a worse experience by using UA-sniffing.
In fact, they’re still doing it today with Stadia.
https://twitter.com/tomwarren/status/1212496687949864961
What is the best method, then, to target a specific browser using CSS without having to use hacks? When necessary, I’ve always used a JS library that adds the browser’s user-agent to the HTML tag as classes for use.
It’s nice because many people use user-agents in situations where feature detection could be used instead. It’s bad because there might be some bugs that can’t be detected and, by freezing UA strings, people will need to:
Rely on hacks like fingerprinting feature support to check if the browser has the bug.
Assume all browsers have the bug and always workaround it.
Ignore the bug and break things.
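The first of those options (probing for the bug’s behavior rather than for a feature) usually looks something like this sketch; the specific quirk probed here is illustrative:

```javascript
// Sketch of a behavior probe: rather than asking "which browser?",
// recreate the suspect situation and measure what actually happened.
// This illustrative probe checks whether the engine rounds a
// subpixel width when laying out an element.
function probeSubpixelRounding(doc) {
  const el = doc.createElement("div");
  el.style.width = "50.5px";
  doc.body.appendChild(el);
  const rounds = el.getBoundingClientRect().width !== 50.5;
  doc.body.removeChild(el);
  return rounds; // true means "this engine has the quirk"
}

// In the browser: probeSubpixelRounding(document)
```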
IIRC the last time I saw a bug like that was when I was doing experiments with Service Workers and, unlike Chrome, some Safari versions ignored the worker for file downloads, which is impossible to detect using the usual feature detection. I don’t know if this issue still persists, but at the time this “bug” was quite bad because the workaround was quite memory-expensive. Ignoring it wouldn’t have been nice either, because without the workaround Safari would download the wrong file (IIRC an HTML page with an error message).
There are legitimate reasons for requesting the browser version, like testing web software through automation: something may work fine in one version while a bug shows up in another. Seems like this could lead to a lot of “Well, it works great on my phone when I run Chrome” when a bug is reported and sent off to be tested using automation in a device farm.