Verevolf

Zeldman:

You may not know his name, but he played a huge part in creating the web you take for granted today. And he’s back—kind of.

That would be Glenn Davis and the Verevolf site Zeldman’s talking about. The site is a growing archive of Davis’s personal (and unvarnished) recollections of pioneering the early web, like the one where he recounts the origin story of his uber-famous Cool Site of the Day.

Credit: Web Design Museum

Or how an email he wrote out of frustration snowballed into the Web Standards Project.

Personal retellings of history can be fraught with emotion, and Davis’s posts are no exception. What’s super cool is how his experiences augment other retellings, including Chapter 7 of our own Web History series, where Jay Hoffman documents the evolution of web standards.

Why are hyperlinks blue?

Last year, Elise Blanchard did some great historical research and discovered that blue hyperlinks replaced black hyperlinks in 1993. They’ve been blue for so long now that the general advice I always hear is to keep them that way. There is powerful societal muscle memory for “blue text is a clickable link.”

BUT WHY?!

On a hot tip, Elise kept digging and published a follow-up that identified the source of blue hyperlinks:

[…] it is Prof. Ben Shneiderman whom we can thank for the modern blue hyperlink.

But it didn’t start on the web. It started with operating systems in the very early 1990s, which began using blue for interactive components and highlighted text.

The decision to make hyperlinks blue in Mosaic, and the reason why we see it happening in Cello at the same time, is that by 1993, blue was becoming the industry standard for interaction for hypertext. It had been eight years since the initial research on blue as a hyperlink color. This data had been shared, presented at conferences, and printed in industry magazines. Hypertext went on to be discussed in multiple forums. Diverse teams’ research came to the same conclusion – color mattered. If it didn’t inspire Marc Andreessen and Eric Bina directly, it inspired those around them and those in their industry.

A Windows 3.1 screenshot showing blue hyperlinks.

Because research:

[…] the blue hyperlink was indeed inspired by the research done at the University of Maryland.

Chapter 6: Web Design

Previously in web history…

After the first websites demonstrate the commercial and aesthetic potential of the web, the media industry floods the web with a surge of new content. Amateur webzines — which define a voice and tone unique to the web — are soon joined by traditional publishers. By the mid-to-late ’90s, most major companies will have a website, and the popularity of the web will begin to explode. Search engines emerge as one solution to cataloging the expanding universe of websites, but even they struggle to keep up. Brands soon begin to look for a way to stand out.

Alec Pollak was little more than a junior art director cranking out print ads when he got a call that would change the path of his career. He worked at the advertising agency Grey Entertainment, later called Grey Group. The agency had spent decades acquiring some of the biggest clients in the industry.

Pollak spent most of his days in the New York office, mocking up designs for magazines and newspapers. Thanks to a knack for computers, a hobby of his, he would get the odd digital assignment or two, working on a multimedia offshoot for an ad campaign. Pollak had been on the Internet since the days of BBSes. But when he saw the World Wide Web, the pixels brought to life on his screen by the Mosaic browser, he found a calling.

Sometime in early 1995, he got that phone call. “It was Len Fogge, President of the agency, calling little-old, Junior-Art-Director me,” Pollak would later recall. “He’d heard I was one of the few people in the agency who had an email address.” Fogge was calling because a particularly forward-thinking client (Donald Buckley, later co-founder of Warner Bros Online) wanted a website for the upcoming film Batman Forever. The movie’s key demographic — tech-fluent, generally well-to-do comic book aficionados — made it perfect for a web experiment. Fogge was calling to see if that was something Pollak could do: build a website. Pollak never had. He knew little about the web other than how to browse it. The offer, however, was too good to pass up. He said yes, he absolutely could build a website.

Art director Steve McCarron was assigned the project. Pollak had only managed to convince one other employee at Grey of the web’s potential, copywriter Jeffrey Zeldman. McCarron brought the two of them in to work on the site. With little in the way of examples, the trio locked themselves in a room and began to work out what they thought a website should look and feel like. Partnering with a creative team at Grey and a Perl programmer, they emerged three months later with something cutting edge. The Batman Forever website launched in May of 1995.

The Batman Forever website

When you first came to the site, a moving bat (scripted in Perl by programmer Douglas Rice) flew towards your screen, revealing behind it the website’s home page. It was filled with short, punchy copy and edgy visuals that played on the film’s gothic motifs. The site featured a message board where fans could gather and discuss the film. It had a gallery of videos and images available for download, tiny low-resolution clips and stills from the film. It was packed edge-to-edge with content and easter eggs.

It was hugely successful and influential. At the time, it was visited by just about anyone with a web connection and a browser, Batman fan or not.

Over the next couple of years — a full generation in Internet time — this is how design would work on the web. It would not be a deliberate, top-down process. The web design field would form from blurry edges, brought into focus a little at a time. The practice would be taken up not by polished professionals but by junior art directors and designers fresh out of college, amateurs with little to lose at the beginning of their careers. In other words, just as outsiders built the web, outsiders would design it.

Interest in the early web required tenacity and personal drive, so it sometimes came from unusual places. Like when Gina Blaber recruited a team inside of O’Reilly nimble and adventurous enough to design GNN from scratch. Or when Walter Isaacson looked for help with Pathfinder and found Chan Suh toiling away at websites deeply embedded in the marketing arm of a Time Warner publication. These weren’t the usual suspects. These were web designers.


Jamie Levy was certainly an outsider, with a massive influence on the practice of design on the web. A product of the San Fernando Valley punk scene, Levy came to New York to attend NYU’s Interactive Telecommunications Program. Even at NYU, a school which had produced some of the most influential artists and filmmakers of the time, Levy stood out. She had a brash attitude and a sharp wit, balanced by an incredible ability to self-motivate and adapt to new technology, and, most importantly, an explosive and immediately recognizable aesthetic.

Levy’s initial dismissal of the computer as a glorified calculator for shut-ins faded once she saw what it could do with graphics. After graduating from NYU, Levy brought her experience designing zines in the punk scene — zines she had designed, printed, and distributed herself — to her multimedia work. One of her first projects was a digital magazine called Electric Hollywood, built in HyperCard and distributed on floppy disks. Levy mixed bold colors and grungy zine-inspired artistry with a clickable, navigable hypertext interface. Years before the web, Levy was building multimedia that felt a lot like what it would become.

Electric Hollywood was enough to cultivate a following. Levy was featured in magazines and in interviews. She also caught the eye of Billy Idol, who recruited her to create graphical interactive liner notes for his latest album, Cyberpunk, distributed on floppies alongside the CD. The album was a critical and commercial failure, but Levy’s reputation among a growing clique of digital designers was cemented.

Still, nothing compared to the first time she saw the web. Levy experienced the World Wide Web “as a conversion,” as author Claire Evans describes it in her book Broad Band. “Once the browser came out,” Levy would later recall, “I was like, ‘I’m not making fixed-format anymore. I’m learning HTML,’ and that was it.” Levy’s style, which brought the user in to experience her designs on their own terms, was a perfect fit for the web. She began moving her attention to this new medium.

People naturally gravitated towards Levy. She was a fixture in Silicon Alley, the media’s name for the new tech and web scene concentrated in New York City. Within a few years, its members would be the ushers of the dot-com boom. In the early ’90s, they were little more than a scrappy collection of digital designers and programmers and writers; “true believers” in the web, as they called themselves.

Levy was one of their acolytes. She became well known for her Cyber Slacker parties: late-night hangouts where she packed her apartment with a ragtag group of hackers and artists (often with appearances by DJ Spooky). Designers looked to her for inspiration. Many would emulate her work in their own designs. She even had some mainstream appeal. Whenever she graced the covers of major magazines like Esquire and Newsweek, she always had a skateboard or a keyboard in her hands.

It was her near mythic status that brought IT company Icon CMT calling about their new web project, a magazine called Word. The magazine would be Levy’s most ambitious project to date, and where she left her greatest influence on web design. Word would soon become a proving ground for her most impressive design ideas.

Word Magazine

Levy was put in charge of assembling a team. Her first recruit was Marisa Bowe, whom she had met on the Echo messaging board (BBS) run by Stacy Horn, based in New York. Bowe was originally brought on as a managing editor. But when editor in chief Jonathan Van Meter left before the project even got off the ground, Bowe was put in charge of the site’s editorial vision.

Levy found a spiritual partner in Bowe, who had come to the web with a similar ethos and passion. Bowe would become a large part of defining the voice and tone that was so integral to the webzine revolution of the ’90s. She had a knack for locating authentic stories, and Word’s content was often, as Bowe called it, “first-person memoirs.” People would take stories from their lives and relate them to the cultural topics of the day. And Bowe’s writing and editorial style — edgy, sarcastic, and conversational — would be backed by the radical design choices of Levy.

Articles that appeared on Word were one-of-a-kind, with images, backgrounds, and colors chosen to facilitate the tone of a piece. These art-directed posts pulled from Levy’s signature style, a blend of 8-bit graphics and off-kilter layouts with the chaotic bricolage of punk rock zines. Pages came alive, representing through design the personality of the post’s author.

Word also became known for experimenting with new technologies almost as soon as they were released. Browsers were still rudimentary in terms of design possibilities, but the team didn’t shy away from stretching those possibilities as far as they could go. Word was one of the first magazines to use music, carefully paired with the content of the articles. When Levy first encountered what HTML tables could do to create grid-based layouts, she needed to use them immediately. “Everyone said, ‘Oh my God, this is going to change everything,’” she later recalled in an interview, “And I went back to Word.com and I’d say, ‘We’ve got to do an artistic piece with tables in it.’ Every week there was some new HTML code to exploit.”

The duo was cocky about their work, and with good reason. It would be years before others would catch up to what they did on Word. “Nobody is doing anything as interesting as Word, I wish someone would try and kick our ass,” Levy once bragged. Bowe echoed the sentiment, describing the rest of the web as “like frosting with no cake.” Still, for a lot of designers, their work would serve as inspiration and a template for what was possible. The whole point was to show off a bit.

Levy’s design was inspired by her work in the print world, but it was something separate and new. When she added some audio to a page, or painted a background with garish colors, she did so to augment its content. The artistry was the point. Things might have been a bit hard to find, a bit confusing, on Word. But that was ok. The joy of the site was discovering its design. Levy left the project before its first anniversary, but the pop art style would continue on the site under new creative director Yoshi Sodeoka. And as the years went on, others would try to capture the same radical spirit.

A couple of years later, Ben Benjamin would step away from his more humdrum work at CNet to create a more personal venture known as Superbad, where a mix of offbeat, banal content and charged visuals created a place of exploration. There was no central navigation or anchor to the experience. One could simply click and see what they found next.

The early web also saw its most avant-garde movement in the form of Net.art, a loose community of digital artists pushing their experiments into cyberspace. Net artists exploited digital artifacts to create interactive works of art. For instance, Olia Lialina created visual narratives that used hypertext to glue together animated panels and prose. The collective Jodi.org, on the other hand, made a website that looked like complete gibberish, hiding its true content in the source code of the page itself.

These were the extreme examples. But they contributed to a version of the web that felt unrefined, even unserious, to established professionals. Web work, therefore, was handed to newcomers and subordinates to figure out.

And so the web came to be defined by a class of people willing to experiment: twenty-somethings fresh out of college, in Silicon Valley, Silicon Alley, and everywhere in between, who wrote the very first rules of web design. Some, like Levy and the team at Grey, pulled from their graphic design roots. Others tried something completely new.

There was no canvas, only the blaring white screen of a blank code editor. There was no guide, only bits of data streaming around the world.

But not for long.


In January of 1996, two web design books were published. The first was called Designing for the Web, by Jennifer Robbins, one of the original designers on the GNN team. Robbins had compiled months of notes about web design into a how-to guide for newbies. The second, designing web graphics, was written by Lynda Weinman, by then already the owner of the eponymous web tutorial site Lynda.com. Weinman drew on her experience in the film industry and in animation to give her practical guide a visual language, fusing abstract thoughts on a new medium with concrete tips for new designers.

At the time, there were technical manuals and code guides, but few publications truly dedicated to design. Robbins and Weinman provided a much needed foundation.

Six months later, a third book was published: Creating Killer Web Sites, written by David Siegel. It was a very different kind of book. It began with a thesis. The newest generation of websites, what Siegel referred to as third-generation sites, needed to guide visitors through their experiences. They needed to be interactive, familiar, and engaging. Achieving this level of interactivity, Siegel argued, required more than what the web platform could provide. What follows from this thesis is a book of hacks, ways of bending HTML to purposes it wasn’t strictly made for. Siegel popularized techniques that would soon become a de facto standard: using HTML tables and spacer GIFs to create advanced layouts, and using images to display heading fonts and visual backgrounds.
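
To make the technique concrete, here is a rough sketch of the kind of markup Siegel popularized: a table used as a page grid, with a transparent 1×1 spacer GIF stretched to hold the columns open. (The file names and pixel widths here are hypothetical, purely for illustration.)

```html
<!-- A table used for layout, not tabular data. The first row
     stretches a 1x1 transparent GIF to lock in column widths,
     since early HTML offered no other reliable way to do it. -->
<table width="600" border="0" cellspacing="0" cellpadding="0">
  <tr>
    <td><img src="spacer.gif" width="150" height="1" alt=""></td>
    <td><img src="spacer.gif" width="20" height="1" alt=""></td>
    <td><img src="spacer.gif" width="430" height="1" alt=""></td>
  </tr>
  <tr>
    <td valign="top">Navigation links</td>
    <td></td><!-- empty gutter column -->
    <td valign="top"><img src="heading.gif" alt="Welcome"> Body copy goes here.</td>
  </tr>
</table>
```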

The publishing cadence of 1996 makes a good case study for the state and future of web design. The themes and messages of the books illustrate two points very well.

The first is the maturity of web design as a practice. The books published at the beginning of the year drew on predecessors — including Robbins from her time as a print designer, and Lynda from her work in animation — to help contextualize and codify the emerging field of web design. Six months later, that codification was already being expanded and made repeatable by writers like Siegel.

The second point it illustrates is a tension that was beginning to form. In the next few years, designers would begin to hone their craft. The basic layouts and structures of a page would become standardized. New best practices would be catalogued in dozens of new books. Web design would become a more mature practice, an industry all of its own.

But browsers were imperfect and HTML was limited. Coding the intricate designs of Word or Superbad required a bit of creative thinking. Alongside the growing sophistication of the web design field came a string of techniques and tools aimed at correcting browser limitations. These would cause problems later, but in the moment, they gave designers freedom. The history of web design is interwoven with this push and pull between freedom and constraint.


In March of 1995, Netscape introduced a new feature in version 1.1 of Netscape Navigator. It was called server push, and it let a server keep a connection open and stream new data to the browser, updating the page dynamically. Its most common use was thought to be real-time data without refreshes, like a moving stock ticker or an updating news widget. But it could also be used for animation.
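
Under the hood, server push relied on the multipart/x-mixed-replace content type: the server holds the connection open, and each new part of the response replaces the one before it on screen. Here is a minimal sketch of the idea written with Node.js (which obviously did not exist in 1995); the one-second HTML “frames” stand in for the successive image frames an animation would send.

```js
// Sketch of Netscape-era "server push": one long-lived response
// whose parts successively replace each other in the browser.
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, {
    'Content-Type': 'multipart/x-mixed-replace; boundary=frame',
  });

  let n = 0;
  const timer = setInterval(() => {
    // Each part is a complete mini-document (or image frame).
    res.write('--frame\r\n');
    res.write('Content-Type: text/html\r\n\r\n');
    res.write(`<html><body>Frame ${++n}</body></html>\r\n`);
  }, 1000);

  // Stop pushing when the browser disconnects.
  req.on('close', () => clearInterval(timer));
}).listen(8080);
```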

On the day that server push was released, there were two websites that used it. The first was the Netscape homepage. The second was a site with a single, animated bouncing blue dot, which gave the site its name: TheBlueDot.com.

TheBlueDot.com

The animation, and the site, were created by Craig Kanarick, who had worked long into the night before Netscape’s update was released to have it ready for Day One. Designer Clay Shirky would later describe the first time he saw Kanarick’s animation: “We were sitting around looking at it and were just […] up until that point, in our minds, we had been absolutely cock of the walk. We knew of no one else who was doing design as well as Agency. The Blue Dot came up, and we wanted to hate it, but we looked at it and said, ‘Wow, this is really good.’”

Kanarick would soon be elevated from a disciple of Silicon Alley to a dot-com legend. Along with his childhood friend Jeff Dachis, Kanarick created Razorfish, one of the earliest examples of a digital agency. Some of the web’s most influential early designers would begin their careers at Razorfish. As more sites came online, clients would come to Razorfish for fresh takes on design. The agency responded with a distinct style and mindset that permeated through all of their projects.

Jonathan Nelson, on the other hand, had only a vague idea for a nightclub when he moved to San Francisco. Nelson worked with a high school friend, Jonathan Steuer, on a way to fuse an online community with a brick-and-mortar club. They were soon joined by Brian Behlendorf, a recent Berkeley grad with a mailing list of San Francisco rave-goers and experience that was rare for a still very new and untested World Wide Web.

Steuer’s day job was at Wired. He got Nelson and Behlendorf jobs there, working on the digital infrastructure of the magazine, while they worked out their idea for their club. By the time the idea for HotWired began to circulate, Behlendorf had earned himself a promotion. He worked as chief engineer on the project, directly under Steuer.

Nelson was getting restless. The nightclub idea was ill-defined and getting no traction. The web was beginning to pass him by, and he wanted to be part of it. Nelson was joined by his brother and programmer Cliff Skolnick to create an agency of their own, one that would build websites for money. Behlendorf agreed to join as well, splitting his time between HotWired and this new company.

Nelson leased an office one floor above Wired and the newly formed Organic Online began to try and recruit their first clients.

When HotWired eventually launched, it had sold advertising to half a dozen partners. Advertisers were willing to pay a few bucks to have proximity to the brand of cool that HotWired was peddling. None of them, however, had websites. HotWired needed people to build the ads that would be displayed on their site, but they also needed to build the actual websites the ads would link to. For the ads, they used Razorfish. For the brand microsites, they used Organic Online. And suddenly, there were web design experts.


Within the next few years, the practice of web design would go through radical changes. The amateurs and upstarts that had built the web with their fresh perspective and newcomer instincts would soon consolidate into formal enterprises. They created agencies like Organic and Razorfish, but also Agency.com, Modem Media, CKS, Site Specific, and dozens of others. These agencies had little influence on the advertising industry as a whole, at least initially. Even CKS, maybe the most popular agency in Silicon Valley, earned, as one writer noted, the equivalent of “in one year what Madison Avenue’s best-known ad slingers collect in just seven days.”

On the other end, the web design community was soon filled by freelancers and smaller agencies. The multi-million dollar dot-com contracts might have gone to the trendy digital agencies, but there were plenty of businesses that needed a website for a lot less.

These needs were met by a cottage industry of designers, developers, web hosts, and strategists. Many of them collected web experience the same way Kanarick and Levy and Nelson and Behlendorf had — on their own and through trial and error. But ad-hoc experimentation could only go so far. It didn’t make sense for each designer to have to re-learn web design. Shortcuts and techniques were shared. Rules were written. And web design trod on more familiar territory.

The Blue Dot launched in 1995. That’s the same year that Word and the Batman Forever sites launched. They were joined that same year by Amazon and eBay, a realization of the commercial potential of the web. By the end of the year, more traditional corporations planted their flag on the web. Websites for Disney and Apple and Coca Cola were followed by hundreds and then thousands of brands and businesses from around the world.

Levy had the freedom to design her pages with an idiosyncratic brush. She used the language of the web to communicate meaning and reinforce her magazine’s editorial style. New websites, however, had a different value proposition. In most cases, they were there for customers, to sell something, sometimes directly and sometimes indirectly through marketing and advertising. In either case, they needed a website that was clear. Simple. Familiar. To accommodate the needs of business, commerce, and marketing online, the web design industry turned to recognizable techniques.

Starting in 1996, design practice somewhat standardized around common features. The primary elements on a page — the navigation and header — smoothed out from site to site. The stylistic flourishes in layout, color, and use of images from the early web were replaced by best practices and common structure. Designers drew on the work of one another and began to create repeatable patterns. The result was a web that, though less visually distinct, was easier to navigate. Like signposts alongside a road, the patterns of the web became familiar to those that used it.

In 1997, a couple of years after the launch of Batman Forever, Jeffrey Zeldman created the mailing list (and later website) A List Apart to begin circulating web design tutorials and topics. Zeldman was just one of a growing list of web designers who rushed to fill the vacuum of knowledge surrounding web design. Web design tutorials blanketed the proto-blogosphere of mailing lists and websites. A near-limitless hypertext library of techniques and tips and code examples was available to anyone who looked hard enough for it. Through that blanket distribution of expertise came new web design methodologies.

Writing a decade after the launch of A List Apart, in 2007, designer Jeffrey Zeldman defined web design as “the creation of digital environments that facilitate and encourage human activity; reflect or adapt to individual voices and content; and change gracefully over time while always retaining their identity.” Zeldman here advocates for merging a familiar interface with brand identity to create predictable, but still stylized, experiences. It’s a shift in thinking from the website as an expression of its creator’s aesthetic, to a utility centered on the user.

This philosophical shift was balanced by a technical one. The two largest browser makers, Microsoft and Netscape, vied for market control. They often introduced new capabilities — customizations to colors or backgrounds or fonts or layouts — that were unique to a single browser. That made it hard for designers to create websites that looked the same in both browsers. Designers were forced to resort to fragile code (one could never be too sure if it would work the same the next day), or to turn to tools to smooth out these differences.

Visual editors, Microsoft FrontPage and Macromedia Dreamweaver and a few others, were the first to try and right the ship of design. They gave designers a way to create websites without any code at all. Websites could be built with just the movement of a mouse. In the same way you might use a paintbrush or a drawing tool in Photoshop or MS Paint, one could drag and drop a website into being. The process even got an acronym: WYSIWYG, or “What You See Is What You Get.”

The web, a dynamic medium in its best incarnation, required more frequent updates than designers were sometimes able to do. Writers wanted greater control over the content of their sites, but they were often forced to call the site administrator to make updates. Developers worked out a way to separate the content from how it was output to the screen and store it in a separate database. This led to the development of the first Content Management Systems, or CMS. Using a CMS, an editor or writer could log into a special section of their website, and use simple form fields to update the content of the site. There were even rudimentary WYSIWYG tools baked right in.
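
The core idea is a strict separation between stored content and the template that renders it. A toy sketch of that separation (all names hypothetical; real systems of the era typically used Perl or PHP backed by a database rather than an in-memory array):

```js
// Content lives in a data store; editors update it via form fields.
const articles = [
  { slug: 'fall-pledge-drive', title: 'Fall Pledge Drive', body: 'Details soon.' },
];

// The template controls presentation and never changes with the content.
function renderArticle({ title, body }) {
  return `<article><h1>${title}</h1><p>${body}</p></article>`;
}

// A CMS "save" handler: update the stored fields, re-render the page.
function updateArticle(slug, fields) {
  const article = articles.find((a) => a.slug === slug);
  Object.assign(article, fields);
  return renderArticle(article);
}

console.log(updateArticle('fall-pledge-drive', { body: 'Submitted via a form field.' }));
```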

Without the CMS, the web would never have been able to keep pace with the blogging revolution or the democratization of publishing that played out in the following decade. But database-rendered content and WYSIWYG editors introduced uniformity out of necessity. There were only so many options that could be given to designers. Content in a CMS was inserted into pre-fabricated layouts and templates. Visual editors focused on delivering the most useful and common patterns designers used in their websites.


In 1998, PBS Online unveiled a brand new version of its website. At the center of it all was a new section, “TeacherSource”: a repository of supplemental materials custom-made for educators to use in their classrooms. In the three years since PBS first launched its website, it had created a thriving online destination — especially for kids and educators. It had tens of thousands of pages worth of content. Two million visitors streamed through the site each day. It had won at the newly created Webby Awards two years in a row. TeacherSource was simply the latest in a long list of digital-only content that enhanced PBS’s other media offerings.

The PBS TeacherSource website

Before they began working on TeacherSource, PBS had run some focus groups with teachers. They wanted to understand where they should put their focus. The teachers were asked about the site’s design and content. They didn’t comment much about the way that images were being used, or their creative use of layouts or the designer’s choice of colors. The number one complaint that PBS heard was that it was hard to find things. The menu was confusing, and there was no place to search.

This latest version of PBS had a renewed design, with special attention given to its navigation. In an announcement about the site’s redesign, Cindy Johanson referred to the design’s more understandable navigation menu and in-site search as a “new front door and lots of side doors.”

It’s a useful metaphor; one that designers would often return to. However, it also doubles as a unique indicator of where web design was headed. The visual design of the page was beginning to recede into the background in favor of clarity and understanding.

The more refined — and predictable — practice of design benefited the most important part of a website: the visitor. The surfing habits of web users were becoming more varied. There were simply more websites to browse. A common language, common designs, helped make it easier for visitors to orient themselves as they bounced from one site to the next. What the web lost in visual flourish it gained back in usability. By the next major change in design, this would go by the name User Experience. But not before one final burst of creative expression.


The second version of MONOcrafts.com, launched in 1998, was a revelation. A muted palette and plain photography belied a deeper construction and design. As you navigated the site, its elements danced on the page, text folding out from the side to reveal more information, pages transitioning smoothly from one to the next. One writer described the site as “orderly and monochromatic, geometric and spare. But present, too, is a strikingly lyrical component.”

The MONOcrafts website

There was the slightest bit of friction to the experience, where the menu would move away from your mouse, or you would need to wait for a transition to complete before moving from one page to the next. It was a website that was meditative, precise, and technically complex. A website that, for all its splendor, contained little more than a description of its purpose and a brief biography of its creator, Yugo Nakamura.

Nakamura began his career as a civil engineer, after studying civil engineering and architecture at Tokyo University. After working several years in the field, he found himself drawn to the screen. The physical world posed too many limitations. He would later state, “I found the simple fact that every experience was determined by the relationship between me and my surroundings, and I realised that I wanted to design the form of that relationship abstractly. That’s why I got into the web.” Drawing on the influences of notable web artists, Nakamura began to create elaborately designed websites under the moniker yugop, both for hire and as a personal passion.

yugop became famous for his expertise in a tool that gave him the freedom of composition and interactivity that had been denied to him in real-world engineering. A tool called Flash.

Flash had three separate lives before it entered the web design community. It began as software created for the pen computing market, a doomed venture that failed before it even got off the ground. From there, it was adapted to the screen as a drawing tool, and finally transformed, in 1996, into a keyframe animation package known as FutureSplash Animator. The software was paired with a new file format and an embeddable player, a pairing that would underpin its later success.

Through a combination of good fortune and careful planning, the FutureSplash player was added to browsers. The software’s creator, Jonathan Gay, first turned to Netscape Navigator, adapting the browser’s new plugin architecture to add widespread support for his file format player. A stroke of luck came when Microsoft’s web portal, MSN, had a need to embed streaming videos on its site, a feature for which the FutureSplash player was well-suited. To make sure it could be viewed by everyone, Microsoft baked the player directly into Internet Explorer. Within the span of a few months, FutureSplash went from just another animation tool to a ubiquitous file format playable in 99% of web browsers. By the end of 1996, Macromedia purchased FutureSplash Animator and rebranded it as Flash.

Flash was an animation tool. De facto support in major browsers made it adaptable enough to be a web design tool as well. Designers learned how to recreate the functionality of websites inside of Flash. Rather than relegating a Flash player to a tiny corner of a webpage, some practitioners expanded the player to fill the whole screen, creating the very first Flash websites. By the end of 1996, Flash had captivated the web design community. Resources and techniques sprang up to meet the demand. Designers new to the web were met with tutorials and guides on how to build their websites in Flash.

The appeal to designers was its visual interface: drag-and-drop drawing tools that could be used to create animated navigation, transitions, and audiovisual interactivity the web couldn’t support natively. Web design practitioners had been looking for that level of precision and control since HTML tables were introduced. Flash made it not only possible but, compared to HTML, nearly effortless. With your mouse and your imagination, and very little (if any) code, you could arrive at sophisticated designs.

Even amid the saturation that the new Flash community would soon produce, MONOcrafts stood out. Its use of Flash was playful, but with a definitive structure and flow.

Flash 4 had been released just before Nakamura began working on his site. It included a new scripting language known as ActionScript, which gave designers a way to programmatically add new interactive elements to the page. Nakamura used ActionScript, combined with the other capabilities of Flash, to create elements that would soon be seen on every website (and now feel like ancient relics of a forgotten past).

MONOcrafts was the first time that many web designers saw an animated intro bring them into a site. In the hands of yugop and other Flash experts, it was an elegant (and, importantly, brief) introduction to the style and tone of a website. Before long, intros would become interminable, pervasive, and bothersome. So much so that designers would frequently add a “Skip Intro” button to the bottom of their sites. Clicking that button as soon as it appeared became almost a reflex for users of the late-’90s, Flash-dominated web.

Nakamura also made sophisticated use of audio, something possible with ActionScript. Digitally compressed tones and clicks gave the site a natural feel, bringing the users directly into the experience. Before long, sounds would be everywhere, music playing in the background wherever you went. After that, audio elements would become an all but extinct design practice.

And MONOcrafts used transitions, animations, and navigation that truly made it shine. Nakamura, and other Flash experts, created new approaches to transitions and animations, carefully handled and deliberately placed, that would be retooled by designers in thousands of incarnations.

Designers turned to Flash, in part, because they had no other choice. They were the collateral damage of the so-called “Browser Wars” being played out by Netscape and Microsoft. Inconsistent implementations of web technologies like HTML and CSS made them difficult tools to rely on. Flash offered consistency.

This was matched by a rise in demand from web clients. Companies with commercial or marketing needs wanted a way to stand out. In the era of Flash design, even e-commerce shopping carts zoomed across the page, animated as if in a video game. But the (sometimes excessive) embellishment was the point. There were many designers who felt they were being boxed in by the new rules of design. The outsiders who created the field of web design had graduated to senior positions at the agencies they had often founded. Some left the industry altogether. They were replaced by a new freshman class as eager to define a new medium as the last. Many of these designers turned to Flash as their creative outlet.

The results were punchy designs applied to the largest brands. “In contrast to the web’s modern, business-like aesthetic, there is something bizarre, almost sentimental, about billion-dollar multinationals producing websites in line with Flash’s worst excess: long loading times, gaudy cartoonish graphics, intrusive sound and incomprehensible purpose,” notes writer Will Bedingfield. For some, Flash design represented the summit of possibility for the web, its full potential realized. For others, it was a gaudy nuisance. Its influence, however, is unquestionable.

Following the rise of Flash in the late 90’s and early 2000’s, the web would see a reset of sorts, one that came back to the foundational web technologies that it began with.


In April of 2000, as a new millennium was solidifying the stakes of the information age, John Allsopp wrote a post for A List Apart entitled “A Dao of Web Design.” It was written at the end of the first era of web design, and at the beginning of a new transformation of the web from a stylistic artifact of its print predecessors to a truly distinct design medium. “What I sense is a real tension between the web as we know it, and the web as it would be. It’s the tension between an existing medium, the printed page, and its child, the web,” Allsopp wrote. “And it’s time to really understand the relationship between the parent and the child, and to let the child go its own way in the world.”

In the post, Allsopp draws on Daoist teachings to sketch out ideas around a fluid and flexible web. Designers, for too long, had attempted to assert control over the web medium. It is why they turned to HTML hacks and, later, to Flash. But the web’s fluidity is also its strength, and when embraced, it opens up the possibilities for new designs.

Allsopp dedicates the second half of the post to outlining several techniques that can aid designers in embracing this new medium. In doing so, he set the stage for concepts that would be essential to web design over the next decade. He talks about accessibility, web standards, and the separation of content and appearance. Five years before the article was written, those concepts were whispered about by a barely known community. Ten years earlier, they didn’t even exist. It’s a great illustration of just how far things had come in such a short time.

Allsopp puts a fine point on the struggle and tension that existed on the web for the past decade as he looked to the future. From this tension, however, came a new practice entirely. The practice of web design.


Dark Ages of the Web

A very fun jaunt through the early days of front-end web development. They are open to pull requests, so submit one if you’re into this kind of fun chronicling of our weird history!

That CSS3 Button generator really hits home. 😬

Yet Another JavaScript Framework

On March 6, 2018, a new bug was added to the official Mozilla Firefox browser bug tracker. A developer had noticed an issue with Mozilla’s nightly build. The report noted that a 14-day weather forecast widget typically featured on a German website had all of a sudden broken and disappeared. Nothing on the site had changed, so the problem had to be with Firefox.

Bug 1443630: Implementing array.prototype.flatten broke MooTools' version of it.
A screenshot of the bug report filed with Mozilla.

The problem, the developer noted in his report, appeared to stem from the site’s use of the JavaScript library MooTools.

At first glance, the bug appeared to be fairly routine, most likely a small problem somewhere in the website’s code or a strange coincidence. After just a few hours, though, it became clear that the stakes for this one particular bug were far graver than anyone could have anticipated. If Firefox were to release this version of their browser as-is, they risked breaking an unknown, but still predictably rather large, number of websites, all at once. Why that is has everything to do with the way MooTools was built, where it drew influence from, and the moment in time it was released. So to really understand the problem, we’ll have to go all the way back to the beginning.

In the beginning

First came plain JavaScript. Released in 1995 by a team at Netscape, JavaScript began making its way into common use in the late ’90s. JavaScript gave web developers working with HTML a boost, allowing them to dynamically shift things around, lightly animate content, and add counters and stock tickers and weather widgets and all sorts of interactivity to sites.

By 2005, JavaScript development had become increasingly complex. This was precipitated by the use of a technique we know as Asynchronous JavaScript and XML (Ajax), a pattern that likely feels familiar these days for anyone that uses a website to do something more than just read some content. Ajax opened up the door for application-like functionality native to the web, enabling the release of projects like Google Maps and Gmail. The phrase “Web 2.0” was casually lobbed into conversation to describe this new era of dynamic, user-facing, and interactive web development. All thanks to JavaScript.
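
At the heart of Ajax was the XMLHttpRequest object: fetch data from the server, then update the page in place, no reload required. A minimal period-style example (the /api/weather endpoint and forecast element are hypothetical):

```js
// Classic Ajax: request data asynchronously, update the DOM on arrival.
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function () {
  // readyState 4 = response fully received.
  if (xhr.readyState === 4 && xhr.status === 200) {
    document.getElementById('forecast').innerHTML = xhr.responseText;
  }
};
xhr.open('GET', '/api/weather', true); // true = asynchronous
xhr.send();
```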

It was specifically Ajax that Sam Stephenson found himself coming back to again and again in the early years of the new century. Stephenson was a regular contributor to Ruby on Rails, and he kept running into the same issues when trying to connect to Rails with JavaScript using some fairly common Ajax code. Specifically, he was writing the same baseline code every time he started a new project. So he wrote a few hundred lines of code that smoothed out Ajax connections with Rails, code he could port to all of his projects. In just a few months, a hundred lines turned into many more, and Prototype, one of the earliest examples of a full JavaScript framework, was officially released.

A screenshot of the Prototype website.
An early version of the Prototype website that emphasizes its ease of use and class-based structure.

Extending JavaScript

Ruby utilizes class inheritance, which tends to lend itself to object-oriented development. If you don’t know what that means, all you really need to know is that it runs a bit counter to the way JavaScript was built. JavaScript instead leans on what’s known as prototypal inheritance. What does that mean? It means that everything in JavaScript can be extended using the base object as a prototype. Anything. Even native object prototypes like String or Array. In fact, when browsers do add new functions and features to JavaScript, they often do so by taking advantage of this particular language feature. That’s where Stephenson got the name for his library, Prototype.

The bottom line is, prototypal inheritance makes JavaScript naturally forgiving and easily extendable. It’s basically possible for any developer to build on top of the core JavaScript library in their own code. This isn’t possible in a lot of other programming languages, but JavaScript has always been a bit of an outlier, given its approach of accommodating a much larger, cross-domain developer base.
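
For example, any script can bolt a new method onto a built-in prototype, and every value of that type picks it up instantly (the shout method here is contrived, for illustration):

```js
// Extending a native prototype: every string in the program
// now has this method, including strings created later.
String.prototype.shout = function () {
  return this.toUpperCase() + '!';
};

console.log('hello web'.shout()); // "HELLO WEB!"

// Libraries like Prototype leaned on exactly this flexibility.
// The catch: the language itself may later ship a method with
// the same name and different behavior.
```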

All of which is to say that Stephenson did two things when he wrote Prototype. The first was to add a few helpers that allowed object-oriented developers like himself to work with JavaScript using a familiar code structure. The second, and far more important here, is that he began to extend existing JavaScript to add features that were planned for some point in the future but not implemented just yet. One good example of this was the function document.getElementsByClassName, a version of a feature that wouldn’t actually land in browsers until around 2008. Prototype let you use it way back in 2006. The library was basically a wish list of features that developers were promised would be implemented by browsers sometime in the future. Prototype gave those developers a head start, and made it much easier to do the simple things they had to do each and every day.

Prototype went through a few iterations in rapid succession, gaining significant steam after it was included by default in all Ruby on Rails installations not long after its release. Along the way, Prototype set the groundwork for basically every framework that would come after it. For instance, it was the first to use the dollar sign ($) as shorthand for selecting objects in JavaScript. It wrote code that was, in its own words, “self-documented,” meaning that documentation was scarce and learning the library meant diving into some code, a practice that is more or less commonplace these days. And perhaps most importantly, it removed the difficulty of making code run on all browsers, a near Herculean task in the days when browsers themselves could agree on very little. Prototype just worked, in every modern-at-the-time browser.
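
Prototype’s $ was, at its core, a short alias for looking elements up by id (the real library also extended what it returned with Prototype’s own methods). A stripped-down sketch of the shorthand:

```js
// A miniature version of Prototype's $ helper: accept one or
// more ids (or elements) and hand back the matching elements.
function $(...args) {
  const elements = args.map((arg) =>
    typeof arg === 'string' ? document.getElementById(arg) : arg
  );
  return elements.length === 1 ? elements[0] : elements;
}

// $('sidebar') instead of document.getElementById('sidebar').
```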

Prototype had its fair share of competition from other libraries, like base2, which took the object-oriented bits of Prototype and spun them off into a more compact version. But the library’s biggest competitor came when John Resig decided to put his own horse in the race. Resig was particularly interested in that last bit, the work-in-all-browsers-with-the-same-code bit. He began working on a different take on that idea in 2005, and eventually unveiled a library of his own at BarCamp in New York in January of 2006.

It was called jQuery.

New Wave JavaScript

jQuery shipped with the tagline “New Wave Javascript,” a curious title given how much Resig borrowed from Prototype. Everything from the syntax to its tools for working with Ajax — even its use of a dollar sign as a selector — came over from Prototype to jQuery. But it wasn’t called New Wave JavaScript because it was original. It was called New Wave JavaScript because it was new.

jQuery: New Wave JavaScript. jQuery is a new type of JavaScript library. jQuery is designed to change the way that you write JavaScript.
“New Wave” Javascript

jQuery’s biggest departure from Prototype and its ilk was that it didn’t extend existing and future JavaScript functionality or object primitives. Instead, it created brand new features, all assembled with a unique API that was built on top of what already existed in JavaScript. Like Prototype, jQuery provided lots of ways to interact with webpages, like selecting and moving around elements, connecting to servers, and making pages feel snappy and dynamic (though it lacked the object-oriented leanings of its predecessor). Crucially, however, all of this was done with new code. New functions, new syntax, new APIs, hence a new wave of development. You had to learn the “jQuery way” of doing things, but once you did, you could save yourself tons of time with all of the stuff jQuery made a lot easier. It even allowed for extendable plugins, meaning other developers could build cool, new stuff on top of it.
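
A small taste of the “jQuery way” (written with later jQuery syntax; the selectors are hypothetical and assume the library is loaded on the page):

```js
// jQuery's chained, CSS-selector-driven API: find elements,
// style them, and wire up behavior in a single expression.
$('.menu a')
  .addClass('highlight')
  .on('click', function (event) {
    event.preventDefault();
    // Ajax, the jQuery way: fetch a page fragment into #content.
    $('#content').load($(this).attr('href'));
  });
```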

MooTools

It might sound small, but that slight paradigm shift was truly massive. A shift that seismic required a response, and a response incidentally came the very next year, in 2007, when Valerio Proietti found himself entirely frustrated with another library altogether. The library was called script.aculo.us, and it helped developers with page transitions and animations. Try as he might, Proietti just couldn’t get script.aculo.us to do what he wanted, so (as many developers in his position had done before) he decided to write his own version. An object-oriented developer himself, he was already a big fan of Prototype, so he based his first version on the library’s foundational principles. He even attempted to coast off its success with his first stab at a name: prototype.lite.js. A few months and many new features later, Proietti transformed that into MooTools.

Like Prototype, MooTools used an object-oriented programming methodology and prototypal inheritance to extend the functionality of core JavaScript. In fact, most of MooTools was simply about adding new functionality to built-in prototypes (i.e., String, Array). Most of what MooTools added was on the JavaScript roadmap for inclusion in browsers at some point in the future; the library was there to fill in the gap in the meantime. The general idea was that once a feature finally did land in browsers, the library would simply be updated to match it. Web designers and developers liked MooTools because it was powerful, easy to use, and made them feel like they were coding in the future.

MooTools is a compact, modular, Object-oriented JavaScript framework designed to make writing extensible and compatible code easier and faster.
MooTools: Object-oriented, developer-friendly
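
The extension pattern MooTools relied on looked roughly like this (a sketch of the pattern, not MooTools’ actual source): define a flatten method directly on Array.prototype that recurses through any level of nesting.

```js
// MooTools-style prototype extension: a flatten that recurses
// all the way down, regardless of nesting depth.
Array.prototype.flatten = function () {
  return this.reduce(
    (flat, item) => flat.concat(Array.isArray(item) ? item.flatten() : item),
    []
  );
};

console.log([1, [2, [3, [4]]]].flatten()); // [1, 2, 3, 4]
```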

There was plenty there from jQuery too. Like jQuery, MooTools smoothed over inconsistencies and bugs in the various browsers on the market, and offered quick and easy ways to add transitions, make server requests, and manipulate webpages dynamically. By the time MooTools was released, the jQuery syntax had more or less become the standard, and MooTools was not going to be the one to break the mold.

There were enough similarities between the two, in fact, for them to be pitted against one another in a near-endless stream of blog posts and think pieces. MooTools vs. jQuery, a question for the ages. Websites sprang up to compare the two. MooTools was a “framework,” jQuery was a “library.” MooTools made coding fun, jQuery made the web fun. MooTools was for Geminis, and jQuery was for Sagittariuses. In truth, both worked very well, and the use of one over the other was mostly a matter of personal preference. This is largely true of many of the most common developer library debates, but they continue on all the same.

The legacy of legacy frameworks

Ultimately, it wasn’t features or code structure that won the day — it was time. One by one, the core contributors of MooTools peeled off from the project to work on other things. By 2010, only a few remained. Development slowed, and the community wasn’t far behind. MooTools continued to be popular, but its momentum had come to a screeching halt.

jQuery’s advantage was simply that it continued on, expanded even. In 2010, when MooTools development began to wind down, jQuery released the first version of jQuery Mobile, an attempt at retooling the library for a mobile world. The jQuery community never quit, and in the end, that gave it the advantage.

The legacy and reach of MooTools, however, is massive. It made its way onto hundreds of thousands of sites and spread all around the world. According to some stats we have, it is still, to this day, more common to see MooTools than Angular or React or Vue or any modern framework on the average website. The code of some sites was updated to keep pace with the far less frequent, but still occasional, updates to MooTools. Other sites, to this day, are comfortable with whatever version of MooTools they have installed. Most simply haven’t updated their sites at all. When those sites were built, MooTools was the best option available, and now, years later, the same version remains.

Array.flatten

Which brings us all the way back to the bug in the weather app that popped up in Firefox in early 2018. Remember, MooTools was modeled after Prototype. It modified native JavaScript prototype objects to add some functions planned but not yet released. In this specific case, it was a method called Array.flatten, a function that MooTools first added to their library way back in 2008 for modifying arrays. Fast-forward 10 years, and the JavaScript working group finally got around to implementing their own version of Array.flatten, starting with the nightly builds of Firefox.

The problem was that Firefox’s Array.flatten didn’t map directly to the MooTools version of Array.flatten.

The details aren’t all that important (though you can read more about it here). Far more critical was the uncomfortable implication. The MooTools version, as it stood, broke when it collided with the new JavaScript version. That’s what broke the weather widget. If Firefox were to release their browser to the larger public, then the MooTools version of flatten would throw an error, and wipe out any JavaScript that depended on it. No one could say how many sites might be affected by the conflict, but given the popularity of MooTools, it wasn’t at all out of the question to think that the damage could be extensive.
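
The mismatch is easy to see side by side. MooTools’ flatten recursed to any depth (see the sketch above), while the proposed native method, which eventually shipped as Array.prototype.flat, flattens one level by default:

```js
const nested = [1, [2, [3, [4]]]];

// MooTools-style flatten: fully recursive.
nested.flatten(); // [1, 2, 3, 4]

// The native method as eventually shipped: one level by default.
nested.flat();    // [1, 2, [3, [4]]]

// Code written against MooTools' behavior would suddenly see
// partially flattened arrays, and break, if the native version won.
```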

Once the bug surfaced, hurried discussion took place in the JavaScript community, much of it in the JavaScript working group’s public GitHub repo. A few solutions soon emerged. The first was to simply release the new version of flatten. Essentially, to let the old sites break. There was, it was argued, a simple elegance to the proposal, backed fundamentally by the idea that it is the responsibility of browsers to push the web forward. Breaking sites would force site owners to upgrade, and we could finally rid ourselves of the old and outdated MooTools versions.

Others quickly jumped in to point out that the web is near limitless and that it is impossible to track which sites may be affected. A good number of those sites probably hadn’t been updated in years. Some may have been abandoned. Others might not have the resources to upgrade. Should we leave these sites to rot? The safe, forgiving approach would be to retool the function to be either backward or fully compatible with MooTools. Upon release, nothing would break, even if the final implementation of Array.flatten was less than ideal.

Somewhere down the middle, a final proposition suggested the best course of action may simply be to rename the function entirely, essentially sidestepping the issue altogether and avoiding the need for the two implementations to play nice at all.

One developer suggested that the name Array.smoosh be used instead, which eventually led to the whole incident being labeled Smooshgate. That was unfortunate, because it glossed over a much more interesting debate lurking just under the surface about the very soul of the web. It exposed an essential question about the responsibility of browser makers and developers to provide an accessible, open, and forgiving experience for each and every user of the web and each and every builder of the web, even when (maybe especially when) the standards of the web are completely ignored. Put simply, the question was: should we ever break the web?

To be clear, the web is a ubiquitous and rapidly developing medium originally built for sharing text and links and little else, but now used by billions of people each day in every facet of their lives to do truly extraordinary things. It will, on occasion, break all on its own. But, when a situation arises that is in full view and, ultimately, preventable, is the proper course of action to try and pull the web forward or to ensure that the web in its current form continues to function even as technology advances?

This only leads to more questions. Who should be responsible for making these decisions? Should every library be actively maintained in some way, ad infinitum, even when best practices shift to anti-patterns? What is our obligation, as developers, for sites we know have been abandoned? And, most importantly, how can we best serve the many different users of the web while still giving developers new programmatic tools? These are the same questions that we continue to return to, and they have been at the core of discussions like progressive enhancement, responsive design and accessibility.

Where do we go now?

It is impossible to answer all of these questions simply. They can, however, be framed by the ideological project of the web itself. The web was built to be open, both technologically as a decentralized network, and philosophically as a democratizing medium. These questions are tricky because the web belongs to no one, yet was built for everyone. Maintaining that spirit takes a lot of work, and requires sometimes slow, but always deliberate decisions about the trajectory of web technologies. We should be careful to consider the mountains of legacy code and libraries that will likely remain on the web for its entire existence. Not just because they are often built with the best of intentions, but because many have been woven into the fabric of the web. If we pull on any one thread too hard, we risk unraveling the whole thing.

As the JavaScript working group progressed towards a fix, many of these questions bubbled up in one form or another. In the end, the solution was a compromise. Array.flatten was renamed to Array.flat, and is now active in most modern browser releases. It is hard to say if this was absolutely the best decision, and certainly, we won’t always get things right. But if we remember the foundational ideals of the web — that it was built as an accessible, inclusive and always shifting medium, and use that as a guide — then it can help our decision-making process. This seems to have been at the core of the case with Smooshgate.
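For reference, the renamed method behaves like this in browsers that support it:

const nested = [1, [2, [3, [4]]]];

nested.flat();         // [1, 2, [3, [4]]] (depth defaults to 1)
nested.flat(2);        // [1, 2, 3, [4]]
nested.flat(Infinity); // [1, 2, 3, 4]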

Someday, you may be browsing the web and come across an old site that hasn’t been updated in years. At the top, you may even notice a widget that tells you what the weather is. And it will keep on working because JavaScript decided to bend rather than break.

Enjoy learning about web history with stories just like this? Jay Hoffmann is telling the full story of the web, all the way from the beginning, over on The History of the Web. Sign up for his newsletter to catch up on the latest… of what’s past!


Yet Another JavaScript Framework originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
https://css-tricks.com/yet-another-javascript-framework/feed/ 13 285011
A historical look at lowercase defaultstatus https://css-tricks.com/a-historical-look-at-lowercase-defaultstatus/ Mon, 01 Apr 2019 14:19:30 +0000 http://css-tricks.com/?p=285440 Browsers, thank heavens, take backward compatibility seriously.

Ancient websites generally work just fine on modern browsers. There is a way higher chance that a website is broken because of problems with hosting, missing or altered assets, or server changes than …


A historical look at lowercase defaultstatus originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
Browsers, thank heavens, take backward compatibility seriously.

Ancient websites generally work just fine on modern browsers. There is a way higher chance that a website is broken because of problems with hosting, missing or altered assets, or server changes than there is because of changes in how browsers deal with HTML, CSS, JavaScript, and other native web technologies.

In recent memory, #SmooshGate was all about a new JavaScript feature that conflicted with a once-popular JavaScript library. Short story: JavaScript has a proposal for Array.prototype.flatten but, in a twist of fate, it would have broken MooTools’ Elements.prototype.flatten if it shipped, so it had to be renamed for the health of the web.

That was the web dealing with a third party, but sometimes the web has to deal with itself: old APIs and names-of-things need to continue working even when they feel old and irrelevant. That work is, surprise surprise, done by caring humans.

Mike Taylor is one such human! The post I’m linking to here is just one example of this kind of bizarre history that needs to be tended to.

If Chrome were to remove defaultstatus the code using it as intended wouldn’t break—a new global would be set, but that’s not a huge deal. I guess the big risk is breaking UA sniffing and ended up in an unanticipated code-path, or worse, opting users into some kind of “your undetected browser isn’t supported, download Netscape 2” scenario.
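To make that concrete, here is a rough sketch of both patterns (my own illustration, not code from Mike’s post):

// Using the property "as intended": in browsers that still support it,
// this sets the default status bar text. If support were removed, the
// assignment wouldn't throw; it would just create an ordinary global
// property, which is why that case is low-risk.
window.defaultstatus = 'hello from the 90s';

// The riskier pattern: treating the property's existence as a browser
// fingerprint. Once support is removed, the check quietly routes users
// down an unanticipated code path.
if ('defaultstatus' in window) {
  // ...assume a particular browser and branch accordingly
}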

If you’re into this kind of long term web API maintenance stuff, that’s the whole vibe of Mike’s blog, and something tells me it will stick around for a hot while.

To Shared LinkPermalink on CSS-Tricks


A historical look at lowercase defaultstatus originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
285440
WorldWideWeb https://css-tricks.com/worldwideweb/ Wed, 27 Feb 2019 20:12:07 +0000 http://css-tricks.com/?p=283764 For the 30th anniversary of the web, CERN brought nine web nerds together to recreate the very first web browser — or a working replication of it, anyway, which you use from your web browser, inception style.

Well …


WorldWideWeb originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
For the 30th anniversary of the web, CERN brought nine web nerds together to recreate the very first web browser — or a working replication of it, anyway, which you use from your web browser, inception style.

Well done, Mark Boulton, John Allsopp, Kimberly Blessing, Jeremy Keith, Remy Sharp, Craig Mod, Martin Akolo Chiteri, Angela Ricci, and Brian Suda! I love that it was written in React and the font is an actual replication of what was used back then. What a cool project.

They even open-sourced the code.

You can visit any site with Document > Open from full document reference:

CSS-Tricks ain’t too awful terrible, considering the strictness:

When this source code was written, there was no version number associated with HTML. Based on the source code, the following tags—the term ‘element’ was not yet used—were recognized:

  • ADDRESS
  • A
  • DL, DT, DD
  • H1, H2, H3, H4, H5, H6
  • HP1, HP2, HP3
  • ISINDEX
  • LI
  • LISTING
  • NEXTID
  • NODE
  • OL
  • PLAINTEXT
  • PRE
  • RESTOFFILE
  • TITLE
  • UL
  • XMP

Unrecognized tags were considered junk and ignored.

To Shared LinkPermalink on CSS-Tricks


WorldWideWeb originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
283764
The Ecological Impact of Browser Diversity https://css-tricks.com/the-ecological-impact-of-browser-diversity/ https://css-tricks.com/the-ecological-impact-of-browser-diversity/#comments Thu, 30 Aug 2018 14:01:10 +0000 http://css-tricks.com/?p=275463 Early in my career when I worked at agencies and later at Microsoft on Edge, I heard the same lament over and over: “Argh, why doesn’t Edge just run on Blink? Then I would have access to ALL THE APIs


The Ecological Impact of Browser Diversity originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
Early in my career when I worked at agencies and later at Microsoft on Edge, I heard the same lament over and over: “Argh, why doesn’t Edge just run on Blink? Then I would have access to ALL THE APIs I want to use and would only have to test in one browser!”

Let me be clear: an Internet that runs only on Chrome’s engine, Blink, and its offspring, is not the paradise we like to imagine it to be.

As a Google Developer Expert who has worked on Microsoft Edge, with Firefox, and with the W3C as an Invited Expert, I have some opinions (and a number of facts) to drop on this topic. Let’s get to it.

What is a browser, even?

Let’s clear up some terminology.

Popular browsers you know today include Google Chrome, Apple Safari, Mozilla Firefox, and Microsoft Edge, but in the past we’ve also had such greats as NCSA Mosaic and Netscape Navigator. Whichever browser you use daily (I use Firefox, thanks for asking) is only an interface layer wrapped around a browser engine. All your bookmarks, the forward and backward arrows, that URL bar thingy—those aren’t the browser. Those are the browser’s interface. Often the people who build the browser’s engine never even touch the interface!

Browser engines are the things that actually read all the HTML and CSS and JavaScript that come down across the Internet, interpret them, and display a pretty picture of a web page for you. They have their own names. Chrome’s engine is Blink. Safari runs on WebKit. Firefox uses Gecko. Edge sits on top of EdgeHTML. (I like this naming convention. Good job, Edge.)

Except for Edge, all of these engines are open source, meaning anybody could grab one, wrap it in a new interface, and boom, release their own browser—maybe with a different (perhaps better) user experience—and that’s just what some browsers are! Apple only allows WebKit-based browsers in its iOS app store, so the Chrome, Firefox, and even Edge browsers on iPads and iPhones work more like Safari than their desktop counterparts. Oculus Browser, Brave, Vivaldi, Samsung Internet, Amazon’s Silk, and Opera all run on Blink. We call them “Chromium-based browsers”—Chromium is Google’s open source project from which Chrome and its engine emerged.

But what’s inside a browser engine? MOAR ENGINES! Every browser engine is composed of several other engines:

  • A layout and rendering engine (often so tightly coupled that there’s no distinction) that calculates how the page should look and handles any paints, renders, and even animations.
  • A JavaScript engine, which is its own thing and can even run independently of the browser altogether. For instance, you can use Chrome’s V8 engine or Microsoft Edge’s Chakra to run Node on a server.

I like to compare browser engines to biological cells. Where a cell contains many organelles to perform different functions, so too do browsers. One can think of the nucleus as the rendering engine, containing the blueprints for how the page should display, and the mitochondria as the JavaScript engine, powering our everyday interactions. (Fun fact: mitochondria used to be standalone cells at one point, too, and they even carry their own DNA!)

And you know what? Another way browsers are like living things is that they evolve.

Browser evolution

Back when the first browsers came out, it was a simpler time. CSS was considered the hot new thing when it first appeared in Microsoft Internet Explorer 3 in 1996! There were far fewer JavaScript APIs and CSS specifications to implement than there are today. Over the years, browser codebases have grown to support the number of new features users and developers require to build modern web experiences. It is a delicate evolution between user needs, browser engineering effort, and specification standardization processes.

We have three major lineages of engines right now:

  • WebKit and Blink (Blink originally being a fork of WebKit) running Safari, Chrome, and Opera
  • Gecko running Firefox
  • EdgeHTML (a fork of Trident, aka MSHTML) running Microsoft Edge

Each is very different and has different strengths and weaknesses. Each could pull the Web in a different direction alone: Firefox’s engine has Servo’s multithreaded processing for rendering blazing fast graphics. Edge’s engine has the least abstraction from the operating system, giving it more direct access to system resources—but making it a Windows-only browser engine as a result. And Chrome’s Blink has the most web developers testing for it. (I’ll get back to why this is a “feature” in a little bit.)

Remember all those Chromium-based browsers we talked about? Well, none of these browsers have to build their rendering engine or JavaScript engines from scratch: they just piggy-back off of Blink. And if there are new features they need? They can develop those features and keep them to themselves, or they can share those features back “upstream” to become a part of the core engine for other browsers to use. (This process is often fraught with politics and logistics—”contributing back” is easier said than done!)

It is hard to imagine any one entity justifying the hours and expense it would take to spin up a browser engine from scratch today. Even the current three engine families are evolutions of engines that were there from the very start of the Internet. They’ve evolved piecemeal alongside us, growing to meet our needs.

Right now, the vast majority of traffic on the Web is happening on Chrome, iOS Safari, or some other permutation of Blink or WebKit.

Branches, renovations, and gut jobs

Some developers would say that WebKit and Blink were forked so long ago that the two are practically completely different browser engines now, no longer able to share contributions. That may be true to a point. But while a common chimney swift and a ruby-throated hummingbird are completely different animals in comparison to each other, when compared to the other animal families, they are still very much birds. Neither’s immediate offspring are likely to manifest teeth, hands, or tails in the near future. Neither WebKit nor Blink has the processing features that Gecko and EdgeHTML have been building up to for years.

Other developers might point out that Microsoft Edge is supposedly a “complete rewrite” of Internet Explorer. But the difference between a “complete gut job” and a mere “renovation” can be a matter of perspective. EdgeHTML is a fork of Internet Explorer’s Trident engine, and it still carries much of Trident’s backlog with it.

Browser extinction

So these are the three browser engines we have: WebKit/Blink, Gecko, and EdgeHTML. We are unlikely to get any brand new bloodlines in the foreseeable future. This is it.

If we lose one of those browser engines, we lose its lineage, every permutation of that engine that would follow, and the unique takes on the Web it could allow for.

And it’s not likely to be replaced.

Imagine a planet populated only by hummingbirds, dolphins, and horses. Say all the dolphins died out. In the far, far future, hummingbirds or horses could evolve into something that could swim in the ocean like a dolphin. Indeed, ichthyosaurs in the era of dinosaurs looked much like dolphins. But that creature would be very different from a true dolphin: even ichthyosaurs never developed echolocation. We would wait a very long time (possibly forever) for a bloodline to evolve the traits we already have present in other bloodlines today. So, why is it ok to stand by or even encourage the extinction of one of these valuable, unique lineages?

We have already lost one.

We used to have four major rendering engines, but Opera halted development of its own rendering engine Presto before adopting Blink.

Three left. Spend them wisely.

By our powers combined…

Some folks feel that if a browser like Microsoft Edge ran on Blink, then Microsoft’s engineers could help build a better Blink, contributing new features upstream for all other Chromium browsers to benefit from. This sounds sensible, right?

But remember Blink was forked from WebKit. Now there are WebKit contributors and Blink contributors, and their contributions don’t port one to one. It’s not unlikely that a company like Microsoft would, much like Samsung and Oculus, want to do things with the engine differently from Google. And if those differences are not aligned, the company will work on a parallel codebase and not contribute that work upstream. We end up with a WebKit and a Blink—but without the deep differentiation that comes from having a codebase that has been growing with the Internet for decades.

It’s a nice idea in theory. But in practice, we end up in a similar pickle—and with less “genetic variety” in our ecosystem of browser engines.

Competition is about growing, not “winning”

I’m a fan of competition. Competing with other cartoonists to make better comics and reach a larger audience got me to where I am today. (True story: got my start building websites as a cartoonist building her own community site, newsletter, and shopping cart.) I like to remind people that competition isn’t about annihilating your competitors. If you did, you’d stagnate and lose your audience: see also Internet Explorer 6.

Internet Explorer 6 was an amazing browser when it came out: performant enough to really deliver on features previous versions of Internet Explorer introduced, like the DOM, data-binding, and asynchronous JavaScript. Its rival, Netscape Navigator, couldn’t compete and crumbled into dust (only to have its engine rewritten from scratch as Gecko—the old code was that bad!—by the Mozilla Foundation and reborn as Firefox later).

Thinking it had Won the Internet, Microsoft turned its attention to other fronts, and Internet Explorer didn’t advance much. This allowed Firefox to get a toehold—by giving users a better experience with features like pop-up blocking and a better UI. (That interface layer really does matter!) Firefox also collaborated with Opera to advance Web standards, which weren’t really a thing in the IE vs. Netscape days. People using and building for the Internet loved it, and Firefox spread like fire (</dad-joke>) through word of mouth and grassroots promotional campaigns.

When the iPhone came, Apple focused on its profitable apps marketplace and cut efforts to support Flash—the Web’s most app-like interaction platform. Apps gave content creators a way to monetize their efforts with something other than an advertising model. Advertising being Google’s bread and butter, the Big G grew concerned as the threat of a walled garden of apps only using the Internet for data plumbing loomed. Microsoft, meanwhile, was preoccupied with building its own mobile OS. So Google did two things: Android and Chrome.

Chrome promised a better, faster browsing experience. It was barebones, but Google even went all out and got famous (in my circles at least) cartoonist Scott McCloud to make a comic explaining the browser’s mission to users. With Chrome’s omnipresence on every operating system and Android phone, its dev tools modeled off Firefox’s beloved Firebug extension, and increasing involvement in specs, Chrome not only shook Internet Explorer out of its slumber, it was damn near threatening to kill off every other browser engine on the planet!

Pruning the great family tree of browsers (or collection of bushes as it were) down to a single branch smacks of fragile monoculture. Monocultures are easily disrupted by environmental and ecological challenges—by market and demographic changes. What happens when the next threat to the Web rears its head, but we don’t have Firefox’s multithreading? Or Microsoft Edge’s system integration? Will we be able to iterate fast enough without them? Or will we look to the Chrome developers to do something and pray that they have not grown stagnant as Google turned, like Microsoft did, to attend to other matters after “winning the Web.”

It is ironic that the browser Google built to keep the Web from losing to the apps model is itself monopolizing web development much the same way Internet Explorer 6 did.

It’s good to be king (of the jungle)

I promised I’d get back to “user base size as a feature.” Having the vast majority of the web development community building and testing for your platform is a major competitive advantage. First off, you’re guaranteed that most sites are going to work perfectly in your browser—you won’t have to spend so much time and effort tapping Fortune 500 sites on the shoulder about this One Weird Bug that is causing all their internal users to switch to a competitor’s browser and never come back. A downward spiral begins that is hard to shake: fewer users lead to less developer testing, which leads to fewer users still.

It also makes it much easier to propose new specifications that serve your parent company’s goals (which may or may not serve the web community’s goals) and have that large community of developers build to your implementation first without having to wait for other browsers to catch up. If a smaller browser proposes a spec that no one notices and you pick it up when you need it, people will remember it as being your effort, continuing to build your mindshare whether intentionally or not.

This creates downward pressure on the competition, which simply doesn’t have or cannot use the same resources the biggest browser team has at its disposal. It’s a brutally efficient method, whether by design or accident.

Is it virtuous? At the individual contributor level, yes. Is it a vicious cycle by which companies have driven their competition to extinction by forcing them to spend limited resources to catch up? Also yes. And having legions of people building for just your platform helps.

All the awesome, well-meaning, genuinely good-hearted people involved—from the Chrome team to web developers—can throw their hands up and legitimately say, “I was just trying to build the Web forward!” while contributing to further a corporate monopoly.

Chrome has the most resources and leads the pack in building the Web forward to the point that we can’t be sure if we’re building the Web we want… or the Web Google wants.

Speculative biology

There was a time when Microsoft bailed out Apple as it was about to sink. This was not because Bill Gates and Steve Jobs were friends—no, Microsoft needed Apple to succeed so there would still be operating system competition. (No business wants to be seen as a monopoly!)

But consider, for a moment, that Apple had died. What would personal computers be like today if we’d only had Linux and Windows left standing? What would mobile computing look like if Apple hadn’t been around to work on the iPhone?

Yes, it’s easier to develop and test in only one browser. I’m sure IT professionals would have loved to only support one kind of machine. But variety creates opportunity for us as developers in the long run. Microsoft saving Apple led to apps, which challenged the Web, which gave us Chrome and the myriad of APIs Google is charging ahead with. If at any point in this chain of events someone had said, “Meh, it’s so much easier if we all use the same thing,” we wouldn’t have the careers—or the world—that we have now.

Develop in more than one browser. Test in more than one browser. Use more than one browser.

You are both consumer and producer. You have a say in how the future plays out.

Update: I updated this post to expand just a little bit on WebKit on iOS, the role of Firefox during the era of IE6, and to explicitly call attention to Servo, which is probably the most exciting thing happening in browsers right now.


The Ecological Impact of Browser Diversity originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
https://css-tricks.com/the-ecological-impact-of-browser-diversity/feed/ 7 275463
Clearfix: A Lesson in Web Development Evolution https://css-tricks.com/clearfix-a-lesson-in-web-development-evolution/ https://css-tricks.com/clearfix-a-lesson-in-web-development-evolution/#comments Tue, 03 Jul 2018 13:15:08 +0000 http://css-tricks.com/?p=272740 The web community has, for the most part, been a spectacularly open place. As such, a lot of the best development techniques happen right out in the open, on blogs and in forums, evolving as they’re passed around and improved. …


Clearfix: A Lesson in Web Development Evolution originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
The web community has, for the most part, been a spectacularly open place. As such, a lot of the best development techniques happen right out in the open, on blogs and in forums, evolving as they’re passed around and improved. I thought it might be fun (and fascinating) to actually follow this creative exchange all the way through. To take a look at a popular CSS trick, the clearfix, and find out exactly how a web design technique comes to be.

The clearfix, for those unaware, is a CSS hack that solves a persistent bug that occurs when two floated elements are stacked next to each other. When elements are aligned this way, the parent container ends up with a height of 0, and it can easily wreak havoc on a layout. All you might be trying to do is position a sidebar to the left of your main content block, but the result would be two elements that overlap and collapse on each other. To complicate things further, the bug is inconsistent across browsers. The clearfix was invented to solve all that.

An early illustration of the issue from Position is Everything
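In modern terms, a minimal reduction of the bug might look like this (hypothetical class names, my own sketch rather than code from the original discussions):

/* Every child of .container is floated... */
.container .sidebar { float: left; width: 30%; }
.container .main    { float: left; width: 70%; }

/* ...so both children are taken out of normal flow and .container
   computes to a height of 0. Any background or border on the container
   collapses, and following content slides up into the floats. That is
   the bug the clearfix exists to solve. */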

But to understand the clearfix, you have to go back even further, to 2004 and a particular technique called the Holly hack.

2004: The Holly Hack and the origin of Clearfix

The Holly hack is named for its creator, Holly Bergevin, a developer and blogger at CommunityMX. The Holly hack combines two different CSS techniques that worked in the days of Netscape Navigator and Internet Explorer (IE) 4 to solve some layout issues. Bergevin noticed that if you apply a height of just 1% to each of the floated elements in the above scenario, the problem would actually fix itself (and only because it activated an entirely different bug) in Internet Explorer for Windows.

Unfortunately, the 1% trick didn’t work on a Mac. For that, Bergevin added a backslash inside of a CSS comment, which, strangely enough, hid individual CSS rules from IE for Mac in the old days. The result was a CSS trick that looked like this:

/* Hides from IE5-mac \*/
* html .buggybox {height: 1%;}
/* End hide from IE5-mac */

Odd, I know, but it only gets more complicated.

That same year, in May, there were a few more browsers to deal with, and not all of them could be patched with one line of CSS. Tony Aslett posted a new thread to his message board, CSS Creator, proposing a new approach. He called the trick a clearfix because it centered around clearing the floated elements to fix the issue.

Aslett’s approach took advantage of what were, at the time, still very new CSS pseudo-selectors (specifically :after) to automatically clear out two floated elements. There was one pretty massive drawback in Aslett’s first version of the clearfix. Having never heard of the Holly Hack, Aslett’s code required a bit of JavaScript to fix issues that were specifically popping up on IE for Mac. In those days, JavaScript was a relatively untested technology, and relying on it in such a fundamental way was less than ideal.

Thankfully, the web is a place of iteration, and one message board user pointed Aslett in the direction of the aforementioned Holly Hack. By sprinkling in Bergevin’s comment hack, Aslett was able to make his code work in just about every browser with no JavaScript at all.

.floatcontainer:after {
  content: "."; 
  display: block; 
  height: 0; 
  clear: both; 
  visibility: hidden;
}

/* Mark Hadley's fix for IE Mac */     
.floatcontainer {
  display: inline-block;
}

/* Hides from IE Mac \*/
* html .floatcontainer {height: 1%;}
.floatcontainer{display:block;}
/* End Hack */

If you want to pick apart an important slice of web history and innovation, the discussion that followed Aslett’s post about clearfix is a great place to start. One by one, people began to experiment with the new technique, testing it in obscure browsers and devices. When they were done, they’d post their results to the message thread, sometimes alongside a helpful tweak.

For example, the user zulaica pointed out that in Mozilla browsers, the bottom border of floated elements had to be explicitly defined. User pepejeria noticed that you could leave out the dot from content, and user co2 tested the clearfix in the very first version of Apple’s Safari browser. Each time, Aslett would update his code a bit until, after more than a few rapid iterations, the clearfix was ready and, thanks to the community, pretty darn bulletproof.

2006: Clearfix gets an update

But browsers kept on advancing. Some of the more quirky bits of the clearfix code worked because of bugs that were built into older browsers. As browsers patched those bugs in new versions, the clearfix stopped working. When IE 7 came out at the end of 2006, a few adjustments to the technique were needed to make it work.

Fortunately, John “Big John” Gallant was maintaining a page on his blog Position is Everything with an up-to-date version of the clearfix. After getting some feedback from his readers, Gallant updated his blog to reflect a few new fixes for IE 7 using a new kind of conditional comment that would work inside of Internet Explorer.

<style type="text/css">

  .clearfix:after {
    content: ".";
    display: block;
    height: 0;
    clear: both;
    visibility: hidden;
  }

</style><!-- main stylesheet ends, CC with new stylesheet below... -->

<!--[if IE]>
<style type="text/css">
  .clearfix {
    zoom: 1;     /* triggers hasLayout */
  }  /* Only IE can see inside the conditional comment
    and read this CSS rule. Don't ever use a normal HTML
    comment inside the CC or it will close prematurely. */
</style>
<![endif]-->

And once again, users took to their own browsers to test out the latest code to ensure it worked everywhere. And once again, for a time, it did.

2010: Clearfix Reloaded

In fact, this iteration of the clearfix would last until about 2010, when the team at the Yahoo! User Interface Library (YUI) noticed a few issues with the clearfix. The Holly Hack, for instance, was now long gone (IE 5 was but a distant memory), and after switching the box model, margins were handled a bit differently by modern browsers.

But the folks at YUI still needed to line up one element next to another. In fact, the need had only increased as designers experimented with more advanced grid layouts. In 2010, there were very few options for grid layout, so clearfix had to work. They eventually came up with a few additional tweaks to the CSS ruleset, most notably by taking advantage of both available pseudo-selectors (:before and :after) to clear out any margins. They posted their new take to their own blog and called it “clearfix reloaded.”

.clearfix:before,
.clearfix:after {
  content: ".";    
  display: block;    
  height: 0;    
  overflow: hidden;        
}
.clearfix:after { clear: both; }
.clearfix { zoom: 1 ;} /* IE < 8 */

2011: The Micro Clearfix

But even when it was published in 2010, clearfix reloaded brought with it some unnecessary baggage from older browsers. The height: 0 trick wasn’t really a requirement anymore and, in fact, the trick was a lot more reliable when display: table was used instead. People began swapping variations on the technique on Twitter and on blogs. About a year after the release of clearfix reloaded, developer Nicolas Gallagher compiled these variations into a much more compact version of the hack, which he appropriately labeled the micro clearfix.

After years of back and forth and slight adjustments, the clearfix now required just four CSS rules:

/*
 * For modern browsers
 * 1. The space content is one way to avoid an Opera bug when the
 *  contenteditable attribute is included anywhere else in the document.
 *  Otherwise it causes space to appear at the top and bottom of elements
 *  that are clearfixed.
 * 2. The use of `table` rather than `block` is only necessary if using
 * `:before` to contain the top-margins of child elements.
 */
.cf:before,
.cf:after {
  content: " "; /* 1 */
  display: table; /* 2 */
}

.cf:after {
  clear: both;
}

/*
 * For IE 6/7 only
 * Include this rule to trigger hasLayout and contain floats.
 */
.cf {
  zoom: 1;
}

The End of Clearfix?

These days, almost 15 years after it was first proposed, the clearfix is losing a bit of relevance. CSS Grid and Flexbox are filling in the gaps for advanced layout on the web. In January of 2017, Rachel Andrew wrote an article for her blog titled “The end of the clearfix hack?” In it, she describes a way to replace the clearfix hack with a single line of code using a new display mode rule known as flow-root.

.container {
  display: flow-root;
}

We are approaching the day when clearfix will no longer be necessary at all.

Even without flow-root, there’s lots of ways to make a grid these days. If you were just starting out on the web, there’d be little reason to even learn about it. That’s a good thing! From the beginning it was always meant as a workaround to make the best of what was available. The irony being that without the dedication and experimentation of the developers who hacked away on the clearfix for so many years, we wouldn’t have the tools today to never have to rely on it again.

Enjoy learning about web history with stories just like this? Jay Hoffmann has a weekly newsletter called The History of the Web you can sign up for here.


Clearfix: A Lesson in Web Development Evolution originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
https://css-tricks.com/clearfix-a-lesson-in-web-development-evolution/feed/ 12 272740
A Short History of WaSP and Why Web Standards Matter https://css-tricks.com/short-history-wasp-web-standards-matter/ https://css-tricks.com/short-history-wasp-web-standards-matter/#comments Wed, 07 Feb 2018 14:10:30 +0000 http://css-tricks.com/?p=266134 In August of 2013, Aaron Gustafson posted to the WaSP blog. He had a bittersweet message for a community that he had helped lead:

Thanks to the hard work of countless WaSP members and supporters (like you), Tim Berners-Lee’s


A Short History of WaSP and Why Web Standards Matter originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
In August of 2013, Aaron Gustafson posted to the WaSP blog. He had a bittersweet message for a community that he had helped lead:

Thanks to the hard work of countless WaSP members and supporters (like you), Tim Berners-Lee’s vision of the web as an open, accessible, and universal community is largely the reality. While there is still work to be done, the sting of the WaSP is no longer necessary. And so it is time for us to close down The Web Standards Project.

If there’s just the slightest hint of wistful regret in Gustafson’s message, it’s because the Web Standards Project changed everything that had become the norm on the web during its 15+ years of service. Through dedication and developer advocacy, they hoisted the web up from a nest of browser incompatibility and meaningless markup to the standardized and feature-rich application platform most of us know today.

I previously covered what it took to bring CSS to the World Wide Web. This is the other side of that story. It was only through the efforts of many volunteers working tirelessly behind the scenes that CSS ever had a chance to become what it is today. They are the reason we have web standards at all.

Introducing Web Standards

Web standards weren’t even a thing in 1998. There were HTML and CSS specifications and drafts of recommendations that were managed by the W3C, but they had spotty and uneven browser support which made them little more than words on a page. At the time, web designers stood at the precipice of what would soon be known as the Browser Wars, where Netscape and Microsoft raced to implement exclusive features and add-ons in an escalating fight for market share. Rather than stick to any official specification, these browsers forced designers to support either Netscape Navigator or Internet Explorer. And designers were definitely not happy about it.

Supporting both browsers and their competing feature implementations was possible, but it was also difficult and unreliable, like building a house on sand. To help each other along, many developers began joining mailing lists to swap tips and hacks for dealing with sites that needed to look good no matter where it was rendered.

From these mailing lists, a group began to form around an entirely new idea. The problem, this new group realized, wasn’t with the code, but with the browsers that refused to adhere to the codified, open specifications passed down by the W3C. Browsers touted new presentational HTML elements like the <blink> tag, but they were proprietary and provided no layout options. What the web needed was browsers that could follow the standards of the web.

The group decided they needed to step up and push browsers in the right direction. They called themselves the Web Standards Project. And, since the process would require a bit of a sting, they went by WaSP for short.

Launching the Web Standards Project

In August of 1998, WaSP announced their mission to the public on a brand new website: to “support these core standards and encourage browser makers to do the same, thereby ensuring simple, affordable access to Web technologies for all.” Within a few hours, 450 people joined WaSP. In a few months, that number would jump to thousands.

WaSP took what was basically a two-pronged approach. The first was in public, tapping into the groundswell of developer support they had gathered to lobby for better standards support in browsers. Using grassroots tactics and targeted outreach, WaSP would often send its members on “missions,” such as emailing browser makers to explain, in great detail, their troubles working with a lack of consistent web standards support.

They also published scathing reports that put browsers on blast, highlighting all the ways that Netscape or Internet Explorer failed to add necessary support, even going so far as to encourage users to use alternative browsers. It was in these reports that the project truly lived up to its acronym. One needs to look no further than a quote from WaSP’s savage takedown of Internet Explorer as an example of its ability to sting:

Quit before the job’s done, and the flamethrower’s the only answer. Because that’s our job. We speak for thousands of Web developers, and through them, millions of Web users.

The second prong of WaSP’s approach included privately reaching out to passionate developers on browser teams. The problem, for big companies like Netscape and Microsoft, wasn’t that engineers were against web standards. Quite the contrary, actually. Many browser engineers believed deeply in WaSP’s mission but were resisted by perceived business interests and red-tape bureaucracy time and time again. As a result, WaSP would often work with browser developers to find the best path forward and advocate on their behalf to the higher-ups when necessary.

Holding it All Together

To help WaSP navigate its way through its missions, reports, and outreach, a Steering Committee was formed. This committee helped set the project’s goals and reached out to the community to gather support. They were the heralds of a better day soon to come, and more than a few influential members would pass through their ranks before the project was over, including Rachel Cox, Tim Bray, Steve Champeon, Glenn Davis, Glenda Sims, Todd Fahrner, Molly Holzschlag, and Aaron Gustafson, among many, many others.

At the top of it all was a project lead who set the tone for the group and gave developers a unified voice. The position was initially held by George Olsen, one of the founders of the project, but was soon picked up by another founding member: Jeffrey Zeldman.

A network of loosely connected satellite groups orbiting around the Steering Committee helped developers and browsers alike understand the importance of web standards. There was, for instance, an Accessibility group that bridged the W3C with browser makers to ensure the web was open and accessible to everyone. Then there was the CSS Samurai, who published reports about CSS support (or, more commonly, lack thereof) in different browsers. They were the ones that devised the Box Acid test and offered guidance to browsers as they worked to expand CSS support. Todd Fahrner, who helped save CSS with doctype switching, counted himself among the CSS Samurai.

Making an Impact

WaSP was huge and growing all the time. Its members were passionate and, little by little, clusters of the community came together to enact change. And that is exactly what happened.

The changes felt kind of small at first, but soon they bordered on massive. When Netscape was kicking around the idea of a new rendering engine named Gecko that would include much better standards support across the board, its initial timeline put the release months away. But the WaSP swarmed, emailing and reaching out to Netscape to put pressure on them to release Gecko sooner. It worked and, by the next release, Gecko (and better web standards) shipped.

Tantek Çelik was another member of WaSP. The community inspired him to take a stand on web standards at his day job as lead developer of Internet Explorer for Mac. It was through the encouragement and support of WaSP that he and his team released version 5 with full CSS Level 1 support.

Internet Explorer 5 for Mac was released with full CSS Level 1 support

In August of 2001, after years of public reports, private outreach, and developer advocacy, the WaSP sting provoked seismic change in Internet Explorer, as version 6 shipped with CSS Level 1 support and the latest HTML features. The upgrades were due in no small part to the Web Standards Project and its work with dedicated members of the browser team. It appeared that standards were beginning to actually win out. The WaSP’s mission may have even been over.

But instead of calling it quits, they shifted tactics a bit.

Teaching Standards to a New Generation

In the early 2000s, WaSP would radically change its approach to education and developer outreach.

They started with the launch of the Browser Upgrade Campaign, which educated users who were coming online for the very first time and knew absolutely nothing about web standards and modern browsers. Site owners were encouraged to add some JavaScript and a banner to their sites to target these users. As a result, those surfing to a site on older versions of standards-compliant browsers, like Firefox or Opera, were greeted by a banner simply directing them to upgrade. Users visiting the site on a really old browser, like pre-IE5 or Netscape 5, would be redirected to an entirely new page explaining why upgrading to a modern browser with standards support was in their best interest.

A page from the Browser Upgrade Campaign

WaSP was going to bring the web up to speed, even if they had to do it one person at a time. Perhaps no one articulated this sentiment better than Molly Holzschlag when she wrote “Raise Your Standards” in February 2002. In the article, she broke down what web standards are and what they meant for developers and designers. She celebrated the work that had been done by browsers and the community working to make web standards a thing in the first place.

But, she argued, the web was far from done. It was now time for developers to step up to the plate and assume the responsibility for standards themselves by coding it into all of their sites. She wrote:

The Consortium is fraught with its own internal issues, and its actions—while almost always in the best interests of professional Web authors—are occasionally politicized.

Therefore, as Web authors, we’re personally responsible for making implementation decisions within the framework of a site’s markup needs. It’s our job to administer recommendations to the best of our abilities.

This, however, would not be easy. It would once again require the combined efforts of WaSP members to pull together and teach the web a new way to code. Some began publishing tutorials to their personal blogs or on A List Apart. Others created a standards-based online curriculum for web developers who were new to the field. A few members even formed brand-new task forces to work with popular software tools, like Adobe Dreamweaver, and ensure that standards were supported there as well.

The redesigns of ESPN and Wired, which stood as a testament and example for standards-based designs for years to come, were undertaken in part because members of those teams were inspired by the work that WaSP was doing. They would not have been able to take those crucial first steps if not for the examples and tutorials made freely available to them by gracious WaSP members.

That is why web standards are basically second nature to many web developers today. It’s also why we have such a free spirit of creative exchange in our industry. It all started when WaSP decided to share the correct way of doing things right out in the open.

Looking Past Web Standards

It was this openness that carried WaSP into the 2010s. When Holzschlag took over as lead, she advocated for transparency and collaboration between browser makers and the web community. The work of the WaSP, Holzschlag realized, no longer required outside pressure; it could be done from within. For example, she made inroads at Microsoft to help make web standards a top priority on their browser team.

With each subsequent release, browsers began to catch up to the latest standards from the W3C. Browsers like Opera and Firefox actually competed on supporting the latest standards. Google Chrome used web standards as a selling point when it was initially released around the same time. The decade-and-a-half of work by WaSP was paying off. Browser makers were listening to the W3C and the web community, even going so far as to experiment with new standards before they were officially published for recommendation.

In 2013, WaSP posted its farewell announcement and closed up shop for good. It was a difficult decision for those who had fought long and hard for a better, more accessible and more open web, but it was necessary. There are still a number of battlegrounds for the open web but, thanks to the efforts of WaSP, the one for web standards has been won.

Enjoy learning about web history? Jay Hoffmann has a weekly newsletter called The History of the Web you can sign up for here.


A Short History of WaSP and Why Web Standards Matter originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
https://css-tricks.com/short-history-wasp-web-standards-matter/feed/ 3 266134
A Look Back at the History of CSS https://css-tricks.com/look-back-history-css/ https://css-tricks.com/look-back-history-css/#comments Wed, 18 Oct 2017 17:00:55 +0000 http://css-tricks.com/?p=261090 When you think of HTML and CSS, you probably imagine them as a package deal. But for years after Tim Berners-Lee first created the World Wide Web in 1989, there was no such thing as CSS. The original plan for …


A Look Back at the History of CSS originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
When you think of HTML and CSS, you probably imagine them as a package deal. But for years after Tim Berners-Lee first created the World Wide Web in 1989, there was no such thing as CSS. The original plan for the web offered no way to style a website at all.

There’s a now-infamous post buried in the archives of the WWW mailing list. It was written by Marc Andreessen in 1994, who would go on to co-create both the Mosaic and Netscape browsers. In the post, Andreessen remarked that because there was no way to style a website with HTML, the only thing he could tell web developers when asked about visual design was, “sorry you’re screwed.”

10 years later, CSS was on its way to full adoption by a newly enthused web community. What happened along the way?

Finding a Styling Language

There were plenty of ideas for how the web could theoretically be laid out. However, it just was not a priority for Berners-Lee because his employers at CERN were mostly interested in the web as a digital directory of employees. Instead, we got a few competing languages for web page layout from developers across the community, most notably from Pei-Yuan Wei, Andreessen, and Håkon Wium Lie.

Take Pei-Yuan Wei, who created the graphical ViolaWWW Browser in 1991. He incorporated his own stylesheet language right into his browser, with the eventual goal of turning this language into an official standard for the web. It never quite got there, but it did provide some much-needed inspiration for other potential specifications.

ViolaWWW upon release

In the meantime, Andreessen had taken a different approach in his own browser, Netscape Navigator. Instead of creating a decoupled language devoted to a website’s style, he simply extended HTML to include presentational, unstandardized HTML tags that could be used to design web pages. Unfortunately, it wasn’t long before web pages lost all semantic value and looked like this:

<MULTICOL COLS="3" GUTTER="25">
  <P><FONT SIZE="4" COLOR="RED">This would be some font broken up into columns</FONT></P>
</MULTICOL>

Programmers were quick to realize that this kind of approach wouldn’t last long. There were plenty of ideas for alternatives, like RRP, a stylesheet language that favored abbreviation and brevity, or PSL96, a language that actually allowed for functions and conditional statements. If you’re interested in what these languages looked like, Zach Bloom wrote an excellent deep dive into several competing proposals.

But the idea that grabbed everyone’s attention was first proposed by Håkon Wium Lie in October of 1994. It was called Cascading Style Sheets, or just CSS.

Why We Use CSS

CSS stood out because it was simple, especially compared to some of its earliest competitors.

window.margin.left = 2cm
font.family = times
h1.font.size = 24pt 30%

CSS is a declarative programming language. When we write CSS, we don’t tell the browser exactly how to render a page. Instead, we describe the rules for our HTML document one by one and let browsers handle the rendering. Keep in mind that the web was mostly being built by amateur programmers and ambitious hobbyists. CSS followed a predictable and perhaps more importantly, forgiving format and just about anyone could pick it up. That’s a feature, not a bug.

CSS was, however, unique in a singular way. It allowed for styles to cascade. It’s right there in the name. Cascading Style Sheets. The cascade means that styles can inherit and overwrite other styles that had previously been declared, following a fairly complicated hierarchy known as specificity. The breakthrough, though, was that it allowed for multiple stylesheets on the same page.

See that percentage value above? That’s actually a pretty important bit. Lie believed that both users and designers would define styles in separate stylesheets. The browser, then, could act as a sort of mediator between the two, and negotiate the differences to render a page. That percentage represents just how much ownership this stylesheet is taking for a property. The less ownership, the more likely it was to be overwritten by users. When Lie first demoed CSS, he even showed off a slider that allowed him to toggle between user-defined and developer-defined styles in the browser.

This was actually a pretty big debate in the early days of CSS. Some believed that developers should have complete control. Others that the user should be in control. In the end, the percentages were removed in favor of more clearly defined rules about which CSS definitions would overwrite others. Anyway, that’s why we have specificity.
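A quick sketch of how specificity plays out today (my own example, not from Lie’s proposal):

/* Both rules match the same paragraph. The second selector is more
   specific (a class plus an element), so it wins regardless of the
   order the rules appear in. */
p { color: black; }
.intro p { color: green; }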

Shortly after Lie published his original proposal, he found a partner in Bert Bos. Bos had created the Argo browser and, in the process, his own stylesheet language, pieces of which eventually made their way into CSS. The two began working out a more detailed specification, eventually turning to the newly created HTML working group at the W3C for help.

It took a few years, but by the end of 1996, the above example had changed.

html {
  margin-left: 2cm;
  font-family: "Times", serif;
}

h1 {
  font-size: 24px;
}

CSS had arrived.

The Trouble with Browsers

While CSS was still just a draft, Netscape had pressed on with presentational HTML elements like multicol, layer, and the dreaded blink tag. Internet Explorer, on the other hand, had taken to incorporating some of CSS piecemeal. But their support was spotty and, at times, incorrect. Which means that by the early aughts, after five years of CSS as an official recommendation, there were still no browsers with full CSS support.

That came from kind of a strange place.

When Tantek Çelik joined Internet Explorer for Macintosh in 1997, his team was pretty small. A year later, he was made the lead developer of the rendering engine at the same time as his team was cut in half. Most of the focus for Microsoft (for obvious reasons) was on the Windows version of Internet Explorer, and the Macintosh team was mostly left to their own devices. So, starting with the development of version 5 in 2000, Çelik and his team decided to put their focus where no one else was: CSS support.

It would take the team two years to finish version 5. During this time, Çelik spoke frequently with members of the W3C and web designers using their browser. As each piece slid into place, the IE-for-Mac team verified on all fronts that they were getting things just right. Finally, in March of 2002, they shipped Internet Explorer 5 for Macintosh. The first browser with full CSS Level 1 support.

Doctype Switching

But remember, the Windows version of Internet Explorer had added CSS to the browser with more than a few bugs and a screwy box model, which describes the way elements are calculated and then rendered. Internet Explorer included properties like border and padding inside the total width and height of an element. But IE5 for Mac, and the official CSS specification, called for these values to be added to the width and height. If you ever played around with box-sizing, you know exactly the difference.
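In modern terms, the two behaviors map onto the box-sizing property. Here’s a rough sketch (my own illustration using today’s syntax):

/* The specification (and IE5/Mac): padding and border are drawn outside
   the width, so this box paints 340px wide in total. */
.spec-box {
  box-sizing: content-box; /* the default */
  width: 300px;
  padding: 10px;
  border: 10px solid;
}

/* Old Internet Explorer for Windows counted padding and border as part
   of the width; box-sizing: border-box reproduces that behavior, and
   this box paints 300px wide in total. */
.old-ie-box {
  box-sizing: border-box;
  width: 300px;
  padding: 10px;
  border: 10px solid;
}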

Çelik knew that in order to make CSS work, these differences would need to be reconciled. His solution came after a conversation with standards-advocate Todd Fahrner. It’s called doctype switching, and it works like this.

We all know doctypes. They go at the very top of our HTML documents.

<!DOCTYPE html>

But in the old days, they looked like this:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd">

That’s an example of a standards-compliant doctype. The //W3C//DTD HTML 4.0//EN is the crucial bit. When a web designer added this to their page, the browser would know to render it in “standards mode,” meaning CSS would behave according to the specification. If the doctype was missing, or an out-of-date one was in use, the browser would switch to “quirks mode” and render things according to the old box model, with old bugs intact. Some designers even intentionally opted to put their site into quirks mode in order to get back the older (and incorrect) box model.
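The result of that switch is still observable today. Browsers expose which mode a page landed in through the standard document.compatMode property:

// "CSS1Compat" means the doctype triggered standards mode;
// "BackCompat" means the page is rendering in quirks mode.
console.log(document.compatMode);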

Eric Meyer, sometimes referred to as the godfather of CSS, has gone on record saying that doctype switching saved CSS. He’s probably right. We would still be using browsers packed with old CSS bugs if it weren’t for that one simple trick.

The Box Model Hack

There was one last thing to figure out. Doctype switching worked fine in modern browsers on older websites, but the box model was still unreliable in older browsers (particularly Internet Explorer) on newer websites. Enter the Box Model Hack, a clever trick from Çelik that took advantage of a little-used CSS property called voice-family to sneak two different widths into the same rule. Çelik instructed authors to put their old box model width first, then close the rule early with a small voice-family hack, followed by their new box model width. Sort of like this:

div.content {
  width: 400px;          /* old box model width, for Internet Explorer */
  voice-family: "\"}\""; /* IE mis-parses the escaped quotes and closes the rule here */
  voice-family: inherit; /* compliant browsers reset voice-family and keep going */
  width: 300px;          /* the standards box model width, which only they will see */
}

Voice-family went unrecognized by older browsers, but they still tried to parse it. Because they didn’t understand the escaped quotes, they read the } inside the string as the end of the rule and never reached that second width. Compliant browsers parsed the string correctly and carried on to apply it. It was simple and effective, and it let a lot of designers start experimenting with new standards quickly.

The Pioneers of Standards-Based Design

Internet Explorer 6 was released in 2001. It would eventually become a major thorn in the side of web developers everywhere, but it actually shipped with some pretty impressive CSS and standards support. Not to mention that its market share hovered around 80%.

The stage was set, the pieces were in place. CSS was ready for production. Now people just needed to use it.

In the 10 years that the web hurtled towards ubiquity without a coherent or standard styling language, it’s not like designers had simply stopped designing. Not at all. Instead, they relied on a backlog of browser hacks, table-based layouts, and embedded Flash files to add some style when HTML couldn’t. Standards-compliant, CSS-based design was new territory. What the web needed was some pioneers to hack a path forward.

What they got was two major redesigns just a few months apart: the first from Wired, followed soon after by ESPN.

Douglas Bowman was in charge of the web design team at Wired magazine. In 2002, Bowman and his team looked around and saw that no major sites were using CSS in their designs. Because the web community looked to Wired for examples of best practices, Bowman felt almost an obligation to redesign the site using the latest standards-compliant HTML and CSS. He pushed his team to tear everything down and redesign it from scratch. In September of 2002, they pulled it off and launched the redesign. The site even validated.

ESPN released its redesign just a few months later, using many of the same techniques on an even larger scale. These sites took a major bet on CSS, a technology that some thought might not even last, and it paid off in a major way. If you pulled aside any of the developers who worked on these redesigns, they would give you a laundry list of major benefits: better performance, faster design changes, easier sharing, and above all, good for the web. Wired even did daily color changes in the beginning.

Dig through the code of these redesigns and you’d be sure to find some hacks. The web still only lived on a few different monitor sizes, so you may notice that both sites used a combination of fixed-width columns and relative and absolute positioning to slot a grid into place. Images were used in place of text. But these sites laid the groundwork for what would come next.
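The general pattern looked something like this, an illustrative sketch of the era’s technique rather than code from either site:

/* A fixed-width page, centered for the common monitor sizes of the day. */
#wrap {
  position: relative;
  width: 770px;
  margin: 0 auto;
}

/* The sidebar is pinned into place with absolute positioning... */
#sidebar {
  position: absolute;
  top: 0;
  left: 0;
  width: 180px;
}

/* ...and the main column is pushed over to make room for it. */
#content {
  margin-left: 200px;
}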

CSS Zen Garden and the Semantic Web

The following year, in 2003, Jeffrey Zeldman published his book Designing with Web Standards, which became a sort of handbook for web designers looking to switch to standards-based design. It kicked off a legacy of CSS techniques and tricks that helped web designers imagine what CSS could do. That same year, Dave Shea launched the CSS Zen Garden, which encouraged designers to take a basic HTML page and lay it out differently using just CSS. The site became a showcase of the latest tips and tricks, and it went a long way towards convincing folks it was time for standards.

Slowly but surely, the momentum built. CSS advanced and added new properties. Browsers actually raced to implement the latest standards, and designers and developers added new tricks to their repertoire. And eventually, CSS became the norm. Like it had been there all along.

Enjoy learning about web history? Jay Hoffmann has a weekly newsletter called The History of the Web you can sign up for here.


A Look Back at the History of CSS originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
https://css-tricks.com/look-back-history-css/feed/ 15 261090
20 Years of CSS https://css-tricks.com/20-years-css/ Sun, 18 Dec 2016 02:12:33 +0000 http://css-tricks.com/?p=249098
Bert Bos, noting today as quite a notable day:

On December 17, 1996, W3C published the first standard for CSS.

Very interesting to see what historic points made the cut for the timeline. The Zen Garden, acid tests, preprocessors… good times!

To Shared LinkPermalink on CSS-Tricks


20 Years of CSS originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
249098
Resilient Web Design https://css-tricks.com/resilient-web-design/ Wed, 14 Dec 2016 12:58:47 +0000 http://css-tricks.com/?p=248980
Jeremy Keith released a new book, for free, on the web only:

This is not a handbook. It’s more like a history book.

To Shared LinkPermalink on CSS-Tricks


Resilient Web Design originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
248980
The Languages Which Almost Became CSS https://css-tricks.com/languages-almost-became-css/ Thu, 30 Jun 2016 13:38:00 +0000 http://css-tricks.com/?p=243267
Today, CSS doesn’t really have any competitors. We might compile another language to get it, or apply it in unusual ways, but ultimately styling happens on the web through CSS.

But before CSS cemented itself, there was RRP, PWP, FOSI, DSSSL, PSL, CHSS, and JSSS.

An awesome journey through web styling history by Zack Bloom.

To Shared LinkPermalink on CSS-Tricks


The Languages Which Almost Became CSS originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
243267