Setting up a screen reader testing environment on your computer


Sara Soueidan provides everything you need, from what screen reading options are out there all the way to setting up virtual machines for them, installing them, and configuring keyboard options. It’s truly a one-stop reference that pulls together disparate tips for getting the most out of your screen reader accessibility testing.

Thanks, Sara, for putting together this guide, and especially doing so while making no judgments or assumptions about what someone may or may not know about accessibility testing. The guide is just one part of Sara’s forthcoming Practical Accessibility course, which is available for pre-order.

Writing Strong Front-end Test Element Locators


Automated front-end tests are awesome. We can write a test with code to visit a page — or load up just a single component — and have that test code click on things or type text like a user would, then make assertions about the state of the application after the interactions. This lets us confirm that everything described in the tests works as expected in the application.

Since this post is about one of the building blocks of any automated UI tests, I don’t assume too much prior knowledge. Feel free to skip the first couple of sections if you’re already familiar with the basics.

Structure of a front-end test

There’s a classic pattern that’s useful to know when writing tests: Arrange, Act, Assert. In front-end tests, this translates to a test file that does the following:

  1. Arrange: Get things ready for the test. Visit a certain page, or mount a certain component with the right props, mock some state, whatever.
  2. Act: Do something to the application. Click a button, fill out a form, etc. Or not; for simple state-checks, we can skip this step.
  3. Assert: Check some stuff. Did submitting a form show a thank you message? Did it send the right data to the back end with a POST?
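
Here’s a minimal sketch of that pattern in a Cypress test. The page URL, selectors, and text are hypothetical, purely for illustration:

it('shows a thank-you message after submitting the contact form', () => {
  // Arrange: visit the page that contains the form
  cy.visit('/contact')
  // Act: interact with it the way a user would
  cy.get('#comment').type('test comment')
  cy.contains('button', 'Send').click()
  // Assert: check the result of the interaction
  cy.contains('Thank you').should('be.visible')
})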

In specifying what to interact with and then later what to check on the page, we can use various element locators to target the parts of the DOM we need to use.

A locator can be something like an element’s ID, the text content of an element, or a CSS selector, like .blog-post or even article > div.container > div > div > p:nth-child(12). Anything about an element that can identify that element to your test runner can be a locator. As you can probably already tell from that last CSS selector, locators come in many varieties.

We often evaluate locators in terms of being brittle or stable. In general, we want the most stable element locators possible so that our test can always find the element it needs, even if the code around the element is changing over time. That said, maximizing stability at all costs can lead to defensive test-writing that actually weakens the tests. We get the most value by having a combination of brittleness and stability that aligns with what we want our tests to care about.

In this way, element locators are like duct tape. They should be really strong in one direction, and tear easily in the other direction. Our tests should hold together and keep passing when unimportant changes are made to the application, but they should readily fail when important changes happen that contradict what we’ve specified in the test.

Beginner’s guide to element locators in front-end testing

First, let’s pretend we are writing instructions for an actual person to do their job. A new gate inspector has just been hired at Gate Inspectors, Inc. You are their boss, and after everybody’s been introduced you are supposed to give them instructions for inspecting their first gate. If you want them to be successful, you probably would not write them a note like this:

Go past the yellow house, keep going ‘til you hit the field where Mike’s mother’s friend’s goat went missing that time, then turn left and tell me if the gate in front of the house across the street from you opens or not.

Those directions are kind of like using a long CSS selector or XPath as a locator. It’s brittle — and it’s the “bad kind of brittle”. If the yellow house gets repainted and you repeat the steps, you can’t find the gate anymore, and might decide to give up (or in this case, the test fails).

Likewise, if you don’t know about Mike’s mother’s friend’s goat situation, you can’t stop at the right reference point to know what gate to check. This is exactly what makes the “bad kind of brittle” bad — the test can break for all kinds of reasons, and none of those reasons have anything to do with the usability of the gate.

So let’s make a different front-end test, one that’s much more stable. After all, legally in this area, all gates on a given road are supposed to have unique serial numbers from the maker:

Go to the gate with serial number 1234 and check if it opens.

This is more like locating an element by its ID. It’s more stable and it’s only one step. All the points of failure from the last test have been removed. This test will only fail if the gate with that ID doesn’t open as expected.

Now, as it turns out, though no two gates should have the same ID on the same road, that’s not actually enforced anywhere. And one day, another gate on the road ends up with the same ID.

So the next time the newly hired gate inspector goes to test “Gate 1234,” they find that other one first, and are now visiting the wrong house and checking the wrong thing. The test might fail, or worse: if that gate works as expected, the test still passes but it’s not testing the intended subject. It provides false confidence. It would keep passing even if our original target gate was stolen in the middle of the night, by gate thieves.

After an incident like this, it’s clear that IDs are not as stable as we thought. So, we do some next-level thinking and decide that, on the inside of the gate, we’d like a special ID just for testing. We’ll send out a tech to put the special ID on just this one gate. The new test instructions look like this:

Go to the gate with Test ID “my-favorite-gate” and check if it opens.

This one is like using the popular data-testid attribute. Attributes like this are great because it is obvious in the code that they are intended for use by automated tests and shouldn’t be changed or removed. As long as the gate has that attribute, you will always find the gate. Just like IDs, uniqueness is still not enforced, but it’s a bit more likely.

This is about as far away from brittle as you can get, and it confirms the functionality of the gate. We don’t depend on anything except the attribute we deliberately added for testing. But there’s a bit of a problem hiding here…

This is a user interface test for the gate, but the locator is something that no user would ever use to find the gate.

It’s a missed opportunity because, in this imaginary county, it turns out gates are required to have the house number printed on them so that people can see the address. So, all gates should have a unique human-facing label, and if they don’t, it’s a problem in and of itself.

When locating the gate with the test ID, if it turns out that the gate has been repainted and the house number covered up, our test would still pass. But the whole point of the gate is for people to access the house. In other words, a working gate that a user can’t find should still be a test failure, and we want a locator that is capable of revealing this problem.

Here’s another pass at this test instruction for the gate inspector on their first day:

Go to the gate for house number 40 and check if it opens.

This one uses a locator that adds value to the test: it depends on something users also depend on, which is the label for the gate. It adds back a potential reason for the test to fail before it reaches the interaction we want to actually test, which might seem bad at first glance. But in this case, because the locator is meaningful from a user’s perspective, we shouldn’t shrug this off as “brittle.” If the gate can’t be found by its label, it doesn’t matter if it opens or not — this is the “good kind of brittle”.

The DOM matters

A lot of front-end testing advice tells us to avoid writing locators that depend on DOM structure. This means that developers can refactor components and pages over time and let the tests confirm that user-facing workflows haven’t broken, without having to update tests to catch up to the new structure. This principle is useful, but I would tweak it a bit to say we ought to avoid writing locators that depend on irrelevant DOM structure in our front-end testing.

For an application to function correctly, the DOM should reflect the nature and structure of the content that’s on the screen at any given time. One reason for this is accessibility. A DOM that’s correct in this sense is much easier for assistive technology to parse properly and describe to users who aren’t seeing the contents rendered by the browser. DOM structure and plain old HTML make a huge difference to the independence of users who rely on assistive technology.

Let’s spin up a front-end test to submit something to the contact form of our app. We’ll use Cypress for this, but the principles of choosing locators strategically apply to all front-end testing frameworks that use the DOM for locating elements. Here we find elements, enter some text, submit the form, and verify the “thank you” state is reached:

// 👎 Not recommended
cy.get('#name').type('Mark')
cy.get('#comment').type('test comment')
cy.get('.submit-btn').click()
cy.get('.thank-you').should('be.visible')

There are all kinds of implicit assertions happening in these four lines. cy.get() is checking that the element exists in the DOM. The test will fail if the element doesn’t exist after a certain time, while actions like type and click verify that the elements are visible, enabled, and unobstructed by something else before taking an action.

So, we get a lot “for free” even in a simple test like this, but we’ve also introduced some dependencies upon things we (and our users) don’t really care about. The specific ID and classes that we are checking seem stable enough, especially compared to selectors like div.main > p:nth-child(3) > span.is-a-button or whatever. Those long selectors are so specific that a minor change to the DOM could cause a test to fail because it can’t find the element, not because the functionality is broken.

But even our short selectors, like #name, come with three problems:

  1. The ID could be changed or removed in the code, causing the element to go overlooked, especially if the form might appear more than once on a page. A unique ID might need to be generated for each instance, and that’s not something we can easily pre-fill into a test.
  2. If there is more than one instance of a form on the page and they have the same ID, we need to decide which form to fill out.
  3. We don’t actually care what the ID is from a user perspective, so all the built-in assertions are kind of… not fully leveraged?

For problems one and two, the recommended solution is often to use dedicated data attributes in our HTML that are added exclusively for testing. This is better because our tests no longer depend on the DOM structure, and as a developer modifies the code around a component, the tests will continue to pass without needing an update, as long as they keep the data-test="name-field" attached to the right input element.
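
As a sketch, the attribute and a locator that targets it could look something like this (the input stands in for our form’s name field):

<!-- 👍 The data attribute rides along with the input, whatever happens to IDs or classes -->
<input data-test="name-field" type="text" />

// the test targets the attribute, not the DOM structure around it
cy.get('[data-test="name-field"]').type('Mark')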

This doesn’t address problem three though — we still have a front-end interaction test that depends on something that is meaningless to the user.

Meaningful locators for interactive elements

Element locators are meaningful when they depend on something we actually want to depend on because something about the locator is important to the user experience. In the case of interactive elements, I would argue that the best selector to use is the element’s accessible name. Something like this is ideal:

// 👍 Recommended
cy.getByLabelText('Name').type('Mark')

This example uses the byLabelText helper from Cypress Testing Library. (In fact, if you are using Testing Library in any form, it is probably already helping you write accessible locators like this.)

This is useful because now the built-in checks (that we get for free through the cy.type() command) include the accessibility of the form field. All interactive elements should have an accessible name that is exposed to assistive technology. This is how, for example, a screenreader user would know what the form field they are typing into is called in order to enter the needed information.

The way to provide this accessible name for a form field is usually through a label element associated with the field by an ID. The getByLabelText command from Cypress Testing Library verifies that the field is labeled appropriately, but also that the field itself is an element that’s allowed to have a label. So, for example, the following HTML would correctly fail before the type() command is attempted, because even though a label is present, labeling a div is invalid HTML:

<!-- 👎 Not recommended  -->
<label for="my-custom-input">Editable DIV element:</label>
<div id="my-custom-input" contenteditable="true" />

Because this is invalid HTML, screenreader software could never associate this label with this field correctly. To fix this, we would update the markup to use a real input element:

<!-- 👍 Recommended -->
<label for="my-real-input">Real input:</label>
<input id="my-real-input" type="text" />

This is awesome. Now if the test fails at this point after edits made to the DOM, it’s not because of an irrelevant structural change to how elements are arranged, but because our edits broke a part of the DOM that our front-end tests explicitly care about, and that would actually matter to users.

Meaningful locators for non-interactive elements

For non-interactive elements, we should put on our thinking caps. Let’s use a little bit of judgement before falling back on the data-cy or data-test attributes that will always be there for us if the DOM doesn’t matter at all.

Before we dip into the generic locators, let’s remember: the state of the DOM is our Whole Thing™ as web developers (at least, I think it is). And the DOM drives the UX for everybody who is not experiencing the page visually. So a lot of the time, there might be some meaningful element that we could or should be using in the code that we can include in a test locator.

And if there’s not, because, say, the application code is all generic containers like div and span, we should consider fixing up the application code first, while adding the test. Otherwise there is a risk of having our tests actually specify that the generic containers are expected and desired, making it a little harder for somebody to modify that component to be more accessible.

This topic opens up a can of worms about how accessibility works in an organization. Often, if nobody is talking about it and it’s not a part of the practice at our companies, we don’t take accessibility seriously as front-end developers. But at the end of the day, we are supposed to be the experts in what is the “right markup” for design, and what to consider in deciding that. I discuss this side of things a lot more in my talk from Connect.Tech 2021, called “Researching and Writing Accessible Vue… Thingies”.

As we saw above, with the elements we traditionally think of as interactive, there is a pretty good rule of thumb that’s easy to build into our front-end tests: interactive elements should have perceivable labels correctly associated to the element. So anything interactive, when we test it, should be selected from the DOM using that required label.

For elements that we don’t think of as interactive — like most content-rendering elements that display pieces of text or whatever, aside from some basic landmarks like main — we wouldn’t trigger any Lighthouse audit failures if we put the bulk of our non-interactive content into generic div or span containers. But the markup won’t be very informative or helpful to assistive technology because it’s not describing the nature and structure of the content to somebody who can’t see it. (To see this taken to an extreme, check out Manuel Matuzovic’s excellent blog post, “Building the most inaccessible site possible with a perfect Lighthouse score.”)
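
As a quick, made-up illustration of the difference:

<!-- 👎 Generic containers say nothing about the nature of the content -->
<div class="title">Latest posts</div>
<div class="item">Writing Strong Element Locators</div>

<!-- 👍 Semantic elements describe the structure to assistive technology -->
<h2>Latest posts</h2>
<ul>
  <li>Writing Strong Element Locators</li>
</ul>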

The HTML we render is where we communicate important contextual information to anybody who is not perceiving the content visually. The HTML is used to build the DOM, the DOM is used to create the browser’s accessibility tree, and the accessibility tree is the API that assistive technologies of all kinds can use to express the content and the actions that can be taken to a disabled person using our software. A screenreader is often the first example we think of, but the accessibility tree can also be used by other technology, like displays that turn webpages into Braille, among others.

Automated accessibility checks won’t tell us if we’ve really created the right HTML for the content. The “rightness” of the HTML is a judgement call we are making as developers about what information we think needs to be communicated in the accessibility tree.

Once we’ve made that call, we can decide how much of that is suitable to bake into the automated front-end testing.

Let’s say that we have decided that a container with the status ARIA role will hold the “thank you” and error messaging for a contact form. This might be nice so that the feedback for the form’s success or failure can be announced by a screenreader. CSS classes of .thank-you and .error could be applied to control the visual state.
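
That decision might translate into markup like this — a sketch, with the message text matching the test example below:

<!-- the status role lets assistive technology announce the message when it appears -->
<div role="status" class="thank-you">
  Thank you, we have received your message
</div>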

If we add this element and want to write a UI test for it, we might write an assertion like this after the test fills out the form and submits it:

// 👎 Not recommended
cy.get('.thank-you').should('be.visible')

Or even a test that uses a non-brittle but still meaningless selector like this:

// 👎 Not recommended
cy.get('[data-testid="thank-you-message"]').should('be.visible')

Both could be rewritten using cy.contains():

// 👍 Recommended
cy.contains('[role="status"]', 'Thank you, we have received your message')
  .should('be.visible')

This would confirm that the expected text appeared and was inside the right kind of container. Compared to the previous test, this has much more value in terms of verifying actual functionality. If any part of this test fails, we’d want to know, because both the message and the element selector are important to us and shouldn’t be changed trivially.

We have definitely gained some coverage here without a lot of extra code, but we’ve also introduced a different kind of brittleness. We have plain English strings in our tests, and that means if the “thank you” message changes to something like “Thank you for reaching out!” this test suddenly fails. Same with all the other tests. A small change to how a label is written would require updating any test that targets elements using that label.

We can improve this by using the same source of truth for these strings in front-end tests as we do in our code. And if we currently have human-readable sentences embedded right there in the HTML of our components… well now we have a reason to pull that stuff out of there.

Human-readable strings might be the magic numbers of UI code

A magic number (or less-excitingly, an “unnamed numerical constant”) is that super-specific value you sometimes see in code that is important to the end result of a calculation, like a good old 1.023033 or something. But since that number is unlabeled, its significance is unclear, and so it’s unclear what it’s doing. Maybe it applies a tax. Maybe it compensates for some bug that we don’t know about. Who knows?

Either way, the general advice is to avoid magic numbers because of their lack of clarity, and the same is true in front-end testing. Code reviews will often catch them and ask what the number is for. Can we move it into a constant? We do the same thing if a value is to be reused in multiple places. Rather than repeat the value everywhere, we can store it in a variable and use the variable as needed.
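
The classic fix is a before-and-after like this (an invented example):

// 👎 Not recommended: what is 1.023033 doing here?
const total = subtotal * 1.023033

// 👍 Recommended: the name carries the meaning, and there's one place to change it
const LOCAL_TAX_RATE = 1.023033
const total = subtotal * LOCAL_TAX_RATE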

Writing user interfaces over the years, I’ve come to see text content in HTML or template files as very similar to magic numbers in other contexts. We’re putting absolute values all through our code, but in reality it might be more useful to store those values elsewhere and bring them in at build time (or even through an API depending on the situation).

There are a few reasons for this:

  1. I used to work with clients who wanted to drive everything from a content management system. Content, even form labels and status messages, that didn’t live in the CMS were to be avoided. Clients wanted full control so that content changes didn’t require editing code and re-deploying the site. That makes sense; code and content are different concepts.
  2. I’ve worked in many multilingual codebases where all the text needs to be pulled in through some internationalization framework, like this:
<label for="name">
  <!-- prints "Name" in English but something else in a different language -->
  {{content[currentLanguage].contactForm.name}}
</label>
  3. As far as front-end testing goes, our UI tests are much more robust if, instead of checking for a specific “thank you” message we hardcode into the test, we do something like this:
const text = content.en.contactForm // we would do this once and all tests in the file can read from it

cy.contains('[role="status"]', text.thankYouMessage).should('be.visible')
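
For that to work, the tests and the application need a shared module to import those strings from. Here’s a minimal sketch; the property names are assumptions that match the test above:

// content.js: one source of truth for user-facing strings
export const content = {
  en: {
    contactForm: {
      nameLabel: 'Name',
      thankYouMessage: 'Thank you, we have received your message',
    },
  },
}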

Every situation is different, but having some system of constants for strings is a huge asset when writing robust UI tests, and I would recommend it. Then, if and when translation or dynamic content does become necessary for our situation, we are a lot better prepared for it.

I’ve heard good arguments against importing text strings in tests, too. For example, some find tests are more readable and generally better if the test itself specifies the expected content independently from the content source.

It’s a fair point. I’m less persuaded by this because I think content should be controlled through more of an editorial review/publishing model, and I want the test to check if the expected content from the source got rendered, not some specific strings that were correct when the test was written. But plenty of people disagree with me on this, and I say as long as within a team the tradeoff is understood, either choice is acceptable.

That said, it’s still a good idea to isolate code from content in the front end more generally. And sometimes it might even be valid to mix and match — like importing strings in our component tests and not importing them in our end-to-end tests. This way, we save some duplication and gain confidence that our components display correct content, while still having front-end tests that independently assert the expected text, in the editorial, user-facing sense.

When to use data-test locators

CSS selectors like [data-test="success-message"] are still useful and can be very helpful when used in an intentional way, rather than all the time. If our judgement is that there’s no meaningful, accessible way to target an element, data-test attributes are still the best option. They are much better than, say, depending on a coincidence like whatever the DOM structure happens to be on the day you are writing the test, and falling back to the “second list item in the third div with a class of card” style of test.

There are also times when content is expected to be dynamic and there’s no way to easily grab strings from some common source of truth to use in our tests. In those situations, a data-test attribute helps us reach the specific element we care about. It can still be combined with an accessibility-friendly assertion, for example:

cy.get('h2[data-test="intro-subheading"]')

Here we want to find what has the data-test attribute of intro-subheading, but still allow our test to assert that it should be an h2 element if that’s what we expect it to be. The data-test attribute is used to make sure we get the specific h2 we are interested in, not some other h2 that might be on the page, if for some reason the content of that h2 can’t be known at the time of the test.

Even in cases where we do know the content, we might still use data attributes to make sure the application renders that content in the right spot:

cy.contains('h2[data-test="intro-subheading"]', 'Welcome to Testing!')

data-test selectors can also be a convenience to get down to a certain part of the page and then make assertions within that. This could look like the following:

cy.get('article[data-test="ablum-card-blur-great-escape"]').within(() => {
  cy.contains('h2', 'The Great Escape').should('be.visible')
  cy.contains('p', '1995 Album by Blur').should('be.visible')
  cy.get('[data-test="stars"]').should('have.length', 5)
})

At that point we get into some nuance because there may very well be other good ways to target this content; it’s just an example. But at the end of the day, it’s a good thing if worrying about details like that is where we are, because at least it means we have some understanding of the accessibility features embedded in the HTML we are testing, and that we want to include those features in our tests.

When the DOM matters, test it

Front-end tests bring a lot more value to us if we are thoughtful about how we tell the tests what elements to interact with, and what contents to expect. We should prefer accessible names to target interactive components, and we should include specific element names, ARIA roles, etc., for non-interactive content — if those things are relevant to the functionality. These locators, when practical, create the right combination of strength and brittleness.

And of course, for everything else, there’s data-test.


Test Your Product on a Crappy Laptop


There is a huge and ever-widening gap between the devices we use to make the web and the devices most people use to consume it. It’s also no secret that the average size of a website is huge, and it’s only going to get larger.

What can you do about this? Get your hands on a craptop and try to use your website or web app.

Craptops are cheap devices with lower-power internals. They oftentimes come with all sorts of third-party apps preinstalled as a way to offset their cost—apps like virus scanners that are resource-intensive and difficult to remove. They’re everywhere, and they’re not going away anytime soon.

As you work your way through your website or web app, take note of:

  • what loads slowly,
  • what loads so slowly that it’s unusable, and
  • what doesn’t even bother to load at all.

After that, formulate a plan about what to do about it.
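
If you can’t get your hands on real low-end hardware right away, you can at least approximate one in code. Here’s a rough sketch using Puppeteer’s CPU throttling; the 6x multiplier is just a guess at “craptop,” and real hardware will always tell you more:

// rough-craptop-check.js: approximate a low-power device by slowing the CPU
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.emulateCPUThrottling(6); // pretend the CPU is 6x slower
  const start = Date.now();
  await page.goto('https://example.com', { waitUntil: 'load' });
  console.log(`Load event fired after ${Date.now() - start}ms on the simulated craptop`);
  await browser.close();
})();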

The industry average

At the time of this post, the most common devices used to read CSS-Tricks are powerful, modern desktops, laptops, tablets, and phones with up-to-date operating systems and plenty of computational power.

Granted, not everyone who makes websites and web apps reads CSS-Tricks, but it is a very popular industry website, and I’m willing to bet its visitors are indicative of the greater whole.

In terms of performance, the qualities we can note from these devices are:

  • powerful processors,
  • generous amounts of RAM,
  • lots of storage space,
  • high-quality displays, and most likely a
  • high-speed internet connection

Unfortunately, these qualities are not always found in the devices people use to access your content.

Survivor bias

British soldiers in World War I were equipped with a Brodie helmet, a steel hat designed to protect its wearer from overhead blasts and shrapnel while conducting trench warfare. After its deployment, field hospitals saw an uptick in soldiers with severe head injuries.

A grizzled British soldier smiling back at the camera, holding a Brodie helmet with a large hole punched in it. Black and white photograph.
Source: History Daily

Because of the rise in injuries, British command considered going back to the drawing board with the helmet’s design. Fortunately, a statistician pointed out that the dramatic rise in hospital cases was because people were surviving injuries that previously would have killed them—before the introduction of steel the British Army used felt or leather as headwear material.

Survivor bias is the logical error that focuses on those who made it past a selection process. In the case of the helmet, it’s whether you’re alive or not. In the case of websites and web apps, it’s if a person can load and use your content.

Lies, damned lies, and statistics

People who can’t load your website or web app don’t show up as visitors in your analytics suite. This is straightforward enough.

However, the “use” part of “load and use your content” is the important bit here. There’s a certain percentage of devices that try to access your product and will be able to load enough of it to register a hit, but then bounce because the experience is so terrible it is effectively unusable.

Yes, I know analytics can be more sophisticated than this. But through the lens of survivor bias, is this behavior something your data is accommodating?

Blame

It’s easy to go out and get a cheap craptop and feel bad about a slow website you have no control over. The two real problems here are:

  1. Third-party assets, such as the very analytics and CRM packages you use to determine who is using your product and how they go about it. There’s no real control over the quality or amount of code they add to your site, and setting up the logic to block them loading their own third-party resources is difficult to do.
  2. The people who tell you to add these third-party assets. These people typically aren’t aware of the performance issues caused by the ask, or don’t care because it’s not part of the results they’re judged by.

What can we do about these two issues? Tie abstract, one-off business requests into something more holistic and personal.

Bear witness

I know of organizations who do things like “Testing Tuesdays,” where moderated usability testing is conducted every Tuesday. You could do the same for performance, even thread this idea into existing usability testing plans—slow websites aren’t usable, after all.

The point is to construct a regular cadence of seeing how real people actually use your website or web app, using real-world devices. And when I say real world, make sure it’s not just the average version of whatever your analytics reports say.

Then make sure everyone is aware of these sessions. It’s a powerful thing to show a manager someone trying to get what they need, but can’t because of the choices your organization has made.

Craptop duty

There are roughly 260 work days in a year. That’s 260 chances to build some empathy by having someone on your development, design, marketing, or leadership team use the craptop for a day.

You can run Linux from the Windows Subsystem for Linux (WSL) to use most development tooling. Most other apps I’m aware of in the web-making space have a Windows installer, or can run from a browser. That should be enough to do what you need to do. And if you can’t, or it’s too slow to get done at the pace you’re accustomed to, well, that’s sort of the point.

Craptop duty, combined with usability testing with a low power device, should hopefully be enough to have those difficult conversations about what your website or web app really needs to load and why.

Don’t tokenize

The final thing I’d like to say is that it’s easy to think that the presence of a lower power device equals the presence of an economically disadvantaged person. That’s not true. Powerful devices can become circumstantially slowed by multiple factors. Wealthy individuals can, and do, use lower-power technology.

Perhaps the most important takeaway is poor people don’t deserve an inferior experience, regardless of what they are trying to do. Performant, intuitive, accessible experiences on the web are for everyone, regardless of device, ability, or circumstance.


Test Your Site With Real Users


A few years ago, there was this French book publisher. They specialize in technical books and published an author who wrote a book about CSS3, HTML5 and jQuery. The final version, however, had a glaring typo on the cover where “HTML5” was displayed as “HTLM5.” Read that twice. Yes. “HTLM5.” (Note that it was also missing the capitalized “Q” in jQuery in one version.)

Image of the book containing the typo. It has three cartoonish figures on it dressed as superheroes, then a product description of the book to the right of the cover.

I don’t know how many people are involved in publishing and printing a book. I bet quite a few. Yet, it looked like none of the people involved saw the typo. It made it to the printer, after all.

And this kind of thing happens all the time on projects. One of my favorite French expressions is avoir la tête dans le guidon. A literal translation is “having your head in the handlebar.” (The closest English equivalent is keeping your nose to the grindstone.) It comes from cycling. When cyclists are trying to win a race, at some point, they end up with their nose so close to the handlebar that nothing else around them matters. They are hyper-focused on the road ahead. They can’t see anything else around anymore.

Photo of a cyclist in a black helmet and red jacket on a black and blue racing bike riding through a busy intersection with a blurry backdrop indicating a fast speed.
Credit: Max Bender via Unsplash

And this is exactly what happens to us quite often on projects. We and our teams are so focused at some point on shipping the site (or printing the book) that we get blindfolded and fail to see little (or big) details anymore. This is how you ship a book about “HTLM5” and a website with navigation issues and dead ends in user flows, or features no one needs.

Gaining an external view with user testing

If you want to avoid these sorts of things, you need an external view of your site, product or service. And the best way to gain that view is to test it with people who are not on the team. We call this usability or user testing. I have to confess that I’m biased here since part of my job is to perform user testing on websites. So, I have to say that, ideally, you want to test with your target audience — the people who actually use your website, product, or service. But, if (and this is a big if) you can’t find any users, at least have a first round of tests with people who did not work directly on the project.

You also want to test with people with different impairments to make sure the end result is as accessible as possible.

When should I start testing my project?

In a perfect world, you test as soon and as often as possible. Testing prototypes built in design tools before starting development is cheaper. If the concept doesn’t work, at least you did not invest three months of development into an ineffective feature.

You can also test HTML/CSS/JavaScript prototypes with fake data built for the tests — or test once the feature or website is developed. This does mean, though, that any changes are more complex and expensive.

Define what you want to test

The first step is to define what specific tasks or activities you want to test. Usually, you want a set of different actions with a user goal at the end. For example:

  • an account creation process
  • a whole checkout process
  • a search process from the homepage to the final blog post, etc.

List the tasks and activities the user needs to accomplish in the form of questions. We call this creating a test script. You can find an example here from 18F.

Be careful not to bias users. This is the tricky part. For example, if you want to test an account creation flow and the button says “Sign up,” then avoid asking your test users to “sign up” because the first thing they will do is search for a button with the same verb on the screen. You could ask them to “create an account” instead and gain better insights.

Screenshots of Axure and Word side by side.
Example of a prototype built in Axure and a test script

Then prepare the prototype you want to test. As mentioned before, it can range from mockups with a click-through prototype to a fully-developed prototype with real data. It’s totally up to you and how far you are in the project. Just make sure it works (technically, I mean).

Recruit participants

You know who your users are on most of your projects. The question is: how can you reach out to them? There’s plenty of ways. You might go through support or salespeople with lists of possible participants. If it’s a broad target audience, you could recruit testers right where they are. Working on an e-commerce website that sells plants? Why not try visiting actual physical shops, online communities for gardeners, community gardens, Facebook groups, etc.

You can use social media to recruit participants as long as you recruit the right people who are prospective users of the site. This is why UX professionals use screeners. A screener is a set of questions you ask while recruiting (and when starting the test) to make sure you are working with someone who is in the target audience.

Note that participants are usually compensated for their time. It can be gift cards, maybe some of your product, some really nice chocolate — something that encourages people to spend time with you in a way that thanks them.

If you struggle recruiting and have a budget, you can use professional user research recruitment websites like userinterviews.com or testingtime.com.

Schedule, set up, prepare

Once you successfully recruit participants for testing, schedule a meeting with them, including the testing date, time, and place. The test can be remote or face to face. I won’t detail the logistics here, but at some point, you will need help to set up an actual room or a virtual space for the testing. If it’s a physical room, make sure it’s calm and accessible for your users. If it’s remote, make sure the tools are accessible and people can install them if needed on their computers.

Schedule some emails in advance to remind participants the day before the test, just in case.

Last but not least: do a dry run of your test using people from your team. This helps avoid typos in the scripts and prototypes. You want to make sure the prototype works properly, that there are no technical issues, etc. You want to avoid anything that could bias the test.

Facilitate the test

You need two testers to conduct a usability test. One person facilitates. The other takes care of the logistics and notes.

Welcome the participant. You can find a lot of templates for usability testing over at usability.gov, including consent forms, email template examples, and much more.

Start the recording, but only if they give you permission to do so, of course. Explain that you are testing the site, not them, and that there are no right or wrong answers. Encourage them to think out loud, and to tell you exactly what they do, see, and think.

Put them at ease by starting with a few soft questions to get them to talk. Then follow your script.

The most important thing: don’t help users accomplish the tasks. I know, this is hard. We don’t like to see people struggle. But if you help them, you will bias the results. Of course, if they struggle for five minutes and you need them to accomplish the task to move on to the next one, you can unblock them. Mark that particular task as “failed.”

Once testing is finished, thank the test user for their time and offer them the compensation (or tell them how to get compensated if it was a remote test).

Get the recording, upload it somewhere in the cloud so there is a backup. Same for your notes. Trust me on that, there’s nothing worse than losing some data because the computer crashed.

Analyze and document the results

After the test, I usually like to put together a quick “first draft” of the analysis for a given participant because the testing is still fresh in my mind.

Some people do this in shared documents or Excel sheets. My favorite method is using the actual screens that were used for testing in a Miro board. And I put digital sticky notes on them with the test’s main findings. I use different colors for different types of feedback, like a user comment, feature request, usability issue, etc.

When multiple users give the same feedback or experience the same issue, I add a small dot on the note. This way, I have a visual summary of everything that happened during all the tests.

Screenshot of mockup screens in Miro with notes attached to various areas of the screens. There are 13 total screens, each with different layouts and content.

And then? Learn, iterate, improve.

We don’t test for the fun of testing. We test to improve things. So, the next step is to learn from those tests. What worked well? What can be improved? How might we improve? Of course, you might not have the time and budget to improve everything at once. My advice is to prioritize and iterate. Fix the biggest issues first. “Big” is a relative term, of course, and that depends on your project or KPIs. It could mean “most users have this issue.” Or it could mean, “if this doesn’t get fixed, we will lose users and revenue.” This is when it becomes again, a team sport.

In conclusion

I hope I’ve convinced you to test your site soon and often. This was just a small introduction to the world of testing with real users. I simplified a lot in here to give you an idea of what goes into user testing. Proper professional usability testing is more complex, especially on large projects. While I always favor hiring someone dedicated to user research and testing, I also understand that it might be complicated for smaller projects.

If you want to go further, I recommend checking out the following resources:


Testing Vue Components With Cypress


Cypress is an automated test runner for browser-based applications and pages. I’ve used it for years to write end-to-end tests for web projects, and was happy to see recently that individual component testing had come to Cypress. I work on a large enterprise Vue application, and we already use Cypress for end-to-end tests. Most of our unit and component tests are written with Jest and Vue Test Utils.

Once component testing arrived in Cypress, my team was all in favor of upgrading and trying it out. You can learn a lot about how component testing works directly from the Cypress docs, so I’m going to skip over some of the setup steps and focus on what it is like to work with component tests — what do they look like, how are we using them, and some Vue-specific gotchas and helpers we found.

Disclosure! At the time I wrote the first draft of this article, I was the front-end team lead at a large fleet management company where we used Cypress for testing. Since the time of writing, I’ve started working at Cypress, where I get to contribute to the open source test runner.

The Cypress component test runner is open, using Chrome to test a “Privacy Policy” component. Three columns are visible in the browser. The first contains a searchable list of component test spec files; the second shows the tests for the currently-selected spec; the last shows the component itself mounted in the browser. The middle column shows that two tests are passing.

All the examples mentioned here are valid at the time of writing using Cypress 8. It’s a new feature that’s still in alpha, and I wouldn’t be surprised if some of these details change in future updates.

If you already have a background in testing and component tests, you can skip right to our team’s experiences.

What a component test file looks like

For a simplified example, I’ve created a project that contains a “Privacy Policy” component. It has a title, body, and an acknowledgment button.

The Privacy Policy component has three areas. The title reads “Privacy Policy”; the body text reads “Information about privacy that you should read in detail,” and a blue button at the bottom reads “OK, I read it, sheesh.”

When the button is clicked, an event is emitted to let the parent component know that this has been acknowledged. Here it is deployed on Netlify.

Now here’s the general shape of a component test in Cypress that uses some of the features we are going to talk about:

import { mount } from '@cypress/vue'; // import the vue-test-utils mount function
import PrivacyPolicyNotice from './PrivacyPolicyNotice.vue'; // import the component to test

describe('PrivacyPolicyNotice', () => {
 
 it('renders the title', () => {
    // mount the component by itself in the browser 🏗
    mount(PrivacyPolicyNotice); 
    
    // assert some text is present in the correct heading level 🕵️ 
    cy.contains('h1', 'Privacy Policy').should('be.visible'); 
  });

  it('emits a "confirm" event once when confirm button is clicked', () => {
    // mount the component by itself in the browser 🏗
    mount(PrivacyPolicyNotice);

    // this time let's chain some commands together
    cy.contains('button', /^OK/) // find a button element starting with text 'OK' 🕵️
    .click() // click the button 🤞
    .vue() // use a custom command to go get the vue-test-utils wrapper 🧐
    .then((wrapper) => {
      // verify the component emitted a confirm event after the click 🤯
      expect(wrapper.emitted('confirm')).to.have.length(1) 
      // `emitted` is a helper from vue-test-utils to simplify accessing
      // events that have been emitted
    });
  });

});

This test makes some assertions about the user interface, and some about the developer interface (shoutout to Alex Reviere for expressing this division in the way that clicked for me). For the UI, we are targeting specific elements with their expected text content. For developers, we are testing what events are emitted. We are also implicitly testing that the component is a correctly formed Vue component; otherwise it would not mount successfully and all the other steps would fail. And by asserting specific kinds of elements for specific purposes, we are testing the accessibility of the component — if that accessible button ever becomes a non-focusable div, we’ll know about it.

Here’s how our test looks when I swap out the button for a div. This helps us maintain the expected keyboard behavior and assistive technology hints that come for free with a button element by letting us know if we accidentally swap it out:

The Cypress component test runner shows that one test is passing and one is failing. The failure warning is titled 'Assertion Error' and reads 'Timed out retrying after 4000ms: Expected to find content: '/^OK/' within the selector: 'button' but never did.'

A little groundwork

Now that we’ve seen what a component test looks like, let’s back up a little bit and talk about how this fits in to our overall testing strategy. There are many definitions for these things, so real quick, for me, in our codebase:

  • Unit tests confirm single functions behave as expected when used by a developer.
  • Component tests mount single UI components in isolation and confirm they behave as expected when used by an end-user and a developer.
  • End-to-end tests visit the application and perform actions and confirm the app as a whole behaves correctly when used by an end-user only.

Finally, integration testing is a little more of a squishy term for me and can happen at any level — a unit that imports other functions, a component that imports other components, or indeed, an “end-to-end” test that mocks API responses and doesn’t reach the database, might all be considered integration tests. They test more than one part of an application working together, but not the entire thing. I’m not sure about the real usefulness of that as a category, since it seems very broad, but different people and organizations use these terms in other ways, so I wanted to touch on it.

For a longer overview of the different kinds of testing and how they relate to front-end work, you can check out “Front-End Testing is For Everyone” by Evgeny Klimenchenko.

Component tests

In the definitions above, the different testing layers are defined by who will be using a piece of code and what the contract is with that person. So as a developer, a function that formats the time should always return the correct result when I provide it a valid Date object, and should throw clear errors if I provide it something different as well. These are things we can test by calling the function on its own and verifying it responds correctly to various conditions, independent of any UI. The “developer interface” (or API) of a function is all about code talking to other code.

Now, let’s zoom in on component tests. The “contract” of a component is really two contracts:

  • To the developer using a component, the component is behaving correctly if the expected events are emitted based on user input or other activity. It’s also fair to include things like prop types and validation rules in our idea of “correct developer-facing behavior,” though those things can also be tested at a unit level. What I really want from a component test as a developer is to know it mounts, and sends the signals it is supposed to based on interactions.
  • To the user interacting with a component, it is behaving correctly if the UI reflects the state of the component at all times. This includes more than just the visual aspect. The HTML generated by the component is the foundation for its accessibility tree, and the accessibility tree provides the API for tools like screen readers to announce the content correctly, so for me the component is not “behaving correctly” if it does not render the correct HTML for the contents.

At this point it’s clear that component testing requires two kinds of assertions — sometimes we check Vue-specific things, like “how many events got emitted of a certain type?”, and sometimes we check user-facing things, like “did a visible success message actually end up on the screen though?”

It also feels like component level tests are a powerful documentation tool. The tests should assert all the critical features of a component — the defined behaviors that are depended on — and ignore details that aren’t critical. This means we can look to the tests to understand (or remember, six months or a year from now!) what a component’s expected behavior is. And, all going well, we can change any feature that’s not explicitly asserted by the test without having to rewrite the test. Design changes, animation changes, improving the DOM, all should be possible, and if a test does fail, it will be for a reason you care about, not because an element got moved from one part of the screen to another.

This last part takes some care when designing tests, and most especially, when choosing selectors for elements to interact with, so we’ll return to this topic later.

How Vue component tests work with and without Cypress

At a high level, a combination of Jest and the Vue Test Utils library has become more or less the standard approach to running component tests that I’ve seen out there.

Vue Test Utils gives us helpers to mount a component, give it its options, and mock out various things a component might depend on to run properly. It also provides a wrapper object around the mounted component to make it a little easier to make assertions about what’s going on with the component.

Jest is a great test runner and will stand up the mounted component using jsdom to simulate a browser environment.

Cypress’ component test runner itself uses Vue Test Utils to mount Vue components, so the main difference between the two approaches is context. Cypress already runs end-to-end tests in a browser, and component tests work the same way. This means we can see our tests run, pause them mid-test, interact with the app or inspect things that happened earlier in the run, and know that browser APIs that our application depends on are genuine browser behavior rather than the jsdom mocked versions of those same features.

Once the component is mounted, all the usual Cypress things that we have been doing in end-to-end tests apply, and a few pain points around selecting elements go away. Mainly, Cypress is going to handle simulating all the user interactions, and making assertions about the application’s response to those interactions. This covers the user-facing part of the component’s contract completely, but what about the developer-facing stuff, like events, props, and everything else? This is where Vue Test Utils comes back in. Within Cypress, we can access the wrapper that Vue Test Utils creates around the mounted component, and make assertions about it.

What I like about this is that we end up with Cypress and Vue Test Utils both being used for what they are really good at. We can test the component’s behavior as a user with no framework-specific code at all, and only dig into Vue Test Utils for mounting the component and checking specific framework behavior when we choose to. We’ll never have to await a Vue-specific $nextTick after doing some Vue-specific thing to update the state of a component. That was always the trickiest thing to explain to new developers on the team without Vue experience — when and why they would need to await things when writing a test for a Vue component.

Our experience of component testing

The advantages of component testing sounded great to us, but of course, in a large project very few things can be seamless out of the box, and as we got started with our tests, we ran into some issues. We run a large enterprise SPA built using Vue 2 and the Vuetify component library. Most of our work heavily uses Vuetify’s built-in components and styles. So, while the “test components by themselves” approach sounds nice, a big lesson learned was that we needed to set up some context for our components to be mounted in, and we needed to get Vuetify and some global styles happening as well, or nothing was going to work.

Cypress has a Discord where people can ask for help, and when I got stuck I asked questions there. Folks from the community — as well as Cypress team members — kindly directed me to example repos, code snippets, and ideas for solving our problems. Here’s a list of the little things we needed to understand in order to get our components to mount correctly, errors we encountered, and whatever else stands out as interesting or helpful:

Importing Vuetify

Through lurking in the Cypress Discord, I’d seen this example component test Vuetify repo by Bart Ledoux, so that was my starting point. That repo organizes the code into a fairly common pattern that includes a plugins folder, where a plugin exports an instance of Vuetify. This is imported by the application itself, but it can also be imported by our test setup, and used when mounting the component being tested. In the repo a command is added to Cypress that will replace the default mount function with one that mounts a component with Vuetify.

Here is all the code needed to make that happen, assuming we did everything in commands.js and didn’t import anything from the plugins folder. We’re doing this with a custom command which means that instead of calling the Vue Test Utils mount function directly in our tests, we’ll actually call our own cy.mount command:

// the Cypress mount function, which wraps the vue-test-utils mount function
import { mount } from "@cypress/vue"; 
import Vue from 'vue';
import Vuetify from 'vuetify/lib/framework';

Vue.use(Vuetify);

// add a new command with the name "mount" to run the Vue Test Utils 
// mount and add Vuetify
Cypress.Commands.add("mount", (MountedComponent, options) => {
  return mount(MountedComponent, {
    vuetify: new Vuetify({}), // the new Vuetify instance
    ...options, // To override/add Vue options for specific tests
  });
});

Now we will always have Vuetify along with our components when mounted, and we can still pass in all the other options we need to for that component itself. But we don’t need to manually add Vuetify each time.

Adding attributes required by Vuetify

The only problem with the new mount command above is that, to work correctly, Vuetify components expect to be rendered in a certain DOM context. Apps using Vuetify wrap everything in a <v-app> component that represents the root element of the application. There are a couple of ways to handle this but the simplest is to add some setup to our command itself before it mounts a component.

Cypress.Commands.add("mount", (MountedComponent, options) => {
  // get the element that our mounted component will be injected into
  const root = document.getElementById("__cy_root");

  // add the v-application class that allows Vuetify styles to work
  if (!root.classList.contains("v-application")) {
    root.classList.add("v-application");
  }

  // add the data-attribute — Vuetify selector used for popup elements to attach to the DOM
  root.setAttribute('data-app', 'true');  

  return mount(MountedComponent, {
    vuetify: new Vuetify({}), 
    ...options,
  });
});

This takes advantage of the fact that Cypress itself has to create some root element to actually mount our component to. That root element is the parent of our component, and it has the ID __cy_root. This gives us a place to easily add the correct classes and attributes that Vuetify expects to find. Now components that use Vuetify components will look and behave correctly.

One other thing we noticed after some testing is that the required class of v-application has a display property of flex. This makes sense in a full app context using Vuetify’s container system, but had some unwanted visual side effects for us when mounting single components — so we added one more line to override that style before mounting the component:

root.setAttribute('style', 'display: block');

This cleared up the occasional layout issues and then we were truly done tweaking the surrounding context for mounting components.

Getting spec files where we want them

A lot of the examples out there show a cypress.json config file like this one for component testing:

{
  "fixturesFolder": false,
  "componentFolder": "src/components",
  "testFiles": "**/*.spec.js"
}

That is actually pretty close to what we want since the testFiles property accepts a glob pattern. This one says, “Look in any folder for files ending in .spec.js.” In our case, and probably many others, the project’s node_modules folder contained some irrelevant spec.js files that we excluded by prefixing !(node_modules) like this:

"testFiles": "!(node_modules)**/*.spec.js"

Before settling on this solution, when experimenting, we had set this to a specific folder where component tests would live, not a glob pattern that could match them anywhere. Our tests live right alongside our components, so that could have been fine, but we actually have two independent components folders as we package up and publish a small part of our app to be used in other projects at the company. Having made that change early, I admit I sure did forget it had been a glob to start with and was starting to get off course before popping into the Discord, where I got a reminder and figured it out. Having a place to quickly check if something is the right approach was helpful many times.

Command file conflict

Following the pattern outlined above to get Vuetify working with our component tests produced a problem. We had piled all this stuff together in the same commands.js file that we used for regular end-to-end tests. So while we got a couple of component tests running, our end-to-end tests didn’t even start. There was an early error from one of the imports that was only needed for component testing.

I was recommended a couple of solutions but, on the day, I chose to just extract the mounting command and its dependencies into its own file, and imported it only where needed in the component tests themselves. Since this was the only source of any problem running both sets of tests, it was a clean way to take that out of the end-to-end context, and it works just fine as a standalone function. If we have other issues, or next time we are doing cleanup, we would probably follow the main recommendation given, to have two separate command files and share the common pieces between them.
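
In practice, the extracted version can be a plain helper function — here’s a minimal sketch with a made-up file path, assuming the same Vuetify setup shown earlier:

// cypress/support/mount-vuetify.js — hypothetical file name and location
import { mount } from "@cypress/vue";
import Vue from 'vue';
import Vuetify from 'vuetify/lib/framework';

Vue.use(Vuetify);

// a standalone function instead of a global Cypress command
export const mountWithVuetify = (MountedComponent, options) =>
  mount(MountedComponent, {
    vuetify: new Vuetify({}),
    ...options,
  });

Each component spec then imports mountWithVuetify directly, and the end-to-end command file never loads any component-testing dependencies.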

Accessing the Vue Test Utils wrapper

In the context of a component test, the Vue Test Utils wrapper is available under Cypress.vueWrapper. When accessing this to make assertions, it helps to use cy.wrap to make the result chainable like other commands accessed via cy. Jessica Sachs adds a short command in her example repo to do this. So, once again inside commands.js, I added the following:

Cypress.Commands.add('vue', () => {
  return cy.wrap(Cypress.vueWrapper);
});

This can be used in a test, like this:

mount(SomeComponent)
  .contains('button', 'Do the thing once')
  .click()
  .should('be.disabled')
  .vue()
  .then((wrapper) => {
    // the Vue Test Utils `wrapper` has an API specifically setup for testing: 
    // https://vue-test-utils.vuejs.org/api/wrapper/#properties
    expect(wrapper.emitted('the-thing')).to.have.length(1);
  });

This starts to read very naturally to me and clearly splits up when we are working with the UI compared to when we are inspecting details revealed through the Vue Test Utils wrapper. It also emphasizes that, as with lots of Cypress, to get the most out of it, it’s important to understand the tools it leverages, not just Cypress itself. Cypress wraps Mocha, Chai, and various other libraries. In this case, it’s useful to understand that Vue Test Utils is a third-party open source solution with its own entire set of documentation, and that inside the then callback above, we are in Vue Test Utils Land — not Cypress Land — so that we go to the right place for help and documentation.

Challenges

Since this has been a recent exploration, we have not added the Cypress component tests to our CI/CD pipelines yet. Failures will not block a pull request, and we haven’t looked at adding the reporting for these tests. I don’t expect any surprises there, but it’s worth mentioning that we haven’t finished integrating these tests into our whole workflow, so I can’t speak to that part specifically.

It’s also relatively early days for the component test runner and there are a few hiccups. At first, it seemed like every second test run would show a linter error and need to be manually refreshed. I didn’t get to the bottom of that, and then it fixed itself (or was fixed by a newer Cypress release). I’d expect a new tool to have potential issues like this.

One other stumbling block about component testing in general is that, depending on how your component works, it can be difficult to mount it without a lot of work mocking other parts of your system. If the component interacts with multiple Vuex modules or uses API calls to fetch its own data, you need to simulate all of that when you mount the component. Where end-to-end tests are almost absurdly easy to get up and running on any project that runs in the browser, component tests on existing components are a lot more sensitive to your component design.

This is true of anything that mounts components in isolation, like Storybook and Jest, which we’ve also used. It’s often when you attempt to mount components in isolation that you realize just how many dependencies your components actually have, and it can seem like a lot of effort is needed just to provide the right context for mounting them. This nudges us towards better component design in the long run, with components that are easier to test and that touch fewer parts of the codebase.

For this reason, I’d suggest that if you haven’t already got component tests, and so aren’t sure what you need to mock in order to mount your component, choose your first component tests carefully, to limit the number of factors you have to get right before you can see the component in the test runner. Pick a small, presentational component that renders content provided through props or slots, to see a component test in action before getting into the weeds on dependencies.

Benefits

The component test runner has worked out well for our team. We already have extensive end-to-end tests in Cypress, so the team is familiar with how to spin up new tests and write user interactions. And we have been using Vue Test Utils for individual component testing as well. So there was not actually too much new to learn here. The initial setup issues could have been frustrating, but there are plenty of friendly people out there who can help work through issues, so I’m glad I used the “asking for help” superpower.

I would say there are two main benefits that we’ve found. One is the consistent approach to the test code itself between levels of testing. This helps because there’s no longer a mental shift to think about subtle differences between Jest and Cypress interactions, browser DOM vs jsdom and similar issues.

The other is being able to develop components in isolation and getting visual feedback as we go. By setting up all the variations of a component for development purposes, we get the outline of the UI test ready, and maybe a few assertions too. It feels like we get more value out of the testing process up front, so it’s less like a bolted-on task at the end of a ticket.

This process is not quite test-driven development for us, though we can drift into that, but it’s often “demo-driven” in that we want to showcase the states of a new piece of UI, and Cypress is a pretty good way to do that, using cy.pause() to freeze a running test after specific interactions and talk about the state of the component. Developing with this in mind, knowing that we will use the tests to walk through the component’s features in a demo, helps organize the tests in a meaningful way and encourages us to cover all the scenarios we can think of at development time, rather than after.
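
As a sketch of what that can look like (reusing the made-up component and button from earlier), a demo-friendly test just drops a pause into the chain:

mount(SomeComponent)
  .contains('button', 'Do the thing once')
  .click()
  .pause() // the runner freezes here so we can talk through the state
  .should('be.disabled');

Resuming is a single click in the test runner UI, so the same spec file doubles as a scripted walkthrough.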

Conclusion

The mental model for what exactly Cypress as a whole does was tricky for me to grasp when I first learned about it, because it wraps so many other open source tools in the testing ecosystem. You can get up and running quickly with Cypress without having a deep knowledge of what other tools are being leveraged under the hood.

This meant that when things went wrong, I remember not being sure which layer I should think about — was something not working because of a Mocha thing? A Chai issue? A bad jQuery selector in my test code? Incorrect use of a Sinon spy? At a certain point, I needed to step back and learn about those individual puzzle pieces and what exact roles they were playing in my tests.

This is still the case with component testing, and now there is an extra layer: framework-specific libraries to mount and test components. In some ways, this is more overhead and more to learn. On the other hand, Cypress integrates these tools in a coherent way and manages their setup so we can avoid a whole unrelated testing setup just for component tests. For us, we already wanted to mount components independently for testing with Jest, and for use in Storybook, so we figured out a lot of the necessary mocking ideas ahead of time, and tended to favor well-separated components with simple props/events based interfaces for that reason.

On balance, we like working with the test runner, and I feel like I’m seeing more tests (and more readable test code!) showing up in pull requests that I review, so to me that’s a sign that we’ve moved in a good direction.


Testing Vue Components With Cypress originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
https://css-tricks.com/testing-vue-components-with-cypress/feed/ 3 353842
Writing Your Own Code Rules https://css-tricks.com/writing-your-own-code-rules/ https://css-tricks.com/writing-your-own-code-rules/#comments Thu, 07 Oct 2021 23:35:53 +0000 https://css-tricks.com/?p=353112 There comes a time on a project when it’s worth investing in tooling to protect the codebase. I’m not sure how to articulate when, but it’s somewhere after the project has proven to be something long-term and rough edges …


Writing Your Own Code Rules originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
There comes a time on a project when it’s worth investing in tooling to protect the codebase. I’m not sure how to articulate when, but it’s somewhere after the project has proven to be something long-term and rough edges are starting to show, and before things feel like a complete mess. Avoid premature optimization but avoid, uh, postmature optimization.

Some of this tooling is so easy to implement, it is often done right up-front. I think of Prettier here, a code formatter that keeps your code in shape, usually right as you are coding. There are whole suites of tools you can put in that “as-you-are-coding” bucket, like accessibility linting, compatibility linting, security linting, etc. Webhint bundles a bunch of those together and is probably worth a look.

Then there is tooling that protects your code via more code that you have to write. Tests are the big player here, which can even be set up to run as you code. They are about making sure your code does what it is meant to do, and as such, deliver a hell of a lot of value.

Protecting your code with more code that you write is where I wanted to go with this, not with traditional tests, but with custom linting rules. I thought about it as two different posts about custom linting crossed my desk recently:

I was interested as a user of both ESLint and Stylelint in my main codebase. But fair warning: I found the process for writing custom rules in both of those pretty difficult. You gotta know your way around an Abstract Syntax Tree. It’s nothing like if (rules.find.selector.startsWith("old")) throw("Deprecated selector.") or something easy like that.
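
For a taste of what that means, here’s a rough sketch of a custom Stylelint plugin that flags deprecated selectors — the rule name, the option shape, and the naive includes() matching are all invented for illustration:

const stylelint = require("stylelint");

const ruleName = "plugin/no-deprecated-selectors";
const messages = stylelint.utils.ruleMessages(ruleName, {
  rejected: (selector) => `"${selector}" is a deprecated selector`,
});

// the primary option is assumed to be an array of deprecated selector strings
module.exports = stylelint.createPlugin(ruleName, (deprecated) => {
  return (root, result) => {
    // root is a PostCSS syntax tree, so we walk rule nodes rather than raw text
    root.walkRules((rule) => {
      if (deprecated.some((sel) => rule.selector.includes(sel))) {
        stylelint.utils.report({
          ruleName,
          result,
          node: rule,
          message: messages.rejected(rule.selector),
        });
      }
    });
  };
});

Even this toy version works against a syntax tree instead of plain strings, which is the part that takes getting used to.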

I found this all related to an interesting question that came my way:

I work on a development team working on an old project, and we want to get rid of many of our oldest and buggiest CSS selectors. For example, one of us might open an HTML file and see an element with a class name of deprecated-selector; our goal is to have our IDE literally mark it as a linting error and say like “This is a deprecated selector, use .ui-fresh__selector instead”.

The first thing I thought of was a custom Stylelint rule that would look for selectors that your team knows to be deprecated and warn you. But unfortunately, Stylelint is for linting CSS and it sounds like the main issue here is HTML. I know html-inspector had a way to write your own rules, but it’s getting a bit long in the tooth so I don’t know if there is success to be found there or not.


Writing Your Own Code Rules originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
https://css-tricks.com/writing-your-own-code-rules/feed/ 1 353112
Front-End Testing is For Everyone https://css-tricks.com/front-end-testing-is-for-everyone/ https://css-tricks.com/front-end-testing-is-for-everyone/#comments Tue, 01 Jun 2021 14:17:43 +0000 https://css-tricks.com/?p=341342 Testing is one of those things that you either get super excited about or kinda close your eyes and walk away. Whichever camp you fall into, I’m here to tell you that front-end testing is for everyone. In fact, …


Front-End Testing is For Everyone originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
Testing is one of those things that you either get super excited about or kinda close your eyes and walk away. Whichever camp you fall into, I’m here to tell you that front-end testing is for everyone. In fact, there are many types of tests and perhaps that is where some of the initial fear or confusion comes from.

I’m going to cover the most popular and widely used types of tests in this article. This might be nothing new to some of you, but it can at least serve as a refresher. Either way, my goal is that you’re able to walk away with a good idea of the different types of tests out there. Unit. Integration. Accessibility. Visual regression. These are the sorts of things we’ll look at together.

And not just that! We’ll also point out the libraries and frameworks that are used for each type of test, like Mocha, Jest, Puppeteer, and Cypress, among others. And don’t worry — I’ll avoid a bunch of technical jargon. That said, you should have some front-end development experience to understand the examples we’re going to cover.

OK, let’s get started!

What is testing?

Software testing is an investigation conducted to provide stakeholders with information about the quality of the software product or service under test.

Cem Kaner, “Exploratory Testing” (November 17, 2006)

At its most basic, testing is an automated tool that finds errors in your development as early as possible. That way, you’re able to fix those issues before they make it into production. Tests also serve as a reminder that you may have forgotten to check your own work in a certain area, say accessibility.

In short, front-end testing validates that what people see on the site and the features they use on it work as intended.

Front-end testing is for the client side of your application. For example, front-end tests can validate that pressing a “Delete” button properly removes an item from the screen. However, it won’t necessarily check if the item was actually removed from the database — that sort of thing would be covered during back-end testing.

That’s testing in a nutshell: we want to catch errors on the client side and fix them before code is deployed.

Different tests look at different parts of the project

Different types of tests cover different aspects of a project. Nevertheless, it is important to differentiate them and understand the role of each type. Confusing which tests do what makes for a messy, unreliable testing suite.

Ideally, you’d use several different types of tests to surface different types of possible issues. Some test types have a test coverage analytic that shows just how much of your code (as a percentage) is looked at by that particular test. That’s a great feature, and while I’ve seen developers aim for 100% coverage, I wouldn’t rely on that metric alone. The most important thing is to make sure all possible edge cases are covered and taken into account.

So, with that, let’s turn our attention to the different types of testing. Remember, it’s not so much that you’re expected to use each and every one of these. It’s about being able to differentiate the tests so that you know which ones to use in certain circumstances.

Unit testing
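
  • Level: Low
  • Scope: Tests the functions and methods of an application.
  • Possible tools: Jest, Mocha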

Unit testing is the most basic building block for testing. It looks at individual components and ensures they work as expected. This sort of testing is crucial for any front-end application because, with it, your components are tested against how they’re expected to behave, which leads to a much more reliable codebase and app. This is also where things like edge cases can be considered and covered.

Unit tests are particularly great for testing APIs. But rather than making calls to a live API, hardcoded (or “mocked”) data makes sure that your test runs are always consistent.
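
Here’s a sketch of that idea in Jest — the getUser function and its endpoint are invented for the example:

const getUser = (id) => fetch(`/api/users/${id}`).then((res) => res.json());

it("returns the mocked user", async () => {
  // swap the real fetch for a mock so the test never hits a live API
  global.fetch = jest.fn(() =>
    Promise.resolve({ json: () => Promise.resolve({ name: "Evgeny" }) })
  );

  const user = await getUser(1);

  expect(user.name).toEqual("Evgeny");
  expect(global.fetch).toHaveBeenCalledWith("/api/users/1");
});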

Let’s take a super simple (and primitive) function as an example:

const sayHello = (name) => {
  if (!name) {
    return "Hello human!";
  }

  return `Hello ${name}!`;
};

Again, this is a basic case, but you can see that it covers a small edge case where someone may have neglected to provide a first name to the application. If there’s a name, we’ll get “Hello ${name}!” where ${name} is what we expect the person to have provided.

“Um, why do we need to test for something small like that?” you might wonder. There are some very important reasons for this:

  • It forces you to think deeply about the possible outcomes of your function. More often than not, you really do discover edge cases which helps you cover them in your code.
  • Some part of your code can rely on this edge case, and if someone comes and deletes something important, the test will warn them that this code is important and cannot be removed.

Unit tests are often small and simple. Here’s an example:

describe("sayHello function", () => {
  it("should return the proper greeting when a user doesn't pass a name", () => {
    expect(sayHello()).toEqual("Hello human!")
  })

  it("should return the proper greeting with the name passed", () => {
    expect(sayHello("Evgeny")).toEqual("Hello Evgeny!")
  })
})

describe and it are just syntactic sugar; the most important lines are the ones with expect and toEqual. describe and it break the test into logical blocks that are printed to the terminal. The expect function accepts the input we want to validate, while toEqual accepts the desired output. There are a lot of different functions and methods you can use to test your application.

Let’s say we’re working with Jest, a library for writing unit tests. In the example above, Jest will display the sayHello function as a title in the terminal. Everything inside an it function is considered a single test and is reported in the terminal below the function title, making everything very easy to read.

The green checkmarks mean both of our tests have passed. Yay!

Integration testing
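
  • Level: Medium
  • Scope: Tests interactions between units.
  • Possible tools: Jest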

If unit tests check the behavior of a block, integration tests make sure that blocks work flawlessly together. That makes integration testing super important because it opens up testing interactions between components. It’s very rare (if ever) that an application is composed of isolated pieces that function by themselves. That’s why we rely on integration tests.

We go back to the function we unit tested, but this time use it in a simple React application. Let’s say that clicking a button triggers a greeting to appear on the screen. That means a test involves not only the function but also the HTML DOM and a button’s functionality. We want to test how all these parts play together.

Here’s the code for a <Greeting /> component we’re testing:

import { useState } from "react";

export const Greeting = () => {
  const [showGreeting, setShowGreeting] = useState(false);

  return (
    <div>
      <p data-testid="greeting">{showGreeting && sayHello()}</p>
      <button data-testid="show-greeting-button" onClick={() => setShowGreeting(true)}>Show Greeting</button>
    </div>
  );
};

Here’s the integration test:

describe('<Greeting />', () => {
  it('shows correct greeting', () => {
    const screen = render(<Greeting />);
    const greeting = screen.getByTestId('greeting');
    const button = screen.getByTestId('show-greeting-button');

    expect(greeting.textContent).toBe('');
    fireEvent.click(button);
    expect(greeting.textContent).toBe('Hello human!');
  });
});

We already know describe and it from our unit test. They break tests up into logical parts. We have the render function that displays a <Greeting /> component in the special emulated DOM so we can test interactions with the component without touching the real DOM — otherwise, it can be costly.

Next up, the test queries the <p> and <button> elements via their test IDs (greeting and show-greeting-button, respectively). We use test IDs because it’s easier to get the components we want from the emulated DOM. There are other ways to query components, but this is how I do it most often.

It’s not until line 7 that the actual integration test begins! We first check that <p> tag is empty. Then we click the button by simulating a click event. And lastly, we check that the <p> tag contains “Hello human!” inside it. That’s it! All we’re testing is that an empty paragraph contains text after a button is clicked. Our component is covered.

We can, of course, add input where someone types their name and we use that input in the greeting function. However, I decided to make it a bit simpler. We’ll get to using inputs when we cover other types of tests.

Check out what we get in the terminal when running the integration test:

Terminal message showing a passed test like before, but now with a specific test item for showing the correct greeting. It includes the number of tests that ran, how many passed, how many snapshots were taken, and how much time the tests took, which was 1.085 seconds.
Perfect! The <Greeting /> component shows the correct greeting when clicking the button.

End-to-end (E2E) testing

  • Level: High
  • Scope: Tests user interactions in a real-life browser by providing it instructions for what to do and expected outcomes.
  • Possible tools: Cypress, Puppeteer

E2E tests are the highest level of testing in this list. E2E tests care only about how people see your application and how they interact with it. They don’t know anything about the code and the implementation.

E2E tests tell the browser what to do, what to click, and what to type. We can create all kinds of interactions that test different features and flows as the end user experiences them. It’s literally a robot that’s instructed to click through an application to make sure everything works.

E2E tests are similar to integration tests in some ways. However, E2E tests are executed in a real browser with a real DOM rather than something we mock up — we generally work with real data and a real API in these tests.

It is good to have full coverage with unit and integration tests. However, users can face unexpected behaviors when they run an application in the browser — E2E tests are the perfect solution for that.

Let’s look at an example using Cypress, an extremely popular testing library. We are going to use it specifically for an E2E test of our previous component, this time inside a browser with some extra features.

Again, we don’t need to see the code of the application. All we’re assuming is that we have some application and we want to test it as a user. We know what buttons to click and the IDs those buttons have. That’s all we really have to go off of.

describe('Greetings functionality', () => {  
  it('should navigate to greetings page and confirm it works', () => {
    cy.visit('http://localhost:3000')  
    cy.get('#greeting-nav-button').click()  
    cy.get('#greetings-input').type('Evgeny', { delay: 400 })  
    cy.get('#greetings-show-button').click()  
    cy.get('#greeting-text').should('include.text', 'Hello Evgeny!')  
  })  
})

This E2E test looks very similar to our previous integration test. The commands are extremely similar, the main difference being that these are executed in a real browser.

First, we use cy.visit to navigate to a specific URL where our application lies:

cy.visit('http://localhost:3000')

Second, we use cy.get to get the navigation button by its ID, then instruct the test to click it. That action will navigate to the page with the <Greetings /> component. In fact, I’ve added the component to my personal website and provided it with its own URL route.

cy.get('#greeting-nav-button').click()

Then, sequentially, we get text input, type “Evgeny,” click the #greetings-show-button button and, lastly, check that we got the desired greeting output.

cy.get('#greetings-input').type('Evgeny', { delay: 400 })
cy.get('#greetings-show-button').click()
cy.get('#greeting-text').should('include.text', 'Hello Evgeny!')  

It is pretty cool to watch how the test clicks buttons for you in a real live browser. I slowed down the test a bit so you can see what is going on. All of this usually happens very quickly.

Here is the terminal output:

Terminal showing a run test for greetings.spec.js that passed in 12 seconds.

Accessibility testing
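
  • Level: High
  • Scope: Tests the interface of your application against accessibility standards criteria.
  • Possible tools: Lighthouse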

Web accessibility means that websites, tools, and technologies are designed and developed so that people with disabilities can use them.

W3C

Accessibility tests make sure people with disabilities can effectively access and use a website. These tests validate that you follow the standards for building a website with accessibility in mind.

For example, many unsighted people use screen readers. Screen readers scan your website and attempt to present it to users with disability in a format (usually spoken) those users can understand. As a developer, you want to make a screen reader’s job easy and accessibility testing will help you understand where to start.

There are a lot of different tools for validating accessibility, some automated and some that run manually. For example, Chrome already has one tool built right into its DevTools. You may know it as Lighthouse.

Let’s use Lighthouse to validate the application we made in the E2E testing section. We open Lighthouse in Chrome DevTools, click the “Accessibility” test option, and “Generate” the report.

That’s literally all we have to do! Lighthouse does its thing, then generates a lovely report, complete with a score, a summary of audits that ran, and an outline of opportunities for improving the score.

But this is just one tool that measures accessibility from its particular lens. We have all kinds of accessibility tooling, and it’s worth having a plan for what to test and the tooling that’s available to hit those points.

Visual regression testing

  • Level: High
  • Scope: Tests the visual structure of application, including the visual differences produced by a change in the code.
  • Possible tools: Cypress, Percy, Applitools

Sometimes E2E tests are insufficient to verify that the latest changes to your application didn’t break the visual appearance of anything in an interface. Have you pushed some changes to production just to realize that they broke the layout of some other part of the application? Well, you are not alone. More often than we’d like, changes to a codebase break an app’s visual structure, or layout.

The solution is visual regression testing. The way it works is pretty straightforward. Visual tests merely take screenshots of pages or components and compare them with screenshots that were captured in previous successful tests. If these tests find any discrepancies between the screenshots, they’ll give us some sort of notification.

Let’s turn to a visual regression tool called Percy to see how visual regression testing works. There are a lot of other ways to do visual regression tests, but I think Percy is simple to show in action. In fact, you can jump over to Paul Ryan’s deep dive on Percy right here on CSS-Tricks. But we’ll do something considerably simpler to illustrate the concept.

I intentionally broke the layout of our Greeting application by moving the button to the bottom of the input. Let’s try to catch this error with Percy.

Percy works well with Cypress, so we can follow their installation guide and run Percy regression tests along with our existing E2E tests.

describe('Greetings functionality', () => {  
  it('should navigate to greetings page and confirm everything is there', () => {  
    cy.visit('http://localhost:3000')  
    cy.get('#greeting-nav-button').click()  
    cy.get('#greetings-input').type('Evgeny', { delay: 400 })  
    cy.get('#greetings-show-button').click()  
    cy.get('#greeting-text').should('include.text', 'Hello Evgeny!')  


    // Percy test
    cy.percySnapshot()
  })  
})

All we added at the end of our E2E test is a one-liner: cy.percySnapshot(). This will take a screenshot and send it to Percy to compare. That is it! After the tests have finished, we’ll receive a link to check our regressions. Here is what I got in the terminal:

Terminal output that shows white text on a black background. It displays the same result as before, but with a step showing that Percy created a build and where to view it.
Hey, look, we can see that the E2E tests have passed as well! That shows how E2E testing won’t always catch a visual error.

And here’s what we get from Percy:

Animated gif of a webpage showing a logo and navigation above a form field. The animation overlays the original snapshot with the latest to reveal differences between the two.
Something clearly changed and it needs to be fixed.

Performance testing
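
  • Level: High
  • Scope: Tests the application for performance and stability.
  • Possible tools: Lighthouse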

Performance testing is great for checking the speed of your application. If performance is crucial for your business — and it likely is given the recent focus on Core Web Vitals and SEO — you’ll definitely want to know if the changes to your codebase have a negative impact on the speed of the application.

We can bake this into the rest of our testing flow, or we can run them manually. It’s totally up to you how to run these tests and how frequently to run them. Some devs create what’s called a “performance budget” and run a test that calculates the size of the app — and a failed test will prevent a deployment from happening if the size exceeds a certain threshold. Or, test manually every so often with Lighthouse, as it also measures performance metrics. Or combine the two and build Lighthouse into the testing suite.
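
For example, a size-based budget can be as small as a config entry. This sketch uses the bundlesize package, and the path and limit are placeholders:

{
  "bundlesize": [
    {
      "path": "./dist/js/*.js",
      "maxSize": "150 kB"
    }
  ]
}

Running the bundlesize command in CI then fails the build whenever the matched bundles exceed that limit.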

Performance tests can measure anything related to performance. They can measure how fast an application loads, the size of its initial bundle, and even the speed of a particular function. Performance testing is a somewhat broad, vast landscape.

Here’s a quick test using Lighthouse. I think it’s a good one to show because of its focus on Core Web Vitals as well as how easily accessible it is in Chrome’s DevTools without any installation or configuration.

A Lighthouse report open in Chrome DevTools showing a Performance score of 55 indicated by an orange circle bearing the score. Various metrics are listed below the score, including a timeline of the page as it loads.
Not a great score, but at least we can see what’s up and we have some recommendations for how to make improvements.

Wrapping up

Here’s a breakdown of what we covered:

| Type | Level | Scope | Tooling examples |
| --- | --- | --- | --- |
| Unit | Low | Tests the functions and methods of an application. | Jest, Mocha |
| Integration | Medium | Tests interactions between units. | Jest |
| End-to-end | High | Tests user interactions in a real-life browser by providing it instructions for what to do and expected outcomes. | Cypress, Puppeteer |
| Accessibility | High | Tests the interface of your application against accessibility standards criteria. | Lighthouse |
| Visual regression | High | Tests the visual structure of the application, including the visual differences produced by a change in the code. | Cypress, Percy, Applitools |
| Performance | High | Tests the application for performance and stability. | Lighthouse |

So, is testing for everyone? Yes, it is! Given all the available libraries, services, and tools we have to test different aspects of an application at different points, there’s at least something out there that allows us to measure and test code against standards and expectations — and some of them don’t even require code or configuration!

In my experience, many developers neglect testing and think that a simple click-through or post-deployment check will catch any possible bugs from a change in the code. If you want to make sure your application works as expected, is inclusive to as many people as possible, runs efficiently, and is well-designed, then testing needs to be a core part of your workflow, whether it’s automated or manual.

Now that you know what types of tests there are and how they work, how are you going to implement testing into your work?


Front-End Testing is For Everyone originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
https://css-tricks.com/front-end-testing-is-for-everyone/feed/ 11 341342
React Component Tests for Humans https://css-tricks.com/react-component-tests-for-humans/ https://css-tricks.com/react-component-tests-for-humans/#comments Tue, 23 Feb 2021 19:06:31 +0000 https://css-tricks.com/?p=334643 React component tests should be interesting, straightforward, and easy for a human to build and maintain.

Yet, the current state of the testing library ecosystem is not sufficient to motivate developers to write consistent JavaScript tests for React components. Testing …


React Component Tests for Humans originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
React component tests should be interesting, straightforward, and easy for a human to build and maintain.

Yet, the current state of the testing library ecosystem is not sufficient to motivate developers to write consistent JavaScript tests for React components. Testing React components—and the DOM in general—often requires some kind of higher-level wrapper around popular test runners like Jest or Mocha.

Here’s the problem

Writing component tests with the tools available today is boring, and even when you get to writing them, it involves lots of hassle. Expressing test logic in a jQuery-like style (chaining) is confusing. It doesn’t jibe with how React components are usually built.

The Enzyme code below is readable, but a bit too bulky because it uses too many words to express something that is ultimately simple markup.

expect(screen.find(".view").hasClass("technologies")).to.equal(true);
expect(screen.find("h3").text()).toEqual("Technologies:");
expect(screen.find("ul").children()).to.have.lengthOf(4);
expect(screen.contains([
  <li>JavaScript</li>,
  <li>ReactJs</li>,
  <li>NodeJs</li>,
  <li>Webpack</li>
])).to.equal(true);
expect(screen.find("button").text()).toEqual("Back");
expect(screen.find("button").hasClass("small")).to.equal(true);

The DOM representation is just this:

<div className="view technologies">
  <h3>Technologies:</h3>
  <ul>
    <li>JavaScript</li>
    <li>ReactJs</li>
    <li>NodeJs</li>
    <li>Webpack</li>
  </ul>
  <button className="small">Back</button>
</div>

What if you need to test heavier components? While the syntax is still bearable, it doesn’t help your brain grasp the structure and logic. Reading and writing several tests like this is bound to wear you out—it certainly wears me out. That’s because React components follow certain principles to generate HTML code at the end. Tests that express the same principles, on the other hand, are not straightforward. Simply using JavaScript chaining won’t help in the long run.

There are two main issues with testing in React:

  • How to even approach writing tests specifically for components
  • How to avoid all the unnecessary noise

Let’s further expand those before jumping into the real examples.

Approaching React component tests

A simple React component may look like this:

function Welcome(props) {
  return <h1>Hello, {props.name}</h1>;
}

This is a function that accepts a props object and returns a DOM node using the JSX syntax.

Since a component can be represented by a function, it is all about testing functions. We need to account for arguments and how they influence the returned result. Applying that logic to React components, the focus in the tests should be on setting up props and testing for the DOM rendered in the UI. Since user actions like mouseover, click, typing, etc. may also lead to UI changes, you will need to find a way to programmatically trigger those too.

Hiding the unnecessary noise in tests

Tests require a certain level of readability achieved by both slimming the wording down and following a certain pattern to describe each scenario.

Component tests flow through three phases:

  1. Arrange: The component props are prepared.
  2. Act: The component needs to render its DOM to the UI and register any user actions (events) to be programmatically triggered.
  3. Assert: The expectations are set, verifying certain side effects over the component markup.

This pattern in unit testing is known as Arrange-Act-Assert.

Here is an example:

it("should click a large button", () => {
  // 1️⃣ Arrange
  // Prepare component props
  props.size = "large";

  // 2️⃣ Act
  // Render the Button's DOM and click on it
  const component = mount(<Button {...props}>Send</Button>);
  simulate(component, { type: "click" });

  // 3️⃣ Assert
  // Verify a .clicked class is added 
  expect(component, "to have class", "clicked");
});

For simpler tests, the phases can merge:

it("should render with a custom text", () => {
  // Mixing up all three phases into a single expect() call
  expect(
    // 1️⃣ Preparation
    <Button>Send</Button>, 
    // 2️⃣ Render
    "when mounted",
    // 3️⃣ Validation
    "to have text", 
    "Send"
  );
});

Writing component tests today

Those two examples above look logical but are anything but trivial. Most of the testing tools do not provide such a level of abstraction, so we have to handle it ourselves. Perhaps the code below looks more familiar.

it("should display the technologies view", () => {
  const container = document.createElement("div");
  document.body.appendChild(container);
  
  act(() => {
    ReactDOM.render(<ProfileCard {...props} />, container);
  });
  
  const button = container.querySelector("button");
  
  act(() => {
    button.dispatchEvent(new window.MouseEvent("click", { bubbles: true }));
  });
  
  const details = container.querySelector(".details");
  
  expect(details.classList.contains("technologies")).toBe(true);
});

Compare that with the same test, only with an added layer of abstraction:

it("should display the technologies view", () => {
  const component = mount(<ProfileCard {...props} />);

  simulate(component, {
    type: "click",
    target: "button",
  });

  expect(
    component,
    "queried for test id",
    "details",
    "to have class",
    "technologies"
  );
});

It does look better. Less code and obvious flow. This is not a fictional test, but something you can achieve with UnexpectedJS today.

The following section is a deep dive into testing React components without getting too deep into UnexpectedJS. Its documentation more than does the job. Instead, we’ll focus on usage, examples, and possibilities.

Writing React Tests with UnexpectedJS

UnexpectedJS is an extensible assertion toolkit compatible with all test frameworks. It can be extended with plugins, and some of those plugins are used in the test project below. Probably the best thing about this library is the handy syntax it provides to describe component test cases in React.

The example: A Profile Card component

The subject of the tests is a Profile card component.

A card component where the person’s name, photo, and number of posts are displayed to the left in a single column with a light red background, and the bio is displayed on the right in paragraph form with a title against a white background.

And here is the full component code of ProfileCard.js:

// ProfileCard.js
import { useState } from "react";

export default function ProfileCard({
  data: {
    name,
    posts,
    isOnline = false,
    bio = "",
    location = "",
    technologies = [],
    creationDate,
    onViewChange,
  },
}) {
  const [isBioVisible, setIsBioVisible] = useState(true);

  const handleBioVisibility = () => {
    setIsBioVisible(!isBioVisible);
    if (typeof onViewChange === "function") {
      onViewChange(!isBioVisible);
    }
  };

  return (
    <div className="ProfileCard">
      <div className="avatar">
        <h2>{name}</h2>
        <i className="photo" />
        <span>{posts} posts</span>
        <i className={`status ${isOnline ? "online" : "offline"}`} />
      </div>
      <div className={`details ${isBioVisible ? "bio" : "technologies"}`}>
        {isBioVisible ? (
          <>
            <h3>Bio</h3>
            <p>{bio !== "" ? bio : "No bio provided yet"}</p>
            <div>
              <button onClick={handleBioVisibility}>View Skills</button>
              <p className="joined">Joined: {creationDate}</p>
            </div>
          </>
        ) : (
          <>
            <h3>Technologies</h3>
            {technologies.length > 0 && (
              <ul>
                {technologies.map((item, index) => (
                  <li key={index}>{item}</li>
                ))}
              </ul>
            )}
            <div>
              <button onClick={handleBioVisibility}>View Bio</button>
              {!!location && <p className="location">Location: {location}</p>}
            </div>
          </>
        )}
      </div>
    </div>
  );
}

We will work with the component’s desktop version. You can read more about device-driven code split in React but note that testing mobile components is still pretty straightforward.

Setting up the example project

Not all tests are covered in this article, but we will certainly look at the most interesting ones. If you want to follow along, view this component in the browser, or check all its tests, go ahead and clone the GitHub repo.

## 1. Clone the project:
git clone git@github.com:moubi/profile-card.git

## 2. Navigate to the project folder:
cd profile-card

## 3. Install the dependencies:
yarn

## 4. Start and view the component in the browser:
yarn start

## 5. Run the tests:
yarn test

Here’s how the <ProfileCard /> component and UnexpectedJS tests are structured once the project has spun up:

/src
  └── /components
      ├── /ProfileCard
      |   ├── ProfileCard.js
      |   ├── ProfileCard.scss
      |   └── ProfileCard.test.js
      └── /test-utils
           └── unexpected-react.js

Component tests

Let’s take a look at some of the component tests. These are located in src/components/ProfileCard/ProfileCard.test.js. Note how each test is organized by the three phases we covered earlier.

  1. Setting up required component props for each test.
beforeEach(() => {
  props = {
    data: {
      name: "Justin Case",
      posts: 45,
      creationDate: "01.01.2021",
    },
  };
});

Before each test, a props object with the required <ProfileCard /> props is composed, where props.data contains the minimum info for the component to render.

  2. Render with status online.

Now we check if the profile renders with the “online” status icon.

And the test case for that:

it("should display online icon", () => {
  // Set the isOnline prop
  props.data.isOnline = true;

  // The minimum to test for is the presence of the .online class
  expect(
    <ProfileCard {...props} />,
    "when mounted",
    "queried for test id",
    "status",
    "to have class",
    "online"
  );
});
  3. Render with bio text.

<ProfileCard /> accepts any arbitrary string for its bio.

So, let’s write a test case for that:

it("should display bio text", () => {
  // Set the bio prop
  props.data.bio = "This is a bio text";

  // Testing if the bio string is rendered in the DOM
  expect(
    <ProfileCard {...props} />,
    "when mounted",
    "queried for test id",
    "bio-text",
    "to have text",
    "This is a bio text"
  );
});
  4. Render “Technologies” view with an empty list.

Clicking on the “View Skills” link should switch to a list of technologies for this user. If no data is passed, then the list should be empty.

Here’s that test case:

it("should display the technologies view", () => {
  // Mount <ProfileCard /> and obtain a ref
  const component = mount(<ProfileCard {...props} />);

  // Simulate a click on the button element ("View Skills" link)
  simulate(component, {
    type: "click",
    target: "button",
  });

  // Check if the details element contains a .technologies className
  expect(
    component,
    "queried for test id",
    "details",
    "to have class",
    "technologies"
  );
});
  5. Render a list of technologies.

If a list of technologies is passed, it will display in the UI when clicking on the “View Skills” link.

Yep, another test case:

it("should display list of technologies", () => {
  // Set the list of technologies
  props.data.technologies = ["JavaScript", "React", "NodeJs"];
 
  // Mount ProfileCard and obtain a ref
  const component = mount(<ProfileCard {...props} />);

  // Simulate a click on the button element ("View Skills" link)
  simulate(component, {
    type: "click",
    target: "button",
  });

  // Check if the list of technologies is present and matches the prop values
  expect(
    component,
    "queried for test id",
    "technologies-list",
    "to satisfy",
    {
      children: [
        { children: "JavaScript" },
        { children: "React" },
        { children: "NodeJs" },
      ]
    }
  );
});
  6. Render a user location.

That information should render in the DOM only if it was provided as a prop.

The test case:

it("should display location", () => {
  // Set the location 
  props.data.location = "Copenhagen, Denmark";

  // Mount <ProfileCard /> and obtain a ref
  const component = mount(<ProfileCard {...props} />);
  
  // Simulate a click on the button element ("View Skills" link)
  // Location render only as part of the Technologies view
  simulate(component, {
    type: "click",
    target: "button",
  });

  // Check if the location string matches the prop value
  expect(
    component,
    "queried for test id",
    "location",
    "to have text",
    "Location: Copenhagen, Denmark"
  );
});
  7. Calling a callback when switching views.

This test does not compare DOM nodes but does check if a function prop passed to <ProfileCard /> is executed with the correct argument when switching between the Bio and Technologies views.

it("should call onViewChange prop", () => {
  // Create a function stub (dummy)
  props.data.onViewChange = sinon.stub();
  
  // Mount ProfileCard and obtain a ref
  const component = mount(<ProfileCard {...props} />);

  // Simulate a click on the button element ("View Skills" link)
  simulate(component, {
    type: "click",
    target: "button",
  });

  // Check if the stub function prop is called with false value for isBioVisible
  // isBioVisible is part of the component's local state
  expect(
    props.data.onViewChange,
    "to have a call exhaustively satisfying",
    [false]
  );
});
  8. Render with a default set of props.

A note on DOM comparison:
You want to stay away from the DOM details in the tests most of the time. Use test IDs instead.
If for some reason you need to assert against the DOM structure, reference the example below.

This test checks the whole DOM produced by the component when passing name, posts, and creationDate fields.

Here’s what the result produces in the UI:

And here’s the test case for it:

it("should render default", () => {
  // "to exhaustively satisfy" ensures all classes/attributes are also matching
  expect(
    <ProfileCard {...props} />,
    "when mounted",
    "to exhaustively satisfy",
    <div className="ProfileCard">
      <div className="avatar">
        <h2>Justin Case</h2>
        <i className="photo" />
        <span>45{" posts"}</span>
        <i className="status offline" />
      </div>
      <div className="details bio">
        <h3>Bio</h3>
        <p>No bio provided yet</p>
        <div>
          <button>View Skills</button>
          <p className="joined">{"Joined: "}01.01.2021</p>
        </div>
      </div>
    </div>
  );
});

Running all the tests

Now, all of the tests for <ProfileCard /> can be executed with a simple command:

yarn test

Notice that tests are grouped. There are two independent tests and two groups of tests for each of the <ProfileCard /> views—bio and technologies. Grouping makes test suites easier to follow and is a nice way to organize logically-related UI units.
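
In Jest-style runners, that grouping is just nested describe blocks — roughly like this sketch:

describe("<ProfileCard />", () => {
  it("should render default", () => { /* ... */ });

  describe("bio view", () => {
    it("should display bio text", () => { /* ... */ });
  });

  describe("technologies view", () => {
    it("should display list of technologies", () => { /* ... */ });
  });
});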

Some final words

Again, this is meant to be a fairly simple example of how to approach React component tests. The essence is to look at components as simple functions that accept props and return a DOM. From that point on, choosing a testing library should be based on the usefulness of the tools it provides for handling component renders and DOM comparisons. UnexpectedJS happens to be very good at that in my experience.

What should be your next steps? Look at the GitHub project and give it a try if you haven’t already! Check all the tests in ProfileCard.test.js and perhaps try to write a few of your own. You can also look at src/test-utils/unexpected-react.js which is a simple helper function exporting features from the third-party testing libraries.

And lastly, here are a few additional resources I’d suggest checking out to dig even deeper into React component testing:


React Component Tests for Humans originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
https://css-tricks.com/react-component-tests-for-humans/feed/ 12 334643
A Continuous Integration and Deployment Setup with CircleCI and Coveralls https://css-tricks.com/a-continuous-integration-and-deployment-setup-with-circleci-and-coveralls/ https://css-tricks.com/a-continuous-integration-and-deployment-setup-with-circleci-and-coveralls/#respond Mon, 09 Nov 2020 15:29:33 +0000 https://css-tricks.com/?p=324864 Continuous Integration (CI) and Continuous Deployment (CD) are crucial development practices, especially for teams. Every project is prone to error, regardless of the size. But when there is a CI/CD process set up with well-written tests, those errors are a …


A Continuous Integration and Deployment Setup with CircleCI and Coveralls originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
Continuous Integration (CI) and Continuous Deployment (CD) are crucial development practices, especially for teams. Every project is prone to error, regardless of the size. But when there is a CI/CD process set up with well-written tests, those errors are a lot easier to find and fix.

In this article, let’s go through how to check test coverage, set up a CI/CD process that uses CircleCI and Coveralls, and deploys a Vue application to Heroku. Even if that exact cocktail of tooling isn’t your cup of tea, the concepts we cover will still be helpful for whatever is included in your setup. For example, Vue can be swapped with a different JavaScript framework and the basic principles are still relevant.

Here’s a bit of terminology before we jump right in:

  • Continuous integration: This is a practice where developers commit code early and often, putting the code through various test and build processes prior to merge or deployment.
  • Continuous deployment: This is the practice of keeping software deployable to production at all times.
  • Test Coverage: This is a measure used to describe the degree to which software is tested. A program with high coverage means a majority of the code is put through testing.

To make the most of this tutorial, you should have the following:

  • CircleCI account: CircleCI is a CI/CD platform that we’ll use for automated deployment (which includes testing and building our application before deployment).
  • GitHub account: We’ll store the project and its tests in a repo.
  • Heroku account: Heroku is a platform used for deploying and scaling applications. We’ll use it for deployment and hosting.
  • Coveralls account: Coveralls is a platform used to record and show code coverage.
  • NYC: This is a package that we will use to check for code coverage.

A repo containing the example covered in this post is available on GitHub.

Let’s set things up

First, let’s install NYC in the project folder:

npm i nyc

Next, we need to edit the scripts in package.json to check the test coverage. If we are trying to check the coverage while running unit tests, we would need to edit the test script:

"scripts": {
  "test:unit": "nyc vue-cli-service test:unit",
},

This command assumes that we’re building the app with Vue, which includes a reference to vue-cli-service. The command will need to change to reflect the framework used in your project.

If we are trying to check the coverage separately, we need to add another line to the scripts:

"scripts": {
  "test:unit": "nyc vue-cli-service test:unit",
  "coverage": "nyc npm run test:unit"
},

Now we can check the coverage with a terminal command:

npm run coverage
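By default, nyc prints a text summary to the terminal. If you want different reporters or a minimum coverage threshold, a small config file does the trick. Here’s a minimal .nycrc sketch (the threshold numbers are illustrative, not requirements from this project):

{
  "reporter": ["text", "lcov"],
  "check-coverage": true,
  "lines": 80,
  "branches": 70
}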

Next, we’ll install Coveralls, which is responsible for reporting and showing the coverage:

npm i coveralls

Now we need to add Coveralls as another script in package.json. This script helps us save our test coverage report to Coveralls.

"scripts": {
  "test:unit": "nyc vue-cli-service test:unit",
  "coverage": "nyc npm run test:unit",
  "coveralls": "nyc report --reporter=text-lcov | coveralls"
},

Let’s go to our Heroku dashboard and register our app there. Heroku is what we’ll use to host it.

We’ll use CircleCI to automate our CI/CD process. Proceed to the CircleCI dashboard to set up our project.

We can navigate to our projects through the Projects tab in the CircleCI sidebar, where we should see the list of our projects in our GitHub organization. Click the “Set Up Project” button. That takes us to a new page where we’re asked if we want to use an existing config. We do indeed have our own configuration, so let’s select the “Use an existing config” option.

After that, we’re taken to the selected project’s pipeline. Great! We are done connecting our repository to CircleCI. Now, let’s add our environment variables to our CircleCI project.

To add variables, we need to navigate into the project settings.

The project settings has an Environment Variables tab in the sidebar. This is where we want to store our variables.

Variables needed for this tutorial are:

  • The Heroku app name: HEROKU_APP_NAME
  • Our Heroku API key: HEROKU_API_KEY
  • The Coveralls repository token: COVERALLS_REPO_TOKEN

The Heroku API key can be found in the account section of the Heroku dashboard.

The Coveralls repository token is on the repository’s Coveralls account. First, we need to add the repo to Coveralls, which we do by selecting the GitHub repository from the list of available repositories.

Now that we’ve added the repo to Coveralls, we can get the repository token by clicking on the repo.

Integrating CircleCI

We’ve already connected CircleCI to our GitHub repository. That means CircleCI will be informed whenever a change or action occurs in the GitHub repository. What we want to do now is run through the steps to inform CircleCI of the operations we want it to run after it detects a change to the repo.

In the root folder of our project locally, let’s create a folder named .circleci and, in it, a file called config.yml. This is where all of CircleCI’s operations will be defined.

Here’s the code that goes in that file:

version: 2.1
orbs:
  node: circleci/node@1.1 # node orb
  heroku: circleci/heroku@0.0.10 # heroku orb
  coveralls: coveralls/coveralls@1.0.6 # coveralls orb
workflows:
  heroku_deploy:
    jobs:
      - build
      - heroku/deploy-via-git: # use the pre-configured job
          requires:
            - build
          filters:
            branches:
              only: master
jobs:
  build:
    docker:
      - image: circleci/node:10.16.0
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: install-npm-dependencies
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - ./node_modules
      - run: # run tests
          name: test
          command: npm run test:unit
      - run: # run code coverage report
          name: code-coverage
          command: npm run coveralls
      - run: # run build
          name: Build
          command: npm run build
      # - coveralls/upload

That’s a big chunk of code. Let’s break it down so we know what it’s doing.

Orbs

orbs:
  node: circleci/node@1.1 # node orb
  heroku: circleci/heroku@0.0.10 # heroku orb
  coveralls: coveralls/coveralls@1.0.6 # coveralls orb

Orbs are open source packages used to simplify the integration of software and packages across projects. In our code, we declare the orbs we’re using for the CI/CD process. We reference the node orb because we are making use of JavaScript, the heroku orb because we are using a Heroku workflow for automated deployment, and, finally, the coveralls orb because we plan to send the coverage results to Coveralls.

The Heroku and Coveralls orbs are external orbs. So, if we run the app through testing now, they will trigger an error. To get rid of the error, we need to navigate to the “Organization Settings” page in the CircleCI account.

Then, let’s navigate to the Security tab and allow uncertified orbs:

Workflows

workflows:
  heroku_deploy:
    jobs:
      - build
      - heroku/deploy-via-git: # Use the pre-configured job  
          requires:
            - build
          filters:
            branches:
              only: master

A workflow is used to define a collection of jobs and run them in order. This section of the code is responsible for the automated hosting. It tells CircleCI to build the project, then deploy. requires signifies that the heroku/deploy-via-git job requires the build to be complete — that means it will wait for the build to complete before deployment.

Jobs

jobs:
  build:
    docker:
      - image: circleci/node:10.16.0
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: install-npm-dependencies
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - ./node_modules

A job is a collection of steps. In this section of the code, we restore the dependencies that were installed during previous builds through the restore_cache step.

After that, we install the uncached dependencies, then save them so they don’t need to be re-installed during the next build.

Then we tell CircleCI to run the tests we wrote for the project and check its test coverage. Note that caching dependencies makes subsequent builds faster: the stored dependencies don’t need to be re-installed on the next build.

Uploading our code coverage to Coveralls

- run: # run tests
    name: test
    command: npm run test:unit
- run: # run code coverage report
    name: code-coverage
    command: npm run coveralls
# - coveralls/upload

This is where the Coveralls magic happens because it’s where we are actually running our unit tests. Remember when we added the nyc command to the test:unit script in our package.json file? Thanks to that, unit tests now provide code coverage.

Since the unit tests now produce coverage data, we want that data included in the coverage report, which is why we call the test command here.

And last, the code runs the Coveralls script we added in package.json. This script sends our coverage report to Coveralls.

You may have noticed that the coveralls/upload line is commented out. It was meant to be the finishing step of the process, but it ended up being more of a blocker (a bug, in developer terms). I’ve left it commented out in case another developer wants to pick it up from there.

Putting everything together

Behold our app, complete with continuous integration and deployment!

A successful build: the CircleCI pipeline showing nine successful development, test, and build steps.

Continuous integration and deployment help in so many cases. A common example would be when the software is in a testing stage. In this stage, there are lots of commits happening for lots of corrections. The last thing I would want to do as a developer would be to manually run tests and manually deploy my application after every minor change. Ughhh. I hate repetition!

I don’t know about you, but CI and CD are things I’ve been aware of for some time, but I always found ways to push them aside because they either sounded too hard or time-consuming. But now that you’ve seen how relatively little setup there is and the benefits that come with them, hopefully you feel encouraged and ready to give them a shot on a project of your own.


A Continuous Integration and Deployment Setup with CircleCI and Coveralls originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Develop, Preview, Test https://css-tricks.com/develop-preview-test/ Fri, 17 Jul 2020 18:03:54 +0000

Guillermo:

I want to make the case that prioritizing end-to-end (E2E) testing for the critical parts of your app will reduce risk and give you the best return. Further, I’ll show how you can adopt this methodology in mere minutes.

His test is:

  1. Spin up Puppeteer (Headless Chrome) and Chai
  2. Go to the homepage
  3. Test if the homepage has his name on it.

YES.

Just one super basic integration test goes a long way. If your site spins up, returns a page, and renders stuff on it that you expect, a lot is going right. Then, once you have that, you can toss in a handful more where you navigate around a little and click some things. And, if it still works, you’re in pretty good shape.
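For flavor, here’s a rough sketch of that kind of smoke test using Puppeteer and Chai with a Mocha-style runner (the URL and the asserted text are placeholders, not the actual values from Guillermo’s test):

const puppeteer = require("puppeteer");
const { expect } = require("chai");

describe("homepage smoke test", () => {
  it("loads and renders the expected name", async function () {
    this.timeout(10000); // launching a browser can exceed Mocha's default timeout

    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto("https://example.com"); // placeholder URL
    const bodyText = await page.$eval("body", (el) => el.textContent);
    await browser.close();

    expect(bodyText).to.include("Guillermo"); // placeholder assertion
  });
});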

I’ve had a little trouble with Cypress over the years, but you’ll probably have better luck than I did. Overall, I think it’s the nicest player in the integration test market.



Develop, Preview, Test originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Freezing User-Agent Strings https://css-tricks.com/freezing-user-agent-strings/ Mon, 03 Feb 2020 20:29:36 +0000

There’s been news about Chrome freezing their User-Agent string (and all other major browsers are on board). That means they’ll still have a User-Agent (UA) string (which comes across in headers and is available in JavaScript as navigator.userAgent). By freezing it, it will be less useful over time in detecting the browser/platform/version, although the quoted reason for doing it is more about privacy and stopping fingerprinting than about developer concerns.

In the front-end world, the general advice is: you shouldn’t be doing UA sniffing. The main problem is that so many sites get it wrong, and the changes they make with that information end up hurting more than they help. And the general advice for avoiding it is: you should test based on the reality of what you are trying to do instead.

Are you trying to test if a browser supports a particular feature? Then test for that feature, rather than the abstracted idea of a particular browser that is supposed to support that feature.

In JavaScript, sometimes features are very easy to test because you test for the presence of their APIs:

  if (navigator.geolocation) {
    navigator.geolocation.getCurrentPosition(showPosition);
  } else {
    console.warn("Geolocation not supported");
  }

In CSS, we have a native mechanism via @supports:

@supports (display: grid) {
  .main {
    display: grid;
  }
}

That is exposed in JavaScript via an API that returns a boolean answer:

CSS.supports("display: flex");

Not everything on the web platform is this easy to test, but it’s generally possible without doing UA sniffing. If you’re in a difficult position, it’s always worth checking to see if Modernizr has a test for it, which is kinda the gold-standard of feature testing as chances are it has been battle-tested and has dealt with edge cases in a way you might not foresee. If you actually use the library, it gives you clean logical breaks:

if (Modernizr.requestanimationframe) {
  // supported
} else {
  // not-supported
}

What if you just really need to know the browser type, platform, and version? Well, apparently that information is still possible to get, via a new thing called User-Agent Client Hints (UA-CH).

Wanna know the platform? The server asks for it (via an Accept-CH response header) and theoretically, the browser sends that information back in a Sec-CH-Platform header on its next request. You have to essentially ask for it, which is apparently enough to prevent the problematic privacy fingerprinting stuff. It appears there are headers like Sec-CH-Mobile for mobile too, which is a little curious. Who is deciding what a “mobile” device is? What decisions are we expected to make with that?
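As a sketch, the negotiation might look something like this (header names follow the early proposal described here; the spec that eventually shipped settled on names like Sec-CH-UA-Platform):

# First response: the server opts in to receiving hints
HTTP/1.1 200 OK
Accept-CH: Platform, Mobile

# Subsequent request: the browser volunteers the requested hints
GET /page HTTP/1.1
Sec-CH-Platform: "macOS"
Sec-CH-Mobile: ?0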

Knowing information about the browser, platform and version at the server level is often desirable as well (sending different code in different situations) — just as much as it is client-side, but without the benefit of being able to do tests. Presumably, the frozen UA strings will be useful for long enough that server-side situations can port over to using UA-CH.

Jon Arne Sæterås is nervous:

Professionally, I’ve been hands on with the mobile web space and seen it develop for more than 15 years and I know that many, big and small, websites rely on device detection based on the User-Agent header. From Google’s perspective it may seem easy to switch to the alternative UA-CH, but this is where the team pushing this change doesn’t understand the impact:

Functionality based on device detection is critical, widespread and not only in front end code. Huge software systems with backend code rely on device detection, as well as entire infrastructure stacks.

In the biggest codebase I work on, we do a smidge of server-side UA detection. We use a Rails gem called Browser that exposes UA-derived info in a nice API. I can write:

if browser.safari?
  # Safari-specific handling goes here
end

We also expose information from that gem on the client-side so it can be used there as well. There are only a handful of usages across both the front and back end, none of which look like they would be particularly difficult to handle some other way.

In the past, it’s been kinda tricky to relay front-end information back to the server in a way that’s useful on the first page load (since the UA string doesn’t carry stuff like viewport size). I remember some pretty fancy dancing where I loaded up a skeleton page that executed a tiny bit of JavaScript to measure things like the viewport width and screen size, set a cookie, and force-refreshed the page. If the cookie was present, the server had what it needed and didn’t load the skeleton page at all on those requests.
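A bare-bones sketch of that skeleton-page dance might look like this (the cookie name and measured values are illustrative):

// Runs on the skeleton page only.
if (!document.cookie.includes("client_hints=")) {
  const hints = {
    viewportWidth: window.innerWidth,
    screenWidth: window.screen.width,
    dpr: window.devicePixelRatio
  };
  // Stash the measurements where the server can read them on the next request.
  document.cookie =
    "client_hints=" + encodeURIComponent(JSON.stringify(hints)) + "; path=/";
  window.location.reload();
}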

Tricky stuff, but then the server has information about the viewport width, which is useful for things like sending small-screen assets (e.g., different HTML), which was otherwise impossible.

I mention that because UA-CH stuff is not to be confused with regular ol’ Client Hints. We’re supposed to be able to configure our servers to send an Accept-CH header and then have our client-side code whitelist stuff to send back, like:

<meta http-equiv="Accept-CH" content="DPR, Viewport-Width">

That means a server can have information from the client about these things on subsequent page loads. That’s a nice API, but Firefox and Safari don’t support it. I wonder if it will get a bump now that both of those browsers are signaling interest in UA-CH because of this frozen UA string stuff.


Freezing User-Agent Strings originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Getting Started with Front End Testing https://css-tricks.com/getting-started-with-front-end-testing/ Mon, 20 Jan 2020 15:31:56 +0000

Amy Kapernick covers four types of testing that front-end devs could and should be doing:

  1. Linting (There’s ESLint for JavaScript and Stylelint and/or Prettier for CSS.)
  2. Accessibility Testing (Amy recommends pa11y, and we’ve covered Axe.)
  3. Visual Regression Testing (Amy recommends Backstop, and we’ve covered Percy.)
  4. End to End Testing (There’s Cypress and stuff like jest-puppeteer.)

Amy published something similar over on 24 ways, listing out 12 different testing tools.

As long as we’re being comprehensive, we might consider performance testing to be part of all this, à la SpeedCurve or Calibre, to mention a couple of web services.

I’ve liked what Harry Roberts has said lately about performance budgets. They don’t need to be fancy; they just need to prevent you from bad screwups.

[…] most organisations aren’t ready for challenges, they’re in need of safety nets. Performance budgets should not be things to work toward, they should be things that stop us slipping past a certain point. They shouldn’t be aspirational, they should be preventative.
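One lightweight way to make a budget preventative rather than aspirational is to wire it into CI. For example, Lighthouse accepts a budget.json along these lines (the numbers are illustrative, in kilobytes):

[
  {
    "path": "/*",
    "resourceSizes": [
      { "resourceType": "script", "budget": 300 },
      { "resourceType": "total", "budget": 1000 }
    ]
  }
]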



Getting Started with Front End Testing originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Testing React Hooks With Enzyme and React Testing Library https://css-tricks.com/testing-react-hooks-with-enzyme-and-react-testing-library/ Fri, 29 Nov 2019 15:03:08 +0000

As you begin to make use of React hooks in your applications, you’ll want to be certain the code you write is nothing short of solid. There’s nothing like shipping buggy code. One way to be certain your code is bug-free is to write tests. And testing React hooks is not much different from how React applications are tested in general.

In this tutorial, we will look at how to do that by making use of a to-do application built with hooks. We’ll cover writing tests using Enzyme and React Testing Library, both of which are able to do just that. If you’re new to Enzyme, we actually posted about it a little while back showing how it can be used with Jest in React applications. It’s not a bad idea to check that out as we dig into testing React hooks.

Here’s what we want to test

A pretty standard to-do component looks something like this:

import React, { useState, useRef } from "react";
const Todo = () => {
  const [todos, setTodos] = useState([
    { id: 1, item: "Fix bugs" },
    { id: 2, item: "Take out the trash" }
  ]);
  const todoRef = useRef();
  const removeTodo = id => {
    setTodos(todos.filter(todo => todo.id !== id));
  };
  const addTodo = data => {
    let id = todos.length + 1;
    setTodos([
      ...todos,
      {
        id,
        item: data
      }
    ]);
  };
  const handleNewTodo = e => {
    e.preventDefault();
    const item = todoRef.current;
    addTodo(item.value);
    item.value = "";
  };
  return (
    <div className="container">
      <div className="row">
        <div className="col-md-6">
          <h2>Add Todo</h2>
        </div>
      </div>
      <form>
        <div className="row">
          <div className="col-md-6">
            <input
              type="text"
              autoFocus
              ref={todoRef}
              placeholder="Enter a task"
              className="form-control"
              data-testid="input"
            />
          </div>
        </div>
        <div className="row">
          <div className="col-md-6">
            <button
              type="submit"
              onClick={handleNewTodo}
              className="btn btn-primary"
            >
              Add Task
            </button>
          </div>
        </div>
      </form>
      <div className="row todo-list">
        <div className="col-md-6">
          <h3>Lists</h3>
          {!todos.length ? (
            <div className="no-task">No task!</div>
          ) : (
            <ul data-testid="todos">
              {todos.map(todo => {
                return (
                  <li key={todo.id}>
                    <div>
                      <span>{todo.item}</span>
                      <button
                        className="btn btn-danger"
                        data-testid="delete-button"
                        onClick={() => removeTodo(todo.id)}
                      >
                        X
                      </button>
                    </div>
                  </li>
                );
              })}
            </ul>
          )}
        </div>
      </div>
    </div>
  );
};
export default Todo; 

Testing with Enzyme

We need to install the packages before we can start testing. Time to fire up the terminal!

npm install --save-dev enzyme enzyme-adapter-react-16

Inside the src directory, create a file called setupTests.js. This is what we’ll use to configure Enzyme’s adapter.

import Enzyme from "enzyme";
import Adapter from "enzyme-adapter-react-16";
Enzyme.configure({ adapter: new Adapter() }); 

Now we can start writing our tests! We want to test four things:

  1. That the component renders
  2. That the initial to-dos get displayed when it renders
  3. That we can create a new to-do and end up with three items
  4. That we can delete one of the initial to-dos and have only one to-do left

In your src directory, create a folder called __tests__ and create the file where you’ll write your Todo component’s tests in it. Let’s call that file Todo.test.js.

With that done, we can import the packages we need and create a describe block where we’ll fill in our tests.

import React from "react";
import { shallow, mount } from "enzyme";
import Todo from "../Todo";

describe("Todo", () => {
  // Tests will go here using `it` blocks
});

Test 1: The component renders

For this, we’ll make use of shallow render. Shallow rendering allows us to check if the render method of the component gets called — that’s what we want to confirm here because that’s the proof we need that the component renders.

it("renders", () => {
  shallow(<Todo />);
});

Test 2: Initial to-dos get displayed

Here is where we’ll make use of the mount method, which allows us to go deeper than what shallow gives us. That way, we can check the length of the to-do items.

it("displays initial to-dos", () => {
  const wrapper = mount(<Todo />);
  expect(wrapper.find("li")).toHaveLength(2);
});

Test 3: We can create a new to-do and end up with three items

Let’s think about the process involved in creating a new to-do:

  1. The user enters a value into the input field.
  2. The user clicks the submit button.
  3. We get a total of three to-do items, where the third is the newly created one.
it("adds a new item", () => {
  const wrapper = mount(<Todo />);
  wrapper.find("input").instance().value = "Fix failing test";
  expect(wrapper.find("input").instance().value).toEqual("Fix failing test");
  wrapper.find('[type="submit"]').simulate("click");
  expect(wrapper.find("li")).toHaveLength(3);
  expect(
    wrapper
      .find("li div span")
      .last()
      .text()
  ).toEqual("Fix failing test");
});

We mount the component, then make use of the find() and instance() methods to set the value of the input field. We assert that the value of the input field is set to “Fix failing test” before going further to simulate a click event, which should add the new item to the to-do list.

We finally assert that we have three items on the list and that the third item is equal to the one we created.

Test 4: We can delete one of the initial to-dos and have only one to-do left

it("removes an item", () => {
  const wrapper = mount(<Todo />);
  wrapper
    .find("li button")
    .first()
    .simulate("click");
  expect(wrapper.find("li")).toHaveLength(1);
  expect(wrapper.find("li span").map(item => item.text())).toEqual([
    "Take out the trash"
  ]);
});

In this scenario, we remove a to-do by simulating a click event on the first item’s delete button. It’s expected that this will call the removeTodo() method, which should delete the item that was clicked. Then we check the number of items we have left, and the value of the one that remains.

The source code for these four tests are here on GitHub for you to check out.

Testing With react-testing-library

We’ll write three tests for this:

  1. That the initial to-do renders
  2. That we can add a new to-do
  3. That we can delete a to-do

Let’s start by installing the packages we need:

npm install --save-dev @testing-library/jest-dom @testing-library/react

Next, we can import the packages and files:

import React from "react";
import { render, fireEvent } from "@testing-library/react";
import Todo from "../Todo";
import "@testing-library/jest-dom/extend-expect";

test("Todo", () => {
  // Tests go here
}

Test 1: The initial to-do renders

We’ll write our tests inside that describe block. The first test looks like this:

it("displays initial to-dos", () => {
  const { getByTestId } = render(<Todo />);
  const todos = getByTestId("todos");
  expect(todos.children.length).toBe(2);
});

What’s happening here? We’re making use of getByTestId to return the node of the element where data-testid matches the one that was passed to the method. That’s the <ul> element in this case. Then, we’re checking that it has a total of two children (each child being an <li> element inside the unordered list). This passes because the initial to-do list contains two items.

Test 2: We can add a new to-do

We’re also making use of getByTestId here to return the node that matches the argument we’re passing in.

it("adds a new to-do", () => {
  const { getByTestId, getByText } = render(<Todo />);
  const input = getByTestId("input");
  const todos = getByTestId("todos");
  input.value = "Fix failing tests";
  fireEvent.click(getByText("Add Task"));
  expect(todos.children.length).toBe(3);
});

We use getByTestId to return the input field and the ul element like we did before. To simulate a click event that adds a new to-do item, we’re using fireEvent.click() and passing in the getByText() method, which returns the node whose text matches the argument we passed. From there, we can then check to see the length of the to-dos by checking the length of the children array.

Test 3: We can delete a to-do

This will look a little like what we did a little earlier:

it("deletes a to-do", () => {
  const { getAllByTestId, getByTestId } = render(<Todo />);
  const todos = getByTestId("todos");
  const deleteButton = getAllByTestId("delete-button");
  const first = deleteButton[0];
  fireEvent.click(first);
  expect(todos.children.length).toBe(1);
});

We’re making use of getAllByTestId to return the nodes of the delete button. Since we only want to delete one item, we fire a click event on the first item in the collection, which should delete the first to-do. This should then make the length of todos children equal to one.

These tests are also available on GitHub.

Linting

There are two lint rules to abide by when working with hooks:

Rule 1: Call hooks at the top level

…as opposed to inside conditionals, loops or nested functions.

// Don't do this!
if (Math.random() > 0.5) {
  const [invalid, updateInvalid] = useState(false);
}

This goes against the first rule. According to the official documentation, React depends on the order in which hooks are called to associate state with the corresponding useState call. This code breaks that order, as the hook will only be called if the condition is true.

This also applies to useEffect and other hooks. Check out the documentation for more details.
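One way to restructure the snippet above so it follows the rule is to call the hook unconditionally and push the randomness into a lazy initializer (a sketch, not from the official docs):

// Do this instead: the hook is always called, in the same order on every render.
const [invalid, updateInvalid] = useState(() => Math.random() > 0.5);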

Rule 2: Call hooks from React functional components

Hooks are meant to be used in React functional components, not in React class components or regular JavaScript functions.
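As a quick sketch of the right home for a hook (the component is made up for illustration):

import React, { useState } from "react";

// OK: a functional component calling a hook at its top level.
const Counter = () => {
  const [count, setCount] = useState(0);
  return <button onClick={() => setCount(count + 1)}>{count}</button>;
};

export default Counter;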

We’ve basically covered what not to do when it comes to linting. We can avoid these missteps with an npm package that specifically enforces these rules.

npm install eslint-plugin-react-hooks --save-dev

Here’s what we add to the package’s configuration file to make it do its thing:

{
  "plugins": [
    // ...
    "react-hooks"
  ],
  "rules": {
    // ...
    "react-hooks/rules-of-hooks": "error",
    "react-hooks/exhaustive-deps": "warn"
  }
}

If you are making use of Create React App, you should know that it supports the lint plugin out of the box as of v3.0.0.

Go forth and write solid React code!

React hooks are just as prone to error as anything else in your application, and you’re gonna want to ensure that you use them well. As we just saw, there are a couple of ways to go about it. Whether you use Enzyme or React Testing Library to write your tests is totally up to you. Either way, try making use of linting as you go, and no doubt, you’ll be glad you did.


Testing React Hooks With Enzyme and React Testing Library originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

How We Perform Frontend Testing on StackPath’s Customer Portal https://css-tricks.com/how-we-perform-frontend-testing-on-stackpaths-customer-portal/ Fri, 15 Nov 2019 14:49:22 +0000

Nice post from Thomas Ladd about how their front-end team does testing. The list feels like a nice place to be:

  1. TypeScript – A language, but you’re essentially getting various testing for free (passing the right arguments and types of variables)
  2. Jest – Unit tests. JavaScript functions are doing the right stuff. Works with React.
  3. Cypress – Integration tests. Page loads, do stuff with page, expected things happen in DOM. Thomas says their end-to-end tests (e.g. hitting services) are also done in Cypress with zero mocking of data.

I would think this is reflective of a modern setup making its way across lots of front-end teams. If there is anything to add to it, I’d think visual regression testing (e.g. with a tool like Percy) would be the thing to add.

As an alternative to Cypress, jest-puppeteer is also worth mentioning because (1) Jest is already in use here and (2) Puppeteer is perhaps a more direct way of controlling the browser — no middleman language or Electron or anything.
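For comparison, a jest-puppeteer test is just a Jest test where page and browser are globals provided by the preset. A sketch, with a placeholder URL and assertion:

// jest.config.js would set: { preset: "jest-puppeteer" }
describe("homepage", () => {
  it("renders the main heading", async () => {
    await page.goto("https://example.com"); // placeholder URL
    const heading = await page.$eval("h1", (el) => el.textContent);
    expect(heading).toContain("Welcome"); // placeholder assertion
  });
});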

Thomas even writes that there’s a downside here: too-many-tools:

Not only do we have to know how to write tests in these different tools; we also have to make decisions all the time about which tool to use. Should I write an E2E test covering this functionality or is just writing an integration test fine? Do I need unit tests covering some of these finer-grain details as well?

There is undoubtedly a mental load here that isn’t present if you only have one choice. In general, we start with integration tests as the default and then add on an E2E test if we feel the functionality is particularly critical and backend-dependent.

I’m not sure we’ll ever get to a point where we only have to write one kind of test, but having unit and integration tests share some common language is nice. I’m also theoretically opposite in my conclusion: integration/E2E tests are a better default, since they are closer to reality and prove that a ton is going right by testing just one thing. However, they are also slower and flakier, so, sad trombone.



How We Perform Frontend Testing on StackPath’s Customer Portal originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

#178: Percy Catches Visual Changes in any Workflow https://css-tricks.com/video-screencasts/178-percy-catches-visual-changes-in-any-workflow/ Wed, 06 Nov 2019 16:50:48 +0000

I wanted to make sure you understand exactly what Percy can do for you, hence the title. When you commit a change to your website’s Git repo, like in a Pull Request workflow most of us live in, Percy will let you know if that change causes any visual changes to your site. It will show you exactly what those changes are: what pages, what media query breakpoint, what browser, etc.

It’s rather amazing.

Here’s a screenshot of the Percy dashboard when I did a change increasing the size of a button:

Hopefully, I intended that change. If I didn’t, this is the moment Percy saves my butt. I can easily make accidental visual changes by changing CSS that has a wider impact than I originally thought it would.

Once Percy is set up, it will be a part of the Pull Request checks that happen automatically:

Putting this kind of testing into your CI (Continuous Integration) pipeline is mighty powerful.

Percy has all kinds of powerful configuration, but it can also be quite simple. Percy! Go to this URL and take a screenshot of it! Percy! Go to this URL, click this button, then take a screenshot of that! If you’re familiar with the fantastically simple browser automation language Puppeteer, that’s what PercyScript uses.
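A PercyScript file really is about that simple. Here’s a sketch using the @percy/script package (the URL, selector, and snapshot names are placeholders):

const PercyScript = require("@percy/script");

PercyScript.run(async (page, percySnapshot) => {
  // Percy! Go to this URL and take a screenshot of it!
  await page.goto("https://example.com");
  await percySnapshot("homepage");

  // Percy! Click this button, then take a screenshot of that!
  await page.click(".buy-now");
  await percySnapshot("homepage (after click)");
});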


#178: Percy Catches Visual Changes in any Workflow originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
