6 Examples of Hard to Test JavaScript

The following content comes from the second module of my Pluralsight course entitled: Front-End First: Testing and Prototyping JavaScript Apps. The rest of the course covers an introduction to Unit Testing, Mocha (a JavaScript test runner), Grunt (a JavaScript task runner), Sinon.js (a mocking and spying library), Mockjax (a way to mock Ajax requests), mockJSON (a way to generate semi-random complex objects for prototyping), and more.

In this post we will look at 6 different ways you can accidentally make JavaScript code hard to test, including...

  1. Tightly Coupled Components
  2. Private Parts
  3. Singletons
  4. Anonymous Functions
  5. Mixed Concerns
  6. New Operators

As we introduce each of these concepts we will describe the issue, take a look at some sample code that has the issue, and then refactor the code to alleviate the problem.

Tightly Coupled Components

Our first hard to test scenario is when we have tightly coupled components. This means that two or more components have a direct reference to each other. This code is characterized by the following...

  • It is difficult to test one component apart from the other
  • The code is more brittle because a change to one piece could break another

Hard to Test Code

For example, let's take a look at the following code snippet.

var polls = {
  add: function (poll) { /*...*/ },
  getList: function (callback) { /*...*/ }
};

var submit = { add: function (p) { polls.add(p); } };

var view = {
  init: function () { polls.getList(this.render); },
  render: function (list) { list.forEach(/*...*/); }
};

The polls object has two methods: add and getList. The submit and view objects rely on polls and interact with its methods.

The concern here is that there is a direct reference from submit and view to the polls object. If we changed the name of the polls object or its methods or parameters, then we would have a problem on our hands. In this case the code is small, but imagine if these were large pieces of code.

In order to resolve this issue we could relax the coupling by passing polls in the submit and view objects. We would basically be manually injecting the dependency at this point. Another way to solve this issue is to have the components communicate to each other using a message bus.
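The message-bus idea can be sketched in a few lines. This is my own minimal illustration (the `bus` object and its `on`/`emit` methods are hypothetical names, not from the course): components publish and subscribe to named topics instead of referencing each other directly.

```javascript
// Minimal message bus sketch: components talk through topics,
// never through direct references to one another.
var bus = {
  topics: {},
  on: function (topic, handler) {
    (this.topics[topic] = this.topics[topic] || []).push(handler);
  },
  emit: function (topic, data) {
    (this.topics[topic] || []).forEach(function (handler) { handler(data); });
  }
};

// The polls component subscribes instead of being called directly...
var added = [];
bus.on("poll.add", function (poll) { added.push(poll); });

// ...and the submit component only needs to know about the bus.
bus.emit("poll.add", "What is your favorite color?");
```

In a test, you can subscribe a fake handler to the same topic and verify the message without the real component ever being loaded.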

Refactored Code

Let's take a look at this code again, but somewhat refactored.

var polls = {
  add: function (poll) {
    $.post("/polls", poll);
  },
  getList: function (callback) {
    $.get("/polls", function (data) {
      callback(data.list);
    });
  }
};

var pollBridge = {
  add: polls.add,
  getList: polls.getList
};

var submit = {
  init: function (polls) {
    this.polls = polls;
  },
  add: function (poll) {
    this.polls.add(poll);
  }
};

var view = {
  init: function (polls) {
    this.polls = polls;
    this.polls.getList(this.render);
  },
  render: function (list) {
    list.forEach(function (poll) {
      $("<li />", { html: poll }).appendTo("#output");
    });
  }
};

submit.init(pollBridge);
submit.add("What is your favorite color?");
submit.add("What programming language do you like best?");
submit.add("Do you enjoy Visual Basic 6?");
submit.add("Have you ever pair programmed before?");
submit.add("Do you unit test your code?");

view.init(pollBridge);

In the above code refactor we introduced a pollBridge object that acts as a contract between the various components. We pass the bridge into the submit and view objects instead of having them reference polls directly.

By adding a bridge we are reducing the tight coupling between the various components. For example, if the polls add method was changed to addPoll we would only need to change the pollBridge mapping to add: polls.addPoll, and wouldn't need to change any code in the submit object since it uses the bridge and not the polls object directly.
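The looser coupling also pays off directly in a unit test: we can hand `submit` a fake bridge and verify the interaction without touching the real `polls` object or the network. This is my own sketch of such a test, not code from the course.

```javascript
// The submit component from the refactor above.
var submit = {
  init: function (polls) { this.polls = polls; },
  add: function (poll) { this.polls.add(poll); }
};

// A fake bridge that just records what it was called with.
var received = [];
var fakeBridge = { add: function (poll) { received.push(poll); } };

submit.init(fakeBridge);
submit.add("Do you unit test your code?");
// `received` now records the call; no Ajax request was made.
```

A mocking library like Sinon.js could create the fake for you, but a hand-rolled recording object like this works fine for simple cases.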

Private Parts

Another area of concern is having private parts in your component. Now don't get me wrong, encapsulation and data hiding are great things and are encouraged. However, they can lead to the following problems...

  • Encapsulation can make testing harder
  • This may be acceptable; it just depends on what you need

You don't have to aim for 100% unit test coverage, so keeping private sections in your code may be perfectly fine for your project. However, if you do want to test those areas then you'll need to expose them somehow.

Hard to Test Code

Let's take a look at the following code snippet...

var person = (function () {
  var chew = function () { console.log("chew"); },
    swallow = function () { console.log("swallow"); },
    eat = function () {
      for (var i = 0, len = 10; i < len; i++) {
        chew();
      }
      swallow();
    };
  return { eat: eat };
}());

In the above code snippet we are using the revealing module pattern and returning a person object with a public method called eat. If you aren't familiar with this pattern, the idea is that whatever is returned at the bottom of the IIFE (Immediately Invoked Function Expression) will be public while everything else will be private to the closure.

Internally there are two private functions named chew and swallow. The public eat method calls the chew function 10 times followed by the swallow function.

If you wanted to unit test either the chew or swallow functions then you'd be out of luck. There isn't a way to get at that functionality directly. This may be cool with you and if so then rock on. However, if you did want to unit test that code then you'd need to expose those methods as well.

Refactored Code

In the following code snippet we are exposing all the methods as public so that they can be unit tested. However, it could be argued that the revealing module pattern isn't necessary since we are making everything public.

var person = (function () {
  var chew = function () { console.log("chew"); },
    swallow = function () { console.log("swallow"); },
    eat = function () {
      for (var i = 0, len = 10; i < len; i++) {
        chew();
      }
      swallow();
    };
  return {
    eat: eat,
    chew: chew,
    swallow: swallow
  };
}());

person.eat();
person.chew();
person.swallow();
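One caveat worth knowing about the refactor above: eat still calls the closure-local chew, so replacing person.chew from a test will not change what eat invokes. My own sketch below (not code from the course) shows a variant that calls its collaborators through this, which a test can then stub with a hand-rolled spy; a library like Sinon.js would normally provide the spy for you.

```javascript
// Variant where eat calls its helpers through `this`,
// so a test can swap them out.
var person = {
  chew: function () { /* ... */ },
  swallow: function () { /* ... */ },
  eat: function () {
    for (var i = 0, len = 10; i < len; i++) { this.chew(); }
    this.swallow();
  }
};

// Hand-rolled spies: count the calls instead of doing the real work.
var chews = 0, swallows = 0;
person.chew = function () { chews++; };
person.swallow = function () { swallows++; };

person.eat();
// chews is now 10 and swallows is 1
```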

Singletons

A singleton is one of the Gang of Four design patterns and it's a well-known and popular one. The essence of the pattern is that you can have only one instance of an object. Some concerns that can come into play are...

  • It doesn't work well when unit testing multiple use cases
  • You may need to reset the singleton's state for each test

Hard to Test Code

Let's take a look at the following code snippet...

var data = {
  token: null,
  users: []
};

function init(username, password) {
  data.token = username + password;
}

function addUser(user) {
  if (data.token) { data.users.push(user); }
}

The previous code snippet has a data object, which serves as our singleton. Technically it isn't a true singleton because I could use JavaScript's Object.create on it to make another one, but for all intents and purposes our object literal will suit just fine.

The singleton is internally being used by the init and addUser methods.

Say I wrote a unit test for the init function and then moved on to test the addUser function. I would run into a problem if I wanted to verify what happens when no token is set: the token would already have been set in the data singleton, since it is shared across tests. In cases like this you'll probably need to reset the data singleton between tests. This is just something to consider and keep in mind if you use singletons.
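A reset between tests can be as simple as a helper that restores the singleton's initial state. The sketch below is my own illustration (the resetData helper is a hypothetical name); in a Mocha suite the call to resetData would live in a beforeEach hook.

```javascript
// The singleton and functions from the snippet above.
var data = { token: null, users: [] };
function init(username, password) { data.token = username + password; }
function addUser(user) { if (data.token) { data.users.push(user); } }

// Hypothetical helper: restore the singleton to a known state.
// In Mocha, call this from a beforeEach hook.
function resetData() {
  data.token = null;
  data.users = [];
}

// "Test" 1: init sets the token, so addUser works.
resetData();
init("user", "pass");
addUser({ name: "first" });

// "Test" 2: after a reset, no token is set, so addUser does nothing.
resetData();
addUser({ name: "second" });
// data.users is empty again
```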

Refactored Code

However, you could refactor your code somewhat to get around this potential problem.

var newData = function () {
  return {
    token: null,
    users: []
  };
};

var users = {
  init: function (data) {
    this.data = data;
  },
  setToken: function (username, password) {
    this.data.token = username + password;
  },
  addUser: function (user) {
    if (this.data.token) { this.data.users.push(user); }
  }
};

users.init(newData());
users.setToken("elijahmanor", "password");
users.addUser({ username: "elijahmanor" });

In the above refactored code we created a factory function called newData that makes a new object with its properties initialized to the correct values. Each unit test can then create its own fresh version of data to test against. This technique removes the need to reset the values to some known state between each test.
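To see the payoff, here is my own sketch of two independent "tests" that each get a fresh object from the factory, so the order they run in no longer matters.

```javascript
// Factory and component from the refactor above.
var newData = function () {
  return { token: null, users: [] };
};

var users = {
  init: function (data) { this.data = data; },
  setToken: function (username, password) { this.data.token = username + password; },
  addUser: function (user) { if (this.data.token) { this.data.users.push(user); } }
};

// "Test" 1: with a token set, addUser adds the user.
var freshA = newData();
users.init(freshA);
users.setToken("user", "pass");
users.addUser({ username: "a" });

// "Test" 2: a brand-new data object has no token, so addUser does nothing,
// regardless of what the previous "test" did.
var freshB = newData();
users.init(freshB);
users.addUser({ username: "b" });
```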

Anonymous Functions

Another technique that can be problematic when testing is using anonymous functions. These are very convenient to use and you've probably seen them used in many blog posts and tutorials.

The issue with having so many anonymous functions is that it isn't easy to test the callback in isolation since there is no name or handle to target the function.

Hard to Test Code

Let's take the following code snippet for example and examine why this might be a problem.

$.ajax({
  url: "/people",
  success: function (data) {
    var $list = $("#list");
    $.each(data.people, function (index, person) {
      $("<li />", {
        text: person.fullName
      }).appendTo($list);
    });
  }
});

In the above snippet we are calling the jQuery ajax method to request a list of people from the server, passing an anonymous callback function as the success option. Although this is a very common way of coding this, it does make testing the code in the callback difficult without actually making an Ajax request.

If you really wanted to unit test the callback, you'd either need to make the actual Ajax request or simulate the Ajax request using a stub or a library like Mockjax. However, with some minimal refactoring we can provide a clean separation that will enable us to test the callback in isolation.
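Stubbing the Ajax request can itself be sketched in a few lines. In the snippet below, the tiny `$` object is a stand-in I define purely for illustration (in the browser this would be the real jQuery): we replace $.ajax with a fake that immediately invokes the success callback with canned data, so no request is ever made. Mockjax automates exactly this kind of interception.

```javascript
// Stand-in for jQuery, defined here only so the sketch is self-contained.
var $ = { ajax: function (options) { /* real implementation would go here */ } };

// Stub: call success synchronously with canned data instead of hitting the server.
$.ajax = function (options) {
  options.success({ people: [{ fullName: "Brendan Eich" }] });
};

var lastData = null;
$.ajax({
  url: "/people",
  success: function (data) { lastData = data; }
});
// lastData now holds the canned response, with no network involved
```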

Refactored Code

A possible refactored solution could be the following...

function render(data) {
  var $list = $("#list");
  $.each(data.people, function (index, person) {
    $("<li />", {
      text: person.fullName
    }).appendTo($list);
  });
}

$.ajax({
  url: "/people",
  success: render
});

All we did was move the success callback out into its own named render function so that we can unit test it apart from actually making an Ajax request. You might go a step further and attach the function onto a common object, but the main idea is to move the functionality outside of the Ajax request.
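Going one step further (my own variation, not from the course): if render is itself split into a pure data-to-markup step and a DOM-append step, the pure step can be tested with no DOM and no jQuery at all. The buildItems name below is hypothetical.

```javascript
// Pure step: turn the response data into markup strings. No DOM needed.
function buildItems(data) {
  return data.people.map(function (person) {
    return "<li>" + person.fullName + "</li>";
  });
}

// DOM step (browser only, shown as a comment to keep this sketch runnable):
// function render(data) { $("#list").append(buildItems(data).join("")); }

var items = buildItems({ people: [{ fullName: "John Resig" }] });
// items is an array of <li> strings ready to be appended
```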

Mixed Concerns

Something else you might want to keep your eye on is having mixed concerns in your code.

You should try to be wary of code that tries to do too many things in one method, especially if they don't go together. For example, it's a good idea to separate your DOM code from your data manipulation code. This should sound familiar if you've had any experience with Model View Controller type of frameworks.

Some issues that you can run into when having mixed concerns are...

  • Testing code with mixed concerns can be very awkward and require more setup and verification code
  • It often requires that you know more details about the implementation

Hard to Test Code

Let's take a look at the following code snippet and talk about why this could be an issue...

var people = {
  list: [],
  add: function (person) {
    this.list.push(person);
    $("#numberOfPeople").html(this.list.length);
  }
};

The above snippet has mixed concerns. The add method takes its parameter and adds it to an internal list array property, which seems appropriate. However, it also updates the DOM in the same method.

The method mixes data and presentation concerns, which is typically not a good idea. It would be better if the add method didn't update the DOM directly, but rather publish a message or possibly a higher level method could update the DOM after the add method was called.

Refactored Code: Add Method

A possible refactored solution could be the following...

var people = {
  list: [],
  add: function (person) {
    this.list.push(person);
    $(this).trigger("person.added", [person]);
  },
  addAndRender: function (person) {
    this.add(person);
    this.render(person);
  },
  render: function (person) {
    $("#numberOfPeople").html(this.list.length);
    $("#lastPersonAdded").html(person.name);
  }
};

people.addAndRender({ name: "Brendan Eich" });
people.addAndRender({ name: "John Resig" });

The above refactor uses a new addAndRender method to combine the two different actions being performed. I don't particularly love introducing this extra method, although it is very descriptive.

Refactored Code: Add Event

Let's take a look at another implementation that uses an event to communicate that something has happened.

var people = {
  list: [],
  initialize: function () {
    $(this).on("person.added", this.render.bind(this));
  },
  add: function (person) {
    this.list.push(person);
    $(this).trigger("person.added", [person]);
  },
  render: function (e, person) {
    $("#numberOfPeople").html(this.list.length);
    $("#lastPersonAdded").html(person.name);
  }
};

people.initialize();
people.add({ name: "Brendan Eich" });
people.add({ name: "John Resig" });

The above code uses a custom event called person.added to communicate that the DOM needs to be updated. The initialize method wires up the render method to run whenever the person.added event fires. I like this solution better than the previous one, but in the end it is up to you.

Regardless, either refactored solution is better than the initial code in that the concerns are now split up into separate methods.
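With the concerns separated, add can now be tested with no DOM at all. The sketch below is my own illustration: it swaps jQuery's event plumbing for a plain onAdded hook (a hypothetical name) so the test can observe the notification directly.

```javascript
var people = {
  list: [],
  onAdded: null,   // test hook standing in for the person.added event
  add: function (person) {
    this.list.push(person);
    if (this.onAdded) { this.onAdded(person); }
  }
};

// The "test": record notifications instead of touching the DOM.
var notified = [];
people.onAdded = function (person) { notified.push(person.name); };

people.add({ name: "Brendan Eich" });
// the list grew and the notification fired, all without any DOM code running
```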

New Operators

Another issue that can make unit testing difficult is using the new operator frequently in your application code. Sure, you'll probably new up your stuff somewhere, but doing so alongside other code that you want to test can cause issues.

Instead, you might consider injecting your dependencies into your component or provide enough information needed for it to create itself.

Hard to Test Code

Let's take a look at the following code snippet and see why there might be an issue.

// ... more code ...
nextBirthday: function () {
  var now = new Date(),
    next = Number.MAX_VALUE,
    person;
  $.each(this.list, function (index, item) {
    /* ... */
  });
  return person;
}
// ... more code ...

The previous code is a small snippet to determine whose birthday is next from an array of individuals. The implementation is fairly straightforward, but the reason it is problematic is because it creates a new Date which ends up being the current date and time. That sounds reasonable at first thought, but it becomes a nuisance when we want to unit test the behavior.

Refactored Code

The following code shows a simple way to get around this, and hopefully bringing it up will help you think about these types of concerns.

var people = {
  list: [],
  add: function (person) { this.list.push(person); },
  nextBirthday: function (date) {
    var now = date ? new Date(date) : new Date(),
      next = Number.MAX_VALUE,
      person;
    $.each(this.list, function (index, item) {
      var dob = new Date(item.dob),
        year = dob.setFullYear(now.getFullYear()) > now ?
          now.getFullYear() : now.getFullYear() + 1,
        diff = dob.setFullYear(year) - now;
      if (diff < next) { next = diff; person = item; }
    });
    return person;
  }
};

people.add({ name: "Mark", dob: "12/19/1976" });
people.add({ name: "Jane", dob: "10/18/1979" });
people.add({ name: "Jim", dob: "8/17/1983" });
people.add({ name: "Mary", dob: "7/9/1981" });
people.add({ name: "Alex", dob: "8/17/2010" });
people.add({ name: "Bob", dob: "3/1/1985" });
people.add({ name: "John", dob: "1/1/1982" });
people.add({ name: "Sue", dob: "2/1/1987" });

console.log(people.nextBirthday("12/15/2013").name);

The nextBirthday code isn't overly complex, but there is a little bit of logic there.

At the bottom we are building up a list of people with their birthdays and then console.logging whose birthday is next.

This seems great at first, but then you realize that you aren't necessarily testing all the code paths in your nextBirthday method. It is dependent on the current date! What if there were no more birthdays this year in your list? Are you testing for birthdays that have already happened this year? It would be much better if we could control the so-called current date so we could be certain which edge cases are in fact being tested.

The easiest way to fix this code is to pass in an optional date parameter that can override what the date is set to. It's a pretty easy fix, but it provides much more flexibility to the system.
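With the date injectable, edge cases become easy to pin down. The sketch below is my own test illustration: it uses the same algorithm as the refactor, with jQuery's $.each swapped for Array.prototype.forEach so the sketch runs standalone, and exercises the "no birthdays left this year" case with a fixed date.

```javascript
var people = {
  list: [],
  add: function (person) { this.list.push(person); },
  nextBirthday: function (date) {
    var now = date ? new Date(date) : new Date(),
      next = Number.MAX_VALUE,
      person;
    // forEach stands in for $.each so this sketch has no jQuery dependency
    this.list.forEach(function (item) {
      var dob = new Date(item.dob),
        year = dob.setFullYear(now.getFullYear()) > now ?
          now.getFullYear() : now.getFullYear() + 1,
        diff = dob.setFullYear(year) - now;
      if (diff < next) { next = diff; person = item; }
    });
    return person;
  }
};

people.add({ name: "Mark", dob: "12/19/1976" });
people.add({ name: "Sue", dob: "2/1/1987" });

// Edge case: from 12/20, Mark's birthday has already passed this year,
// so the next birthday wraps around to Sue's in February.
var result = people.nextBirthday("12/20/2013");
```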

So, the main point here is to just be aware of what objects you are creating and make sure there is a way to control them for testability.

Conclusion

The above problematic code snippets aren't the only types of code you should watch out for, but they are a good start and should hopefully open your eyes to some of the common pitfalls you could run into when developing your application. To review, here are some concepts to keep in mind...

  • Try not to tightly couple your components
  • Be aware that anything you make private will be unavailable to test
  • Limit the use of singletons, otherwise you'll need to reset them
  • Be careful of using too many anonymous functions
  • Try not to mix various non-related concerns in your code
  • Be aware of the new operator when the constructor does work for you

The content from this post was taken from the following Pluralsight course entitled: Front-End First: Testing and Prototyping JavaScript Apps. For more information about testing and prototyping your application feel free to watch the course.


Contributor

elijahmanor