
How do you write testable code? #189

Open
MartinMuzatko opened this issue Feb 1, 2022 · 10 comments

Comments

@MartinMuzatko

MartinMuzatko commented Feb 1, 2022

Not sure if this is out of the scope of this best-practices guide. However, I miss an integral part that explains how to write code that is easier to test in the first place.

That is only explained in passing in point 1.13 - other generic testing hygiene. This could be split into two or three points that specifically focus on the do's and don'ts that enable testing, and on how to structure such code.

This is my subjective view, but these are the recommendations I would put on the list:

Bad examples:

  • Using global state can make testing a nightmare when two tests attempt to access the same state at the same time.
  • Mutability opens loopholes for impossible state

Good examples:

  • Extract pure data handling and decision making into their own side-effect-free functions to make testing easier
  • Make options and behavior configurable (using dependency injection via function parameters/class constructors) to allow unit testing.
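These two points could be illustrated with a minimal sketch (all names here are hypothetical): the pricing decision is a pure function, and the one impure dependency (the clock) is injected as a parameter instead of read from global state:

```javascript
// Pure: same inputs always give the same output, no side effects
function applyDiscount(price, discountPercent) {
  return price * (1 - discountPercent / 100);
}

// The impure dependency (current time) is a parameter, so a test can
// supply a fixed date instead of depending on the real clock
function isHappyHour(now = new Date()) {
  const hour = now.getHours();
  return hour >= 17 && hour < 19;
}

function quote(price, now = new Date()) {
  return isHappyHour(now) ? applyDiscount(price, 20) : price;
}

console.log(quote(100, new Date('2022-02-01T18:00:00'))); // 80
console.log(quote(100, new Date('2022-02-01T12:00:00'))); // 100
```

A test never has to mock `Date` globally; it just passes the timestamp it wants to simulate.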

In my experience, this goes hand in hand with benefits like more readable code, configurable behavior and options, and more focused functions. Some recommendations on how to structure code or functions in a way that enables testing in the first place would be great.

What do you think?

@MartinMuzatko MartinMuzatko changed the title from "How do you write code that is easy to test?" to "How do you write testable code?" on Feb 1, 2022
@goldbergyoni
Owner

@MartinMuzatko Great idea, although I'm not a fan of making big changes to code only for the sake of testing. That said, there are unnecessary things that can greatly complicate the testing experience:

  • Avoid floating side-effects; put those in functions instead - For example, if a developer starts the server using floating code, this happens the moment someone imports the module, leaving the tests no chance to configure/stub things before the side-effect runs
  • Avoid 'testing-production-code', where a developer has different behaviour for testing and production (e.g. `if (env === 'test') then do something else`). Doing this makes the non-test branches untestable
  • Thinking about more

Agree with what you wrote, except for DI/configurable code. I don't like to change my code for testing, let alone sacrifice encapsulation. I prefer monkey-patching.

@goldbergyoni
Owner

One more point from today:

  • Minimize module singletons - When a module holds a singleton/static value, it's hard to test various scenarios of that module: once the singleton state is set, all further tests face the last existing state (there is no way to reset it or initialize a new SUT)
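One way to sketch this (hypothetical example): replace the module-level singleton with a factory, so every test can create a fresh SUT with no reset logic:

```javascript
// BAD: module-singleton state that leaks between tests:
//
//   const cache = new Map();           // set once, visible to every test
//   module.exports = { get, set };
//
// BETTER: export a factory; each test builds an isolated instance.
function createCache() {
  const store = new Map();
  return {
    set: (key, value) => store.set(key, value),
    get: (key) => store.get(key),
  };
}

const a = createCache();
const b = createCache();
a.set('user', 'alice');
console.log(b.get('user')); // undefined - no shared state between instances
```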

@MartinMuzatko
Author

I love the new bullet points :)

I think the module singletons problem is especially important. But I wouldn't limit it to singletons. I would go as far as to ban any side effects at the top level of the code, unless they are in an application entrypoint. But yeah, singletons are definitely the core of the problem.
In an internal work-related wiki, I recently wrote about these best practices. Maybe they help to further flesh out these points.

Top-Level Code should be free from side-effects
Unless in an application entrypoint, like src/index.ts or one-off scripts, your modules should not cause any side effects.
That makes both testing and re-use of functionality easier, as modules don't open handles outside functions.

Your module should

  • Not open any connection to a database/MQTT/HTTP as a client or server (unless it is a stateless client)
  • Not create a timer (setTimeout, setInterval)
  • Not call any functions unless they are pure (cause no side effects)
  • Side effects include:
    • Modifying a variable
    • Setting a field on an object
    • Throwing an exception
    • Logging
    • Reading/writing from Database

That makes modules more portable, their usage intuitive, and enables building on top of them predictably.
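A hypothetical sketch of the timer rule above: the module only defines behaviour, the entrypoint or test owns the side effect, and the timer functions are injectable so a test can substitute fakes instead of waiting in real time:

```javascript
// BAD (top-level side effect): starts ticking as soon as the module is imported
//   setInterval(pollQueue, 1000);

// GOOD: expose a starter; the entrypoint (e.g. src/index.ts) decides when to run it.
function startPolling(pollFn, intervalMs = 1000, timers = { setInterval, clearInterval }) {
  const handle = timers.setInterval(pollFn, intervalMs);
  return () => timers.clearInterval(handle); // caller gets a way to stop and clean up
}

// In a test, inject fake timers:
let ticks = 0;
const fakeTimers = {
  setInterval: (fn) => { fn(); return 'handle'; }, // run the callback once, synchronously
  clearInterval: () => {},
};
const stop = startPolling(() => ticks++, 1000, fakeTimers);
stop();
console.log(ticks); // 1
```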

@goldbergyoni
Owner

Great observations. I guess by top-level you mean at import/require time.

Now I'm facing something beyond that: when my `logger.info` is called, it locally sets a default configuration (there is no reason to set config state on every call, and we can't assume that someone will call `logger.configure` before the first call). Now my first test passes, but my second can't simulate a scenario because the configuration is already set.

@MartinMuzatko

@goldbergyoni
Owner

Adding one more to the list - Never rely on a fixed port; make it configurable instead. Tests can pass "0" and get an ephemeral port. Otherwise, parallelizing tests won't be possible.

@goldbergyoni
Owner

One more - Exporting objects with functions is easier to mock than exporting bare functions. I don't suggest that one should change their coding style, only be aware that 'conventional' libraries like Sinon's test doubles act on the object level.
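A hypothetical sketch of why, without installing Sinon: stubbing frameworks work by replacing a property on an object, which a directly exported function doesn't offer:

```javascript
// Exported as an object, so a stubbing library has something to patch
const mailer = {
  send(to) {
    // imagine a real SMTP call here
    return `sent to ${to}`;
  },
};

// A bare `module.exports = send` gives the importer nothing to swap out,
// but an object property can be replaced and restored:
const original = mailer.send;
mailer.send = () => 'stubbed';      // roughly what sinon.stub(mailer, 'send') does
console.log(mailer.send('a@b.c'));  // 'stubbed'
mailer.send = original;             // restore, like stub.restore()
console.log(mailer.send('a@b.c'));  // 'sent to a@b.c'
```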

@palmerj3

One good practice I like is that each test should explicitly state (via a comment or otherwise) who the user is. Every test should be written with a user in mind.

For unit tests, the users are often other engineers on your team.
Integration/E2E tests are typically for end users of your system.

If you communicate this and tag these tests as such it becomes much easier to define best practices for each type of test and evolve them as rewrites occur.

@goldbergyoni
Owner

@palmerj3 That is surely interesting - can you provide an example of such a test name/comment?

Is this related to 'testable code', or to general testing best practices (which is also the right place to share!)?

@palmerj3

A simple docblock at the top of each test will suffice. We often hear about having a healthy split between unit, integration, E2E and other tests, but usually this split can't be measured unless we use different test runners or completely separate suites for each type.

This gives you some ability to measure this and set standards.

So you can analyze your distribution and set standards for each type. This is not a best practice in itself, but something that enables effective best practices to be enforced.
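One possible (hypothetical) way to make that split measurable - tag each test with its intended user and type, then count the tags:

```javascript
// Hypothetical registry: in practice the tags would live in a docblock
// at the top of each test file and be collected by a small script.
const tests = [
  { name: 'applyDiscount math', tags: { user: 'engineers', type: 'unit' } },
  { name: 'checkout flow',      tags: { user: 'end-users', type: 'e2e' } },
  { name: 'cart totals',        tags: { user: 'engineers', type: 'unit' } },
];

// Count tests per type so the unit/integration/e2e split can be tracked
function distribution(allTests) {
  return allTests.reduce((acc, t) => {
    acc[t.tags.type] = (acc[t.tags.type] || 0) + 1;
    return acc;
  }, {});
}

console.log(distribution(tests)); // { unit: 2, e2e: 1 }
```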

@MartinMuzatko
Author

> Great observations. I guess by top-level you mean at import/require time.
>
> Now I'm facing something beyond that: when my `logger.info` is called, it locally sets a default configuration (there is no reason to set config state on every call, and we can't assume that someone will call `logger.configure` before the first call). Now my first test passes, but my second can't simulate a scenario because the configuration is already set.
That's tricky. What are the options? Maybe with types you can ensure that only a configured instance of the logger gets used.
As far as I know, that can only be done using dependency injection.
Dependency injection would also ensure that you have to configure your logger before passing it to the consumers. Or you could pass a logger that does nothing at all to enable unit tests. Although I admit, that's not quite elegant and not easy to understand.
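A hypothetical sketch of that dependency-injection option - consumers receive an already-configured logger, and a test can pass a no-op one, so there is no hidden default-configuration state left behind between tests:

```javascript
// Factory returns a configured instance; no mutable module-level default
function createLogger(config) {
  return { info: (msg) => `[${config.level}] ${msg}` };
}

// For unit tests that don't assert on logs
const noopLogger = { info: () => {} };

// The consumer never reaches for a global logger
function registerUser(name, logger) {
  logger.info(`registering ${name}`);
  return { name };
}

const logger = createLogger({ level: 'info' });
registerUser('alice', logger);   // production wiring: configured once, up front
registerUser('bob', noopLogger); // a test: no configuration state to reset
```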
