
Future Dredd proposal #1186

Open
6 of 25 tasks
honzajavorek opened this issue Jan 8, 2019 · 5 comments

Comments

@honzajavorek
Contributor

honzajavorek commented Jan 8, 2019

Dredd reads the API description document, executes HTTP requests it finds, and validates the HTTP responses it receives.

  • “Designing your API prior to its implementation makes it human-friendly”
  • “Dredd enables you to design your API prior to its implementation”

I think Dredd's architecture should become more modular and decoupled. This is my idea of how it could look:

Direction

  • Be an active player in OAS3 adoption, make it a first-class citizen, and prioritize OAS3 examples over OAS2 examples in the docs
  • As Dredd gets decoupled, it should have multiple commands: dredd server for testing servers, dredd client for testing clients (aka a client-testing mock server: see this), and dredd traffic for testing arbitrary previously recorded traffic
  • Explore the ability to test whole scenarios (comes with testing arbitrary recorded traffic)
  • Explore the ability to use FaaS for hooks

Support

  • Write a blog post about how to use Dredd - Apiary blog, dev.to, docs (tutorial), ...
  • Talk about Dredd at meetups and conferences
  • Get a new set of stickers printed and give them away at meetups and conferences
  • Monitor Twitter for new articles and mentions, actively RT and reply - TweetDeck search is (dredd api) OR (dredd openapi) OR (dredd oas)

Architecture

  • API description parsing is a separate library (done as dredd-transactions/parse), the input interface for the Dredd core is API Elements
  • There is a separate library to turn API Elements into an internal sequence of expected transactions (done as dredd-transactions/compile)
  • There is Dredd core, which takes a sequence of expected transactions (Test and document the possibility to modify the transactions list #995), performs the HTTP requests, and returns a sequence of real transactions, streamed
  • There is Dredd CLI, which takes care of the command-line interface and provides API Elements together with configuration
  • Dredd Hooks are a separate engine, which can transparently execute arbitrary code in JavaScript or another language, or delegate execution to FaaS
  • All hooks are provided with the real transaction(s) to modify, with the expected transaction(s) to inspect (Add description field to transaction object #1145), and with configuration (Access Custom options in Hooks #215)
  • Transaction names are completely agnostic to the API description format and work as a query language rather than paths or pointers
  • There is a validation engine, which gets a stream of expected/real transactions and is able to validate them either as units (one-by-one) or as a whole sequence
  • There is one good verbose reporter, one good minimal reporter, and TAP
  • Dredd is able to export the sequence of real transactions, e.g. to .har
  • The HTML reporter should render the same locally as the Apiary reporter and apiary.io should share the same (React) components for this
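The streamed core described in the bullets above could be sketched roughly like this in JavaScript. All names here (makeExpected, runCore) are hypothetical and purely illustrative, not an actual Dredd API:

```javascript
// Hypothetical sketch; names are illustrative only, not an actual Dredd API.

// An "expected transaction", as a compiler would produce from API Elements.
function makeExpected(name, request, response) {
  return { name, request, response };
}

// The core takes a sequence of expected transactions, performs each
// request, and yields real transactions one by one (streamed).
function* runCore(expectedTransactions, performRequest) {
  for (const expected of expectedTransactions) {
    const real = performRequest(expected.request);
    yield { expected, real };
  }
}

// Usage with a fake performRequest instead of real HTTP I/O:
const expected = [
  makeExpected(
    'Articles > List',
    { method: 'GET', uri: '/articles' },
    { status: 200 }
  ),
];
const results = [
  ...runCore(expected, (request) => ({ status: 200, uri: request.uri })),
];
```

Because the core is a generator, reporters and the validation engine could consume transactions as they happen rather than after the whole run.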

(attached whiteboard photos: IMG_3495, IMG_3494)

Gardening

  • Dredd is a monorepo, containing dredd as multiple separate libraries, together with dredd-transactions, gavel-spec, and gavel.js. Later we could add other repositories as well; good candidates would be dredd-hooks-template or dredd-docker.
  • Dredd is tested on CircleCI for speed and usability, together with AppVeyor for verifying Windows compatibility.
  • Re-evaluate the usefulness of test coverage and perhaps remove it altogether
  • e2e and possibly also integration tests are in Cucumber, with the use cases and design decisions clearly described in plain text (status: Cucumber introduced, but the majority of e2e tests have not been rewritten into it)
  • use Prettier to make the decaffeinate-formatted codebase readable
  • describe API in TypeScript annotations
@artem-zakharchenko
Contributor

Thanks once more for sharing these insights. Giving my feedback below.

Hooks

All hooks are provided with the real transaction(s) to modify

What implications do you see if, instead, we allowed returning the next transaction from a hook?

Hooks API

I completely support Hooks API improvements you've mentioned.

Here's how the current API looks:

const hooks = require('hooks');

hooks.before('Articles > Publish an article', (transaction) => {
  transaction.request.headers.Authorization = 'Basic: YWxhZGRpbjpvcGVuc2VzYW1l';
});

Get rid of the hacky require('hooks') and let the hooks interface be explicitly imported (from Dredd, or a dedicated package), or implicitly injected.

Since hooks can be written in different languages, I don't have a sufficient level of expertise to suggest a proper dependency injection pattern. However, I think it would be less confusing to remove the require call and execute a hook within a certain context, just like other testing frameworks do with the injection of their before/after/etc. hooks. This also brings the surface of injection "closer" to Dredd, making it less troublesome for the end users to maintain.

// If this is not installed I'm getting confused.
// IDE may complain module not found.
// What if there is an actual package called "hooks"?
const hooks = require('hooks')

hooks.before()
hooks.afterAll()

// No explicit import, but I'm aware this is a hook file
// and I've worked with other testing frameworks, so
// I understand these hook functions are injected for me. 
before()
afterAll()

Such hook functions are easier to type as well, as we only need to ship a type definition module, and end users may simply include it to declare that those functions are known:

// tsconfig.json
{
  "compilerOptions": {
    // What type definitions to include
    "types": ["dredd/hooks"]
  },
  // Affected files
  "include": ["hooks.js"]
}

Typing something like require('hooks') would mean introducing type definitions for a module, which may conflict if such a module actually exists.

Change how scenarios may be referenced in the API description document.

  • Allow wildcard:
    • hooks.before('User > *', appendAuthenticationHeader)
    • hooks.after('* > User', arbitraryHookFunc)

I am not sure if it's currently possible to declare something like a beforeAll hook with Dredd to trigger certain logic before a group of scenarios. Generally, I'm open to any kind of API reorganization at this point.

Other

The HTML reporter should render the same locally as the Apiary reporter and apiary.io should share the same (React) components for this

This effectively means re-creating the Tests page from Apiary and shipping it as a part of reporter.

  1. How would we be able to import the UI components (are they open-sourced)?
  2. Who will maintain this implementation? There is a build pipeline needed for such a reporter (a React app) to be built. It sounds like having reporters in separate packages may be a viable option.

describe API in TypeScript annotations

I would absolutely love to do that.

Tests

The changes you've described above affect most, if not all, test suites. I often find the cost of adjusting test suites unjustified, taking more time than the actual development. Would you consider writing tests from scratch (TDD, BDD) for this proposal? I know it looks like a monstrous effort at first, but from my experience I believe we would save much more time writing them (properly) from scratch than trying to adjust (and inevitably refactor) existing tests.

Dredd is tested on CircleCI for speed and usability, together with AppVeyor for verifying Windows compatibility.

Perhaps by the time we get to this state CircleCI will already support testing on Windows machines.

Diagram (flow)

  1. If TransactionRunner should accept any recorded traffic as input, its input signature must be the same for such recorded traffic and for the list of transactions emitted from the TransactionCompiler. I think we should make recorded traffic an input to TransactionCompiler, so the flow would look like this:
-- traffic --> TransactionCompiler -- transactions --> TransactionRunner

This makes their responsibilities properly scoped:

  • TransactionCompiler: turns input into a list of transactions
  • TransactionRunner: runs given transactions. No extra operations, this should be as plain as possible.
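The scoping above could be sketched like this (compileTransactions and runTransactions are hypothetical names I'm using for illustration, not actual Dredd modules):

```javascript
// Illustrative sketch only; module names and input shapes are assumptions.

// TransactionCompiler: turns any input (API description or recorded
// traffic) into a plain list of transactions.
function compileTransactions(input) {
  if (input.kind === 'traffic') {
    // Recorded traffic already carries concrete requests and responses.
    return input.entries.map((entry) => ({
      request: entry.request,
      response: entry.response,
    }));
  }
  // An API description carries only expectations (example requests/responses).
  return input.actions.map((action) => ({
    request: action.exampleRequest,
    response: action.exampleResponse,
  }));
}

// TransactionRunner: runs given transactions. No extra operations.
function runTransactions(transactions, perform) {
  return transactions.map((t) => ({ expected: t, real: perform(t.request) }));
}

// Usage:
const transactions = compileTransactions({
  kind: 'traffic',
  entries: [
    { request: { method: 'GET', uri: '/' }, response: { status: 200 } },
  ],
});
const report = runTransactions(transactions, () => ({ status: 200 }));
```

The runner's signature stays identical regardless of where the transactions came from, which is exactly the point of moving traffic handling into the compiler.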

I now see there is a dotted arrow pointing from HAR to "Parsing". But there is still a primary arrow pointing to the TransactionRunner, so I thought to clarify this.


Overall, if we adopt a monorepo pattern together with a clear vision of responsibilities and restrictions of each module, we can start developing this incrementally at a good rate.

I can play around with type definitions for those modules, and brainstorm proper responsibilities of each.

@honzajavorek
Contributor Author

JS Hooks

What implications do you see if, instead, we allowed returning the next transaction from a hook?

A few very significant ones: all current Dredd users would need to rewrite their existing hook files, and all hooks handlers for other languages would probably need a similar change to stay aligned with the interface. Rolling out such a change is a delicate thing which could significantly harm adoption of Dredd by its current users. A streamlined interface? Yes. But for me the price is too high and the benefit too low. I think it would make more sense to:

  • keep the current behavior, contain it within the code responsible for hooks
  • allow for writing hooks as pure functions (alongside what works now, without deprecating it)
  • change all examples in Dredd's docs to prefer returning instead of mutations
  • suggest that authors of hooks handlers make a similar change...? bring it to dredd-hooks-template?
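To make the "pure functions alongside mutations" idea concrete, here's a sketch of both styles side by side. The pure-function variant is only a proposal sketch, not a shipped Dredd API:

```javascript
// Hypothetical comparison; the pure-function style is a proposal sketch.

// Current style: the hook mutates the transaction in place.
function mutatingHook(transaction) {
  transaction.request.headers.Authorization =
    'Basic: YWxhZGRpbjpvcGVuc2VzYW1l';
}

// Proposed style: a pure function returning the next transaction,
// leaving its input untouched.
function pureHook(transaction) {
  return {
    ...transaction,
    request: {
      ...transaction.request,
      headers: {
        ...transaction.request.headers,
        Authorization: 'Basic: YWxhZGRpbjpvcGVuc2VzYW1l',
      },
    },
  };
}

// The pure variant does not modify the original transaction:
const original = { request: { headers: {} } };
const next = pureHook(original);
```

Supporting both would let existing hook files keep working while the docs steer new users toward returning instead of mutating.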

Get rid of a hacky require('hooks')

I think having require('dredd/hooks') has been discussed in #1058 (comment) and would be a desired change, but again, this breaks everybody's hooks, so require('hooks') should probably keep working as a fallback.

Allow wildcard

This is already supported by PHP hooks and there is an issue for that: #248

However, in the long term, I don't think using anything like a "path" to address HTTP transactions is a good idea. After many design sessions and research, I'd propose that transaction names become independent of the structure of the API description document and instead become a "query language" for the list of transactions (e.g. 'gimme all transactions with an HTTP 200 response' or 'all JSON body requests'), possibly addressing more than one transaction (which is a generalization of the wildcard idea). The syntax of such transaction names is TBD, but I think something has been written down as a proposal internally in Apiary whitepapers.
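Since the syntax is TBD, the following is purely illustrative; it only shows the general idea of matching transactions by their properties instead of by a path into the API description:

```javascript
// Purely illustrative: the real query syntax is TBD. A query here is
// simply a predicate over transaction metadata.
function matches(transaction, query) {
  return Object.entries(query).every(
    ([key, value]) => transaction[key] === value
  );
}

const transactions = [
  { method: 'GET', status: 200, contentType: 'application/json' },
  { method: 'POST', status: 201, contentType: 'application/json' },
  { method: 'GET', status: 404, contentType: 'text/plain' },
];

// "gimme all transactions with an HTTP 200 response"
const withOkResponse = transactions.filter((t) => matches(t, { status: 200 }));

// "all JSON transactions" -- one query can address several transactions,
// generalizing the wildcard idea.
const jsonOnly = transactions.filter((t) =>
  matches(t, { contentType: 'application/json' })
);
```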


👉 Check out existing flaws of the JS hooks and their suggested solutions.

Other

This effectively means re-creating the Tests page from Apiary and shipping it as a part of reporter. How would we be able to import the UI components (are they open-sourced)?

I think it means re-thinking the test report page from Apiary and re-implementing it from scratch in React components as part of the Dredd repository, i.e. open sourced. Then re-using these components in Apiary SaaS.

There is a build pipeline needed for such reporter (React app) to be built. It sounds like having reporters in separate packages may be a viable option.

That sounds like an implementation detail at this stage. If Dredd becomes a monorepo, it doesn't matter much whether it lives in another package (directory).

Tests

Would you consider writing tests from scratch (TDD, BDD) for this proposal?

We discussed this in person at length, but I'll repeat my thoughts here. I'm not a fan of rewriting tests; it's dangerous and it doesn't bring much user value. However, the test suite as it is now is not sustainable. I believe the following approach could work:

  • Every new change should follow the testing pyramid, having full coverage of unit tests, a few integration tests, and at least one e2e test. New tests are written with sustainability in mind, i.e. using the BDD infrastructure, not using stubs, etc.

  • Write all new e2e tests using the BDD infrastructure. With enough tests added, this causes the creation of a 'common Cucumber steps library' as a side effect. As the library gets populated,

    • writing new e2e tests gets much faster and easier,
    • and re-writing old e2e tests gets feasible.
  • As we refactor Dredd's internals for the purpose of delivering user value, we gradually ditch affected unit tests or integration tests and write new ones.

Diagram (flow)

TransactionCompiler and TransactionRunner

That's kind of TBD, which is why the arrows are confusing and dotted. Now I tend to think it would be a cleaner architecture if all input traffic went through the Fury toolchain and Dredd could consume it as API Elements, no matter what it was originally:

API Blueprint -> [ Fury ] -> API Elements -> [ Dredd ]
HAR -> [ Fury ] -> API Elements -> [ Dredd ]

Dredd needs four things to validate a transaction: expected request, actual request, expected response, actual response. The only difference between the API description formats and the "recorded traffic" formats will be in which elements of the 4-tuple they provide. If some elements are missing, Dredd will need to somehow acquire them before it can perform the validation:

API Blueprint
    -> (expected request, ???, expected response, ???)

In this case, Dredd needs to infer an actual request and to perform the request in order to acquire an actual response. Then it can validate.

API Blueprint
    -> (expected request, ???, expected response, ???)
HAR
    -> (???, actual request, ???, actual response)

In this case, a sharp reader will notice that the tuples zip together. Dredd doesn't need to infer actual requests and it doesn't need to do any HTTP communication to acquire actual responses. It only needs to pair the actual traffic to the corresponding expectations and continue with validation. (The Dredd run in this case would be super-fast, as apart from reading & parsing the source files there's no I/O.) It would still run some hooks though (beforeValidation, beforeEachValidation, ...).
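The zipping could look roughly like this. Everything here is a hypothetical sketch; in particular, pairing is done naively by index just for illustration, whereas real pairing would need to match transactions by their content:

```javascript
// Hypothetical sketch of zipping expectations (from an API description)
// with actual traffic (from e.g. a HAR file) into complete 4-tuples
// (expected request, actual request, expected response, actual response).
function zipTransactions(expectations, traffic) {
  return expectations.map((expected, i) => ({
    expectedRequest: expected.request,
    actualRequest: traffic[i].request,
    expectedResponse: expected.response,
    actualResponse: traffic[i].response,
  }));
}

const expectations = [
  { request: { method: 'GET', uri: '/articles' }, response: { status: 200 } },
];
const traffic = [
  {
    request: { method: 'GET', uri: '/articles' },
    response: { status: 200, body: '[]' },
  },
];

// No HTTP I/O needed: every slot of the 4-tuple is already available,
// so validation can start immediately.
const tuples = zipTransactions(expectations, traffic);
```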

The architecture should be ready for this. That means Dredd needs to know what kind of source it is parsing (API description with expectations or actual traffic) and then combine what's available and/or infer what's missing (compile & perform HTTP request). Sounds like we'll need a lot of decoupling :)

@honzajavorek
Contributor Author

(attached whiteboard photo: IMG_3613)

@mathieulaude

@artem-zakharchenko @honzajavorek @abtris Do you plan to implement this future anytime soon? Or should we (users) consider this tool "not maintained anymore"?

@honzajavorek
Contributor Author

@mathieulaude Nobody from the original team works on Dredd anymore. AFAIK there's no dedicated team as of now, thus no future.

@kuba-kubula at least still works in the company behind Dredd, so might have some info. But he might not be legally or process-wise allowed to make public announcements.
