
feat: Separation of concerns #19

Merged
merged 28 commits, Oct 11, 2019
ecf1a05
Pare back code to just load the tests.
matatk Sep 1, 2019
f450161
Remove examples dir
matatk Sep 1, 2019
5a9ee89
Remove example scripts and template README
matatk Sep 1, 2019
1e8ab36
Simplify index.js to fix ESLint errors
matatk Sep 1, 2019
f2eac06
Include combined stuff when it changes, before commit
matatk Sep 1, 2019
e283cc6
Rename test file for consistency
matatk Sep 1, 2019
3222c67
Return full-page tests from index.js
matatk Sep 1, 2019
1644020
Offer an option to return the HTML inline, or by file path
matatk Sep 1, 2019
09e1ac7
Factor out the code that loops over all the tests
matatk Sep 2, 2019
785d990
Factor out the code that gets the full-page tests
matatk Sep 2, 2019
1c9d7c6
Nicer readingtons for index.js
matatk Sep 2, 2019
c53ac7e
Fix adding of the results of the build - we only need to add the comb…
matatk Sep 2, 2019
db49c89
Trim and clarify README
matatk Sep 2, 2019
64f0fe7
Embed code in README
matatk Sep 2, 2019
0f478e2
Remove "json" from fenced block to stop GitHub highlighting the ellip…
matatk Sep 2, 2019
064290d
Add note about combined tests; remove blank line (though it does not …
matatk Sep 2, 2019
8cad4bc
More README tweakage
matatk Sep 2, 2019
4015275
Remove redundant line in package.json
matatk Sep 2, 2019
be37afc
Clarity of purpose
matatk Sep 3, 2019
245a338
README simplification and clarification
matatk Sep 3, 2019
6101611
code -> setting
matatk Sep 3, 2019
79d40fc
setting -> secene (too many "set"s)
matatk Sep 3, 2019
2654ddd
Pluralise "scene" in definition of "fixtures"
matatk Sep 3, 2019
2f7649a
Let's try to move on after this :-)
matatk Sep 3, 2019
c9a504c
rly :-)
matatk Sep 3, 2019
0b91f9a
Use new ESLint directive
matatk Sep 6, 2019
c426c1e
Bump deps' deps
matatk Sep 6, 2019
23fa63d
Remove mention of other repo for now
matatk Oct 11, 2019
1 change: 1 addition & 0 deletions .eslintrc.json
@@ -7,6 +7,7 @@
"parserOptions": {
"ecmaVersion": 2018
},
"reportUnusedDisableDirectives": true,
"root": true,
"rules": {
"block-scoped-var": "error",
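The newly enabled `reportUnusedDisableDirectives` setting makes ESLint report `eslint-disable` comments that no longer suppress any error, instead of silently ignoring them. A minimal sketch of the kind of stale directive it catches (a hypothetical file, not from this repo):

```javascript
'use strict'

// The directive below is stale: `greeting` *is* used, so the
// no-unused-vars rule would not fire here anyway. With
// "reportUnusedDisableDirectives": true, ESLint flags the useless
// comment so it can be removed.
// eslint-disable-next-line no-unused-vars
const greeting = 'hello'

console.log(greeting)
```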
191 changes: 72 additions & 119 deletions README.md
@@ -3,131 +3,84 @@ Page Structural Semantics Scanner Tests

[![Build Status](https://travis-ci.com/matatk/page-structural-semantics-scanner-tests.svg?branch=master)](https://travis-ci.com/matatk/page-structural-semantics-scanner-tests)

This is a test suite for tools that scan for semantic structural information on web pages, such as landmarks \[with headings and articles in the works\]. Such information is often used to afford or improve the accessibility of pages, allowing people using screen-readers and alternative browsers to navigate and understand the content.

You can use this tool as:
The following topics are covered below:

- [A test suite for code that checks for landmarks](#use-as-a-test-suite)
- [A performance checker for the code](#use-as-a-benchmarking-tool)
- [A place to find results of tests carried out on tools that scan for landmarks](#use-as-a-comparison-between-accessible-structural-semantics-scanners)
* [Test suite info](#test-suite-info)
* [Support for landmarks](#support-for-landmarks)
* [Development](#development)

You may find the following information helpful:
Test suite info
---------------

- [Support for landmarks](#support-for-landmarks)
- [Development](#development)
The test suite provides a set of:

Use as a test suite
-------------------
* **Fixtures:** HTML scenes (one for each test). [Example fixture](https://github.com/matatk/page-structural-semantics-scanner-tests/blob/master/fixtures/aria-labelledby-multiple-idrefs.html)
* **Expectations:** JSON objects that describe the correct set of landmarks (one for each fixture). [Example expectation](https://github.com/matatk/page-structural-semantics-scanner-tests/blob/master/expectations/aria-labelledby-multiple-idrefs.json)

There are three main components that come together in order to test structure-scanning code:

- **Fixtures** are HTML files. [Example fixture](https://github.com/matatk/page-structural-semantics-scanner-tests/blob/master/fixtures/aria-labelledby-multiple-idrefs.html)
- **Expectations** are JSON objects that contain the correct set of landmarks for a given fixture. [Example expectation](https://github.com/matatk/page-structural-semantics-scanner-tests/blob/master/expectations/aria-labelledby-multiple-idrefs.json)
- A **scanner** function is the code under test. It runs on the DOM created from the fixture and returns the found landmarks. Its signature is `scanner(window, document): object`.

The tool you're testing probably doesn't report results in the same format that the expectations use, so there's also the concept of a **converter** function that takes the expectation object and transforms it into the tool's format. Its signature is `converter(expectation: object): object`. Some [converters for known tools](https://github.com/matatk/page-structural-semantics-scanner-tests/blob/master/lib/converters.js) are provided.

You also need a test environment in which to check whether the scanner returns the expected results. This package can be used in three different ways, depending on whether you're looking for an out-of-the-box solution or already have your own test environment:

- [Out-of-the-box test suite](#out-of-the-box-test-suite)
- [Iterating over the tests in your own test environment](#iterating-over-the-tests-in-your-own-test-environment)
- [Loading the fixture and expectation files directly](#loading-the-fixture-and-expectation-files-directly)

### Out-of-the-box test suite
The fixtures and expectations are provided in two formats:

If you don't already have a test environment for your project, you can use the `runner(converter, scanner)` function. You pass in:
* A set of full-HTML-page fixtures and separate matching expectation files can be found in the "fixtures/" and "expectations/" directories. It's recommended that you use these, as they cover all of the tests.

- a `converter(expectation: object): object` function, and
- the tool's `scanner(window, document): object` function
* A single fixture file, containing all but two of the tests, can be found alongside a matching single expectation file, in the "combined/" directory. The HTML file contains only the fixtures, in a series of `<div>` elements; it is not a fully-formed HTML document.

and the runner does the rest:
These may be useful if your test runner runs inside a browser **but they have a limitation:** two of the tests ("application-alone-on-body-is-ignored" and "landmark-role-on-body") require an ARIA `role` attribute to be set on the `<body>` element, so cannot be included.

- uses the converter on each expectation,
- runs the scanner against each fixture, and
- reports if the results didn't match the expectation (using [node-tap](https://github.com/tapjs/node-tap)).
### Convenience code to iterate over the full-page tests

```javascript
// This file is examples/runner.js
'use strict'
const pssst = require('page-structural-semantics-scanner-tests')
const runner = pssst.runner

const converter = function(expectation) {
	return expectation // pass-through
}

const scanner = function(window, document) {
	return [] // don't find any landmarks - this will pass some tests
}

runner(converter, scanner)
```

<!-- embedme script/example.js -->
```js
'use strict'
const pssst = require('page-structural-semantics-scanner-tests')
console.log(JSON.stringify(pssst.getFullPageTests(), null, 2))
// console.log(JSON.stringify(pssst.getFullPageTestsInline(), null, 2))
```

**Note on global variables:** The tests are run on Node, using a DOM created by [jsdom](https://github.com/jsdom/jsdom). This means that `window` and `document` are not global variables. If your code requires them to be global, you'll need to use the following iterator method instead, for now. An option to run the out-of-the-box suite in a browser is being researched.

**Predefined runners for known tools:** For known tools, such as the [Landmarks browser extension](http://matatk.agrip.org.uk/landmarks/), a converter and custom runner function are exported. Consult [the Landmarks extension's test code](https://github.com/matatk/landmarks/blob/master/test/test-landmarks.js) for an example.
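As an illustration of the converter idea (a hypothetical sketch: this imagined tool reports only `role` and `label`, and is not one of the provided converters):

```javascript
'use strict'

// Hypothetical converter: reduce each landmark in an expectation to
// the { role, label } shape that an imagined scanner tool reports.
function converter(expectation) {
	return expectation.map(function(landmark) {
		return { role: landmark.role, label: landmark.label }
	})
}

// Example expectation entry, in the format used by this test suite.
const expectation = [
	{
		type: 'landmark',
		role: 'main',
		roleDescription: null,
		label: null,
		selector: 'body > main'
	}
]

console.log(JSON.stringify(converter(expectation)))
// → [{"role":"main","label":null}]
```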

#### Runner options \[TBC\]

The following are being considered as options that could be passed to `runner()` in an optional third argument that is an options object.
...will give you a result of the form...

- `generateResults`: set to "true" to generate a JSON file summarising the results of each test. This could be used to create an HTML file giving the results of tests for one or more scanner tools.
- The following keys would control the environment in which the tests are run.
- `browsers`: "firefox", "chrome", ...
- `jsdom`: set to "true" to run in jsdom on Node (**default**).

### Iterating over the tests in your own test environment

If you already have a preferred test environment, you could use the exported `iterator()` function to run your test code for each fixture-expectation pair. The HTML file is loaded into a string and the expectation into an object.

```javascript
// This file is examples/iterator.js
'use strict'
const pssst = require('page-structural-semantics-scanner-tests')
const iterator = pssst.iterator

iterator(function(meta, fixture, expectation) {
console.log('========= ' + meta.name + ' =========')
console.log('Fixture:')
console.log(fixture)
console.log('Expectation:')
console.log(JSON.stringify(expectation, null, 2))
console.log()
})
```

```
{
. . .
"main-alone-is-recognised": {
"meta": {
"name": "Main element is recognised"
},
"fixture": ".../page-structural-semantics-scanner-tests/fixtures/main-alone-is-recognised.html",
"expected": [
{
"type": "landmark",
"role": "main",
"roleDescription": null,
"label": null,
"selector": "body > main"
}
]
},
. . .
}
```

### Loading the fixture and expectation files directly

The fixtures and expectations are provided in two formats:

* Individual fixtures and expectations can be found in the "fixtures/" and "expectations/" directories. These files are useful when running the tests from Node.
* A combined fixture file, containing all but two of the tests, can be found alongside a combined expectation file, in the "combined/" directory. These may be useful if your test runner runs inside a browser. The HTML file contains only the fixtures, in a series of `<div>` elements; it is not a fully-formed HTML document.

The reason why the combined files don't contain all of the test fixtures is that two ("application-alone-on-body-is-ignored" and "landmark-role-on-body") require an ARIA `role` attribute to be set on the `<body>` element.

Use as a benchmarking tool
--------------------------

FIXME
Two functions are provided, allowing you to control whether you want the file paths for the HTML, or to have their contents inline:

Use as a comparison between accessible structural semantics scanners
--------------------------------------------------------------------
* `getFullPageTests()`
* `getFullPageTestsInline()`

FIXME
There are no convenience functions for iterating over the combined tests mentioned above; it makes most sense to just load them directly.

Support for landmarks
---------------------

All of the core [WAI-ARIA landmark roles](https://www.w3.org/TR/wai-aria-1.1/#landmark_roles), both as supplied via the `role` attribute and as [implicit landmarks via HTML 5 elements](https://www.w3.org/TR/html-aam-1.0/#html-element-role-mappings) are supported, with some caveats, as described below.

* banner<sup>1</sup>
* complementary
* contentinfo<sup>1</sup>
* form<sup>2, 3, 4</sup>
* main
* navigation
* region<sup>2, 3, 5</sup>
* search
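For orientation, the implicit element-to-role mappings behind the list above can be sketched as follows (simplified: the real HTML-AAM rules are conditional, as the caveats below describe):

```javascript
'use strict'

// Simplified sketch of HTML element → implicit landmark role.
// The real mappings are conditional: e.g. <header>/<footer> map only
// at the top level (caveat 1), and <form>/<section> count as
// landmarks only when labelled (caveats 2-5).
const implicitLandmarkRoles = {
	header: 'banner',
	aside: 'complementary',
	footer: 'contentinfo',
	form: 'form',
	main: 'main',
	nav: 'navigation',
	section: 'region'
}

console.log(implicitLandmarkRoles.nav) // → navigation
```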

### Caveats

@@ -151,33 +104,33 @@ If an `aria-labelledby` attribute references multiple elements, all of those ele

It is possible to use the [`aria-roledescription`](https://www.w3.org/TR/wai-aria-1.1/#aria-roledescription) attribute to provide a custom label to be used for the *type* of landmark. This allows you to, for example, provide more application-specific and thus user-friendly names for the roles.

This can be very helpful in some cases, but don't be tempted to over-use this technique: swapping conventional role names for custom ones can decrease usability. The examples and guidelines given in the ARIA specification, linked above, are most helpful.

You do not need to use this attribute in an attempt to localise your site if you're using standard landmark roles: user agents (browsers, browser extensions and assistive technologies) should already support this.
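For example, a scanning tool's UI might show the role description when one is present, falling back to the role name (a sketch using the field names from the expectation format above):

```javascript
'use strict'

// Prefer the author-supplied role description; fall back to the role.
function displayName(landmark) {
	return landmark.roleDescription || landmark.role
}

console.log(displayName({ role: 'complementary', roleDescription: 'Sidebar' })) // → Sidebar
console.log(displayName({ role: 'main', roleDescription: null }))               // → main
```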

### Digital publishing ARIA landmarks

The following additional landmark roles defined in the [Digital Publishing WAI-ARIA Module 1.0](https://www.w3.org/TR/dpub-aria-1.0/) are also supported.

* `doc-acknowledgments`
* `doc-afterword`
* `doc-appendix`
* `doc-bibliography`
* `doc-chapter`
* `doc-conclusion`
* `doc-credits`
* `doc-endnotes`
* `doc-epilogue`
* `doc-errata`
* `doc-foreword`
* `doc-glossary`
* `doc-index` (is a landmark via `navigation`)
* `doc-introduction`
* `doc-pagelist` (is a landmark via `navigation`)
* `doc-part`
* `doc-preface`
* `doc-prologue`
* `doc-toc` (is a landmark via `navigation`)

Development
-----------