
Implement feedback #19

Open
karacolada opened this issue Dec 13, 2023 · 5 comments

Comments

@karacolada
Member

karacolada commented Dec 13, 2023

Detailed Description

Each failed metric test would trigger the release of feedback to the user, explaining why the test failed and suggesting next steps to improve the FAIRness of the test object. This would be test-specific but easily understandable feedback, making no/few assumptions about the user's familiarity with concepts like internet protocols, signposting and metadata standards.

Instead of focussing only on how to pass the test, effectively introducing normative behaviour by the backdoor, the feedback should also make it clear how passing this test improves the FAIRness of the test object. It could be good to acknowledge that automated tests might not capture all possibilities of fulfilling the FAIR principle, while explaining that methods that are easily machine-readable (i.e. fit for automated testing) have advantages for findability and accessibility, too. Showing people a minimalist example they can apply to their repository and adjust as they go could improve uptake. The feedback should be constructive and encouraging, as depending on where the user is in the research lifecycle, they might not feel ready to mint DOIs or purls just yet.

The feedback could be presented similarly to the debug messages, or as a separate block of text. It would need to be added into the client for display as well.

Context

When evaluating multiple automated assessment tools, we found that the feedback provided to the user is often missing, or too technical to help improve the FAIRness of the test object.
In F-UJI, the debug messages can help to understand where a test failed, but since they are only debug messages, they might not always be considered by the user, and they are often quite technical. Even together with the metric name, it is not always clear what could be done to improve FAIRness.

Adding feedback that is easily understandable and actionable makes it clearer to the community why they should be interested in improving their compliance with the metrics, thereby increasing their motivation to take steps to do so.

Possible Implementation

Ideally, the feedback would be configurable through the metric YAML file. However, this might be difficult for tests that have more than one way of passing, as a single configured message would not be able to reflect when the test object is "halfway there". This might not be a bad thing, though, as long as the feedback is well worded. A minimal sketch of the configuration idea is shown below.
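As a rough sketch, assuming the metric YAML could grow a `feedback` field: the field name, the metric identifier and the message texts below are hypothetical and only illustrate how a checker might pick a configured message based on the test outcome.

```python
# Sketch only: assumes PyYAML is available and that the metric schema
# could grow a hypothetical "feedback" field with per-outcome messages.
import yaml

EXAMPLE_METRIC = """
metric_identifier: CESSDA-FM-LICENSE   # hypothetical identifier
name: License file present as LICENSE.txt in the repository root
feedback:
  on_fail: >
    We could not find a file named LICENSE.txt in the root of your
    repository. Adding one makes the terms of reuse clear to both
    people and automated tools, which improves reusability.
  on_pass: >
    A LICENSE.txt file was found, so reusers and automated tools can
    see the terms of reuse without contacting you.
"""

def feedback_for(metric: dict, passed: bool) -> str:
    """Return the configured feedback text for a pass/fail outcome."""
    messages = metric.get("feedback", {})
    return messages.get("on_pass" if passed else "on_fail", "")

metric = yaml.safe_load(EXAMPLE_METRIC)
print(feedback_for(metric, passed=False))
```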

In terms of presentation, displaying the feedback would need to become part of the web client, e.g. simpleclient, either as an additional section or in boxes for each test, similar to the debug messages.

@karacolada
Member Author

karacolada commented Jan 18, 2024

Regarding implementation: feedback messages/text could be defined in the config file, but that's less specific if tests check for more than one thing. Take the license test for example, where the file should be named "LICENSE.txt" and be located in the root. During evaluation, more specific feedback could be determined than by looking at the test status (fail/pass) alone - see the sketch below.
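A sketch of what evaluation-time feedback could look like for the license example; the helper function, the message texts and the assumption that the checker already has the list of file names in the repository root are all hypothetical.

```python
# Hypothetical helper: assumes the checker already knows the file names
# found in the repository root; message texts are illustrative only.
def license_feedback(root_files: list[str]) -> tuple[bool, str]:
    """Return (passed, feedback) for a 'LICENSE.txt in the root' style test."""
    if "LICENSE.txt" in root_files:
        return True, "LICENSE.txt found in the repository root."

    # Look for near misses: other spellings, cases or extensions.
    near_misses = [
        f for f in root_files
        if f.split(".")[0].lower() in ("license", "licence", "copying")
    ]
    if near_misses:
        return False, (
            f"We found {', '.join(near_misses)}, so your repository does "
            "declare a license, but this automated test only recognises a "
            "file named exactly LICENSE.txt in the root."
        )
    return False, (
        "No license file was found in the repository root. Adding one, "
        "e.g. LICENSE.txt, tells reusers under which terms they may use "
        "your work."
    )

# Example: a British spelling in Markdown format would trigger the
# "near miss" feedback rather than a generic failure message.
print(license_feedback(["Licence.md", "README.md"])[1])
```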

@karacolada
Member Author

@marioa Could you review the description above please and make any changes where you disagree with it? Your input would be much appreciated since you thought about this much more in-depth. Thanks!

@karacolada karacolada changed the title Implement recommendation messages Implement feedback Jan 29, 2024
@marioa

marioa commented Jan 29, 2024

I have some questions.

  • Are you just testing for the existence of a file LICENSE.txt?
  • Do you check the contents are for a valid license? (don't know if you can piggyback off the GitHub API)

I also used to call the license file licence, as that is the noun here (as opposed to the verb), which meant some of the automated tests failed. Can it be made case-insensitive so that license.txt or License.txt would also work? I guess these are extras and/or may be wrong. What you propose sounds fine for a first iteration.

@karacolada
Member Author

karacolada commented Jan 29, 2024

> I have some questions.
>
> * Are you just testing for the existence of a file `LICENSE.txt`?
> * Do you check the contents are for a valid license? (don't know if you can piggyback off the GitHub API)
>
> I also used to call the license file licence, as that is the noun here (as opposed to the verb), which meant some of the automated tests failed. Can it be made case-insensitive so that license.txt or License.txt would also work? I guess these are extras and/or may be wrong. What you propose sounds fine for a first iteration.

This is related to #15 - I followed the CESSDA-specific test for license information. It's used as an example here of where feedback could be more or less tailored. E.g., the feedback could be "to pass this specific test, make sure you have a license file in your root that is named exactly LICENSE.txt, i.e. all uppercase and with the TXT extension", or it could be along the lines of "we detected a license file in the root of your repository, so in principle you adhere to FAIR principles, but it doesn't have the TXT ending and hence does not pass this automated test". The fact that there is a license file that is just named differently would only come up during the evaluation and cannot be deduced simply from the test failing/passing (it will always have to fail as long as it is defined this way, unfortunately). It might still be valuable feedback to the user to say "well yes, you have a license file, but for this and that reason this one specific automated test does not pass regardless".

I'm not checking the validity of the license file per se, but the GitHub API only recognises something as a license if the content matches a known license to a certain degree. It's not fool-proof though.
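For reference, a sketch of how that detection could be queried via GitHub's REST API license endpoint; the helper function is hypothetical and OWNER/REPO are placeholders.

```python
# Sketch: ask GitHub which license it detected for a repository.
# Requires the requests package; OWNER/REPO below are placeholders.
import requests

def detected_license(owner: str, repo: str) -> str | None:
    """Return the SPDX id of the license GitHub detected, or None."""
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/license",
        headers={"Accept": "application/vnd.github+json"},
        timeout=10,
    )
    if resp.status_code == 404:
        return None  # GitHub did not recognise any license file
    resp.raise_for_status()
    info = resp.json().get("license") or {}
    return info.get("spdx_id")

print(detected_license("OWNER", "REPO"))
```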

@marioa

marioa commented Jan 29, 2024

The general approach is fine.

> Instead of focussing on how to pass the test, effectively introducing normative behaviour by the backdoor, the feedback should make clear how passing this test improves FAIRness of the test object.

is going to be difficult to do without prescribing some form of behaviour to make automated testing easier. I think showing people a minimalist example they can apply to their repository would improve uptake, and they can grow it later. It also depends on where you are in the research lifecycle - if you are just starting you may not feel ready to mint DOIs or purls, which is fine - you can do that later, i.e. be constructive and encouraging.
