fix: adjust all lints to reflect new outputs: structure#1073

Closed
dlaehnemann wants to merge 29 commits into bioconda:master from dlaehnemann:fix/adjust-missing-run-exports-lint-to-new-outputs-structure

Conversation

@dlaehnemann
Member

@dlaehnemann dlaehnemann commented Jan 26, 2026

When `outputs:` are specified, the main package is not actually built; it is only a meta package. Neither a `build/run_exports:` nor a `requirements:` section makes sense at that level, only for the specified `outputs:`. Accordingly, we include a new `disallowed_top_level_section_with_outputs` lint.

Also, the `missing_run_exports` lint and all of the `check_deps()` lints need to check either the main level (if no `outputs:` are specified) or each of the specified `outputs:`. And the `build/noarch` field can appear either at the top level or per output, but needs to be checked per output.
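To illustrate what the new lint would flag, here is a minimal sketch (recipe content hypothetical, not taken from an actual recipe):

```yaml
package:
  name: example-meta    # hypothetical meta package
  version: "1.0.0"

requirements:           # top-level requirements: alongside outputs:
  run:                  # would trigger disallowed_top_level_section_with_outputs
    - python

outputs:
  - package:
      name: example-core
    requirements:       # per-output requirements: are fine
      run:
        - python
```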

When `outputs:` are specified, the main package is not actually built; it is only a meta package. Neither a `noarch:` nor a `run_exports:` section makes sense at that level, only for the specified outputs.
@dlaehnemann dlaehnemann marked this pull request as draft January 26, 2026 15:23
@dlaehnemann dlaehnemann changed the title from "fix: adjust missing_run_exports lint to reflect new outputs: structure" to "fix: adjust all lints to reflect new outputs: structure" Jan 30, 2026
…this still throws a linting error

```
FAILED test/test_lint.py::test_lint[should_be_noarch_python-meta-repodata0] - AssertionError: In test 'should_be_noarch_python' on 'meta_package':'skipping_recipe' emitted unexpectedly
assert 'skipping_recipe' in {'should_be_noarch_python'}
 +  where 'skipping_recipe' = str(<class 'bioconda_utils.lint.skipping_recipe'>)
 +    where <class 'bioconda_utils.lint.skipping_recipe'> = LintMessage(recipe=Recipe "meta_package", check=<class 'bioconda_utils.lint.skipping_recipe'>, severity=<Severity.INFO: 10>, title='skipping linting of this recipe as requested', body='\nAs specified via `extra: skip-recipes:`, this recipe is being\nskipped during linting. This is meant for the linter test\nsuite.', start_line=0, end_line=0, fname='/tmp/pytest-of-dlaehnemann/pytest-122/test_lint_should_be_noarch_pyt2/recipes/meta_package/meta.yaml', canfix=False).check
```
@dlaehnemann
Member Author

OK, this is once again getting much more complex than I thought. This comes from trying to migrate the snakemake recipe to the new structure following CEP 13 and 14, where you can only specify EITHER a single top-level recipe package to be built, OR packages under outputs:. See the outputs: explanation in CEP 14.

I guess that eventually one would move towards using rattler-build / pixi in some way here for bioconda, and then have their internal linting set up, plus any extra lints we might want on top. But this is far too complex for me to tackle with my current knowledge. So I would stick with trying to fix this initial step of enforcing the "either outputs or top level" rule with the current linting setup.

I think I mostly have the linting setup figured out, but am now running into trouble with the testing setup. And I don't have a good enough understanding of how all the pytest setup bits play together (in automatically generating all the test cases), so I am asking @bioconda/build-system for help on the following:

A lot of the test cases work with adding or removing stuff from all of the example recipes specified at the top of lint_cases.yaml:
https://github.com/dlaehnemann/bioconda-utils/blob/d0d0ef7a69b329f3c60da6ffe6aa104132fae848/test/lint_cases.yaml#L2

I previously added an example recipe with `outputs:` specified, as we generally want all of the lints to work on these kinds of recipes as well. However, most lints so far simply didn't check this setup properly, only the top level of the recipes. Now that I have set about actually testing them, I run into a particular problem:
When I add something to the top level of all recipes that is not allowed in the `outputs:` case, this will always trigger certain lints (for example the new `disallowed_top_level_section_with_outputs` lint). And vice versa: if I add anything to the `outputs:` subsection to test a particular lint on it, I have the same problem. Which lints get triggered will differ between the top-level-only recipes and the `outputs:` recipe, so specifying lints to skip is not helpful: the set of lints to skip differs per recipe, and this would just make the test case definitions more complex.

So I think we need a way to handle these two types of recipes differently for the testing setup, as well. I see three general approaches (there are probably more, and I probably overlook some simple solution...), but haven't gotten any ideas on how to implement them (from looking at the code and the pytest documentation):

  1. Have a mechanism like the `add:`, `remove:` or `extra: skip-lints:` sections for test cases, which excludes a recipe. So something like `extra: skip-recipes:`, which I tried to implement with the latest commit, but can't really get to work.
  2. Have a mechanism to selectively `add:` or `remove:` sections only to or from specific recipes.
  3. Just have one dedicated YAML file for each test case (with expected lint failures alongside the actual recipe) and have separate test cases for each lint with and without outputs: in the recipe.

I think I would actually be for option 3.
The current setup requires constantly cross-referencing different parts of the same file, and mentally figuring out what is added and what is removed, to get the full recipe that is actually being tested. A standalone YAML file per test case, with expected lint failures at the top and the recipe specification below, would make that connection instantaneous.
Also, this would isolate test cases from one another and would avoid accidentally messing up another test case by changing the main recipe specification.
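
For illustration, such a standalone test case file could look roughly like this (layout and keys hypothetical, just to sketch the idea):

```yaml
# one file per linting test, e.g. missing_run_exports with outputs:
expected_lints:
  - missing_run_exports

recipe:
  package:
    name: example_package   # hypothetical
    version: "0.1.0"
  outputs:
    - package:
        name: example_output
      # no build: run_exports: here, so the lint should fire
```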

But this is a bigger change, and I'm wondering whether this might be better to tackle in a separate pull request before continuing work on this one here (rebased onto those changes). Otherwise this pull request here will have a gigantic diff with several unrelated things, so changes will be harder to track (and blame ':).

@johanneskoester johanneskoester marked this pull request as ready for review February 4, 2026 15:48
@johanneskoester
Contributor

johanneskoester commented Feb 4, 2026

> I think I would actually be for option 3.
> The current setup requires constantly cross-referencing different parts of the same file, and mentally figuring out what is added and what is removed, to get the full recipe that is actually being tested. A standalone YAML file per test case, with expected lint failures at the top and the recipe specification below, would make that connection instantaneous.
> Also, this would isolate test cases from one another and would avoid accidentally messing up another test case by changing the main recipe specification.
>
> But this is a bigger change, and I'm wondering whether this might be better to tackle in a separate pull request before continuing work on this one here (rebased onto those changes). Otherwise this pull request here will have a gigantic diff with several unrelated things, so changes will be harder to track (and blame ':).

I agree with your suggestion, and indeed, please do that in a separate PR.
Note that I formatted the bioconda-utils codebase with ruff today and fixed some (apparently rarely or never encountered) errors while doing so. This PR is already updated to the latest master branch.

@dlaehnemann dlaehnemann marked this pull request as draft February 5, 2026 09:22
github-actions bot and others added 5 commits February 11, 2026 19:14
🤖 I have created a release *beep* *boop*
---


## [4.0.0](bioconda/bioconda-utils@v3.9.2...v4.0.0) (2026-02-11)


### ⚠ BREAKING CHANGES

* also find tests under outputs, ensure all outputs have tests, ensure
outputs names are different from package name
([bioconda#1057](bioconda#1057))

### Bug Fixes

* also find tests under outputs, ensure all outputs have tests, ensure
outputs names are different from package name
([bioconda#1057](bioconda#1057))
([dd17aa7](bioconda@dd17aa7))
* update actions
([bioconda#1076](bioconda#1076))
([e301b5c](bioconda@e301b5c))
* Update anaconda-client version to 1.14.*
([bioconda#1075](bioconda#1075))
([0bdd2a9](bioconda@0bdd2a9)),
closes [bioconda#1074](bioconda#1074)

---
This PR was generated with [Release
Please](https://github.com/googleapis/release-please). See
[documentation](https://github.com/googleapis/release-please#release-please).

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…conda#1081)

## Summary
- When building with Docker, `get_package_paths()` triggers a full
finalized conda solver run on the host via `api.render(finalize=True)`.
This is redundant because Docker's `conda-build` re-solves from scratch
anyway.
- Adds a `finalize` parameter to `get_package_paths()` and skips the
expensive finalized render when a `docker_builder` is present
- Non-finalized metas use `bypass_env_check=True`, which avoids costly
dependency resolution. As documented in `utils.py:479-480`, this does
not change the package build string.
- Adds `--no-fast-resolve` CLI flag for debugging/opt-out
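
The fast-path decision described above might look roughly like this. This is a sketch only: the signature and the `render` stand-in for `conda_build.api.render` are assumptions, not the actual bioconda-utils code.

```python
def render(recipe, finalize, bypass_env_check):
    # Stand-in for conda_build.api.render; records which mode was used.
    return [{"recipe": recipe, "finalized": finalize}]

def get_package_paths(recipe, docker_builder=None, finalize=True):
    # Fast path: when building in Docker, conda-build re-solves from
    # scratch inside the container, so a finalized host-side render
    # would be redundant solver work.
    if docker_builder is not None:
        finalize = False
    # For non-finalized metas, bypass_env_check avoids costly dependency
    # resolution without changing the package build string.
    return render(recipe, finalize=finalize, bypass_env_check=not finalize)
```

With a `docker_builder` present, the expensive finalized render is skipped; the `--no-fast-resolve` flag would presumably keep `finalize=True` as before.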

## Savings
10-60s per recipe by eliminating a redundant solver invocation. For R
packages, a single solve can take 10+ minutes.

## Risks
- **Medium**: Non-finalized metas may produce slightly different build
  strings if `pin_compatible` or `run_exports` affect the hash. This could cause:
  - False "not in channel" -> rebuilds something that exists (wasteful but
    not incorrect)
  - False "in channel" -> skips a needed build (lower risk since
    `check_recipe_skippable()` already does a fast check)
- **Mitigation**: Debug logging when fast path is used;
  `--no-fast-resolve` opt-out flag
- Only changes behavior when `docker_builder is not None`

## Test plan
- [x] Run existing test suite: `pytest test/`
- [x] Integration test with Docker: `bioconda-utils build recipes/
config.yml --docker` on pyfaidx
- [x] Compare build results with and without `--no-fast-resolve`
- [x] Verify built packages are identical

## Summary
- Pre-solves the mulled test environment on the host using `conda create
--dry-run --json`, then passes an `@EXPLICIT` spec file to the container
- The container installs from the explicit spec (no solver needed), runs
`create-env` post-processing, then executes tests
- Graceful fallback: if pre-solve fails for any reason, falls back to
the original `mulled-build build-and-test` path
- Automatically disabled when `mulled_upload_target` is set (upload
needs the Docker image produced by `mulled-build`)
- Adds `--no-presolved-mulled-test` CLI flag for opt-out

Related: bioconda#817
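
The pre-solve step could be sketched like this. The JSON shape is an assumption and, as the Risks section notes, varies across conda versions; the real implementation falls back to `mulled-build` on any failure.

```python
import json

def to_explicit_spec(dry_run_json: str) -> str:
    """Turn `conda create --dry-run --json` output into an @EXPLICIT
    spec file that the container can install without running a solver."""
    data = json.loads(dry_run_json)
    lines = ["@EXPLICIT"]
    for pkg in data["actions"]["LINK"]:  # packages conda would install
        # URL layout assumed; real fields may differ by conda version
        lines.append(
            f"{pkg['base_url']}/{pkg['platform']}/"
            f"{pkg['name']}-{pkg['version']}-{pkg['build_string']}.tar.bz2"
        )
    return "\n".join(lines)
```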

## Savings
60-180s per mulled test by eliminating a redundant solver run inside the
container.

## Risks
- **Medium**: `conda create --dry-run --json` output format varies
across conda versions. Robust fallback to original path mitigates this.
- **Medium**: Bypassing `mulled-build` means maintaining a parallel test
execution path. The `create-env --conda=: /usr/local` POSTINSTALL step
is replicated from the involucro wrapper.
- Backward compatible: graceful fallback ensures original behavior is
preserved on any failure.

## Test plan
- [x] Run existing test suite: `pytest test/`
- [x] Integration test: `bioconda-utils build recipes/ config.yml
--docker --mulled-test` on pyfaidx
- [x] Verify tests pass identically with and without
`--no-presolved-mulled-test`
- [x] Test fallback: presolved path failed (host/container platform
mismatch on macOS) and correctly fell back to mulled-build
- [x] Verify `mulled_upload_target` correctly disables pre-solved path
(confirmed: goes straight to mulled-build)

---------

Co-authored-by: Elmar Pruesse <epruesse@users.noreply.github.com>
Add missing `raise_for_status()` call after the CircleCI
`/workflow/{id}/job` API request in `get_circleci_artifacts`. Without
it, an HTTP error (e.g. 500) causes `KeyError: 'items'` instead of a
clear HTTP error.
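
A minimal sketch of the pattern (the `Response` class here is a stand-in for `requests.Response`, not the actual `get_circleci_artifacts` code):

```python
class HTTPError(Exception):
    pass

class Response:
    # Stand-in for requests.Response with the two methods we need.
    def __init__(self, status_code, payload):
        self.status_code = status_code
        self._payload = payload

    def raise_for_status(self):
        if self.status_code >= 400:
            raise HTTPError(f"HTTP {self.status_code}")

    def json(self):
        return self._payload

def get_workflow_jobs(resp):
    # Without this call, a 500 response with an error payload would
    # surface further down as `KeyError: 'items'` instead of a clear
    # HTTP error.
    resp.raise_for_status()
    return resp.json()["items"]
```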

See failed CircleCI pipeline
[#178886](https://app.circleci.com/pipelines/github/bioconda/bioconda-recipes/178886)
(`build_and_upload-osx-arm64`, `build_and_upload-linux-aarch64`).

---------

Co-authored-by: Elmar Pruesse <epruesse@users.noreply.github.com>
🤖 I have created a release *beep* *boop*
---


## [4.1.0](bioconda/bioconda-utils@v4.0.0...v4.1.0) (2026-03-04)


### Features

* eliminate redundant host-side solver run for Docker builds
([bioconda#1081](bioconda#1081))
([3d4c9e1](bioconda@3d4c9e1))
* pre-solved environments for mulled tests
([bioconda#1082](bioconda#1082))
([af1ac9c](bioconda@af1ac9c))


### Bug Fixes

* add raise_for_status to CircleCI workflow API call
([bioconda#1083](bioconda#1083))
([2f558ee](bioconda@2f558ee))

---
This PR was generated with [Release
Please](https://github.com/googleapis/release-please). See
[documentation](https://github.com/googleapis/release-please#release-please).

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
@dlaehnemann
Member Author

The overhaul of the linting tests suggested here is now its own pull request: #1085.

This pull request switches to a linting test suite with one YAML file per
linting test, so that all the info needed to evaluate a single linting
test is in one place.

This also simplifies the linting test setup a bit, as some keywords from
the testing suite (like `remove:`) are not needed any more.

Also, this pull request ensures that in YAML files with multiple
recipes, more than just one of them actually gets tested. Previously,
only one of them would run, as any subsequent recipes would have an
empty list of checks (I guess they were being consumed in some way and
not reloaded).
…cture' of github.com:dlaehnemann/bioconda-utils into fix/adjust-missing-run-exports-lint-to-new-outputs-structure
@dlaehnemann
Member Author

I am closing this in favor of the cleaner (because cherry-picked) new pull request #1086.


5 participants