
[pre-commit.ci] pre-commit autoupdate #430


Open
pre-commit-ci[bot] wants to merge 1 commit into develop from pre-commit-ci-update-config

Conversation


@pre-commit-ci pre-commit-ci bot commented May 5, 2025

updates:

Summary by Sourcery

Chores:

  • Upgrade Ruff pre-commit hook from v0.11.7 to v0.11.8


semanticdiff-com bot commented May 5, 2025

Review changes with SemanticDiff

Changed Files

  File                     Status
  .pre-commit-config.yaml  0% smaller


sourcery-ai bot commented May 5, 2025

Reviewer's Guide

This pull request updates the version of the ruff-pre-commit hook used in the pre-commit configuration.

File-Level Changes

Change: Update ruff-pre-commit hook version.
  • Bumped the ruff-pre-commit revision from v0.11.7 to v0.11.8.
Files: .pre-commit-config.yaml
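For reference, a minimal sketch of what the bumped entry in .pre-commit-config.yaml looks like after this update, assuming the project uses the standard `ruff` hook id; the repository's actual file may pin additional repos, hooks, and arguments:

```yaml
# Sketch of the bumped entry -- illustrative only, not the repository's full config.
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.11.8  # was v0.11.7; a later force-push moved this to v0.11.9
    hooks:
      - id: ruff  # the ruff-format hook may also be configured here
```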

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help


coderabbitai bot commented May 5, 2025

Important

Review skipped

Bot user detected.

To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.
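As a hedged sketch, assuming the repository keeps its CodeRabbit settings in a .coderabbit.yaml file at the repo root, that option would look roughly like:

```yaml
# .coderabbit.yaml (sketch) -- suppresses the "Review skipped" status message.
reviews:
  review_status: false
```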


🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Support

Need help? Join our Discord community for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.


greptile-apps bot left a comment


PR Summary

Your free trial has ended. If you'd like to continue receiving code reviews, you can add a payment method here: https://app.greptile.com/review/github.

1 file reviewed, no comments
Edit PR Review Bot Settings | Greptile


what-the-diff bot commented May 5, 2025

PR Summary

  • Upgraded pre-commit hook
    This PR updates the version of the ruff-pre-commit hook used by our pre-commit framework, which helps keep the code clean and consistent before changes are finalized. The upgrade brings in the fixes and improvements from the hook's latest release.


codiumai-pr-agent-free bot commented May 5, 2025

CI Feedback 🧐

(Feedback updated until commit 403a1c7)

A test triggered by this PR failed. Here is an AI-generated analysis of the failure:

Action: cpython-lxml (3.12)

Failed stage: Test with pytest [❌]

Failed test names: tests/hypothesis/style_test.py::TestLxml::test_fuzz_style_map_one_pair, tests/hypothesis/views_test.py::TestLxml::test_fuzz_region

Failure summary:

The action failed because two hypothesis-based tests failed:

1. test_fuzz_style_map_one_pair in tests/hypothesis/style_test.py (line 472) - The test failed
during string roundtrip testing where an object serialized with Verbosity.terse and then
deserialized didn't match the original when both were serialized with Verbosity.verbose. The issue
appears to be related to having two Pair objects with the same PairKey.normal key in a StyleMap.

2. test_fuzz_region in tests/hypothesis/views_test.py (line 184) - Similar to the first failure,
this test also failed during string roundtrip testing with a Region object containing a Lod object
with specific pixel and fade extent values.

Both failures point to issues with object serialization and deserialization consistency. The log
indicates line 764 in fastkml/styles.py is particularly problematic for the first failure.
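The falsifying example printed by Hypothesis can also be replayed deterministically by constructing the object by hand and running the same terse/verbose roundtrip check. A minimal sketch, assuming the enum and style classes import from fastkml.enums and fastkml.styles as the class names in the traceback below suggest (not the project's actual test code):

```python
# Sketch only: replay the falsifying StyleMap example from the CI log by
# constructing it directly, without Hypothesis. The import paths below
# (fastkml.enums, fastkml.styles) are assumptions inferred from the traceback;
# adjust them to the project's real layout.
from fastkml.enums import ColorMode, PairKey, Verbosity
from fastkml.styles import Pair, PolyStyle, Style, StyleMap, StyleUrl


def assert_str_roundtrip_terse(obj) -> None:
    # Mirrors the helper in tests/hypothesis/common.py: serialize tersely,
    # parse the result back, then compare verbose serializations.
    new_object = type(obj).from_string(obj.to_string(verbosity=Verbosity.terse))
    assert obj.to_string(verbosity=Verbosity.verbose) == new_object.to_string(
        verbosity=Verbosity.verbose,
    )


# Two pairs sharing PairKey.normal -- the shape Hypothesis reported as failing.
style_map = StyleMap(
    pairs=[
        Pair(key=PairKey.normal, style=StyleUrl(url="http://A.COM/")),
        Pair(
            key=PairKey.normal,
            style=Style(
                styles=[
                    PolyStyle(
                        color="ffffffff",
                        color_mode=ColorMode.normal,
                        fill=True,
                        outline=True,
                    ),
                ],
            ),
        ),
    ],
)

assert_str_roundtrip_terse(style_map)  # expected to raise AssertionError per this CI run
```

Alternatively, the `@reproduce_failure(...)` decorator payloads shown in the log can be added temporarily to the existing test functions to replay the exact generated inputs.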

Relevant error logs:
1:  ##[group]Operating System
2:  Ubuntu
...

282:  tests/hypothesis/style_test.py ..........F.                              [ 67%]
283:  tests/hypothesis/times_test.py ..                                        [ 67%]
284:  tests/hypothesis/views_test.py ..F..                                     [ 68%]
285:  tests/kml_test.py .......................................                [ 73%]
286:  tests/links_test.py ....                                                 [ 73%]
287:  tests/model_test.py ........                                             [ 75%]
288:  tests/network_link_control_test.py ....                                  [ 75%]
289:  tests/overlays_test.py ............................................      [ 81%]
290:  tests/registry_test.py ....                                              [ 82%]
291:  tests/repr_eq_test.py ......                                             [ 83%]
292:  tests/styles_test.py ..............................................      [ 89%]
293:  tests/times_test.py ..................................................   [ 96%]
294:  tests/utils_test.py ......                                               [ 96%]
295:  tests/validator_test.py ........                                         [ 98%]
296:  tests/views_test.py ..............                                       [100%]
297:  =================================== FAILURES ===================================
298:  ____________________ TestLxml.test_fuzz_style_map_one_pair _____________________
...

325:  ),
326:  )
327:  tests/hypothesis/style_test.py:436: 
328:  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
329:  tests/hypothesis/style_test.py:472: in test_fuzz_style_map_one_pair
330:  assert_str_roundtrip_terse(style_map)
331:  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
332:  obj = fastkml.styles.StyleMap(ns='{http://www.opengis.net/kml/2.2}', name_spaces={'kml': '{http://www.opengis.net/kml/2.2}',... target_id='', color='ffffffff', color_mode=ColorMode.normal, fill=True, outline=True, **{},)], **{},), **{},)], **{},)
333:  def assert_str_roundtrip_terse(obj: _XMLObject) -> None:
334:  new_object = type(obj).from_string(
335:  obj.to_string(verbosity=Verbosity.terse),
336:  )
337:  >       assert obj.to_string(verbosity=Verbosity.verbose) == new_object.to_string(
338:  verbosity=Verbosity.verbose,
339:  )
340:  E       AssertionError
341:  E       Falsifying example: test_fuzz_style_map_one_pair(
...

343:  E           id=None,  # or any other generated value
344:  E           target_id=None,  # or any other generated value
345:  E           pairs=[Pair(key=PairKey.normal, style=StyleUrl(url='http://A.COM/')), Pair(
346:  E                key=PairKey.normal,
347:  E                style=Style(
348:  E                    styles=[PolyStyle(
349:  E                         color='ffffffff',
350:  E                         color_mode=ColorMode.normal,
351:  E                         fill=True,
352:  E                         outline=True,
353:  E                     )],
354:  E                ),
355:  E            )],
356:  E       )
357:  E       Explanation:
358:  E           These lines were always and only run by failing examples:
359:  E               /home/runner/work/fastkml/fastkml/fastkml/styles.py:764
360:  E       
361:  E       You can reproduce this example by temporarily adding @reproduce_failure('6.131.15', b'AXicc2RwZHBkZHRkgEAgALIbHYEsMAJJMDoydaRBAVCEkQEAudkIUw==') as a decorator on your test case
362:  tests/hypothesis/common.py:107: AssertionError
363:  __________________________ TestLxml.test_fuzz_region ___________________________
...

369:  lod=st.one_of(st.none(), lods()),
370:  )
371:  tests/hypothesis/views_test.py:163: 
372:  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
373:  tests/hypothesis/views_test.py:184: in test_fuzz_region
374:  assert_str_roundtrip_terse(region)
375:  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
376:  obj = fastkml.views.Region(ns='{http://www.opengis.net/kml/2.2}', name_spaces={'kml': '{http://www.opengis.net/kml/2.2}', 'a...w.google.com/kml/ext/2.2}'}, min_lod_pixels=256, max_lod_pixels=0, min_fade_extent=0, max_fade_extent=0, **{},), **{},)
377:  def assert_str_roundtrip_terse(obj: _XMLObject) -> None:
378:  new_object = type(obj).from_string(
379:  obj.to_string(verbosity=Verbosity.terse),
380:  )
381:  >       assert obj.to_string(verbosity=Verbosity.verbose) == new_object.to_string(
382:  verbosity=Verbosity.verbose,
383:  )
384:  E       AssertionError
385:  E       Falsifying example: test_fuzz_region(
386:  E           self=<tests.hypothesis.views_test.TestLxml object at 0x7f48e94421b0>,
387:  E           id=None,  # or any other generated value
388:  E           target_id=None,  # or any other generated value
389:  E           lat_lon_alt_box=None,  # or any other generated value
390:  E           lod=Lod(
391:  E               min_lod_pixels=256,
392:  E               max_lod_pixels=0,
393:  E               min_fade_extent=0,
394:  E               max_fade_extent=0,
395:  E           ),
396:  E       )
397:  E       
398:  E       You can reproduce this example by temporarily adding @reproduce_failure('6.131.15', b'AEEAQQBBAEEBQgEAQQBBAEEA') as a decorator on your test case
399:  tests/hypothesis/common.py:107: AssertionError
400:  ================================ tests coverage ================================
401:  _______________ coverage: platform linux, python 3.12.10-final-0 _______________
402:  Coverage XML written to file coverage.xml
403:  Required test coverage of 95% reached. Total coverage: 100.00%
404:  =========================== short test summary info ============================
405:  FAILED tests/hypothesis/style_test.py::TestLxml::test_fuzz_style_map_one_pair - AssertionError
406:  Falsifying example: test_fuzz_style_map_one_pair(
...

408:  id=None,  # or any other generated value
409:  target_id=None,  # or any other generated value
410:  pairs=[Pair(key=PairKey.normal, style=StyleUrl(url='http://A.COM/')), Pair(
411:  key=PairKey.normal,
412:  style=Style(
413:  styles=[PolyStyle(
414:  color='ffffffff',
415:  color_mode=ColorMode.normal,
416:  fill=True,
417:  outline=True,
418:  )],
419:  ),
420:  )],
421:  )
422:  Explanation:
423:  These lines were always and only run by failing examples:
424:  /home/runner/work/fastkml/fastkml/fastkml/styles.py:764
425:  You can reproduce this example by temporarily adding @reproduce_failure('6.131.15', b'AXicc2RwZHBkZHRkgEAgALIbHYEsMAJJMDoydaRBAVCEkQEAudkIUw==') as a decorator on your test case
426:  FAILED tests/hypothesis/views_test.py::TestLxml::test_fuzz_region - AssertionError
427:  Falsifying example: test_fuzz_region(
428:  self=<tests.hypothesis.views_test.TestLxml object at 0x7f48e94421b0>,
429:  id=None,  # or any other generated value
430:  target_id=None,  # or any other generated value
431:  lat_lon_alt_box=None,  # or any other generated value
432:  lod=Lod(
433:  min_lod_pixels=256,
434:  max_lod_pixels=0,
435:  min_fade_extent=0,
436:  max_fade_extent=0,
437:  ),
438:  )
439:  You can reproduce this example by temporarily adding @reproduce_failure('6.131.15', b'AEEAQQBBAEEBQgEAQQBBAEEA') as a decorator on your test case
440:  ================== 2 failed, 728 passed in 214.14s (0:03:34) ===================
441:  ##[error]Process completed with exit code 1.
442:  Post job cleanup.


github-actions bot commented May 5, 2025

Preparing review...


llamapreview bot left a comment


Auto Pull Request Review from LlamaPReview

Review Status: Automated Review Skipped

Dear contributor,

Thank you for your Pull Request. LlamaPReview has analyzed your changes and determined that this PR does not require an automated code review.

Analysis Result:

PR only contains version updates and formatting changes

Technical Context:

Version and formatting changes detected, which include:

  • Package version updates
  • Dependency version changes
  • Code formatting adjustments
  • Whitespace modifications
  • Structural formatting changes

We're continuously improving our PR analysis capabilities. Have thoughts on when and how LlamaPReview should perform automated reviews? Share your insights in our GitHub Discussions.

Best regards,
LlamaPReview Team


github-actions bot commented May 5, 2025

Failed to generate code suggestions for PR


github-actions bot commented May 5, 2025

Preparing review...

updates:
- [github.com/astral-sh/ruff-pre-commit: v0.11.7 → v0.11.9](astral-sh/ruff-pre-commit@v0.11.7...v0.11.9)
pre-commit-ci[bot] force-pushed the pre-commit-ci-update-config branch from f070578 to 403a1c7 on May 12, 2025 at 16:35