
DNS spam #728

Closed
1 task done
smrganic opened this issue Apr 25, 2024 · 8 comments · Fixed by #810

Comments

@smrganic

Description

There seems to be an issue that makes Jellyseerr spam my DNS with AAAA requests for my internal Jellyfin URL. I'm not sure why there is even a request for an AAAA record, since I don't use IPv6.

I tested this on 1.7.0 because that is the version I had. I updated to 1.8.1 because I thought it might have been fixed in the meantime, but it wasn't.

Not sure what triggers this; maybe it's a sync job. This has been going on for a while, but I only just noticed because my primary DNS would rate-limit this machine's IP just long enough for my secondary DNS to take over and serve the requests. Then the secondary DNS would rate-limit the IP just in time for the primary to take over again.

Version

1.7.0, 1.8.1

Steps to Reproduce

1. I used Pi-hole to monitor traffic volume.
2. On the host where Jellyseerr is installed, I ran sudo tcpdump -i ens18 'dst port 53' to monitor outgoing DNS requests.
3. Jellyseerr runs in a Docker container, so as a workaround I added this to my compose file:

   extra_hosts:
     - "internal.mrga.dev:IP"

   This adds the local IP to the container's hosts file, but I don't like it as a permanent solution.

Screenshots

Screenshot 2024-04-25 163944
Screenshot 2024-04-25 164027
Secondary DNS: Screenshot 2024-04-25 170603

Logs

Will add if needed.

Platform

desktop

Device

N/A

Operating System

N/A

Browser

N/A

Additional Context

No response

Code of Conduct

  • I agree to follow Jellyseerr's Code of Conduct
@Fallenbagel
Owner

Related to #387?

@Fallenbagel
Owner

Also try this option:
#722 (comment)

@smrganic
Author

> Related to #387?

Could very well be this. Can the internal IP be cached? It would almost never change, so it would make sense to reuse the value instead of reaching out to DNS every time.

Why are there requests for A and AAAA records when I have not set up IPv6? Is that a Node quirk?

@Fallenbagel
Owner

> Why are there requests for A and AAAA records when I have not set up IPv6? Is that a Node quirk?

It might be an Alpine/Node quirk. Try the option I sent in the other comment.
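
For context (this is not necessarily the option referenced in #722, just a sketch of one possible workaround): Node's default dns.lookup asks getaddrinfo for both A and AAAA records unless an address family is pinned, so forcing the HTTP agent to IPv4 should avoid the AAAA queries. A minimal, hypothetical example:

```ts
import https from 'node:https';
import axios from 'axios';

// Hypothetical: pin DNS lookups to IPv4 so getaddrinfo only issues A queries.
// The `family` option is forwarded by the agent to the underlying socket connect.
const ipv4Agent = new https.Agent({ family: 4, keepAlive: true });

const jellyfinClient = axios.create({
  baseURL: 'https://internal.mrga.dev', // hostname from the report
  httpsAgent: ipv4Agent,
});
```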

@smrganic
Author

#722 (comment): I tried this without the extra_hosts entry in docker compose and ran a Recently Added scan. It did not help; a lot of DNS traffic was still generated. This problem probably only gets worse with a large library and/or a full scan. This definitely needs caching. If Jellyseerr is deployed on a host with a lot of containers, it could indirectly make other services unavailable due to DNS rate limiting.

So far only the extra_hosts attribute helps.

@simoncaron

@Fallenbagel Just had the same issue on a fresh install. Node.js does not cache DNS lookups, and on large libraries many calls are made to the Jellyfin server. Since the lookup is not cached, every call reaches out to the DNS server to resolve the hostname. Some DNS servers (like Pi-hole) will throttle/block queries once a certain threshold is reached within a limited timeframe. This is what generates these errors:

[Jellyfin API]: Something went wrong while getting library content from the Jellyfin server: getaddrinfo EAI_AGAIN jellyfin.example.com

Jellystat had the same issue, and it was fixed by implementing cacheable-lookup in axios. Maybe the same could be done for Jellyseerr?
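
A rough sketch of what that could look like (not Jellyseerr's actual code, just the cacheable-lookup pattern applied to an axios instance; the hostname and instance name are placeholders):

```ts
import https from 'node:https';
import axios from 'axios';
import CacheableLookup from 'cacheable-lookup';

// Install a caching DNS resolver on the agent so repeated requests to the
// Jellyfin hostname reuse the cached address instead of hitting the DNS server.
const cacheable = new CacheableLookup();
const agent = new https.Agent({ keepAlive: true });
cacheable.install(agent);

// Hypothetical axios instance for Jellyfin API calls.
const jellyfinApi = axios.create({
  baseURL: 'https://jellyfin.example.com',
  httpsAgent: agent,
});
```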

@Fallenbagel
Owner

> @Fallenbagel Just had the same issue on a fresh install. Node.js does not cache DNS lookups, and on large libraries many calls are made to the Jellyfin server. Since the lookup is not cached, every call reaches out to the DNS server to resolve the hostname. Some DNS servers (like Pi-hole) will throttle/block queries once a certain threshold is reached within a limited timeframe. This is what generates these errors:
>
> [Jellyfin API]: Something went wrong while getting library content from the Jellyfin server: getaddrinfo EAI_AGAIN jellyfin.example.com
>
> Jellystat had the same issue, and it was fixed by implementing cacheable-lookup in axios. Maybe the same could be done for Jellyseerr?

Oh yeah. Actually, I looked into axios-cached-dns-resolve and it might be a better fit for Jellyseerr.

Fallenbagel added a commit that referenced this issue May 23, 2024
…nd it to external api class

This fix should in theory use nodeCache with the help of the cacheManager class to cache
jellyfin/emby api requests. Jellyfin's standard Time-To-Live was set as 6 hours so as to ensure that
the cached data is relatively up to date without making excessive API requests. In addition, this
fix sets the checkPeriod for the jellyfin cache to 30 minutes as it sounds suitable enough for
checking and cleaning up expired cache entries without causing performance overhead.

fix #728, fix #319
Fallenbagel added a commit that referenced this issue May 23, 2024
…nd it to external api class

This fix should in theory use nodeCache with the help of the cacheManager class to cache
jellyfin/emby api requests. Jellyfin's standard Time-To-Live was set as 6 hours so as to ensure that
the cached data is relatively up to date without making excessive API requests. In addition, this
fix sets the checkPeriod for the jellyfin cache to 30 minutes as it sounds suitable enough for
checking and cleaning up expired cache entries without causing performance overhead.

fix #728, fix #387
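
In spirit, the described fix amounts to something like the following (a simplified sketch using the quoted settings directly with node-cache, not the actual cacheManager/external API code; getLibraryContent and the URL parameter are placeholders):

```ts
import axios from 'axios';
import NodeCache from 'node-cache';

// Settings described in the commit: 6 h standard TTL, expired entries swept every 30 min.
const jellyfinCache = new NodeCache({ stdTTL: 6 * 60 * 60, checkperiod: 30 * 60 });

// Hypothetical helper: return a cached Jellyfin/Emby API response when available,
// otherwise fetch it once and cache it, so repeated sync-job calls (and the DNS
// lookups they trigger) are avoided until the TTL expires.
async function getLibraryContent(url: string): Promise<unknown> {
  const cached = jellyfinCache.get<unknown>(url);
  if (cached !== undefined) {
    return cached;
  }
  const response = await axios.get(url);
  jellyfinCache.set(url, response.data);
  return response.data;
}
```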
Fallenbagel added a commit that referenced this issue May 24, 2024
…quests

refactors jellyfin api requests to be handled by the external api
to be consistent with how other external api requests are made

related #728, related #387
Fallenbagel added a commit that referenced this issue May 25, 2024
)

* refactor(jellyfinapi): use the external api class for jellyfin api requests

refactors jellyfin api requests to be handled by the external api
to be consistent with how other external api requests are made

related #728, related #387

* style: prettier formatted

* refactor(jellyfinapi): rename device in auth header as jellyseerr

* refactor(error): rename api error code generic to unknown

* refactor(errorcodes): consistent casing of error code enums
gauthier-th added a commit that referenced this issue Jun 1, 2024
gauthier-th added a commit that referenced this issue Jun 11, 2024
Fallenbagel pushed a commit that referenced this issue Jun 11, 2024
@Fallenbagel
Owner

🎉 This issue has been resolved in version 1.9.1 🎉

The release is available on:

Your semantic-release bot 📦🚀
