Preferred nodes #139

Open
pwdng wants to merge 2 commits into master
Conversation

@pwdng commented Mar 30, 2018

Allow preferred nodes to be configured

If a cluster is distributed geographically, it can be preferable to query the local nodes. For readonly commands, preferred nodes let us choose a subset of the cluster nodes to prioritize during node selection. A specific local slave can therefore be used.
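
A minimal sketch of how this could look in the proxy configuration, assuming a new preferred-nodes option; the option names, values, and addresses below are illustrative assumptions, not the actual syntax of this PR:

    # illustrative config sketch, not the final syntax
    node 10.0.0.1:8000,10.0.0.2:8000
    # route readonly commands to slaves
    read-strategy read-slave-only
    # hypothetical option: local slaves to prioritize when picking a node
    # for readonly commands
    preferred-nodes 10.0.1.5:8000,10.0.1.6:8000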

@doyoubi (Contributor) commented Apr 2, 2018

That's cool. If I'm not mistaken, you built a single Redis cluster across multiple data centers. Before we dive into the implementation, I'm curious how you managed to build a geographically distributed Redis cluster.
As far as I know, due to the unreliable network between data centers, the slaves in such a geographically distributed cluster keep getting promoted to master whenever they suffer high network latency. It's simply not reliable to deploy the unmodified official Redis Cluster across multiple data centers. The only team I know of that uses this approach made substantial changes to the original Redis source code and reimplemented failure detection and master promotion themselves.
So how did you do it? Does it run smoothly in your production environment?

@pwdng (Author) commented Apr 3, 2018 via email

@doyoubi (Contributor) commented Apr 3, 2018

To be honest, I'm not in favor of doing this.
If you want one-way replication, I recommend using separate clusters and replicating data from one to the other. You might want to do this for the following reasons:

  • Network problems between DCs will not affect the availability of the Redis cluster.
  • You can use a queue to eliminate full-sync replication (with your current solution you may need a very large psync backlog inside Redis); a sketch of this approach follows below.
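
A minimal sketch of the queue-based, one-way replication idea, in Python with the redis-py client. The host names, the queue key, and restricting the example to SET are all illustrative assumptions, not anything specified in this thread:

    # One-way, queue-based replication sketch between two DCs (illustrative).
    import json
    import redis  # redis-py client

    source = redis.Redis(host="redis.dc1.internal", port=6379)  # hypothetical host
    target = redis.Redis(host="redis.dc2.internal", port=6379)  # hypothetical host
    QUEUE = "replication:queue"  # a Redis list used as the inter-DC buffer

    def write_and_enqueue(key, value):
        """In DC1: apply the write locally, then record it for the other DC."""
        source.set(key, value)
        source.rpush(QUEUE, json.dumps({"cmd": "SET", "key": key, "value": value}))

    def drain_queue():
        """In DC2: replay queued writes; blocking pop tolerates a slow link."""
        while True:
            item = source.blpop(QUEUE, timeout=5)
            if item is None:
                continue  # nothing queued yet; keep polling
            op = json.loads(item[1])
            if op["cmd"] == "SET":
                target.set(op["key"], op["value"])

Because pending writes sit in the queue, a slow or flaky inter-DC link only delays replication instead of forcing the repeated full syncs described later in this thread.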

There are also some great tools for this:

Please check them out.

@tevino (Contributor) commented Apr 4, 2018

Here's one more: https://github.com/CodisLabs/redis-port

@pwdng (Author) commented Apr 4, 2018 via email

@doyoubi (Contributor) commented Apr 4, 2018

Yes, it might be easier to just use a single cluster across multiple DCs for one-to-many replication.
But for the case where the slave pulling data from another DC goes down, you would be better off using a separate cluster, because that cluster can guarantee high availability by itself and you don't have to read data from the remote DC.
Redis is primarily used as a cache, and reading from another DC is not acceptable in most use cases.

Further, gossip traffic grows rapidly in large clusters with hundreds of nodes, which eats into the bandwidth between DCs.
You may also find that the master sometimes starts a full-sync replication and fails again and again if there is no large enough buffer between the DCs.
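
For reference, the buffer in question corresponds to the replication backlog in stock Redis; a tuning sketch follows. These are real redis.conf options, but the sizes are illustrative assumptions, not recommendations from this thread:

    # redis.conf: enlarge the partial-resync backlog so a slave on a slow
    # inter-DC link can reconnect and catch up without a full sync
    repl-backlog-size 512mb
    repl-backlog-ttl 3600
    # keep slow slaves from being dropped mid-sync (hard limit, soft limit, soft seconds)
    client-output-buffer-limit slave 1gb 512mb 120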

Building your own replication tool with a large queue gives better performance and availability, and as far as I know it is the approach chosen by most teams doing multi-DC replication.

Your solution is easy to build but lacks multiple backup slaves in the DCs receiving data. I suggest trying to build your own.

@CLAassistant
CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
