[Enhancement]: Docs/example for redis cluster #2400
Comments
@hughesjj I did an example using Testcontainers for Java in the past. See SO https://stackoverflow.com/questions/74822259/testcontainers-jedisconnectionexception-could-not-get-a-resource-from-the-pool/75251597#75251597
@hughesjj Since you already shared your tests that utilize a Redis cluster setup, would you like to contribute them? @mdelapenya Should such an example go into the tests, or do we also have a plan for dedicated example modules within tc-go?
For examples, I usually suggest using the testable example pattern.
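For context, a testable example in Go is an `ExampleXxx` function in a `_test.go` file whose stdout `go test` compares against a trailing `// Output:` comment. A minimal sketch of the pattern (the function and helper names here are hypothetical, not actual tc-go APIs):

```go
package main

import "fmt"

// clusterState stands in for querying a real container's CLUSTER INFO.
func clusterState() string {
	return "cluster_state:ok"
}

// ExampleRunRedisCluster shows the testable example pattern: in a _test.go
// file, `go test` runs this function and verifies its stdout against the
// trailing "// Output:" comment.
func ExampleRunRedisCluster() {
	fmt.Println(clusterState())
	// Output: cluster_state:ok
}

func main() {
	ExampleRunRedisCluster()
}
```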
Hello, thanks to all the people that have contributed to this. With your help, I managed to get a Redis Cluster (using the Redis Cluster image from grokzen) working with testcontainers that, at this time, does not seem to be flaky. Here is the code I'm using to create a container:

```go
req := testcontainers.ContainerRequest{
	Image:        "grokzen/redis-cluster:6.2.5",
	ExposedPorts: exposedPorts,
	Env: map[string]string{
		"IP":                "0.0.0.0",
		"REDIS_PASSWORD":    password,
		"REDIS_TLS_ENABLED": "no",
		"INITIAL_PORT":      strconv.Itoa(port),
	},
	// This is required because these ports are hardcoded into the image.
	// Without this, the ping commands will fail, because there is no way to
	// properly intercept and route the requests. The ping requests will go
	// to ports 7000 to 7005.
	HostConfigModifier: func(hostConfig *container.HostConfig) {
		hostConfig.PortBindings = portMap
	},
}

container, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
	ContainerRequest: req,
	Started:          true,
})
```

I was working quite a bit on customization, but the general idea is there. You'll notice that there isn't a wait strategy. Without one, the tests would generally be flaky, especially given that I was running multiple instances of this testcontainer across different ports. On each run, there would be 1-2 containers where the cluster state was not yet ok:

```go
down := make([]string, len(redisHosts))
// copy the elements from the original slice to the new slice
copy(down, redisHosts)
// use the redis client to test and check if the cluster is ok.
// This technically does not need to be done, but it acts as a
// once-per-host check that the cluster state is ok.
for len(down) > 0 {
	client := redis.NewClient(&redis.Options{
		Addr:     down[0],
		Password: password,
	})
	info := client.ClusterInfo(ctx).String()
	if strings.HasPrefix(info, "cluster info: cluster_state:ok") {
		down = down[1:]
	}
}
```

Shown above is what I eventually settled on. So far, I have not encountered any flaky tests due to the testcontainers. There are a lot more optimizations that can be done, for example storing only one client per host and reusing that same client. The client that is created isn't properly shut down either. Given that this was only for testing, I got a little lazy. I hope this helps anyone else that is trying to get the Redis Cluster image up and running with testcontainers.
Proposal
I'm a maintainer of opentelemetry-collector-contrib/receiver/redis, and have attempted to implement testing against a redis cluster. This is important as some metrics only appear (ctrl+f "If the instance is a replica") when redis is configured as a cluster. While we have good examples for creating a single redis container, it would be nice to have one for clusters as well.

It seems like I'm not alone in this desire, as someone had previously tried the same.

I'll note that this desire would also apply to kafka, and pretty much every open source distributed service.