This repository has been archived by the owner on Mar 26, 2020. It is now read-only.

Validation for cluster.brick-multiplex cluster option. #1373

Open · wants to merge 2 commits into master from add-validations-to-brickmux-option

Conversation

vpandey-RH
Contributor

Updates #1367

Add validation for setting the cluster.brick-multiplex cluster option.
Reject all garbage strings and allow only "on", "yes", "true", "enable", "1" or
"off", "no", "false", "disable", "0".

Signed-off-by: Vishal Pandey [email protected]
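For illustration, a minimal sketch of what such a validator could look like in Go; the function name validateBrickMuxOption, its signature, and the error text are assumptions for this sketch, not the PR's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// validateBrickMuxOption sketches the described check: accept only the
// listed boolean-like strings and reject everything else.
func validateBrickMuxOption(value string) error {
	switch strings.ToLower(value) {
	case "on", "yes", "true", "enable", "1",
		"off", "no", "false", "disable", "0":
		return nil
	}
	return fmt.Errorf("invalid value %q for cluster.brick-multiplex: "+
		"allowed values are on/yes/true/enable/1 or off/no/false/disable/0", value)
}

func main() {
	fmt.Println(validateBrickMuxOption("enable"))       // <nil>
	fmt.Println(validateBrickMuxOption("invalidValue")) // error
}
```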

@centos-ci
Collaborator

Can one of the admins verify this patch?

@ghost ghost assigned vpandey-RH Dec 6, 2018
@ghost ghost added the in progress label Dec 6, 2018
@aravindavk
Member

add to whitelist

@ghost ghost assigned Madhu-1 Dec 7, 2018
@Madhu-1 (Member) left a comment


Please add an e2e test for this fix.

@atinmu (Contributor) left a comment


We already have the validator function in ClusterOptMap, which defines the respective validation function per cluster option. We need to check why the validation isn't happening through that infra. I don't think we should add validation like this for individual options instead of using the generic framework.

@vpandey-RH
Contributor Author

@atinmu The validations are happening through that infra. The only difference is that we call RegisterClusterOpValidationFunc() to register these validation functions.
We could add the validations directly in clusterOptMap, but since we already have a method for registering validation functions, I am inclined to keep the framework as generalised as possible and just use the functions already available for registering an option's validator. Even if we added the validation functions in clusterOptMap, their structure would be bound to be the same as what's happening now.
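To make the discussion concrete, here is a hedged sketch of the registration-based flow being described. Only the names RegisterClusterOpValidationFunc and clusterOptMap come from this thread; the types, function bodies, and SetClusterOption below are stand-ins, not glusterd2's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// ClusterOption pairs an option's default with its validator, loosely
// modeled on the ClusterOptMap idea discussed above.
type ClusterOption struct {
	DefaultValue string
	Validate     func(value string) error
}

var clusterOptMap = map[string]*ClusterOption{
	"cluster.brick-multiplex": {DefaultValue: "off"},
}

// RegisterClusterOpValidationFunc is a stand-in for the glusterd2 helper of
// the same name: it attaches a validator to an already-declared option.
func RegisterClusterOpValidationFunc(name string, fn func(string) error) {
	if opt, ok := clusterOptMap[name]; ok {
		opt.Validate = fn
	}
}

// SetClusterOption shows how the generic infra would consult the registered
// validator before accepting a value, with no per-option special casing.
func SetClusterOption(name, value string) error {
	opt, ok := clusterOptMap[name]
	if !ok {
		return fmt.Errorf("unknown cluster option %q", name)
	}
	if opt.Validate != nil {
		return opt.Validate(value)
	}
	return nil
}

func main() {
	RegisterClusterOpValidationFunc("cluster.brick-multiplex", func(v string) error {
		switch strings.ToLower(v) {
		case "on", "yes", "true", "enable", "1", "off", "no", "false", "disable", "0":
			return nil
		}
		return fmt.Errorf("invalid value %q", v)
	})
	fmt.Println(SetClusterOption("cluster.brick-multiplex", "on"))           // <nil>
	fmt.Println(SetClusterOption("cluster.brick-multiplex", "invalidValue")) // error
}
```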

@vpandey-RH
Contributor Author

@atinmu As per our discussion in the morning, I tried using ValidateBool(), but it's not possible: the cluster-option infra expects a validation function that takes string arguments, while ValidateBool() has a different set of argument types from what the cluster-option validation function requires.
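For illustration, the mismatch is roughly the following. Both signatures here are assumptions based only on this discussion, not glusterd2's exact definitions; the adapt closure merely shows that bridging the two shapes is mechanical when the extra arguments are available, which is the sticking point for cluster options:

```go
package main

import "fmt"

// Hypothetical shapes: the cluster-option infra wants a string-only
// validator, while a volume-option validator like ValidateBool takes
// extra arguments.
type VolOption struct{ Name string }

type ClusterOptValidator func(value string) error
type VolOptValidator func(opt *VolOption, key, value string) error

// adapt wraps a volume-option validator so it satisfies the cluster-option
// validator shape by fixing the extra arguments up front.
func adapt(v VolOptValidator, opt *VolOption, key string) ClusterOptValidator {
	return func(value string) error { return v(opt, key, value) }
}

func main() {
	// A toy stand-in for a boolean validator with the wider signature.
	validateBool := func(opt *VolOption, key, value string) error {
		if value == "on" || value == "off" {
			return nil
		}
		return fmt.Errorf("%s: %q is not boolean", key, value)
	}
	cv := adapt(validateBool, &VolOption{Name: "cluster.brick-multiplex"}, "cluster.brick-multiplex")
	fmt.Println(cv("on"))    // <nil>
	fmt.Println(cv("maybe")) // error
}
```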

@vpandey-RH vpandey-RH force-pushed the add-validations-to-brickmux-option branch from 79c0ecd to fca2699 Compare December 7, 2018 08:57
@atinmu (Contributor) left a comment


Also, CI seems to have failed in an shd test; please have a look.

optReq := api.ClusterOptionReq{
	Options: map[string]string{"cluster.brick-multiplex": "on"},
	Options: map[string]string{"cluster.brick-multiplex": "invalidValue"},
Contributor


So we don't have a test where we set brick multiplexing to "off" now?

Contributor Author


No, we don't have a test for that as of now, but I will add one.

@ghost ghost assigned atinmu Dec 11, 2018
@atinmu
Contributor

atinmu commented Dec 11, 2018

@vpandey-RH Can you please address the comment?

@vpandey-RH
Contributor Author

@atinmu The shd test is failing because max-bricks-per-process is not merged yet. In all the shd tests I kill one of the volume's bricks, so with brick multiplexing on, killing one brick takes all the bricks offline, and the CI fails with "transport endpoint is not connected".

@atinmu
Contributor

atinmu commented Dec 11, 2018

@vpandey-RH You should be testing shd in a multi-node-cluster-like environment; running a replica config on the same node, where such behavior can't be tested, isn't ideal. Agree?

@atinmu
Contributor

atinmu commented Dec 15, 2018

@vpandey-RH CI still fails.

@rishubhjain
Contributor

retest this please

@Madhu-1
Member

Madhu-1 commented Dec 15, 2018

11:40:11 --- FAIL: TestVolume/Statedump (1.03s)
11:40:11 require.go:157:
11:40:11 Error Trace: volume_ops_test.go:530
11:40:11 Error: Not equal:
11:40:11 expected: 2
11:40:11 actual : 8
11:40:11 Test: TestVolume/Statedump
11:40:11 --- PASS: TestVolume/Stop (0.66s)
11:40:11 --- PASS: TestVolume/List (0.04s)
11:40:11 --- PASS: TestVolume/Info (0.01s)
11:40:11 --- PASS: TestVolume/Edit (0.05s)
11:40:11 --- PASS: TestVolume/VolumeFlags (0.71s)
11:40:11 --- PASS: TestVolume/Delete (0.02s)
11:40:11 --- PASS: TestVolume/Disperse (0.47s)
11:40:11 --- PASS: TestVolume/DisperseMount (0.14s)
11:40:11 --- PASS: TestVolume/DisperseDelete (0.26s)
11:40:11 --- PASS: TestVolume/testShdOnVolumeStartAndStop (1.90s)
11:40:11 --- PASS: TestVolume/testArbiterVolumeCreate (0.88s)
11:40:11 --- FAIL: TestVolume/SelfHeal (0.62s)
11:40:11 require.go:765:
11:40:11 Error Trace: glustershd_test.go:94
11:40:11 utils_test.go:33
11:40:11 Error: Expected nil, but got: &os.PathError{Op:"open", Path:"/tmp/gd2_func_test/TestVolume/SelfHeal/mnt020307401/file1.txt", Err:0x6b}
11:40:11 Test: TestVolume/SelfHeal
11:40:11 Messages: failed to open file: open /tmp/gd2_func_test/TestVolume/SelfHeal/mnt020307401/file1.txt: transport endpoint is not connected
11:40:11 --- FAIL: TestVolume/GranularEntryHeal (0.01s)
11:40:11 require.go:765:
11:40:11 Error Trace: glustershd_test.go:195
11:40:11 utils_test.go:33
11:40:11 Error: Expected nil, but got: &errors.errorString{s:"volume already exists"}
11:40:11 Test: TestVolume/GranularEntryHeal
11:40:11 --- FAIL: TestVolume/SelfHeal#01 (0.01s)
11:40:11 require.go:765:
11:40:11 Error Trace: glustershd_test.go:53
11:40:11 utils_test.go:33
11:40:11 Error: Expected nil, but got: &errors.errorString{s:"volume already exists"}
11:40:11 Test: TestVolume/SelfHeal#01
11:40:11 --- FAIL: TestVolume/SplitBrainOperations (0.49s)
11:40:11 require.go:347:
11:40:11 Error Trace: glustershd_test.go:309
11:40:11 utils_test.go:33
11:40:11 Error: Should be false
11:40:11 Test: TestVolume/SplitBrainOperations
11:40:11 Messages: glustershd is still running
11:40:11 --- PASS: TestVolume/VolumeProfile (1.04s)
11:40:11 === RUN TestVolumeOptions

@vpandey-RH CI failure PTAL

Discard all garbage strings and allow only "on", "yes", "true", "enable", "1" OR
"off", "no", "false", "disable", "0"

Signed-off-by: Vishal Pandey <[email protected]>
@vpandey-RH vpandey-RH force-pushed the add-validations-to-brickmux-option branch from 60e5aa2 to ebb281e Compare December 17, 2018 09:20
@atinmu
Contributor

atinmu commented Dec 24, 2018

@vpandey-RH any update on this PR?

@vpandey-RH
Contributor Author

@atinmu Still working on this. I am stuck on an issue with the local mount done by the glfsheal binary, which is causing the self-heal tests to fail.

@atinmu
Contributor

atinmu commented Jan 7, 2019

> @atinmu Still working on this. I am stuck on an issue with the local mount done by the glfsheal binary, which is causing the self-heal tests to fail.

@vpandey-RH Did we manage to figure out the root cause?

@vpandey-RH
Contributor Author

@atinmu No. I have asked @aravindavk for some help.
