INFRA-388 Converting smartmon into Python and adding mock tests #1327
base: stackhpc/2024.1
Conversation
This is a great start, many thanks. It would be good to investigate external libraries for the SMART data collection / parsing.
def parse_smartctl_attributes(disk, disk_type, serial, json_data):
    labels = f'disk="{disk}",type="{disk_type}",serial_number="{serial}"'
    metrics = []
    smartmon_attrs = set([
style nit: It would be helpful to move these to a 'constant', e.g. SMARTMON_ATTRS = set(...), and put each attribute on a new line, as sketched below.
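A minimal sketch of that suggestion; the attribute names shown are illustrative, not the full set from the script:

# Module-level constant, one attribute per line for easy diffing.
SMARTMON_ATTRS = set([
    "airflow_temperature_cel",
    "command_timeout",
    "current_pending_sector",
    # ... remaining attributes, one per line ...
])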
try:
    version_output = run_command([SMARTCTL_PATH, '-j'], parse_json=True)
    smartctl_version_list = version_output.get('smartctl', {}).get('version', [])
    if smartctl_version_list:
Have you looked at Python libraries for doing this? E.g. this one looks reasonable: https://pypi.org/project/pySMART/. It can handle loading all the device metrics in via smartctl, as you are doing here, and it seems to provide some abstraction over the health state as well. The work you've put into the tests should be directly usable.
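For illustration, a minimal sketch of device enumeration with pySMART (a suggestion, not the PR's current approach; pySMART shells out to smartctl itself):

from pySMART import DeviceList

for dev in DeviceList().devices:
    # name/model/serial are parsed from smartctl output; assessment is
    # pySMART's abstraction over the SMART health state (e.g. PASS/FAIL).
    print(dev.name, dev.model, dev.serial, dev.assessment)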
class TestSmartMon(unittest.TestCase):
    @patch('smartmon.run_command')
    def test_parse_smartctl_info(self, mock_run_command):
        devices_info = [
It would be useful to factor this example data out into a dedicated file. Then we can have a file for each HW profile, and loop over them.
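A hedged sketch of that loop; the drives/ directory matches the fixtures added later in this diff, while the helper name and file names are hypothetical:

import glob
import json
import os

FIXTURE_DIR = os.path.join(os.path.dirname(__file__), "drives")

def load_fixtures():
    # One JSON fixture per hardware profile, e.g. drives/nvme0.json.
    for path in sorted(glob.glob(os.path.join(FIXTURE_DIR, "*.json"))):
        with open(path) as f:
            yield path, json.load(f)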
Force-pushed from 93fd068 to 146b26c.
Thanks @technowhizz, good effort.
        return json.loads(result.stdout)
    return result.stdout.strip()


def parse_device_info(device):
It's worth considering a type hint to describe that the expected type of device is a pySMART device, or at least describing it in the docstring.
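A minimal sketch of the annotated signature (the docstring wording is a placeholder):

from pySMART import Device

def parse_device_info(device: Device):
    """Extract device metadata from a pySMART ``Device`` instance."""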
        if callable(val):
            continue  # skip methods

        # Convert CamelCase or PascalCase -> snake_case, e.g. dataUnitsRead -> data_units_read
This deserves its own method. Does it also need a reference?
https://stackoverflow.com/questions/1175208/elegant-python-function-to-convert-camelcase-to-snake-case
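Sketching the factored-out helper, reusing the regex from this diff and the approach in the linked answer:

import re

def camel_to_snake(name: str) -> str:
    # camelCase/PascalCase -> snake_case, e.g. dataUnitsRead -> data_units_read
    return re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower()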
    Test parse_device_info() for every JSON fixture in ./drives/.
    We do subTest() so each fixture is tested individually.
    """
    for fixture_path in self.fixture_files:
If we have lots of fixture files and the test fails, we will get a generic test_parse_device_info [failed], and then have to go and figure out which device(s) weren't parsed correctly. This is OK at the moment, but if we have lots of test files it could become unwieldy. Generally it's best to keep tests very simple and targeted, so that if one fails, it gives you a strong pointer about where the issue is. A way around this would be to factor out the test logic into a _test_parse_device_info(self, fixture_file) method and then have a test per device, with each test just calling _test_parse_device_info.
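A hedged sketch of that pattern; the fixture path, the SimpleNamespace stand-in for a device, and the assertion are all illustrative:

import json
import unittest
from types import SimpleNamespace

from smartmon import parse_device_info

class TestSmartMonPerDevice(unittest.TestCase):
    def _test_parse_device_info(self, fixture_file):
        # Shared body; each wrapper below covers exactly one fixture,
        # so a failing test names the device at fault.
        with open(fixture_file) as f:
            fixture = json.load(f)
        device = SimpleNamespace(**fixture["device_info"])  # stand-in device
        self.assertIsNotNone(parse_device_info(device))

    def test_parse_device_info_nvme0(self):
        self._test_parse_device_info("drives/nvme0.json")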
@@ -0,0 +1,24 @@
{
    "device_info": {
        "name": "/dev/nvme0",
Perhaps rename the subfolder drives to tests? And then later on (not in this change) we can move the script to a dedicated repo.
    def create_mock_device_from_json(self, device_info, if_attributes=None):
        """
        Given a 'device_info' dict and optional 'if_attributes', build
Can you create an actual pySMART device object here from the file output? E.g. https://github.com/truenas/py-SMART/blob/master/tests/test_device.py#L43
    Test parse_if_attributes() for every JSON fixture in ./drives/.
    We do subTest() so each fixture is tested individually.
    """
    for fixture_path in self.fixture_files:
Similar here: let's swap the loop for dedicated tests.
            # we expect a line in the script's output.
            for attr_key, attr_val in if_attrs.items():
                # Convert from e.g. "criticalWarning" -> "critical_warning"
                snake_key = re.sub(r'(?<!^)(?=[A-Z])', '_', attr_key).lower()
I'd consider this too complex for a unit test. It would be better to hard code the expected outcomes for a given input.
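For instance, a sketch with literal expectations, assuming the camel_to_snake helper suggested earlier:

def test_snake_case_conversion(self):
    # Expected outputs are pinned as literals rather than re-derived
    # with the same regex the production code uses.
    self.assertEqual(camel_to_snake("criticalWarning"), "critical_warning")
    self.assertEqual(camel_to_snake("dataUnitsRead"), "data_units_read")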