INFRA-388 Converting smartmon into python and adding mock tests #1327

Draft · wants to merge 4 commits into base: stackhpc/2024.1

Conversation

technowhizz (Contributor)

No description provided.

@technowhizz self-assigned this Oct 11, 2024

@dougszumski (Member) left a comment

This is a great start, many thanks. It would be good to investigate external libraries for the smart data collection / parsing.

def parse_smartctl_attributes(disk, disk_type, serial, json_data):
    labels = f'disk="{disk}",type="{disk_type}",serial_number="{serial}"'
    metrics = []
    smartmon_attrs = set([

style nit: It would be helpful to move these to a 'constant', e.g. SMARTMON_ATTRS = set(.., and put each attribute on a new line.
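
A minimal sketch of the suggested refactor (the attribute names shown are illustrative entries from the original smartmon collector, not the full set):

SMARTMON_ATTRS = set([
    'airflow_temperature_cel',
    'power_on_hours',
    'reallocated_sector_ct',
    'temperature_celsius',
    # ... one attribute per line
])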

try:
    version_output = run_command([SMARTCTL_PATH, '-j'], parse_json=True)
    smartctl_version_list = version_output.get('smartctl', {}).get('version', [])
    if smartctl_version_list:

Have you looked at Python libraries for doing this?

E.g. this one looks reasonable: https://pypi.org/project/pySMART/

It can handle loading all the device metrics via smartctl, as you are doing here.

It seems to provide some abstraction over the health state as well.

The work you've put into the tests should be directly usable.
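
For illustration, a rough sketch of the kind of usage pySMART supports (based on the library's documented interface, not code from this PR):

from pySMART import DeviceList

# pySMART shells out to smartctl and parses the output for us
for dev in DeviceList().devices:
    print(dev.name, dev.model, dev.serial)
    # .assessment abstracts the overall health state (e.g. PASS/FAIL)
    print(dev.assessment)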

class TestSmartMon(unittest.TestCase):
    @patch('smartmon.run_command')
    def test_parse_smartctl_info(self, mock_run_command):
        devices_info = [

It would be useful to factor this example data out into a dedicated file. Then we can have a file for each HW profile, and loop over them.
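
One possible shape for that, with a JSON fixture per hardware profile (paths and structure here are assumptions, not part of this PR):

import glob
import json
import unittest

class TestSmartMon(unittest.TestCase):
    def test_parse_smartctl_info_per_profile(self):
        # one fixture file per HW profile, all exercised by the same test
        for path in sorted(glob.glob('tests/fixtures/*.json')):
            with self.subTest(fixture=path):
                with open(path) as f:
                    devices_info = json.load(f)
                # ... feed devices_info to the parser and assert on the output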

@Alex-Welsh added the stackhpc-ci (Automated action performed by stackhpc-ci) and Caracal (Targets the Caracal OpenStack release) labels Nov 15, 2024
@Alex-Welsh added the enhancement (New feature or request) label and removed the stackhpc-ci label Nov 22, 2024
@technowhizz changed the title from "Converting smartmon into python and adding mock tests" to "INFRA-388 Converting smartmon into python and adding mock tests" Dec 13, 2024

@dougszumski (Member) left a comment

Thanks @technowhizz, good effort.

        return json.loads(result.stdout)
    return result.stdout.strip()

def parse_device_info(device):

It's worth considering a type hint to describe that the expected type of device is a pySMART device, or at least describing it in the docstring.
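
A sketch of the suggested annotation, assuming the function receives a pySMART Device:

from pySMART import Device

def parse_device_info(device: Device):
    """Extract metric lines from ``device``, a pySMART ``Device`` instance."""
    ...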

        if callable(val):
            continue  # skip methods

        # Convert CamelCase or PascalCase -> snake_case, e.g. dataUnitsRead -> data_units_read

        Test parse_device_info() for every JSON fixture in ./drives/.
        We use subTest() so each fixture is tested individually.
        """
        for fixture_path in self.fixture_files:

If we have lots of fixture files and the test fails, we will get a generic test_parse_device_info [failed], and then have to go and figure out which device(s) weren't parsed correctly. This is OK at the moment, but if we have lots of test files it could become unwieldy.

Generally it's best to keep tests very simple and targeted, so that if one fails, it gives you a strong pointer about where the issue is.

A way around this would be to factor out the test logic into a _test_parse_device_info(self, fixture_file) method and then have a test per device, with each test just calling _test_parse_device_info.
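
Sketched out, that might look like the following (fixture names and the loader are hypothetical):

def _test_parse_device_info(self, fixture_file):
    # shared logic: load the fixture, parse it, assert on the result
    ...

def test_parse_device_info_nvme0(self):
    self._test_parse_device_info('drives/nvme0.json')

def test_parse_device_info_sata_hdd(self):
    self._test_parse_device_info('drives/sata_hdd.json')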

@@ -0,0 +1,24 @@
{
  "device_info": {
    "name": "/dev/nvme0",

Perhaps rename the subfolder drives to tests? And then later on (not in this change) we can move the script to a dedicated repo.


    def create_mock_device_from_json(self, device_info, if_attributes=None):
        """
        Given a 'device_info' dict and optional 'if_attributes', build

Can you create an actual pySMART device object here from the file output?
E.g. https://github.com/truenas/py-SMART/blob/master/tests/test_device.py#L43
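
Very roughly, following the linked py-SMART test; the injectable smartctl wrapper is an assumption about pySMART internals, so treat this as a sketch rather than a working recipe:

from pySMART import Device
from pySMART.smartctl import Smartctl  # assumed import path

class CannedSmartctl(Smartctl):
    """Serve captured smartctl output from a fixture file instead of shelling out."""
    def __init__(self, fixture_path):
        super().__init__()
        self.fixture_path = fixture_path

    def generic_call(self, params, pass_options=False):  # assumed hook signature
        with open(self.fixture_path) as f:
            return f.read().splitlines(), 0

device = Device('/dev/nvme0', smartctl=CannedSmartctl('drives/nvme0.txt'))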

        Test parse_if_attributes() for every JSON fixture in ./drives/.
        We use subTest() so each fixture is tested individually.
        """
        for fixture_path in self.fixture_files:

Similar here - let's swap the loop for dedicated tests.

        # we expect a line in the script's output.
        for attr_key, attr_val in if_attrs.items():
            # Convert from e.g. "criticalWarning" -> "critical_warning"
            snake_key = re.sub(r'(?<!^)(?=[A-Z])', '_', attr_key).lower()

I'd consider this too complex for a unit test. It would be better to hard-code the expected outcomes for a given input.
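
For example, something along these lines keeps the expectation explicit (helper names follow the code quoted above, but the exact signatures may differ):

def test_if_attributes_critical_warning(self):
    device = self.create_mock_device_from_json(
        {'name': '/dev/nvme0'}, if_attributes={'criticalWarning': 0})
    output = parse_if_attributes(device)
    # hard-coded expectation instead of re-deriving snake_case in the test
    self.assertIn('critical_warning', output)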

Labels
Caracal (Targets the Caracal OpenStack release) · enhancement (New feature or request) · size: l
3 participants