DELL R430 raid_config #4


Open
lovilak opened this issue Jun 1, 2018 · 17 comments

Comments

@lovilak

lovilak commented Jun 1, 2018

I have an issue with this hardware.
When I apply this playbook:

- hosts: all
  gather_facts: no
  roles:
    - role: drac
      drac_address: "{{ ip_drac }}"
      drac_username: 'root'
      drac_password: 'calvin'
      drac_bios_config:
        NumLock: 'On'
        SysProfile: 'PerfOptimized'
      drac_raid_config:
        - name: SYSTEM
          raid_level: 1
          span_length: 2
          span_depth: 1
          pdisks:
            - 'Disk.Bay.0:Enclosure.Internal.0-1:RAID.Integrated.1-1'
            - 'Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1'
        - name: DATA
          raid_level: 5
          span_length: 4
          span_depth: 1
          pdisks:
            - 'Disk.Bay.2:Enclosure.Internal.0-1:RAID.Integrated.1-1'
            - 'Disk.Bay.3:Enclosure.Internal.0-1:RAID.Integrated.1-1'
            - 'Disk.Bay.4:Enclosure.Internal.0-1:RAID.Integrated.1-1'
            - 'Disk.Bay.5:Enclosure.Internal.0-1:RAID.Integrated.1-1'
      reboot: True
      timeout: 600

I must first force the disks from Non-RAID to RAID mode manually, because otherwise I get:
The full traceback is:
  File "/tmp/ansible_A0_fAK/ansible_module_drac.py", line 1048, in commit_raid
    bmc.commit_pending_raid_changes(controller, False)
  File "/usr/lib/python2.7/site-packages/dracclient/client.py", line 478, in commit_pending_raid_changes
    cim_name='DCIM:RAIDService', target=raid_controller, reboot=reboot)
  File "/usr/lib/python2.7/site-packages/dracclient/resources/job.py", line 151, in create_config_job
    expected_return_value=utils.RET_CREATED)
  File "/usr/lib/python2.7/site-packages/dracclient/client.py", line 673, in invoke
    raise exceptions.DRACOperationFailed(drac_messages=messages)

And once I have made this manual change and launched my playbook, it creates the wrong virtual volume size for the RAID 5 DATA array (4 x 1716352 MB disks yield a 1716351 MB volume).

@markgoddard

Hi @lovilak, there is code in the drac ansible module to perform the conversion of disks from non-RAID to RAID mode if necessary. I wonder why this is not being executed.

Are these issues that you have time to investigate and/or fix?

@lovilak

lovilak commented Jun 7, 2018

No, I don't know where to look.

@markgoddard

The code for the drac module is here.

This is where it decides which disks to convert: https://github.com/stackhpc/drac/blob/master/library/drac.py#L551.

The actual conversion is done here: https://github.com/stackhpc/drac/blob/master/library/drac.py#L1161.
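In essence, the selection step linked above picks out the requested disks that are still in Non-RAID mode. A simplified sketch of that idea (not the module's actual code; the `PhysicalDisk` stand-in and the `'non-RAID'`/`'ready'` status strings are assumptions modelled on what python-dracclient reports):

```python
# Hypothetical sketch: filter the requested physical disks down to those
# still in 'non-RAID' mode, so only they are converted before the virtual
# disk is created. PhysicalDisk is a stand-in for dracclient's namedtuple.
from collections import namedtuple

PhysicalDisk = namedtuple('PhysicalDisk', ['id', 'raid_status'])

def disks_needing_conversion(requested_ids, all_disks):
    """Return IDs of requested disks that are not yet in RAID mode."""
    by_id = {disk.id: disk for disk in all_disks}
    return [disk_id for disk_id in requested_ids
            if by_id[disk_id].raid_status == 'non-RAID']

disks = [
    PhysicalDisk('Disk.Bay.2:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'non-RAID'),
    PhysicalDisk('Disk.Bay.3:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'ready'),
]
requested = [d.id for d in disks]
print(disks_needing_conversion(requested, disks))  # only the Bay.2 disk ID
```

If this filter returns an empty list on your controller even though the disks are in Non-RAID mode, the status string reported by the firmware would be the first thing to check.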

@lovilak

lovilak commented Jul 27, 2018

Could you show me where to look for the RAID 5 size problem?

@markgoddard

I think this is where the size is calculated: https://github.com/stackhpc/drac/blob/master/library/drac.py#L528, as the minimum of all physical disks. We then pass this into the virtual disk creation, multiplied by the span depth: https://github.com/stackhpc/drac/blob/master/library/drac.py#L594.
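In other words, the current logic boils down to something like this (a simplified sketch of the linked code, not a verbatim copy):

```python
# Sketch of the existing size calculation: take the smallest physical
# disk, then multiply by span depth at virtual disk creation time.
# Parity overhead for RAID 5 is never accounted for.
def vd_size_mb(pdisk_sizes_mb, span_depth):
    min_size_mb = min(pdisk_sizes_mb)
    return min_size_mb * span_depth

# Four ~1.7 TB disks in RAID 5 with span_depth = 1: the virtual disk
# ends up the size of a single disk, matching the report above.
print(vd_size_mb([1716352, 1716352, 1716352, 1716352], 1))  # 1716352
```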

@lovilak

lovilak commented Jul 27, 2018

Regarding the size problem:

I tried to create a RAID 5 with 4 x 1.7 TB pdisks and set a span_depth of 1, so it created
min_size_mb x 1 = 1.7 TB
and if I set a span_depth of 3, which is my goal, I get an error: Provided Physical disk not valid for this operation.

@lovilak lovilak closed this as completed Jul 27, 2018
@lovilak lovilak reopened this Jul 27, 2018
@markgoddard

For RAID5 with 4 disks, don't you need a 2x2 configuration? span_depth = 2, span_length = 2?

@markgoddard

Actually, maybe not.

@lovilak

lovilak commented Jul 27, 2018

I think I should have span_length = 4 and span_depth = 1, but this gives me the wrong virtual disk size. I multiplied the min_size_mb variable by 3 to do the trick.

@markgoddard

Oh, so the problem is that it's not accounting for the parity data when calculating the size?

@lovilak

lovilak commented Jul 27, 2018

I think the bug is that the span_depth parameter should accept a value of 3. This is handled by the dracclient Python module. To work around it, I temporarily used min_size_mb x 3.

@markgoddard

So we need a new way to calculate the size when it is RAID5 (or 4?):

size = min_size_mb * (span_length - 1)

Is that correct?

@lovilak

lovilak commented Jul 30, 2018

Hi, sorry for the weekend delay. This would work for RAID 1 and RAID 5, but not for RAID 0 and 6?

@markgoddard

I would only apply that logic to RAID configurations that use parity disks: RAID 4 and 5. Other configurations could use the existing logic. More generally it could be:

parity_disks = 2 if RAID == 6 else 1 if RAID in (4, 5) else 0
length = span_length - parity_disks
depth = span_depth
size = min_size_mb * length * depth

@lovilak

lovilak commented Aug 2, 2018

parity_disks = 2 if RAID == 6 else 1 if RAID in (1, 5) else 0
length = span_length - parity_disks
size = min_size_mb * length

@markgoddard

markgoddard commented Aug 2, 2018

RAID 1 does not have any parity disks; it mirrors the data. We need to multiply by depth to allow for nested RAID, such as RAID 10. We also need to cater for mirrored setups, i.e. RAID 1 and 10:

parity_disks = 2 if RAID == 6 else 1 if RAID in (3, 4, 5) else 0
length = span_length - parity_disks
effective_length = 1 if RAID in (1, 10) else length
size = min_size_mb * effective_length * span_depth
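Turned into a runnable function for checking the arithmetic (a sketch of the formula above; the integer RAID levels are a simplification, since python-dracclient represents levels as strings such as '1+0', which would need mapping):

```python
# Runnable version of the formula above, using integer RAID levels.
# Levels 3, 4 and 5 use one parity disk per span, level 6 uses two,
# and mirrored levels (1, 10) contribute one disk of capacity per span.
def vd_size_mb(raid_level, span_length, span_depth, min_size_mb):
    parity_disks = 2 if raid_level == 6 else 1 if raid_level in (3, 4, 5) else 0
    length = span_length - parity_disks
    effective_length = 1 if raid_level in (1, 10) else length
    return min_size_mb * effective_length * span_depth

# RAID 5 over 4 disks (span_length=4, span_depth=1): capacity of 3 disks.
print(vd_size_mb(5, 4, 1, 1716352))   # 5149056
# RAID 10 over 4 disks (2 mirrored spans): capacity of 2 disks.
print(vd_size_mb(10, 2, 2, 1716352))  # 3432704
```

With the DATA example from this issue, the RAID 5 case now yields 3 x 1716352 MB instead of the size of a single disk.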

@lovilak

lovilak commented Aug 3, 2018

Sounds good to me!

priteau added a commit that referenced this issue Oct 25, 2019
The current calculation of the size_mb parameter is incorrect for RAID
levels using parity disks, such as RAID 5 or RAID 6. It results in a
virtual disk that is much smaller than expected. For example, a RAID 5
would only be as big as the smallest disk.

This commit uses the formula described in issue #4 [1], but adapted to
support all RAID levels listed in python-dracclient [2].

[1] #4 (comment)
[2] https://opendev.org/openstack/python-dracclient/src/tag/3.1.1/dracclient/resources/raid.py#L25-L34

Co-authored-by: Mark Goddard <[email protected]>