LDAP Mapping Collision of 1 user prevents all new LDAP users from being synced into ownCloud #751
Apparently it can be solved by setting the config option `reuse_accounts` to `"yes"`.
Hi @dsobon, thanks for the hint. Setting `reuse_accounts` to `"yes"` does work around the issue. I'm saying workaround instead of solution because it has side effects of its own (see the discussion below).
Yes, the collision will then happen on directory_uuid. I only managed to trigger it once, a few days ago... if it happens again, I will file a bug report with support, as we have a support contract. I am not sure how the problem is happening for us, as we are also in the middle of migrating all AD users from a username-based SPN to an email-based one. I do know it happens when a user is deleted and then recreated with the same username but a different DN (and hence a different directory_uuid) - that should be covered by reuse_accounts.
Here's a fix for when the ObjectGUID (directory_uuid) and DN (ldap_dn) change, but the UPN (owncloud_name) does not change.
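The patch itself isn't attached here, but the idea can be sketched as follows (a hypothetical illustration only: SQLite stands in for MariaDB, the table is a simplified `oc_ldap_user_mapping`, and `remap_if_upn_matches` is a made-up helper, not ownCloud code):

```python
import sqlite3

# Hypothetical sketch of the remap idea: when an incoming LDAP user has a new
# ObjectGUID (directory_uuid) and DN, but its UPN (owncloud_name) already
# exists in the mapping table, update the stale row instead of colliding.
# SQLite stands in for MariaDB; column names follow oc_ldap_user_mapping.

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE ldap_user_mapping (
    owncloud_name  TEXT PRIMARY KEY,
    ldap_dn        TEXT,
    directory_uuid TEXT UNIQUE)""")

# Stale mapping left over from the deleted user
db.execute("INSERT INTO ldap_user_mapping VALUES (?, ?, ?)",
           ("bob@example.org",
            "uid=bob.example,cn=users,ou=deptA,dc=example,dc=org",
            "old-guid-1111"))

def remap_if_upn_matches(db, owncloud_name, new_dn, new_uuid):
    """Re-point an existing mapping at the re-created LDAP object."""
    cur = db.execute(
        "UPDATE ldap_user_mapping SET ldap_dn = ?, directory_uuid = ? "
        "WHERE owncloud_name = ?",
        (new_dn, new_uuid, owncloud_name))
    return cur.rowcount == 1  # True if exactly one stale row was updated

updated = remap_if_upn_matches(
    db, "bob@example.org",
    "uid=bob.example,cn=users,ou=deptB,dc=example,dc=org",
    "new-guid-2222")
print(updated)  # True: the old row now points at the new DN and GUID
```

The key point is that the UPDATE is keyed on the unchanged `owncloud_name`, so no row is deleted and the link to the accounts table is preserved.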
Honestly... the mapping table is lacking once the link to the accounts table is lost. Ideally it should have - separate from owncloud_name - the UPN, maybe the sAMAccountName, and the email as columns... and, for cleanup purposes, a timestamp column recording when the mapping was last validated.
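A sketch of what such a richer mapping table could look like (hypothetical: only `owncloud_name`, `ldap_dn` and `directory_uuid` correspond to real columns in `oc_ldap_user_mapping`; the rest are the proposed additions, shown here with SQLite):

```python
import sqlite3

# Sketch of the richer mapping table proposed above. Only owncloud_name,
# ldap_dn and directory_uuid exist in the real oc_ldap_user_mapping table;
# upn, samaccountname, email and last_validated are hypothetical additions.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE ldap_user_mapping (
    owncloud_name  TEXT PRIMARY KEY,  -- existing column
    ldap_dn        TEXT,              -- existing column
    directory_uuid TEXT UNIQUE,       -- existing column
    upn            TEXT,              -- proposed: userPrincipalName
    samaccountname TEXT,              -- proposed: AD sAMAccountName
    email          TEXT,              -- proposed: mail attribute
    last_validated INTEGER            -- proposed: unix time of last sync check
)""")

# With several candidate keys stored, a lost link could be re-established by
# matching on any one attribute that did not change during the re-create.
cols = [r[1] for r in db.execute("PRAGMA table_info(ldap_user_mapping)")]
print(cols)
```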
Or a more reliable patch... which also requires one less sync step (3 -> 2).
Just some clarifications:
When you move a user in LDAP, what usually happens (at least from ownCloud's perspective) is that the old user is removed and a new one is created. Note that we're just talking about one specific case, which is moving a user in LDAP. We also need to consider cases such as a collision with a local user (local user "user001" exists, so LDAP user "user001" can't be mapped), or a collision with another user in a different part of the tree. As far as I know, the "reuse_accounts" option covers the case of movements in the LDAP directory. However, it could cause problems if multiple LDAP users (from different parts of the LDAP tree) could be mapped to the same account. It's very likely that the second account, which should cause the collision, would instead be mapped over the previous account.
Yes, the problem is "from ownCloud's perspective", because it does not have the (correct) logic to handle lost/disconnected/stale mappings. Currently the code does not handle the collision, so what is the solution for the end-user admin?
I don't think we can remap an LDAP user automatically. If a remap is needed, I think there should be admin intervention somehow. As said, the entryuuid (or whatever attribute is used as uuid) must remain the same in order to detect a movement in the LDAP. If the entryuuid is different, the user is different. So, for your case, if you moved the user in LDAP and the entryuuid changed, you should check why that happened and how you can prevent it. If the entryuuid has changed while moving the user, a remap might be needed. This is currently only possible with the "reuse_accounts" option, but it could be problematic because it will affect all the ownCloud accounts. At the moment there is no other option, so we might need to include an occ command for the admin to remap a user.
Yes, we are changing to the UPN (which on the AD side has also been migrated from sam@domain to the email address) on ownCloud, which is a pending task for me to do on ownCloud production. It can legitimately happen in these scenarios (there is a reason, though I do not know the technical justification, why they delete/re-create rather than update the attributes):
The above scenarios are what may (and will) happen in a large enterprise environment, so it is difficult for me to prevent them. Also, other internal web applications have not suffered any issues of this type (every app has a different way of deciding what the primary key from the source is and what the linking key is; if it's a single key, or if all but one key changes, then the link is lost). In the end, I do not want to touch the database to manually delete entries, as that is far riskier than having optional logic implemented in user_ldap.
I share @jvillafanez's point of view here. The LDAP server in question serves some ~30k user accounts. While in this instance it actually was the same user being recreated at a different location, it could very well be an entirely different user (think of common names like John Smith). This other user would then automatically see all the ownCloud data of the previous user, so we cannot simply set `reuse_accounts` to `"yes"`. But this is not really the issue here. The main issue (for us) is not that mapping collisions can appear, but that a single mapping collision blocks all other new LDAP accounts from being synced.
@jvillafanez Please elaborate on whether we have options for the migration. Please summarize the case; then we have to discuss this on the product board. @cdamken
**Mapping collision stops syncing**

I think this will need a redesign of the syncing mechanism; I'm not finding a good way to fix this issue otherwise.

Basically, what happens is: a mapping collision happens -> LDAP returns fewer users than were asked for -> syncing stops, thinking there are no more users. We could change the condition so that, if there are fewer users, we request the next bunch and stop only if there are 0. But this is still fragile, because there could be enough collisions for the next bunch to return 0 results while there are still users to be synced, so changing the condition still leaves room for error. In addition, note that core doesn't really know about the mapping collision, since it's handled by the LDAP app. This means that core will ask for users 0-49, 50-99, etc.; if there is a mapping collision in the first bunch, the LDAP app will need to track that collision, because core will ask for users 50-99, but LDAP will need to get users 54-103. I'd expect that if I ask again out of the blue for users 50-99, LDAP returns the same users, so LDAP will have to somehow track which users couldn't be mapped in order to skip them.

**Manually remapping LDAP users**

This doesn't seem like a good idea. All the user syncing is done automatically via an occ command, and this command is usually set up in a cron job. We have 3 fields we could touch in the DB: owncloud_user, ldap_dn and directory_uuid.
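The paging problem described above can be simulated in a few lines (a toy model, not ownCloud code; the page size, user names and collision set are all made up):

```python
# Simulation of the paged sync described above: the core asks for users in
# fixed-size pages and stops as soon as a page comes back smaller than
# requested. A single mapping collision (a user the LDAP app drops from its
# results) makes an early page short, so later users are never synced.

PAGE_SIZE = 50
ldap_users = [f"user{i:03d}" for i in range(120)]   # 120 users in the directory
collisions = {"user010"}                            # one user fails to map

def fetch_page(offset, limit):
    """LDAP app: return the mappable users in the requested window."""
    window = ldap_users[offset:offset + limit]
    return [u for u in window if u not in collisions]

def sync_all():
    """Core: page through users, stopping on the first short page."""
    synced, offset = [], 0
    while True:
        page = fetch_page(offset, PAGE_SIZE)
        synced.extend(page)
        if len(page) < PAGE_SIZE:   # short page => assume no more users
            break
        offset += PAGE_SIZE
    return synced

result = sync_all()
print(len(result))  # 49, not 119: one collision halted the sync after page 1
```

The first page asks for 50 users, gets 49 back because of the collision, and the loop concludes there is nothing left, leaving 70 perfectly valid users unsynced. This is the fragility the comment describes: any fix that merely tweaks the stop condition still lets collisions shift the windows.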
Regardless of the outcome, most of these cases should go through the sync job and, as such, should be fixed without intervention, but most of them can't be handled automatically without creating a security risk. The only special case would be if the user is moved in LDAP and somehow the directory_uuid also changes. Assuming that this is the only collision that happens, the "reuse_accounts" option should cover it. Anyway, I think the priority should be the syncing mechanism, and we probably need to redesign it.
Discuss in product board
In my interpretation,
We will continue to look at each customer. It is part of our Enterprise support offering. Errors go into the logfile.
We should check whether #757 could be helpful for the customers or not. The feature is technically ready, but it's currently in draft because it isn't clear if we want it in the app or not. |
Customer applied custom patch. Is happy now. |
ownCloud appears to handle users that are renamed and/or moved to a different position in the LDAP tree quite well.

However, when a user (`bob.example` in the example below) is deleted and then re-created at a different position in the LDAP tree (leading to a "mapping collision" due to having a different LDAP `entryUUID` in addition to the changed DN), the LDAP sync starts showing some issues:

- Existing users (`alice.example` in the example below) are synced without any issues.
- New users (`charlie.example` in the example below) are no longer synced once even a single re-created user exists, and the new users do not show up in `occ user:list`.
- `occ user:sync` finishes without reporting any of these errors.

Steps to reproduce
1. Create the following users in LDAP:
   - `uid=alice.example,cn=users,ou=deptA,dc=example,dc=org`
   - `uid=bob.example,cn=users,ou=deptA,dc=example,dc=org`
   - `uid=charlie.example,cn=users,ou=deptA,dc=example,dc=org`
2. Sync two of them into ownCloud:
   - `occ user:sync "OCA\User_LDAP\User_Proxy" -u alice.example`
   - `occ user:sync "OCA\User_LDAP\User_Proxy" -u bob.example`
3. Verify both users are known: `occ user:list .example`
4. Delete `uid=bob.example,cn=users,ou=deptA,dc=example,dc=org` in LDAP and re-create it as `uid=bob.example,cn=users,ou=deptB,dc=example,dc=org` (same name, different OU).
5. Run a full sync: `occ user:sync "OCA\User_LDAP\User_Proxy"`
6. Check the user list again: `occ user:list .example`
Expected behaviour
Actual behaviour
Server configuration
Operating system: Debian 10
Web server: Apache2 2.4.38-3+deb10u7
Database: MariaDB (Galera cluster)
PHP version: 7.3.31-1~deb10u1
ownCloud version: ownCloud Enterprise 10.10.0.3 (clustered)
Updated from an older ownCloud or fresh install: first_install_version: 10.4.1.3
Signing status (ownCloud 9.0 and above):
Failed integrity check due to changed files provided by ownCloud (see ownCloud Support Case #00018427). However, the same issue appears on our PROD environment (10.7.0.4, where the integrity check passes).
The content of config/config.php:
List of activated apps:
LDAP configuration (delete this part if not used)
Logs
ownCloud log (data/owncloud.log)