NC | Multi Protocol Access | List object with conflicting ownership #8751

Open · wants to merge 1 commit into master
27 changes: 20 additions & 7 deletions src/sdk/namespace_fs.js
@@ -145,6 +145,7 @@ function _get_filename(file_name) {
}
return file_name;
}

/**
* @param {fs.Dirent} first_entry
* @param {fs.Dirent} second_entry
@@ -877,13 +878,7 @@ class NamespaceFS {

const prefix_dir_key = prefix.slice(0, prefix.lastIndexOf('/') + 1);
await process_dir(prefix_dir_key);
await Promise.all(results.map(async r => {
if (r.common_prefix) return;
const entry_path = path.join(this.bucket_path, r.key);
//If entry is outside of bucket, returns stat of symbolic link
const use_lstat = !(await this._is_path_in_bucket_boundaries(fs_context, entry_path));
r.stat = await nb_native().fs.stat(fs_context, entry_path, { use_lstat });
}));
await this.validate_results(results, fs_context);
Contributor:
I am not sure this is a good name for this function; there is not much validation going on here. Maybe change it to something like _get_stats_for_list_object. Also, in my opinion, if you would like to move this into a function, it would be better to move the lambda function to a separate function instead of the whole Promise.all. WDYT?
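
A minimal sketch of that suggestion, keeping Promise.all at the call site and extracting only the lambda (the method name _get_entry_stat is illustrative, not existing code):

    async _get_entry_stat(fs_context, r) {
        if (r.common_prefix) return;
        const entry_path = path.join(this.bucket_path, r.key);
        // If the entry is outside the bucket boundaries, stat the symlink itself
        const use_lstat = !(await this._is_path_in_bucket_boundaries(fs_context, entry_path));
        r.stat = await nb_native().fs.stat(fs_context, entry_path, { use_lstat });
    }

    // the call site then stays a one-liner:
    await Promise.all(results.map(r => this._get_entry_stat(fs_context, r)));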

const res = {
objects: [],
common_prefixes: [],
@@ -924,6 +919,24 @@ class NamespaceFS {
}
}

async validate_results(results, fs_context) {
Contributor:
If you remove entries from the list right before it is returned to the user, you might be left with fewer than 1000 objects even if you have access to more than 1000 objects. Can you please check that?

await Promise.all(results.map(async r => {
if (r.common_prefix) return;
const entry_path = path.join(this.bucket_path, r.key);
//If entry is outside of bucket, returns stat of symbolic link
const use_lstat = !(await this._is_path_in_bucket_boundaries(fs_context, entry_path));
try {
r.stat = await nb_native().fs.stat(fs_context, entry_path, { use_lstat });
} catch (err) {
Contributor:
Notice that you throw for all errors and not just EACCES (permission denied). Is this intentional? Maybe it would be better to create a separate function that doesn't throw for selected errors, like we do in unlink_ignore_enoent. That way we can reuse it in case we would like to handle permission issues in other places. See stat for the possible error list.

Contributor:
Also, that way we will be able to handle what happens in case of EACCES outside the catch. Something like:

const stat = await stat_ignore_eaccess(fs_context, entry_path, { use_lstat });
if (!stat) {
    ....
}
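
A possible shape for that helper, modeled on unlink_ignore_enoent (the name stat_ignore_eaccess is the reviewer's suggestion, not an existing utility):

    async function stat_ignore_eaccess(fs_context, entry_path, opts) {
        try {
            return await nb_native().fs.stat(fs_context, entry_path, opts);
        } catch (err) {
            // swallow only permission errors; anything else still propagates
            if (err.code !== 'EACCES') throw err;
        }
    }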

dbg.warn('NamespaceFS: validate_results: could not access file', entry_path, ', skipping...');
const index = results.indexOf(r);
if (index > -1) {
results.splice(index, 1);
Contributor:
What about the bucket owner? Do we always want to remove this entry from the list? This should be compatible with S3.
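
One way to express that, assuming the caller can tell validate_results whether the requester is the bucket owner (the is_bucket_owner flag is hypothetical):

    async validate_results(results, fs_context, is_bucket_owner) {
        // hypothetical flag: the bucket owner sees every entry, matching S3,
        // so skip the permission-based filtering entirely
        if (is_bucket_owner) return;
        // ...existing filtering logic...
    }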

Contributor Author:
@romayalon When removing an item from the list, there is a chance list-objects returns fewer than 1000 items. One solution would be to check access while adding each item to the results array, so we can still return 1000 items.
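
A rough sketch of that idea, under the assumption that entries are checked while being accumulated rather than spliced afterwards; stat_ignore_eaccess is the hypothetical helper discussed above, and limit stands for the requested max-keys (1000 by default):

    async _add_entry_if_accessible(results, fs_context, r, limit) {
        if (results.length >= limit) return false; // response is already full
        if (!r.common_prefix) {
            const entry_path = path.join(this.bucket_path, r.key);
            const use_lstat = !(await this._is_path_in_bucket_boundaries(fs_context, entry_path));
            const stat = await stat_ignore_eaccess(fs_context, entry_path, { use_lstat });
            if (!stat) return true; // inaccessible: skip it without consuming a slot
            r.stat = stat;
        }
        results.push(r);
        return true;
    }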

}
}
}));
}

/////////////////
// OBJECT READ //
/////////////////
135 changes: 135 additions & 0 deletions src/test/unit_tests/test_nsfs_access.js
@@ -9,6 +9,11 @@ const fs_utils = require('../../util/fs_utils');
const nb_native = require('../../util/nb_native');
const test_utils = require('../system_tests/test_utils');
const fs = require('fs');
const { TMP_PATH } = require('../system_tests/test_utils');
const NamespaceFS = require('../../sdk/namespace_fs');
const buffer_utils = require('../../util/buffer_utils');
const endpoint_stats_collector = require('../../sdk/endpoint_stats_collector');
const SensitiveString = require('../../util/sensitive_string');

const new_umask = process.env.NOOBAA_ENDPOINT_UMASK || 0o000;
const old_umask = process.umask(new_umask);
@@ -162,7 +167,137 @@ mocha.describe('new tests check', async function() {
assert.equal(err.code, 'EACCES');
}
});

mocha.describe('list object access check', function() {
this.timeout(10 * 60 * 1000); // eslint-disable-line no-invalid-this

const key_files_set_first = make_keys(10, i => `small_key_files_set_first${i}`);
const key_files_set_second = make_keys(10, i => `small_key_files_set_second${i}`);
const access_src_bkt = 'access_src';
const tmp_fs_path = path.join(TMP_PATH, 'test_namespace_access_fs');
const ns_tmp_bucket_path = path.join(tmp_fs_path, access_src_bkt);
const first_file_path = path.join(ns_tmp_bucket_path, 'small_key_files_set_first1');
const bucket1 = 'access_bucket1';
const ns_src = new NamespaceFS({
bucket_path: ns_tmp_bucket_path,
bucket_id: '5',
namespace_resource_id: undefined,
access_mode: undefined,
versioning: undefined,
force_md5_etag: false,
stats: endpoint_stats_collector.instance(),
});
const custom_dummy_object_sdk1 = make_custom_dummy_object_sdk(200, 200);
const custom_dummy_object_sdk2 = make_custom_dummy_object_sdk(300, 200);
const custom_dummy_object_sdk3 = make_custom_dummy_object_sdk(400, 400);
mocha.before(async function() {
await fs_utils.create_fresh_path(tmp_fs_path, 0o777);
await fs_utils.file_must_exist(tmp_fs_path);
await fs_utils.create_fresh_path(ns_tmp_bucket_path, 0o770);
await fs_utils.file_must_exist(ns_tmp_bucket_path);
await fs.promises.chmod(tmp_fs_path, 0o777);
await fs.promises.chmod(ns_tmp_bucket_path, 0o770);
await fs.promises.chown(ns_tmp_bucket_path, custom_dummy_object_sdk1.requesting_account.nsfs_account_config.uid,
custom_dummy_object_sdk1.requesting_account.nsfs_account_config.gid);
});
mocha.after(async function() {
await fs_utils.folder_delete(ns_tmp_bucket_path);
await fs_utils.folder_delete(tmp_fs_path);
});

mocha.it('list object with inaccessible item, same UID and GID', async function() {
await upload_objects(key_files_set_first, custom_dummy_object_sdk1, bucket1, ns_src);
// change ownership of one file so the account cannot access it
await fs.promises.chown(first_file_path, 999, 999);
const r = await ns_src.list_objects({
bucket: bucket1,
}, custom_dummy_object_sdk1);
// the inaccessible file is skipped; the rest of the files are listed
assert_list_items(r, [...key_files_set_first], 9);
});

mocha.it('list object with different account and same GID', async function() {
await upload_objects(key_files_set_second, custom_dummy_object_sdk2, bucket1, ns_src);
const r = await ns_src.list_objects({
bucket: bucket1,
}, custom_dummy_object_sdk2);
assert_list_items(r, [...key_files_set_first, ...key_files_set_second], 19);
});

mocha.it('list object with different account and different GID', async function() {
try {
await upload_objects(["Banana"], custom_dummy_object_sdk3, bucket1, ns_src);
} catch (err) {
assert.strictEqual(err instanceof Error, true);
assert.strictEqual(err.code, 'EACCES');
}
const r = await ns_src.list_objects({
bucket: bucket1,
}, custom_dummy_object_sdk2);
assert_list_items(r, [...key_files_set_first, ...key_files_set_second], 19);
});
});
});

async function upload_objects(keys, custom_object_sdk, user_bucket, user_ns) {
return Promise.all(keys.map(async key => {
await user_ns.upload_object({
bucket: user_bucket,
key,
content_type: 'application/octet-stream',
source_stream: buffer_utils.buffer_to_read_stream(null),
size: 0
}, custom_object_sdk);
}));
}

function make_custom_dummy_object_sdk(uid, gid) {
return {
requesting_account: {
force_md5_etag: false,
nsfs_account_config: {
uid: uid,
gid: gid,
}
},
abort_controller: new AbortController(),
throw_if_aborted() {
if (this.abort_controller.signal.aborted) throw new Error('request aborted signal');
},

read_bucket_sdk_config_info(name) {
return {
bucket_owner: new SensitiveString('dummy-owner'),
owner_account: {
id: 'dummy-id-123',
}
};
}
};
}

/**
* validate list objects counts and items
* @param {Object} r
* @param {string[]} splice_array
* @param {number} object_items_count
*/
function assert_list_items(r, splice_array, object_items_count) {
assert.equal(r.objects.length, object_items_count);
const index = splice_array.indexOf('small_key_files_set_first1');
splice_array.splice(index, 1);
assert.deepStrictEqual(r.objects.map(it => it.key), splice_array.sort());
}

/**
* @param {number} count
* @param {(i:number)=>string} gen
* @returns {string[]}
*/
function make_keys(count, gen) {
const arr = new Array(count);
for (let i = 0; i < count; ++i) arr[i] = gen(i);
arr.sort();
Object.freeze(arr);
return arr;
}