unresolved schemas #76
SEC replied: All XBRL taxonomies currently accepted in EDGAR filings are posted on https://www.sec.gov/info/edgar/edgartaxonomies.shtml. An XML version can be found at https://www.sec.gov/info/edgar/edgartaxonomies.xml. We do not maintain anything similar for what has historically been accepted, but you can find the information in the latest Release Notes, for example Figure 1 in https://xbrl.sec.gov/doc/releasenotes-2022-draft.pdf

So does this mean we can poll a website once after startup to get the list, instead of hardcoding it? (Maybe on demand for non-SEC users.)
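If the list were polled once at startup, a minimal sketch could look like the following. The entry layout (Loc/Namespace/Href elements) is an assumption to be verified against the actual XML, and the User-Agent string is a placeholder, since the SEC asks automated clients to identify themselves:

```python
import urllib.request
import xml.etree.ElementTree as ET

EDGAR_TAXONOMY_LIST = "https://www.sec.gov/info/edgar/edgartaxonomies.xml"

def fetch_edgar_ns_schema_map() -> dict:
    """Download the EDGAR taxonomy list and build a namespace -> schema URL dict."""
    req = urllib.request.Request(
        EDGAR_TAXONOMY_LIST,
        headers={"User-Agent": "py-xbrl-example you@example.com"},  # placeholder identity
    )
    with urllib.request.urlopen(req) as response:
        root = ET.fromstring(response.read())
    ns_map = {}
    # "Loc", "Namespace" and "Href" are assumptions about the entry layout;
    # check them against the real file before relying on this.
    for loc in root.iter("Loc"):
        namespace = loc.findtext("Namespace")
        href = loc.findtext("Href")
        if namespace and href:
            ns_map[namespace] = href
    return ns_map
```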
It would be possible to query the XML version of the EDGAR taxonomies list instead of hardcoding it into the library. Keep in mind, however, that this only applies to SEC EDGAR submissions. I deliberately did not include this functionality because I did not want to optimize py-xbrl for a specific XBRL source, but wanted to keep it as general as possible for all XBRL documents. If I were to include such functionality, I would decouple it modularly from the xbrl parser core modules (the core modules being …).
Unfortunately the SEC drops old items from the XML, but this would still give us ten years to add anything in the XML to GitHub, which is better than doing it every year. My proposal would be to keep the existing items in taxonomy.ns_schema_map but provide a utility function that grabs the latest ones from the SEC, so the user can extend the dict (also with custom ones).

This eliminates the user having to write the download/parsing, and us having to update the lib every year. I will chase the SEC up to get a list with older items, so there might be more URLs in get_SEC_schemas. My implementation: grab.txt
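grab.txt itself isn't reproduced here, but the proposed API would roughly amount to the pattern below. get_SEC_schemas is the name used above; its body, the import path, and the example custom entry are hedged reconstructions, with fetch_edgar_ns_schema_map referring to the earlier sketch:

```python
# Hedged sketch: keep the hardcoded defaults shipped with the library and let
# the user opt in to refreshing/extending them.
from xbrl.taxonomy import ns_schema_map  # assumed import path in py-xbrl

def get_SEC_schemas() -> dict:
    """Utility proposed above: fetch the latest namespace -> schema URL map
    from the SEC list (see the fetch_edgar_ns_schema_map sketch)."""
    return fetch_edgar_ns_schema_map()

# Extend, don't replace, the static defaults...
ns_schema_map.update(get_SEC_schemas())
# ...and add custom (non-SEC) entries the same way:
ns_schema_map.update({
    "http://example.com/taxonomy/2022": "http://example.com/taxonomy/2022/example.xsd",
})
```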
The SEC also suggested this file; it contains all the old schemas. We should add these to the hardcoded ones and poll the new ones less frequently.
Yes, that's a good resource. The XML file also contains the namespace-schema mapping. It is probably best to create an abstraction layer above the core parsing modules and implement both ways simultaneously.
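One way to read "implement both ways simultaneously" is a small resolver layered above the core modules that consults the static map first and falls back to the remote list only on a miss. The class and method names below are illustrative, not py-xbrl API:

```python
from typing import Callable, Dict, Optional

class SchemaResolver:
    """Resolve namespace -> schema URL from a static map first, falling back
    to a remote loader (e.g. the EDGAR list) on the first miss."""

    def __init__(self, static_map: Dict[str, str],
                 remote_loader: Optional[Callable[[], Dict[str, str]]] = None):
        self._map = dict(static_map)
        self._remote_loader = remote_loader
        self._remote_loaded = False

    def resolve(self, namespace: str) -> Optional[str]:
        if namespace not in self._map and self._remote_loader and not self._remote_loaded:
            # Query the remote list lazily, and only once per resolver.
            self._map.update(self._remote_loader())
            self._remote_loaded = True
        return self._map.get(namespace)
```

A caller could then wire it up as `SchemaResolver(taxonomy.ns_schema_map, remote_loader=fetch_edgar_ns_schema_map)`, keeping the core parsing modules unaware of where the mapping came from.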
Sounds good, I would still use my API suggestion.

I parsed the Arelle hardcoded ones into Python dict format.
I extended the hardcoded list with these in #77 as a first step.
Thank you! I will check and merge it in the following days.
I only switched to the https protocol for URLs that would auto-redirect from http to https anyway; otherwise time is wasted on the redirect or certificate checking.
Now that we're in 2022, filings are using the 2022 taxonomies. This means the library may fail to parse filings that assume the presence of those common taxonomies. @mrx23dot, would you suggest using the function you wrote above (get_SEC_schemas) to fill the gaps?
@Ajmed I think the suggestion of @mrx23dot is quite good, and I am planning to implement a similar function that queries the common taxonomy file for SEC submissions. Normally, if every submission strictly followed the XBRL standard, we wouldn't need this function. But I understand that it is really helpful if you want to parse SEC submissions from all companies (including those that fail to fully comply with the standard). It is also probably better than keeping a static list that does the namespace-to-schemaUrl mapping.

The reason why I haven't implemented it yet is that I want to implement advanced caching for this file. The library should not automatically download the file from the SEC servers whenever the taxonomy module is imported. On the other hand, the library should also not download it once and never update the local copy.
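A middle ground between downloading on every import and downloading once forever is a max-age cache. A minimal sketch, where the cache path, the 30-day interval, and the User-Agent string are illustrative choices:

```python
import os
import time
import urllib.request

EDGAR_TAXONOMY_LIST = "https://www.sec.gov/info/edgar/edgartaxonomies.xml"

def cached_taxonomy_list(cache_path: str, max_age: float = 30 * 24 * 3600) -> bytes:
    """Return the taxonomy list, re-downloading only if the local copy is
    missing or older than max_age seconds (default: 30 days)."""
    fresh = (os.path.exists(cache_path)
             and time.time() - os.path.getmtime(cache_path) < max_age)
    if not fresh:
        req = urllib.request.Request(
            EDGAR_TAXONOMY_LIST,
            headers={"User-Agent": "py-xbrl-example you@example.com"},  # placeholder identity
        )
        with urllib.request.urlopen(req) as response:
            data = response.read()
        with open(cache_path, "wb") as f:
            f.write(data)
        return data
    with open(cache_path, "rb") as f:
        return f.read()
```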
@manusimidt makes sense. No rush; I wouldn't want extra things entering the library that become cruft. In the meantime I've used the above code, and it's successfully parsing the ill-formed filings.
Got some more unresolved schemas.
As I understand it, these are not real URIs, so what's the official way to resolve them?
There must be a way to look these up instead of hardcoding them. Let me ask the SEC.