[JOSS] Comments #769
Thank you very much for the review, @rcannood! We'll get back to you!
I'm sorry it's taking us so long. We need a bit more time to compile the benchmarks we have. We'll get back in a week.
Hi Alex et al.! How are things going? Let me know if anything is unclear or a misinterpretation on my end. I'd be happy to discuss! :)
Robrecht! Apologies for the slow response! Thank you very much for the in-depth review. It definitely helped us improve the paper!
General issues
Functionality → performance → benchmarks
Documentation
Software paper
All changes discussed in this response can be viewed together in this PR: #779. The up-to-date version of the paper is in the paper branch #666, as required by JOSS. We'd be grateful for guidance on the question regarding Figure 1! We're also happy to further adapt the paper to your suggestions!
Thanks for your hard work @falexwolf and colleagues! I ticked off the check boxes for which I have no further questions or comments. I also updated the progress of my review checklist at openjournals/joss-reviews#4371 (comment) accordingly :) My responses to your comments:
Hey @rcannood, sorry, I also missed the update here!
Looking into making something more up to date here!
They are not. At the moment, we just treat this as tooling for development, and don't publish results. We need to get dedicated hardware set up to have meaningful historical results (since performance can be quite variable between machines otherwise). Additionally, the benchmarks here can be quite specific, and I'm not sure they would be informative by themselves. Here is an example of output: https://www.astropy.org/astropy-benchmarks/
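For context on the tooling: asv ("airspeed velocity"), which generates the astropy-benchmarks page linked above and drives ivirshup/anndata-benchmarks, discovers plain Python methods by name prefix (`time_` for runtime, `peakmem_` for peak memory). Below is a minimal sketch of what such a benchmark could look like for anndata; the class name, shapes, and density are illustrative assumptions, not the actual suite.

```python
# Hypothetical asv benchmark sketch; shapes and density are illustrative.
import scipy.sparse as sp

import anndata as ad


class ConcatSuite:
    def setup(self):
        # Two sparse ~10,000-cell x 2,000-gene AnnData objects to concatenate.
        self.adatas = [
            ad.AnnData(
                sp.random(10_000, 2_000, density=0.01, format="csr", random_state=i)
            )
            for i in range(2)
        ]

    def time_concat(self):
        # asv reports the runtime of methods prefixed with `time_` ...
        ad.concat(self.adatas)

    def peakmem_concat(self):
        # ... and the peak memory of methods prefixed with `peakmem_`.
        ad.concat(self.adatas)
```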
Hello all! Thank you for the very well-written paper and similarly well-built and well-maintained anndata package. I concur with @rcannood that it is worthy of publication, after a few things have been addressed:
A couple remarks I think warrant some interaction/debate:
I also have a few other comments to share that (while I do not think they block publication) might be helpful if you would like to further polish this submission or intend to submit a follow-up paper in the future:
Thank you all again for the opportunity to review this paper and render remarks! Best, Lance Hepler
Thank you for this in-depth review! It provides very valuable feedback! We'll thoroughly respond, but it's likely gonna take a little time.
Hi @falexwolf! Any updates on this issue? (I feel Lance should have created a separate issue, so the topics could be tackled separately instead of being intermingled, but OK.) Many of my comments are relatively minor, so it'd be a pity for them to hold up the submission of AnnData to JOSS.
Hi @rcannood, @ivirshup and I made a list and worked through many of the items (e.g., benchmarking). We'll definitely get back. I'm sorry that on my end I got much busier with @laminlabs in the past months, but we'll definitely not leave this unfinished. I'll ping Isaac to have another meeting to finalize this.
@ivirshup & I are scheduled to meet today for the first time in 2 months. We split up the work. I'll add my responses here for now and very much hope we can finalize with another comment in this thread later today. I'll address Lance's points in the same order and grouping as he provided them. Regarding the text:
Debate:
Outlook:
Let us follow up with another post to address all outstanding remarks. All changes discussed in this response can be viewed together in this PR: #825. They build on the changes made in response to @rcannood (#779 / #769 (comment)). We'll also address @rcannood's outstanding points beyond #769 (comment). The up-to-date version of the paper is in the paper branch #666, as required by JOSS.
Thank you for the responses! A remaining nitpick. Text:
Debate:
@nlhepler Next time, please create a separate issue for your own comments. It's becoming hard to track what still needs to be done :) |
To recap, I think my remaining issues are:
General issues
Functionality
Documentation
Let's make one more push towards getting AnnData published in JOSS. Let's go @falexwolf @ivirshup!
I'm fully on board, @rcannood - I think the remaining points are what you point out + the font sizes of the figures. The repo location can't be fixed within the paper itself, as there is no metadata section. So, I'll clarify that with the editors. Unfortunately, I can't do the other things independently but need @ivirshup for this. @ivirshup, let's go! We've formally worked on this for more than 2 years (https://github.com/ivirshup/anndata-paper/graphs/contributors) and had prepared it for even longer. 😅 It'd be a shame if we didn't finish it after so much time.
@ivirshup - Trying to ping you here to get this back on your radar. 😅 Let me know if I can help in any way!
I met up with @ivirshup during the scverse hackathon. He mentioned that there are benchmarks on ~100,000 cells run in CI. I'm eager to see the results :)
Hi @ivirshup! Do you have any updates on the availability of the benchmarks on decently sized datasets? |
Having passed the one-year anniversary of the AnnData JOSS submission, I'd really like to have this submission wrapped up. AnnData is a great resource to researchers all over the world, and I think it'd be a pity if this work doesn't get published at JOSS. @falexwolf The paper mentions:
@falexwolf Can you remind me where these results are published? The Benchmarks page only links to a simple benchmark and the ivirshup/anndata-benchmarks repository, but I don't see any actual results. Could you help me find them?
Toned down the performance claims as requested by the editor: #1267
See the discussion in the JOSS review: openjournals/joss-reviews#4371
This completes a sequence of 3 PRs to the paper branch
For reference, the paper branch is here: |
Thanks @falexwolf! This resolves all of my outstanding comments :) Closing this issue.
Hi all!
I'm currently reviewing anndata's submission to JOSS (openjournals/joss-reviews#4371). I think the paper is very well written and is worthy of publication at JOSS. However, in going over the reviewer checklist provided by JOSS, there are some minor outstanding issues which I cannot check off at the moment.
Below is an overview of all my relatively minor comments. I'm happy to discuss them in this thread or in a separate issue, if need be.
Robrecht
General issues
Functionality
There are various claims throughout the paper that anndata takes into account scRNA-seq dataset sizes in terms of CPU and memory usage. For example:
"Due to the increasing scale of data, we emphasized efficient operations with low memory and runtime overhead."
"In particular, AnnData takes great pains to support efficient operations with sparse data."
I totally agree with the authors on this matter. However, I don't see any benchmarks in the paper or the documentation where anndata is compared to SCE, Seurat and/or loom in terms of memory usage, disk usage and/or execution time.
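To make the request concrete, the sketch below shows the kind of measurement meant here on the Python side: wall time and peak memory for loading an `.h5ad` file. The file name is hypothetical, and `memory_profiler` is just one third-party option for sampling peak RSS; a matching script on the R side would cover SCE, Seurat, and loom.

```python
# Minimal sketch: wall time and peak memory for reading an .h5ad file.
# "pbmc_100k.h5ad" is a hypothetical dataset; memory_profiler is a
# third-party package (pip install memory-profiler).
import time

import anndata as ad
from memory_profiler import memory_usage


def read_full():
    return ad.read_h5ad("pbmc_100k.h5ad")


start = time.perf_counter()
peak_mib = max(memory_usage(read_full, interval=0.1))  # samples RSS in MiB
elapsed = time.perf_counter() - start
print(f"read_h5ad: {elapsed:.1f} s, peak RSS ~{peak_mib:.0f} MiB")
```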
Documentation
Software paper
Use `@Wickham2014` instead of `[@Wickham2014]`. (In the pandoc markdown that JOSS papers use, `@Wickham2014` renders as an in-text citation, "Wickham (2014)", whereas `[@Wickham2014]` renders parenthetically as "(Wickham, 2014)".)