
Reorganizing the research paper list #91

Open
MrNeRF opened this issue Apr 2, 2024 · 28 comments

Comments

@MrNeRF
Owner

MrNeRF commented Apr 2, 2024

Due to the overwhelming number of published research papers, the list has become somewhat disorganized. As categories expand and mature, there's a clear need for more fine-grained organization. This document aims to gather ideas and concrete tasks for reorganizing the repository. Contributions in the form of suggestions and assistance are welcome.
Proposed Restructuring

I suggest dividing the list into two main sections: Fundamental Research and Applications. Below are some preliminary thoughts on potential categories and questions for further refinement:

Fundamental Research Categories

  • Classic Work

    • Inquiry into what defines "Classic Work"
  • Compression

  • Regularization and Optimization

  • Rendering

    • Inquiry into more specific sub-categories
  • Reviews

Applications

  • Autonomous Driving

  • Avatars

    • Consideration on splitting into head avatars and full-body avatars
  • Diffusion

    • Important sub-categories to consider
  • Dynamics and Deformation

    • Possible significant sub-categories
  • Editing

  • Language Embedding

  • Mesh Extraction and Physics

    • Should these be split into two categories?
  • Misc

    • Needs cleanup; currently a catch-all category
  • SLAM

  • Sparse

  • Navigation and Autonomous Driving

    • Redundancy with the Autonomous Driving category noted
  • Poses

    • Focus on pose optimization, SFM free poses
  • Large-Scale

    • Pertains to efficient rendering of large-scale scenes

Additional Considerations

  • Potential additions:
    • Medical applications
    • Anti-Aliasing
@MrNeRF MrNeRF changed the title Reorganizing the list Reorganizing the research paper list Apr 2, 2024
@benmodels

I wonder if there is a way of representing the list in an interactive way in a WebUI to allow multiple ways of organizing the list dynamically, filtering the criteria, allowing multiple categories for each publication, etc (e.g. using tableau or gradio, and hosting it on a free server like HF for gradio).
I personally don't have the right expertise to contribute to it, but wanted to bounce the idea in case anyone else has some experience and interest in building such functionality.

@aaronpurewal

I wonder if there is a way of representing the list in an interactive way in a WebUI to allow multiple ways of organizing the list dynamically, filtering the criteria, allowing multiple categories for each publication, etc (e.g. using tableau or gradio, and hosting it on a free server like HF for gradio).

I personally don't have the right expertise to contribute to it, but wanted to bounce the idea in case anyone else has some experience and interest in building such functionality.

Super interesting, can definitely help with building this out.

Organize and filter by different categories, by author, maybe even by affiliation etc.

@Mariusmarten

Mariusmarten commented Apr 2, 2024

Maybe larger collapsible sections could help, collapsible by subtopics and year for example. I assume Sparse includes few-image methods? The Fundamental Research and Applications divide seems like a sensible idea. Thanks for this repo!

@MrNeRF
Owner Author

MrNeRF commented Apr 2, 2024

Great. I let the discussion flow a bit to see what people come up with. Your input is very valuable. I think there is a lot of potential to present the research in a more structured way. People can go to arXiv, but the value here should be faster, more compact access to clustered information. For instance, if I want to know more about better ways of densification, it should be accessible right away. This then leads to an LLM 🙈

@rafaelspring

Some ideas:
IMO the high-level structure should be

  • Papers of Fundamental 3D Reconstruction Research. This is stuff such as the original 3DGS paper but also things such as InstantNGP, https://github.com/lfranke/TRIPS or https://fhahlbohm.github.io/inpc/ which are different to 3DGS but nevertheless super interesting.
  • Papers of 3DGS Improvements. Everything that makes 3DGS faster, better, smaller, more accurate, etc...
  • Papers of 3DGS / NeRF Applications.
  • Resources for 3DGS / NeRF. Links to interesting resources, such as open source implementations (not necessarily Paper code) for various languages, Links to other overview / survey pages, optimization or differentiable rendering libraries, etc..

Within the 3DGS Improvements category I propose the following subgroups:

  • Pruning / Compression / Datastructures / Large Scale: Everything that makes 3DGS smaller and/or allows streaming.

  • Initialisation / Sparse Views: Everything that makes 3DGS less dependent on good initialization. Or ways to improve the initialisation.

  • Projection: Everything that covers how splats are projected and rendered and how the final pixel colour is assembled from splats. This is work such as https://niujinshuchong.github.io/mip-splatting/, https://arxiv.org/abs/2402.00525, 2D GS, etc..

  • Deblurring: Work such as https://benhenryl.github.io/Deblurring-3D-Gaussian-Splatting/ or https://lingzhezhao.github.io/BAD-Gaussians/ or https://spectacularai.github.io/3dgs-deblur/

@DenisKochetov

It would be really convenient to have a table with filters and search. (Sort by date, check if code is available 😄)
I had some experience with plotly, it works well out of the box.

@w-m
Contributor

w-m commented Apr 3, 2024

Keeping track might also be helped by densifying the individual listings a bit. On my monitor only 6-7 papers fit vertically in the current structure. And personally I struggle a bit with the large paper headings, when trying to parse multiple papers quickly.

I prefer the layout of https://github.com/uzh-rpg/event-based_vision_resources, which fits about twice as many papers in the same vertical space. It does not have the collapsible abstracts and full authors names, though. But I think it's a good trade-off. (You could go even further with a single line per entry, like https://github.com/awesome-NeRF/awesome-NeRF, which I consider too dense).

I don't want to derail the discussion on the high-level structure in this issue. The layout of individual listings is probably highly subjective, borderline on bikeshedding. Maybe people can just leave a thumbs-up on this comment for increasing density, and a thumbs-down for keeping the listing layout as-is.

@w-m
Contributor

w-m commented Apr 3, 2024

If somebody wants to play around with interactive ways to organize the data, here's a quick and dirty way to parse the current list into a table (thanks ChatGPT):

https://gist.github.com/w-m/c0c31581f53bc4cf84b4427fdda43219

And the resulting .csv imported into Google Sheets:

https://docs.google.com/spreadsheets/d/1k9KcnI3DUb6BioFOQ_zg_0pUrOdL6n7oZ1N7nYUDbBE/edit?usp=sharing
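For a flavor of what such a quick-and-dirty parse involves, here is a self-contained sketch (not the gist's actual code): it turns a hypothetical `### [Title](url)` entry format into CSV rows. The entry shape and the sample data are assumptions for illustration, not necessarily the repository's exact format.

```python
import csv
import io
import re

# Hypothetical awesome-list excerpt; the real README's format may differ.
SAMPLE = """\
### [3D Gaussian Splatting for Real-Time Radiance Field Rendering](https://arxiv.org/abs/2308.04079)
Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, George Drettakis
### [Mip-Splatting: Alias-free 3D Gaussian Splatting](https://arxiv.org/abs/2311.16493)
Zehao Yu, Anpei Chen, Binbin Huang, Torsten Sattler, Andreas Geiger
"""

def parse_entries(text):
    """Collect (title, url, authors) tuples from '### [Title](url)' headings."""
    entries = []
    title, url = None, None
    for line in text.splitlines():
        m = re.match(r"### \[(.+?)\]\((.+?)\)", line)
        if m:
            title, url = m.groups()
        elif title and line.strip():
            # First non-empty line after a heading is treated as the author list.
            entries.append((title, url, line.strip()))
            title, url = None, None
    return entries

# Write the table to an in-memory CSV, ready for import into Google Sheets.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["title", "url", "authors"])
writer.writerows(parse_entries(SAMPLE))
print(buf.getvalue())
```

From here the CSV can be filtered and sorted in any spreadsheet tool, which is exactly what the shared Google Sheet demonstrates.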

@fhahlbohm

You could also create a .github.io page and do a list like Jon Barron does for his publications (https://jonbarron.info/). I realise that sorting by topic is important for some, but I personally need some way to see the new stuff that comes out.

So I guess my main suggestion is to do some kind of HTML/CSS-based list with some kind of (animated) thumbnail that also supports filtering by category as well as sorting by publication date or citation count.

I think it should not be that difficult to add filtering/sorting to Barron's template.

@MrNeRF
Owner Author

MrNeRF commented Apr 3, 2024

You could also create a .github.io page and do a list like Jon Barron does for his publications (https://jonbarron.info/). I realise that sorting by topic is important for some, but I personally need some way to see the new stuff that comes out.

So I guess my main suggestion is to do some kind of HTML/CSS-based list with some kind of (animated) thumbnail that also supports filtering by category as well as sorting by publication date or citation count.

I think it should not be that difficult to add filtering/sorting to Barron's template.

There is a log already that shows what was added. Generating gifs, etc. is nice. But someone has to do it. Updating the page should be possible in a reasonable amount of time.

Keeping track might also be helped by densifying the individual listings a bit. On my monitor only 6-7 papers fit vertically in the current structure. And personally I struggle a bit with the large paper headings, when trying to parse multiple papers quickly.

I prefer the layout of https://github.com/uzh-rpg/event-based_vision_resources, which fits about twice as many papers in the same vertical space. It does not have the collapsible abstracts and full authors names, though. But I think it's a good trade-off. (You could go even further with a single line per entry, like https://github.com/awesome-NeRF/awesome-NeRF, which I consider too dense).

I fully agree. It could be way more dense. In that case I would rather prefer the second option over the first.

I wonder if there is a way of representing the list in an interactive way in a WebUI to allow multiple ways of organizing the list dynamically, filtering the criteria, allowing multiple categories for each publication, etc (e.g. using tableau or gradio, and hosting it on a free server like HF for gradio). I personally don't have the right expertise to contribute to it, but wanted to bounce the idea in case anyone else has some experience and interest in building such functionality.

I think if the list sticks with GitHub, there are not many options other than Markdown?

Some ideas: IMO the high-level structure should be

* **Papers of Fundamental 3D Reconstruction Research**. This is stuff such as the original 3DGS paper but also things such as InstantNGP, https://github.com/lfranke/TRIPS or https://fhahlbohm.github.io/inpc/ which are different to 3DGS but nevertheless super interesting.

* **Papers of 3DGS Improvements**. Everything that makes 3DGS faster, better, smaller, more accurate, etc...

* **Papers of 3DGS / NeRF Applications**.

* **Resources for 3DGS / NeRF**. Links to interesting resources, such as open source implementations (not necessarily Paper code) for various languages, Links to other overview / survey pages, optimization or differentiable rendering libraries, etc..

Within the 3DGS Improvements category I propose the following subgroups:

* **Pruning / Compression / Datastructures / Large Scale**: Everything that makes 3DGS smaller and/or allows streaming.

* **Initialisation / Sparse Views**: Everything that makes 3DGS less dependent on good initialization. Or ways to improve the initialisation.

* **Projection**: Everything that covers how splats are projected and rendered and how the final pixel colour is assembled from splats. This is work such as https://niujinshuchong.github.io/mip-splatting/, https://arxiv.org/abs/2402.00525, 2D GS, etc..

* **Deblurring**: Work such as https://benhenryl.github.io/Deblurring-3D-Gaussian-Splatting/ or https://lingzhezhao.github.io/BAD-Gaussians/ or https://spectacularai.github.io/3dgs-deblur/

I like those ideas on how to structure the content. I also agree that other approaches that are not GS but closely related could be placed more prominently. However, that makes it harder to decide what to include/exclude. But yes, the ones you mention should be part of this list. But then, should Zip-NeRF be included? What are the seminal papers?

For now I would like to stick with GitHub even if the possibilities are limited, unless there is more hands-on support. In general, the barrier to contributing on GitHub is very low.

How about the abstracts? Do they have any value at all? Should they be completely removed?

@fhahlbohm

There is a log already that shows what was added. Generating gifs, etc. is nice. But someone has to do it. Updating the page should be possible in a reasonable amount of time.

I fully understand. Making this editable is really simple though. You could just parse from a text file formatted like this:

#
My Fancy Paper Title
Author 1, Author 2, Author 3
Short description of the paper.
Optional link to project page.
Optional link to (arXiv) paper.
Optional link to GitHub repository.
Optional link to YouTube video.
Optional path (inside the github.io repo) to a thumbnail image and/or video.
#
Second Fancy Paper Title
...

Took me like ten minutes to get this working (with a little help from ChatGPT 😅).
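A parser for that delimiter format can indeed be short. The sketch below is hypothetical (not the actual ten-minute script): the field order of title, authors, description, then optional link lines is an assumption, as is the `parse_papers` helper.

```python
# Hypothetical sample in the '#'-delimited format described above.
SAMPLE = """\
#
My Fancy Paper Title
Author 1, Author 2, Author 3
Short description of the paper.
https://example.org/project
#
Second Fancy Paper Title
Author 4
Another description.
"""

def parse_papers(text):
    """Split a '#'-delimited file into one dict per paper entry."""
    papers = []
    for chunk in text.split("\n#\n"):
        # Drop blank lines and any stray leading '#' marker.
        lines = [l for l in chunk.strip().splitlines()
                 if l.strip() and l.strip() != "#"]
        if not lines:
            continue
        papers.append({
            "title": lines[0],
            "authors": lines[1] if len(lines) > 1 else "",
            "description": lines[2] if len(lines) > 2 else "",
            "links": lines[3:],  # remaining lines treated as optional links
        })
    return papers

for p in parse_papers(SAMPLE):
    print(p["title"], "-", p["authors"])
```

The parsed dicts can then be fed to whatever Markdown or HTML renderer the page uses.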

If you want, I can actually make a nice-looking version of this tomorrow (April 4th) and also try to add something that allows for sorting/filtering. I feel like it could look a lot nicer than a text-only Markdown version.

I don't mind doing this even if you do not end up using it. Worst case I learn something about HTML/CSS/JavaScript.

Regarding the thumbnails, one could either do it manually or even parse this from the papers or project pages (no idea how this works in terms of copyrights, but some of the guys on twitter seem to do similar things). There is probably some really elegant solution for this whole thing with GitHub Actions, which I am not really familiar with.

@MrNeRF
Owner Author

MrNeRF commented Apr 4, 2024

@fhahlbohm Sure. Just go for it!

@w-m
Contributor

w-m commented Apr 4, 2024

Regarding the thumbnails, one could either do it manually or even parse this from the papers or project pages (no idea how this works in terms of copyrights, but some of the guys on twitter seem to do similar things). There is probably some really elegant solution for this whole thing with GitHub Actions, which I am not really familiar with.

There is a system for this in html, the meta tags for social previews. You can check them manually for any page with https://www.heymeta.com for example.

Unfortunately, next to nobody is setting them apparently. Tried downloading them for all the pages in the list: https://github.com/w-m/awesome-3D-gaussian-splatting/blob/reformatting/awesome_thumbnails.py

Only six out of the listed 141 project pages are usable:

[Screenshot: awesome_3dgs_thumbs, the six usable social-preview thumbnails]

So it's no good after all. I'm logging this in this comment, so nobody else needs to try. @fhahlbohm unless you'd like to create 135 tickets in 135 different repos :), you may want to go for another route. Could ask an LLM to extract the most thumbnaily image from the PDF or the HTML, I guess. Although at some point you really are just recreating scholar-inbox.com or similar services...
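For reference, those social-preview tags can be read with nothing but the standard library. The sketch below extracts `og:image` (or `twitter:image`) from already-fetched HTML; the sample HTML is made up, and a real run would first download each project page (roughly what the linked `awesome_thumbnails.py` automates, though its exact code may differ).

```python
from html.parser import HTMLParser

class MetaImageParser(HTMLParser):
    """Find the first og:image / twitter:image meta tag in an HTML page."""

    def __init__(self):
        super().__init__()
        self.image = None

    def handle_starttag(self, tag, attrs):
        if tag != "meta" or self.image:
            return
        d = dict(attrs)
        # Social previews use either property= (Open Graph) or name= (Twitter).
        key = d.get("property") or d.get("name") or ""
        if key in ("og:image", "twitter:image"):
            self.image = d.get("content")

# Made-up project-page HTML for illustration.
SAMPLE_HTML = """
<html><head>
<meta property="og:title" content="My 3DGS Paper">
<meta property="og:image" content="https://example.org/thumb.jpg">
</head><body></body></html>
"""

parser = MetaImageParser()
parser.feed(SAMPLE_HTML)
print(parser.image)
```

As noted above, the catch is not the parsing but that very few project pages set these tags in the first place.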

w-m added a commit to w-m/awesome-3D-gaussian-splatting that referenced this issue Apr 4, 2024
@rafaelspring

rafaelspring commented Apr 4, 2024

+1 for densifying the list. It's so many papers, each individual one should only be one line or two on the screen.

I like those ideas how to structure the content. I also agree that other approaches could be more prominently placed that are not gs, but kind of close. However, that makes it harder to decide what to include/exclude. But yes, those you are mentioning should be part of this list. But then, should be zip-nerf included? What are the seminal papers?

NeRF, ADOP, 3DGS, InstantNGP, Zip-NeRF, SMERF

My biggest wishlist item is the 3DGS Improvements subgroups, though.

@w-m
Contributor

w-m commented Apr 4, 2024

I fully agree. It could be way more dense. In that case I would rather prefer the second option over the first.

Here's a preview of how that could look:

[Screenshot: three-line layout]

[Screenshot: three-line layout + abstract]

[Screenshot: single-line layout]

Could also try out your own styles, the ones above were created with https://github.com/w-m/awesome-3D-gaussian-splatting/blob/reformatting/awesome_3dgs_3line.py

@henrypearce4D
Collaborator

henrypearce4D commented Apr 4, 2024

@w-m I like the 3 lines and really like the 3 lines with abstract. Perhaps drop the "Abstract" label from the span (drop-down), as there is no need to repeat it for every paper.

I'm wondering if the hyperlink on the paper title is needed if you have the paper link?

Could you try a version without the title hyperlink and perhaps in bold?

@w-m
Contributor

w-m commented Apr 4, 2024

As for the original question, I think the focus should stay on 3DGS. There are great alternative resources out there already for the important NeRF stuff.

Personally I'm not expanding the abstracts, I usually go directly to the paper or the project page.

I think it's a great idea to split into Fundamentals and Applications. Here's an attempt to combine the list from the original post above with the ideas of @rafaelspring. I've also had a look at a NeRF survey paper called NeRF: Neural Radiance Field in 3D Vision, Introduction and Review, which is quite helpful due to the NeRF field being much more mature. Have a look at the taxonomy in Figures 3 and 8.

Also I don't think it's particularly problematic to restructure the page from time to time, if some categories are growing too much.

Fundamentals

  • Surveys and Reviews

  • Dynamics

    • 4D Scenes
    • Deformations
  • Photometric / Geometric Quality

    • Regularization and Optimization
  • Rendering Speed

  • Representation

    • Compression
    • Size and Pruning
  • Composition

    • Background
    • Semantic/Object Composition
  • Pose Estimation

    • SLAM
    • BA and others
  • Sparse Views and Initialisation

Applications

  • Urban

    • Street Level
    • Remote-Aerial
    • Large Scale
  • Image Processing

    • Anti-Aliasing
    • Editing
    • Semantics
    • HDR/Tone Mapping
    • Denoising/Deblurring/Super-Resolution
  • Generative Models

    • GAN
    • Diffusion
  • 3D Reconstruction

    • SDF
    • Occupancy
  • Humans

    • Face/Head
    • Body
  • Medical

@w-m
Contributor

w-m commented Apr 4, 2024

Could you try a version without the title hyperlink and perhaps in bold?

Sure.

I think there is value from having the title as the link target, as you can click directly through to a bunch of papers when reading their titles. But I'm not too hung up on which particular style will end up being used, as these discussions can go on endlessly.

@fhahlbohm

fhahlbohm commented Apr 4, 2024

So here's a working example.

Instead of manually adding stuff to the Markdown files one maintains a YAML file of the following format:

- id: "kerbl20233dgs"  # has to be unique within the file
  title: "3D Gaussian Splatting for Real-Time Radiance Field Rendering"
  authors: "Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, George Drettakis"
  year: "2023"
  conference/journal: "ACM Transactions on Graphics"
  description: "The initial publication on 3D Gaussian Splatting."  # used for the HTML-based list
  abstract: "Radiance Field methods have recently revolutionized novel-view synthesis of scenes captured with multiple photos or videos. However, achieving high visual quality still requires neural networks that are costly to train and render, while recent faster methods inevitably trade off speed for quality. For unbounded and complete scenes (rather than isolated objects) and 1080p resolution rendering, no current method can achieve real-time display rates. We introduce three key elements that allow us to achieve state-of-the-art visual quality while maintaining competitive training times and importantly allow high-quality real-time (≥ 30 fps) novel-view synthesis at 1080p resolution. First, starting from sparse points produced during camera calibration, we represent the scene with 3D Gaussians that preserve desirable properties of continuous volumetric radiance fields for scene optimization while avoiding unnecessary computation in empty space; Second, we perform interleaved optimization/density control of the 3D Gaussians, notably optimizing anisotropic covariance to achieve an accurate representation of the scene; Third, we develop a fast visibility-aware rendering algorithm that supports anisotropic splatting and both accelerates training and allows real-time rendering. We demonstrate state-of-the-art visual quality and real-time rendering on several established datasets."
  project_page: "https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/"
  paper: "https://arxiv.org/abs/2308.04079"
  code: "https://github.com/graphdeco-inria/gaussian-splatting"
  video: "https://www.youtube.com/watch?v=T_kXY43VZnk"
  thumbnail_image: true
  thumbnail_video: true

I created Python scripts that process this data into Markdown and HTML. Current results using some "dummy" data:
Markdown within README.md / HTML automatically deployed as a GitHub Page

I saw that @w-m already created a parser for the current Markdown stuff so it should be quite easy to migrate in theory.

I guess the main thing that is missing is to add a category field to my YAML structure and use that to make the Markdown list more like the existing one. Also, for the HTML page a search/filter function would be nice.

All the URL fields are currently implemented as optional, meaning if they are not available simply put null instead.
The thumbnail_image and thumbnail_video fields determine whether there is a thumbnail image (id.jpg) and video (id.mp4) available in the ./assets/ directory.
I also created a preprocessing script for this that downscales and pads images so that everything looks decent without using large files. It works well for the images, so one can just go into the paper, screenshot the coolest figure, paste it into the assets directory with the correct name, and the code does the rest.
For the videos, I think there is some weird bug regarding aspect ratio in the function that does the rescaling as, e.g. the INPC thumbnail video is a bit stretched and idk why.
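The YAML-to-Markdown step might look roughly like this. The sketch assumes the entry has already been loaded (e.g. with `yaml.safe_load`) into a dict matching the format above; `to_markdown` and the link ordering are illustrative, not the actual scripts.

```python
# One entry, as it would come out of yaml.safe_load on the file above.
entry = {
    "title": "3D Gaussian Splatting for Real-Time Radiance Field Rendering",
    "authors": "Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, George Drettakis",
    "year": "2023",
    "project_page": "https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/",
    "paper": "https://arxiv.org/abs/2308.04079",
    "code": "https://github.com/graphdeco-inria/gaussian-splatting",
    "video": "https://www.youtube.com/watch?v=T_kXY43VZnk",
}

# Which YAML keys map to which link labels, in display order.
LINKS = [("paper", "📄 Paper"), ("project_page", "🌐 Project Page"),
         ("code", "💻 Code"), ("video", "🎥 Video")]

def to_markdown(e):
    """Render one entry as a compact Markdown list item; null links are skipped."""
    links = " | ".join(f"[{label}]({e[key]})" for key, label in LINKS if e.get(key))
    return f"- **{e['title']}** — {e['year']}<br>{e['authors']}<br>{links}"

print(to_markdown(entry))
```

Because missing URLs are `null` in the YAML, `e.get(key)` silently drops them from the link row, so entries without a video or project page still render cleanly.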

@MrNeRF
Owner Author

MrNeRF commented Apr 4, 2024

I fully agree. It could be way more dense. In that case I would rather prefer the second option over the first.

Here's a preview of how that could look:

[Screenshot: three-line layout]

[Screenshot: three-line layout + abstract]

[Screenshot: single-line layout]

Could also try out your own styles, the ones above were created with https://github.com/w-m/awesome-3D-gaussian-splatting/blob/reformatting/awesome_3dgs_3line.py

I think the 3 line version is the best. But having the abstract in place is also not wrong. Another possibility could be to create a separate markdown file for every entry. It would contain the abstract, images, etc. This could be automatically pulled from arXiv and then linked. But this would compete with the project page, if available, and then it is the worse alternative.

I created Python scripts that process this data into Markdown and HTML. Current results using some "dummy" data:
Markdown within README.md / HTML automatically deployed as a GitHub Page

This looks awesome. But it also reads more like a personal publication list. The information for the YAML could be automatically scraped...

My question: Why is this repo useful? I think the answer is to get a quick overview depending on different categories what papers are available and if there is some code and additional material. I also know that many reference the non-academic links like different views, tutorials, etc...

So how to proceed from here? I would suggest that we adopt one of the suggested more compact Markdown formats and reorganize the papers into the suggested categories. From there, the list can be deployed as a website with some filter function. But this would be lower priority.

I would go with the 3 line version without abstract? More radical, just remove the author list in favor of the abstract. The decision to consider a paper would be less biased and you get the author list when you decide to read the paper anyway. But I feel many would dislike this option :)

@w-m
Contributor

w-m commented Apr 4, 2024

I think it's sensible to keep a Markdown version as the main document. Things that are stupidly simple just have the lowest bar for contributions. No moving parts (=GitHub Action) also means nothing can ever break.

The interactive HTML does look great @fhahlbohm. Maybe it makes more sense to pick a few highlights for this, though, rather than all of the 240+ papers? Update the page with fresh things you are particularly excited about? As a data format, you may want to consider using BibTeX; then you don't have to come up with your own format, and you can import from other places more easily.

So how to proceed from here? I would suggest that we adopt one of the suggested more compact Markdown formats and reorganize the papers into the suggested categories. From there, the list can be deployed as a website with some filter function. But this would be lower priority.

👍

I would go with the 3 line version without abstract? More radical, just remove the author list in favor of the abstract. The decision to consider a paper would be less biased and you get the author list when you decide to read the paper anyway. But I feel many would dislike this option :)

Nothing at all against removing sources of vanity, but the author list does hold information not found in the title (and the abstract is not greppable when collapsed). Humans are good at remembering other humans. I find myself quite often grepping for some name in one of these lists, when I totally forgot the title, but remember one of the authors.

I think the 3 line version is the best. But having the abstract in place is also not wrong. Another possibility could be to create for every entry its own markdown file. It would contain the abstract, images, etc. This could be automatically pulled from arxiv and then linked. But this is then in competition to the project page if available and then it is a worse alternative.

Funny, I had the idea last year to make a markdown list of papers like this repo, but every entry links to a GitHub issue within the repo. So, a GitHub issue instead of the individual markdown document in your suggestion. In the issue, you can add images, videos, markdown content, notes, questions. And anybody can comment and discuss, and review. You can tag the issues with categories and year and publication and what not, for sorting and filtering. I did it for exactly one paper: w-m/awesome-3d-splatting-survey#7 (it's a lot of work ok? life is hard :) - also kudos to you and the other people keeping this repo up to date). I do 100% believe that the current situation of arXiv PDF + project page + YouTube video + awesome list entry on GitHub + scholar inbox + announcement tweet + conference reviews that never see the light of day + ... is broken and we should come up with something better.

Heh and I just noticed I didn't put the authors in the global list in that repo, somewhat destroying my previous argument that it's useful to have the authors. Oh well :D https://github.com/w-m/awesome-3d-splatting-survey

@fhahlbohm

I played around with @w-m's parsing script. You may check the links I shared for the complete list of papers without categories. Oh and just to be clear, I will of course remove all of this from my own GitHub asap. Might keep the template but we'll see. I mainly hope that this can be used to speed up the process of adding new papers, without sacrificing how it looks. I think using a custom format that isn't BibTeX is good in that regard.

Is there some sort of updated paper-category mapping already? I would like to try adding that next. I will start with the current one, I guess.

Regarding what's actually important:

  1. I do not mind having the abstracts. With the collapsible thingy it's actually pretty neat.
  2. Removing authors feels wrong to me. For me personally, it is very important to associate names with certain papers or even a whole line of work.
  3. From what I understand, none of you actually care about the conference/journal name, which is fair. So I guess that can go. For me it's quite nice knowing conference+year because then I roughly know when it came out.

Here's the format I would go with:

  • 3D Gaussian Splatting for Real-Time Radiance Field Rendering — 2023
    Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, George Drettakis
    📄 Paper | 🌐 Project Page | 💻 Code | 🎥 Video
    Abstract: Radiance Field methods have recently revolutionized novel-view synthesis of scenes captured with multiple photos or videos. However, achieving high visual quality still requires neural networks that are costly to train and render, while recent faster methods inevitably trade off speed for quality. For unbounded and complete scenes (rather than isolated objects) and 1080p resolution rendering, no current method can achieve real-time display rates. We introduce three key elements that allow us to achieve state-of-the-art visual quality while maintaining competitive training times and importantly allow high-quality real-time (≥ 30 fps) novel-view synthesis at 1080p resolution. First, starting from sparse points produced during camera calibration, we represent the scene with 3D Gaussians that preserve desirable properties of continuous volumetric radiance fields for scene optimization while avoiding unnecessary computation in empty space; Second, we perform interleaved optimization/density control of the 3D Gaussians, notably optimizing anisotropic covariance to achieve an accurate representation of the scene; Third, we develop a fast visibility-aware rendering algorithm that supports anisotropic splatting and both accelerates training and allows real-time rendering. We demonstrate state-of-the-art visual quality and real-time rendering on several established datasets.

I would love to put the Abstract thingy next to the video link but that sadly does not seem to work.
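For completeness, the collapsible abstract in this format relies on GitHub rendering inline `<details>/<summary>` HTML inside Markdown. A hypothetical emitter (`abstract_block` is an illustrative helper, not the real script):

```python
def abstract_block(abstract):
    """Wrap an abstract in a collapsible <details> element for GitHub Markdown."""
    return f"<details><summary>Abstract</summary>{abstract}</details>"

# Assemble one entry in the proposed compact style (sample data only).
entry = (
    "- **3D Gaussian Splatting for Real-Time Radiance Field Rendering** — 2023<br>"
    "Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, George Drettakis<br>"
    "📄 Paper | 🌐 Project Page | 💻 Code | 🎥 Video<br>"
    + abstract_block("Radiance Field methods have recently revolutionized ...")
)
print(entry)
```

GitHub places `<details>` on its own block when rendering, which would explain why the drop-down cannot sit inline next to the video link.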

@MrNeRF
Owner Author

MrNeRF commented Apr 5, 2024

Thx for your work. I got multiple requests to add the conferences or journals, so I would like to keep that.

I would suggest going ahead with the current categories. Unfortunately, I have severe conjunctivitis, so I am a bit out of order and trying not to spend much time in front of screens. I will adapt the suggested categorization as soon as my eyes are good again.

@MrNeRF
Owner Author

MrNeRF commented Apr 7, 2024

At least I added the missing papers that were released last week. Currently, I am still trying to spend as little time as possible in front of the screen. Once my eyes are back to normal, I will apply the new categorization.

@yuedajiong

yuedajiong commented Apr 9, 2024

My dear researchers, a few months ago I submitted an issue: #58

Now there is a demo page:

https://yuedajiong.github.io/super-ai-paper/

It is still in development, with more interactive features on the way ...

Is this what you guys need?

Everyone has their own reading habits when conducting research. Personally, I find these points very important:
0) In 'plain plain plain plain plain plain plain' language, describe what the key parts of the paper did.
1) What are the highlight technical points of this paper that might be useful for my task?
2) Have the claimed results of the paper been verified by others who have conducted experiments?

Without good research aids, today I'm using 1 to 3 pages of PowerPoint slides to maintain each paper.
Clearly quite silly. :-(

Goal and Overview/Big-Picture:

[Screenshot: goal and big-picture slide]

Paper:

[Screenshots: per-paper PowerPoint slides]

@SharkWipf

I just found this list, and shortly after this issue.
My 2 cents: what I am missing most from the current list is the date each paper was released.
With how quickly work gets outdone by new research in this field, having the full date with each paper/project would personally help me a lot when researching what's available.

@yuedajiong

Hi, all:

I have enough free time, the relevant development skills, interest/motivation, and ... (I am also greatly affected by it.)
I would like to develop a paper-reading system for your searching, with:
a visual and structured interface,
new-paper discovery,
paper rapid-reading,
paper organization,
algorithm development paths/benchmarks,
researcher interaction,
personal filters and favorites,
......

Do you guys really need it? And do you know of any similar system?

Thanks

@yuedajiong

yuedajiong commented Apr 26, 2024

Dear research experts, I am sprinting towards the ultimate task definition of my ideal as defined on my homepage, through the process of algorithm design, code implementation, and training optimization.
Personally, I believe that organizing papers in a view according to technical points is also essential.

(
There may be meticulous designs such as loss and regularization: RGB, mask, direct 3D, camera pose <distance; azimuth/elevation vs. Rt vs. different representations>, surface, compression/compactness, ... One compact representation alone could be enough to work on for a long time;
a refining chapter might also be necessary: for example, using images as conditional inputs involves utilizing both intrinsic and extrinsic information <e.g.: the low-resolution input photo lacks texture on the skin, but the high-resolution model generated by the refining module has texture; even achieving a loss of 0 is not enough; it's not just about simply reconstructing the input image, but rather about approaching the real world>, ultimately producing details and quality equivalent to capturing formal 4K scenes with a camera; ...
)
