A curated list of resources on implicit neural representations, originally forked from vsitzmann/awesome-implicit-representations.
**Note:** This is a forked repository with additional papers and search results curated by the current maintainer. The curation focuses primarily on image-related tasks; other categories from the original list may be intentionally left un-updated, or removed, to preserve this focus.
For non-image INR applications (audio, PDEs, generic signals), see Non-Image INRs. For methods that are not strictly INRs but are commonly used alongside them, see Related Non-INR Works.
We consider a method an Implicit Neural Representation (INR) if it:
- Represents a continuous signal $x \mapsto f_\theta(x)$ with a coordinate-based neural network (typically an MLP), rather than a discrete grid.
- Uses this network as the primary representation of the signal (image, shape, field, etc.), not merely as an auxiliary module.
- Falls into the broader family of neural fields (e.g., NeRF-style radiance fields, signed distance fields, occupancy fields).
We exclude works that:
- Use the word “implicit” only conceptually, while the representation itself is voxel- or grid-based.
- Use an MLP only as a classifier, without directly modeling a continuous signal over coordinates.
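The definition above can be sketched in a few lines of NumPy: a coordinate-based MLP (here with random, untrained weights) maps 2-D pixel coordinates to RGB values, after a Fourier feature encoding in the spirit of Tancik et al. 2020. All layer sizes and the bandwidth `sigma` are illustrative assumptions, not values from any specific paper; in practice the weights are fit by gradient descent to one specific signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fourier feature mapping gamma(x) = [sin(2*pi*Bx), cos(2*pi*Bx)] (Tancik et al. 2020).
# B is a fixed random Gaussian matrix; sigma controls the frequency bandwidth.
sigma, n_freq = 10.0, 64
B = rng.normal(0.0, sigma, size=(n_freq, 2))  # 2-D pixel coordinates

def fourier_features(x):
    proj = 2.0 * np.pi * x @ B.T
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

# A small MLP f_theta: encoded coordinate -> RGB (weights random here;
# fitting them to an image is what "training an INR" means).
W1 = rng.normal(0, 0.1, size=(2 * n_freq, 128)); b1 = np.zeros(128)
W2 = rng.normal(0, 0.1, size=(128, 3));          b2 = np.zeros(3)

def f_theta(x):
    h = np.maximum(fourier_features(x) @ W1 + b1, 0.0)  # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))          # RGB in (0, 1)

# Query the *continuous* representation at an arbitrary 32x32 coordinate grid.
s = np.linspace(0.0, 1.0, 32)
coords = np.stack(np.meshgrid(s, s, indexing="ij"), axis=-1).reshape(-1, 2)
rgb = f_theta(coords)
print(rgb.shape)  # (1024, 3)
```

Because the network is defined on continuous coordinates, the same `f_theta` can be queried at any resolution, which is the property most entries below exploit.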
- Surveys & Reviews
- Computational Imaging, ISP & Color
- Inverse Rendering & 3D Reconstruction
- Generative Visual Models
- Dynamics & Video
- Semantics & Visual Representation
- Foundations & Theory
- Colabs
- Talks
- Related Non-INR Works
## Surveys & Reviews

- Where Do We Stand with Implicit Neural Representations? A Technical and Performance Survey (Essakine et al. 2024) - Classifies INR methods and compares their performance across multiple tasks.
- Implicit Neural Representation in Medical Imaging: A Comparative Study (Molaei et al. ICCV 2023 Workshop) - Systematic study comparing INR-based methods across various medical imaging tasks.
- Neural Fields in Visual Computing and Beyond (Xie et al. 2022) - Comprehensive survey covering neural field methods across visual computing, providing a unified framework and taxonomy for coordinate-based neural representations.
## Computational Imaging, ISP & Color

- GamutMLP: A Lightweight MLP for Color Loss Recovery (Le & Brown, CVPR 2023) - Optimizes a lightweight MLP during gamut reduction to predict clipped color values.
- NILUT: Conditional Neural Implicit 3D Lookup Tables for Image Enhancement (Conde et al. AAAI 2024) - Implicitly defined continuous 3D color transformations for memory-efficient and controllable image enhancement.
- Signal Processing for Implicit Neural Representations (Xu et al. NeurIPS 2022) - Performs classical signal processing operations (denoising, smoothing, filtering) directly on INR-parameterized signals.
## Inverse Rendering & 3D Reconstruction

- PBR-NeRF: Inverse Rendering with Physics-Based Neural Fields (2023/2024) - Jointly estimates geometry, materials, and lighting using physics-based priors for realistic relighting.
- Benchmarking Implicit Neural Representation and Geometric Rendering in Real-Time RGB-D SLAM (Hua & Wang, CVPR 2024) - Comparative analysis of INR/Geometric representations in SLAM.
- Rethinking Implicit Neural Representations for Vision Learners (Song et al. 2022) - Revisits how INRs are learned and used from the perspective of visual representation learning.
- Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering (Sitzmann et al. 2021) - Represents 3D scenes via their 360-degree light field.
- Neural Radiance Fields (NeRF) (Mildenhall et al. 2020) - The foundational work on volumetric rendering for novel view synthesis.
- pixelNeRF: Neural Radiance Fields from One or Few Images (Yu et al. 2020) - Conditions a NeRF on local image features sampled along camera rays.
- Multiview Neural Surface Reconstruction by Disentangling Geometry and Appearance (Yariv et al. 2020) - Sphere-tracing-based rendering with positional encodings for complex 3D scenes.
- Neural Unsigned Distance Fields for Implicit Function Learning (Chibane et al. 2020) - Learning unsigned distance fields from raw point clouds.
- Scene Representation Networks (Sitzmann et al. 2019) - Continuous 3D-structure-aware neural scene representations.
- DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation (Park et al. 2019)
- Occupancy Networks: Learning 3D Reconstruction in Function Space (Mescheder et al. 2019)
- PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization (Saito et al. 2019)
- Implicit Geometric Regularization for Learning Shapes (Gropp et al. 2020) - Learns signed distance fields (SDFs) from raw 3D data using an Eikonal regularization for smooth implicit surfaces.
- AutoInt: Automatic Integration for Fast Neural Volume Rendering (Lindell et al. 2020) - Accelerates neural volume rendering by learning closed-form integral approximations along rays in neural fields.
## Generative Visual Models

- GIRAFFE: Representing Scenes as Compositional Generative Neural Feature Fields (Niemeyer et al. 2021)
- Unsupervised Discovery of Object Radiance Fields (Yu et al. 2021)
- pi-GAN: Periodic Implicit Generative Adversarial Networks for 3D-Aware Image Synthesis (Chan et al. 2020)
- Generative Radiance Fields for 3D-Aware Image Synthesis (GRAF) (Schwarz et al. 2020)
- Learning Continuous Image Representation with Local Implicit Image Function (LIIF) (Chen et al. 2020) - Continuous image representation for super-resolution.
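As a hedged sketch of the LIIF querying idea above: the RGB value at a continuous coordinate $x$ is decoded from the nearest latent code $z^*$ on a feature grid together with the relative offset $x - v^*$ to that code's position. The decoder here is an untrained linear stand-in (in LIIF it is a trained MLP fed by an image encoder), and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 16x16 grid of latent codes (in LIIF these come from an image encoder).
G, D = 16, 8
latents = rng.normal(size=(G, G, D))
cell = 1.0 / G
centers = (np.arange(G) + 0.5) * cell  # grid-cell centers in [0, 1]

# Stand-in decoder f(z, dx) -> RGB; in LIIF this is a trained MLP.
W = rng.normal(0, 0.1, size=(D + 2, 3))

def query_rgb(x):
    """Decode RGB at a continuous coordinate x in [0, 1]^2."""
    ij = np.clip((x // cell).astype(int), 0, G - 1)  # index of nearest latent code
    z = latents[ij[0], ij[1]]
    v = centers[ij]                                   # that code's position (per axis)
    dx = x - v                                        # relative offset fed to the decoder
    return np.concatenate([z, dx]) @ W

rgb = query_rgb(np.array([0.371, 0.642]))
print(rgb.shape)  # (3,)
```

Super-resolution then amounts to querying `query_rgb` on a denser coordinate grid than the latent grid itself.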
## Dynamics & Video

- Neural Radiance Flow for 4D View Synthesis (Du et al. 2021)
- Space-time Neural Irradiance Fields for Free-Viewpoint Video (Xian et al. 2021)
- Non-Rigid Neural Radiance Fields (Tretschk et al. 2021)
- Nerfies: Deformable Neural Radiance Fields (Park et al. 2021)
- D-NeRF: Neural Radiance Fields for Dynamic Scenes (Pumarola et al. 2021)
- X-Fields: Implicit Neural View-, Light- and Time-Image Interpolation (Bemana et al. 2020)
## Semantics & Visual Representation

Methods that use implicit neural fields primarily as representational substrates for classification, segmentation, or generic vision encoding, rather than for direct image synthesis or 3D reconstruction.
- End-to-End Implicit Neural Representations for Classification (Gielisse & van Gemert, CVPR 2025) - Note: Pre-print/accepted paper.
- Implicit Neural Representation Facilitates Unified Universal Vision Encoding (Hu et al. 2026/2025) - Note: "HUVR" paper, arxiv ID placeholder pending final pub.
## Foundations & Theory

- H-SIREN: Improving implicit neural representations with hyperbolic periodic functions (Gao & Jaiman 2024) - Uses hyperbolic periodic activation functions to improve INR performance and convergence.
- Improved Implicit Neural Representation with Fourier Reparameterized Training (Shi et al. CVPR 2024)
- Multiresolution Neural Networks for Imaging (Paz et al. 2022) - Proposes multiresolution coordinate-based networks that are continuous in space and scale, with applications to continuous image representation and multilevel reconstruction.
- Rethinking Positional Encoding (Zheng et al. 2021) - Generalizes Fourier feature positional encoding by showing alternative non-Fourier embeddings work, with a unifying theory based on stable rank and distance preservation of the embedded matrix. (Code)
- Multiplicative Filter Networks (Fathony et al. ICLR 2021) - Coordinate-based architecture using element-wise multiplication of Fourier/Gabor filter banks as an alternative to additive activations for representing continuous signals. (Code)
- Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains (Tancik et al. 2020)
- Implicit Neural Representations with Periodic Activation Functions (SIREN) (Sitzmann et al. 2020) - The canonical foundational INR architecture; shows sinusoidal activations with principled initialization enable fitting high-frequency signals such as images and 3D scenes with coordinate-based MLPs.
- On the Spectral Bias of Neural Networks (Rahaman et al. 2018) - Shows deep ReLU networks are biased towards low-frequency functions; INR-adjacent — the key theoretical motivation for positional encodings and periodic activations in INRs.
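The SIREN recipe referenced above can be sketched in a few lines: sine activations $\sin(\omega_0 (Wx + b))$ with the principled initialization from Sitzmann et al. 2020 (first layer $U(-1/n, 1/n)$; later layers $U(-\sqrt{6/n}/\omega_0, \sqrt{6/n}/\omega_0)$, with $\omega_0 = 30$). Layer sizes here are illustrative, and the network is left untrained.

```python
import numpy as np

rng = np.random.default_rng(0)
OMEGA_0 = 30.0  # frequency scale from the SIREN paper

def siren_layer(n_in, n_out, is_first):
    # Initialization scheme from Sitzmann et al. 2020: keeps pre-activations
    # distributed so that sine outputs stay well-behaved at any depth.
    bound = 1.0 / n_in if is_first else np.sqrt(6.0 / n_in) / OMEGA_0
    W = rng.uniform(-bound, bound, size=(n_in, n_out))
    b = rng.uniform(-bound, bound, size=n_out)
    return W, b

layers = [siren_layer(2, 256, True), siren_layer(256, 256, False)]
W_out, b_out = siren_layer(256, 1, False)  # final layer stays linear

def siren(x):
    h = x
    for W, b in layers:
        h = np.sin(OMEGA_0 * (h @ W + b))  # sine activation on every hidden layer
    return h @ W_out + b_out               # no sine on the output

coords = rng.uniform(-1, 1, size=(100, 2))  # SIRENs assume inputs in [-1, 1]
out = siren(coords)
print(out.shape)  # (100, 1)
```

Without this initialization, deep sine networks tend to produce vanishing or chaotic activations, which is why the init is treated as part of the architecture.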
## Colabs

- Implicit Neural Representations with Periodic Activation Functions
- Neural Radiance Fields (NeRF)
- MetaSDF & MetaSiren
- Neural Descriptor Fields
## Talks

- SIGGRAPH 2023 Course: Neural Fields for Visual Computing - Course covering neural field methods for visual computing applications.
- CVPR 2022 Tutorial: Neural Fields in Computer Vision - Full-day tutorial defining the taxonomical basis and design space of neural fields. (Recording)
- Vincent Sitzmann: Implicit Neural Scene Representations
- Andreas Geiger: Neural Implicit Representations for 3D Vision
- Gerard Pons-Moll: Shape Representations: Parametric Meshes vs Implicit Functions
- Yaron Lipman: Implicit Neural Representations
## Related Non-INR Works

Methods that are not strictly INRs (i.e., they do not use a coordinate-based MLP as their primary signal representation) but are commonly used alongside or inspired by neural field techniques.
- Alias-Free Generative Adversarial Networks (StyleGAN3) (Karras et al. 2021) - Alias-free image GAN architecture. While INR-adjacent (its continuous signal analysis informs neural field design), the generator itself is convolutional/grid-based, not a coordinate-based MLP.
- awesome-NeRF - Curated list of implicit representations focused specifically on neural radiance fields (NeRF).
License: MIT