LNNs are a novel Neuro-Symbolic framework designed to seamlessly provide key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
- Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation.
- Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning, including classical first-order logic theorem proving as a special case.
- The model is end-to-end differentiable, and learning minimizes a novel loss function capturing logical contradiction, yielding resilience to inconsistent knowledge.
- It also enables the open-world assumption by maintaining bounds on truth values, which can have probabilistic semantics, yielding resilience to incomplete knowledge (see the sketch after this list).
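To give a rough sense of these properties in code, here is a minimal propositional sketch using the class and method names from the IBM/LNN repository examples (`Proposition`, `Implies`, `Model.add_knowledge`, `Model.add_data`, `Model.infer`); the propositions are hypothetical, and older releases used different method names, so exact signatures may vary by version:

```python
from lnn import Proposition, Implies, Model, Fact, World

# Each node is a neuron with a logical meaning: two propositions and one
# Implies formula ("Smoking implies Cough").
model = Model()
Smoking = Proposition("Smoking")
Cough = Proposition("Cough")
rule = Implies(Smoking, Cough)

# The rule is added as an axiom (assumed true); Smoking is observed TRUE,
# while Cough is never mentioned and so stays UNKNOWN -- under the
# open-world assumption its truth bounds simply remain at their widest.
model.add_knowledge(rule, world=World.AXIOM)
model.add_data({Smoking: Fact.TRUE})

# Inference propagates truth-value bounds through the formula graph in both
# directions; here modus ponens tightens Cough's bounds towards TRUE.
model.infer()
model.print()  # per-node (per-formula) truth bounds
```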
To install the LNN:
- Make sure that the Python version you use is in line with our setup file; using a fresh environment is always a good idea:
conda create -n lnn python=3.9 -y
conda activate lnn
- Install the `master` branch to keep up to date with the latest supported features:
pip install git+https://github.com/IBM/LNN
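Once installed, a quick sanity check is to import the package and construct an empty model; this minimal check only assumes the `Model` class used throughout the repository's examples:

```python
# Post-install sanity check: the import and an empty model should both succeed.
from lnn import Model

model = Model()
print(model)  # an empty LNN model, with no knowledge or data added yet
```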
As part of coursework on Neuro-symbolic AI, I earned this badge for demonstrating foundational knowledge and the ability to formulate AI reasoning problems within a neuro-symbolic framework. The badge holder has the ability to:
- Create a Logical Neural Network (LNN) model from logical formulas.
- Perform inference using LNNs.
- Explain the logical interpretation of LNN models.
🔗 [View Certificate](https://www.credly.com/badges/d2a9e4b2-b718-4267-9c05-6ae8e3c9b935)
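The first two of these skills can be illustrated with a short first-order sketch. The predicates and constants below are hypothetical, and the API calls (`Predicate`, `Variable`, `Forall`, `World.AXIOM`, `add_knowledge`, `add_data`) follow the IBM/LNN repository examples, so exact names and signatures may differ between versions:

```python
from lnn import Predicate, Variable, Implies, Forall, Model, Fact, World

model = Model()
x = Variable("x")

# Hypothetical unary predicates over a small domain of shapes.
Square = Predicate("Square")
Rectangle = Predicate("Rectangle")

# Create the model from a logical formula: every square is a rectangle.
# Adding it as an axiom means it is assumed true for all groundings.
rule = Forall(x, Implies(Square(x), Rectangle(x)))
model.add_knowledge(rule, world=World.AXIOM)

# Ground facts: 'c' and 'k' are squares; nothing is said about rectangles.
model.add_data({Square: {"c": Fact.TRUE, "k": Fact.TRUE}})

# Perform inference: the rule is grounded and Rectangle('c'), Rectangle('k')
# are derived. model.print() shows the truth bounds at every node, which is
# the basis for explaining the model's logical interpretation.
model.infer()
model.print()
```

Because inference is omnidirectional, asserting Rectangle('k') as FALSE instead would propagate the other way and surface a conflict with Square('k'), which is the kind of inconsistency the contradiction-based loss mentioned above is designed to penalize during learning.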