A Python (Tensorflow) suite for neural network checking ("does the net satisfy P?") and neural network building ("construct a net that satisfies P"). We do this by leveraging a neuro-symbolic translation between logical symbols and sets of neurons in the network.
à la Mode is the more hasty and confident sister of Argyle. Both share the same core features, but à la Mode releases faster and is easier to use (for the Python + Tensorflow inclined). However, unlike Argyle, à la Mode is not formally verified (although its models are proved correct by hand).
NOTE: This program is currently very much in development, and many of the planned features involve significant research efforts (this is my PhD), so what the program can do right now is somewhat limited. At the moment, the program supports:
- Model checking (assuming we know the meaning of base atoms)
- Countermodel generation (via a random search, no construction yet)
- Modal and conditional reasoning
- Inferring what the net has learned before and after learning
The following restrictions currently apply:

- Nets must be feed-forward, with a binary step activation function (see the sketch after this list)
- Nets currently must be hand-crafted
- Nets must be used for classification tasks in discrete domains
- Nets learn via (unsupervised) Hebbian learning
- Knowledge bases are expressed in a certain restricted modal syntax (see below)
- Base atoms (input, output, and perhaps hidden states) must be given as input
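For reference, a binary step activation fires all-or-nothing once a neuron's weighted input exceeds its threshold. A minimal sketch (a generic illustration, not the library's internal code):

```python
def binary_step(weighted_input, threshold=0.0):
    """Binary step activation: fire (1.0) iff the weighted input
    exceeds the threshold. Generic illustration only."""
    return 1.0 if weighted_input > threshold else 0.0
```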
Planned features (the TODOs):

- Model building
- Counter-model building
- Proper sigmoid activation functions
- Recurrent nets (LSTMs and Transformers)
- Plug-and-play with your existing Tensorflow model
- Learning via backpropagation (!)
- (stretch goal) Nice user interface for editing/updating knowledge bases
- (stretch goal) Tasks beyond classification
- (stretch goal) Automatically producing base atoms from a dataset and net
- (extreme stretch goal) Predicate/quantifier reasoning
This program runs on Python 3 and Tensorflow, so first make sure you have these installed:

- Python: Download and install any version of Python 3
- Tensorflow 2.x: Installation Instructions
  - Make sure to install a version 2.x.
  - This should also install the Keras front-end.
For development I'm using `tf-nightly`, which is probably your best bet for getting this to work.
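If you'd like to match that setup, the nightly build installs like any other pip package, via `python3 -m pip install tf-nightly`.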
In addition, you will need the following Python libraries:
- Pyparsing >= 3.0.x, via `python3 -m pip install pyparsing`
  - I used version 3.0.7 for development. Note that older versions (< 3.0) use deprecated function names and are not compatible with our scripts at present.
- Numpy >= 1.22.x, via `python3 -m pip install numpy`
  - Older versions will probably do just fine.
Once you have all of the dependencies installed, clone this repo and run (within this directory):

```
python3 setup.py install
```

or, if that doesn't work (e.g. in a Google Colab environment):

```
pip3 install .
```
If this is successful, you can now `import neuralsemantics` in your Python programs!
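A minimal sanity check (this only verifies that the package is importable):

```python
# Run in any Python session after installing to confirm the import works.
import neuralsemantics
print("neuralsemantics imported successfully")
```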
At the moment, this program lets you hand-craft a neural network and infer some things about what the net knows, expects, and learns. (There is planned support for being able to plug-and-play with your own Tensorflow model.)
To get you started, run the example program `examples/penguin.py`:
```python
from neuralsemantics.BFNN import BFNN
from neuralsemantics.Model import Model

nodes = set(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'])
layers = [['a', 'b', 'c', 'd', 'e'], ['f', 'g'], ['h']]
weights = {('a', 'f'): 1.0, ('a', 'g'): 0.0,
           ('b', 'f'): 0.0, ('b', 'g'): -2.0,
           ('c', 'f'): 0.0, ('c', 'g'): 3.0,
           ('d', 'f'): 0.0, ('d', 'g'): 3.0,
           ('e', 'f'): 0.0, ('e', 'g'): 3.0,
           ('f', 'h'): 2.0, ('g', 'h'): -2.0}
threshold = 0.0
rate = 1.0
prop_map = {'bird': {'a'}, 'penguin': {'a', 'b'},
            'orca': {'b', 'c'}, 'zebra': {'b', 'd'},
            'panda': {'b', 'e'}, 'flies': {'h'}}

net = BFNN(nodes, layers, weights, threshold, rate)
model = Model(net, prop_map)

print("penguin → bird : ", model.is_model("penguin implies bird"))
print("bird ⇒ flies : ", model.is_model("(typ bird) implies flies"))
print("penguin ⇒ flies : ", model.is_model("(typ penguin) implies flies"))
print("orca+ zebra+ panda+ (bird ⇒ flies) : ",
      model.is_model("orca+ (zebra+ (panda+ ((typ bird) implies flies)))"))    # should be True
print("orca+ zebra+ panda+ (penguin ⇒ flies) : ",
      model.is_model("orca+ (zebra+ (panda+ ((typ penguin) implies flies)))")) # should be False
```
This file shows how you can hand-code a small feed-forward network model and evaluate its expectations (about whether penguins fly) before and after learning. Notice that we need to give `prop_map` as input to the net. The keys of `prop_map` are atomic facts that we know ahead of time produce a specific activation, e.g. `'penguin': {'a', 'b'}` indicates that a 'penguin' image will always activate `'a'` and `'b'`. Usually these atomic facts are inputs and outputs, although occasionally we have prior knowledge of hidden states as well.
The key functions for you to use are:

- `model.is_model(expr)` (checks whether `expr` holds in the net)
- `model.interpret(expr)` (gives the set of neurons denoting `expr`)
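For example, continuing with the penguin model built above (a sketch; the values noted in the comments are what the example leads me to expect, not guaranteed output):

```python
# 'model' is the Model constructed in examples/penguin.py above.
print(model.is_model("penguin implies bird"))  # expected: True, as in the example
print(model.interpret("penguin"))              # the set of neurons denoting 'penguin'
```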
Here, `expr` accepts the following syntax:
| Easy to Write | Easy to Read |
| --- | --- |
| `top`, `bot` | ⊤, ⊥ |
| `not P` | ¬ P |
| `P and Q` | P ∧ Q |
| `P or Q` | P ∨ Q |
| `P implies Q` | P → Q |
| `P iff Q` | P ↔ Q |
| `knows P` | K P |
| `typ P` | T P |
| `P up Q` | P+ Q |
`P` and `Q` can be replaced by almost any string of alphabetic characters, with one major exception: avoid using capital `T` and capital `K`. The convention I recommend is to use `A`, `B`, `C`, ... for variables, and lowercase strings when you want to use actual strings. Also, parsing is somewhat experimental, so be generous with parentheses.
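For instance, all of the following strings parse under the grammar above (a sketch reusing `model` from the penguin example):

```python
# A few well-formed expressions in the "easy to write" syntax.
# Note the generous parentheses around subexpressions.
for expr in ["top",
             "not (penguin and flies)",
             "(typ bird) implies flies",
             "orca+ ((typ penguin) implies flies)"]:
    print(expr, ":", model.is_model(expr))
```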
Each operator has two interpretations: (1) as an operator in logic, and (2) as some internal behavior of the neural network:
| Syntax | Properties | Reading |
| --- | --- | --- |
| `K P` | S4 modality | "knows P" |
| `T P` | non-normal ENT4 modality | "typically P" / "the typical P" |
| `P+ Q` | dynamic modality that only meaningfully interacts with `T` | "upgrade preference for P, eval Q" |
| Syntax | Neural Network |
| --- | --- |
| `K P` | the neurons reachable from the set `P` |
| `T P` | forward-propagation of `P` |
| `P+ Q` | do Hebbian update on `P`, then eval `Q` |
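To give a flavor of what a Hebbian update does to the weights, here is the classic textbook rule in NumPy (a generic sketch only; the library's actual update rule may differ):

```python
import numpy as np

def hebbian_update(weights, activations, rate=1.0):
    """Classic Hebbian rule: neurons that fire together, wire together.
    A generic sketch, not neuralsemantics' exact implementation."""
    a = np.asarray(activations, dtype=float)  # 0/1 activation vector
    # Strengthen w[i][j] in proportion to the co-activation of i and j.
    return weights + rate * np.outer(a, a)
```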
For more details on what makes this neuro-symbolic interface work, see [Redacted].
What drives our program is the idea that neural networks can be used as formal semantics of knowledge bases. If you're interested in learning more, I highly recommend starting with:
- Hannes Leitgeb. "Neural network models of conditionals: An introduction." [link to pdf]
- A. S. d’Avila Garcez, K. Broda, and D. M. Gabbay. "Symbolic knowledge extraction from trained neural networks: A sound approach." [pdf]
- Laura Giordano, Valentina Gliozzi, and Daniele Theseider Dupré. "A conditional, a fuzzy and a probabilistic interpretation of self-organising maps." [pdf]
- Is something broken? Can't run the program? Have suggestions for features? Feel free to file an issue!
- Want to contribute to bug fixes/quality of life improvements? Submit a pull request! (I'd be surprised and flattered!)
- Interested in collaborating on the bigger (i.e. open research question) TODOs? Head over to [Redacted]