
Releases: opennars/OpenNARS-for-Applications

OpenNARS for Applications v0.9.3

22 Apr 17:09


New features

  • Spaces / Explicit measurements / Numeric terms

    • for comparative reasoning:
    *space 10 B
    <(x * B_0.1) --> brightness>. :|:
    <(y * B_0.3) --> brightness>. :|:
    |- <(y * x) --> (+ brightness)>. :|:
    
    • for innate similarity of sensations:
    *space 10 B
    *motorbabbling=false
    <(<(a * B_0.4) --> brightness> &/ ^right) =/> G>.
    <(<(a * B_0.1) --> brightness> &/ ^left) =/> G>.
    <(a * B_0.3) --> brightness>. :|:
    G! :|:
    //expected: ^right executed with args
    
  • Memory query showing all answer candidates above truth expectation threshold rather than only the best:

    <a --> c>.
    <b --> c>.
    *query 0.6 <?1 --> c>?
    Answer: <b --> c>. creationTime=2 Stamp=[2] Truth: frequency=1.000000, confidence=0.900000
    Answer: <a --> c>. creationTime=1 Stamp=[1] Truth: frequency=1.000000, confidence=0.900000
    
  • Reliable declarative implication handling:

    • Reliable handling of induction with multiple involved variables, example with symmetry between relations:
    <(a * b) --> r>.
    <(b * a) --> s>.
    |- <<($1 * $2) --> r> ==> <($2 * $1) --> s>>.
    |- <<($1 * $2) --> s> ==> <($2 * $1) --> r>>.
    
    • Implication tables are now also used for declarative implications, for very effective storage and use
  • Sensorimotor performance is no longer compromised by the declarative inference abilities. This is achieved via new restrictions that will only be lifted once similarly effective solutions are found:

    • Selective compound term formation: extensional/intensional intersections and image transformations are formed taking concept familiarity into account.
    • Declarative inference on events happens only with eternal beliefs.

Demos & Ongoing Projects that rely on v0.9.3

Below is an updated overview of everything already running (✅ Complete) or still in progress (🚧 In‑Progress) on top of the v0.9.3 release.


🧠 AniNAL IQ Test Suite  ✅

Advanced sensorimotor capabilities demonstrated through a series of publicly accessible web demos. In earlier releases, AniNAL required a custom‑tuned configuration; with v0.9.3 it now runs on the default "base" config shared with the rest of ONA, showcasing the full capability range.

Demo                         Description
Complex continuous – verbal  Tasks with verbal cues and graded shades
Complex continuous – shades  Tasks with graded color perception
Complex – shades             Tasks with discrete colors, shapes and shades
Complex                      Tasks with discrete colors and shapes (more colors and shapes)
Simple                       Tasks with colors and shapes

🔗 "MeTTa‑NARS": NARS ↔️ Hyperon Integration  ✅


A fully‑featured MeTTa interface for SingularityNET’s OpenCog Hyperon stack. It enables seamless embedding of ONA into MeTTa programs and effortless interoperability with other Hyperon modules. Narsese expressions become first‑class citizens that can be reasoned over efficiently.

Format to use ONA as "MeTTa-NARS":

(Command (Statement (Frequency Confidence)))

where Command is one of AddBeliefEvent, AddBeliefEternal, or AddGoalEvent.
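
For illustration, a minimal hypothetical input in this format (the terms cat, animal, garfield and fed are made up; the calling convention matches the NACE example below):

!(AddBeliefEternal ((cat --> animal) (1.0 0.9)))
!(AddBeliefEvent ((garfield --> cat) (1.0 0.9)))
!(AddGoalEvent ((garfield --> fed) (1.0 0.9)))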

Source files:


💭 Declarative Reasoning for NACE   ✅

Early prototype bringing high‑level declarative reasoning to NACE.

Here is an example of NACE preparing coffee and bringing it to the table (full file: https://github.com/patham9/NACE/blob/main/spaces.metta ).
Declarative knowledge is given, and NACE figures out how to achieve it. The relevant knowledge and goal, given in MeTTa:

!(AddBeliefEternal ((((u x G) --> up) ==> (coffee --> (IntSet made))) (1.0 0.9)))
!(AddBeliefEternal ((((u x T) --> up) ==> (table --> (IntSet prepared))) (1.0 0.9)))
!(SequenceGoal (coffee --> (IntSet made)) (table --> (IntSet prepared)))

NACE making and delivering coffee


🤖 NARTECH ROS 2 Bridge  🚧

ROS 2‑enabled mobile robots can use NARS as a reasoning and learning engine via MeTTa.


Last updated · 23 April 2025

OpenNARS for Applications v0.9.2

15 Jun 22:02


This release includes all additions up to the end of 2023, which also led to AniNAL, a sensorimotor-reasoning-optimized config file for ONA with a corresponding shape-picking demo ( https://github.com/patham9/AniNAL )

Features since v0.9.1:

  • Occurrence time index (a FIFO structure of recently selected concept references) for efficient temporal compounding.
  • Support for parallel conjunction compounding (as needed by https://github.com/patham9/AniNAL )
  • Support for experimental comparative reasoning
  • Support for experimental numeric terms and numeric term similarity
    TODO add the other changes

Win64 Binary

04 Nov 05:03
Pre-release
Win64_November4_2022

Update: Stack and Hashtable test: do not use large stack sizes

OpenNARS for Applications v0.9.1

10 Jun 17:53


This is a new release with new features:

  • Ability to do temporal compounding (sequences, temporal implications) among derived events using attention-based selection, as proposed by Tony Lofthouse.
  • Layered goal priority queue for better-balanced resource allocation among goals of different derivation depths, taken from Robert Wuensche's 20NAR1.
  • Ability to learn and execute compound operations as suggested by Pei Wang, whereby output arguments can be the input arguments of other operations.
  • Relational frame theory Python experiments contributed by Robert Johansson, including Word Sorting Task, Identity Matching and an experiment for compound conditioning and usage of equivalence.
  • New command *opconfig, as suggested by Adrian Borucki, to print the currently registered operations and their babbling configuration.
  • 50ms input delay in NAR.py eliminated by using the Subprocess standard Python module instead of the Pexpect library.

The new experimental language learning ability (mutual and combinatorial entailment according to Relational Frame Theory) was left out to allow for a mature release; it will return once stable, likely in v0.9.2.

OpenNARS for Applications v0.9.0

10 Jan 22:20


This is a major new release with tons of new features:

  • Very complete NAL 1-6 support for the first time (NAL 5/6 support was still poor in v0.8.x)
  • More effective handling of acquired NAL-4 relations. This includes the ability to effectively learn relation properties (symmetry, transitivity, asymmetry) from examples, and to use these learned relation properties to derive new relations (see the sketch after this list).
  • Further simplified and improved handling of temporal implication
  • The system can derive new temporal implications / contingencies using semantic inference, by utilizing beliefs which are similarity, implication and equivalence statements
  • The system can use NAL-6 induction to derive knowledge about contingencies, which for instance allows it to derive a behaviour for a new object (it has a door knob, maybe it can be pushed to open the door)
  • Inhibition / subsumption of general hypotheses for a specific case if they fail repeatedly in that case. This forces the system to try alternative strategies to achieve a desired outcome.
  • Example Python notebook to use ONA with your favourite scientific computing libraries: https://colab.research.google.com/drive/1YSfquCubY6_YnMAJLu8EJMJNZrkp5CIA?usp=sharing
  • New Narsese examples which show various new reasoning capabilities.
  • Better derivation filter to avoid duplicate derivations by looking into the corresponding concept if it already exists. This improves Q&A performance further.
  • Higher term complexity limits and unification depth as required for higher-order semantic reasoning.
  • Support for arguments in motorbabbling via *setopargs, and *decisionthreshold for changing the decision threshold at runtime.
  • Ability to specify the maximum NAL-level in semantic inference in the configuration and term filters.
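
A minimal hypothetical sketch of learning a relation property from examples (symmetry here; the terms a, b, c, d, r are made up, and the notation follows the induction example of v0.9.3 above):

<(a * b) --> r>.
<(b * a) --> r>.
|- <<($1 * $2) --> r> ==> <($2 * $1) --> r>>.
<(c * d) --> r>.
|- <(d * c) --> r>.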

The ONA-running robot already can:

  • Autonomously search for objects and remember their locations when they are seen visually.
  • Go back to the remembered location of objects of interest (a form of object permanence).
  • Pick up objects autonomously with its manipulator, taking both visual and servo feedback into account.
  • Learn and improve goal-directed behaviours (such as object avoidance, if desired) from observed outcomes.
  • Answer questions from the user in real time at runtime, via the ONA shell embedded in transbot.py (also supports the NLP interface via pipe).

OpenNARS for Applications v0.8.8

16 Jul 04:37


This version consists of updates and additions for v0.8.7:

  • Improved subgoal handling: their occurrence time is treated as "as soon as possible", which avoids the projection discount and allows for deeper planning.

  • Improved and visualized procedure learning examples which previously didn't have ASCII visualization.

  • The SkipEventsFIFO feature finally made it into master (events can be skipped when building sequences).

  • Scripts to generate output videos of runs with metrics attached were added, including a script which automatically runs and creates a video of all examples combined.

UPDATE: since only the utility scripts have been updated in the meantime, this release tag was moved to the October commit 094926e.

OpenNARS for Applications v0.8.7

11 Apr 10:40


This version consists of updates and additions for v0.8.6:

  • Improved concept usefulness, significantly increasing the system's ability to learn and remember declarative knowledge.

  • Missing NAL-3 decomposition rules added.

  • More restricted nesting of terms for compound term formation.

  • Variable introduction slightly improved and left to sensorimotor inference.

  • Support for questions with future/past tense (see the sketch after this list).

  • Multiple new examples showing some of the system's capabilities which weren't shown before.
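
A hypothetical sketch of a tensed question, assuming the OpenNARS-style tense markers :\: (past) and :/: (future) are accepted for questions here (the event name a is made up):

a. :|:
//did a happen?
a? :\: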

Alternative branches which are kept updated with master:

  • SkipEventsFIFO: Allows the FIFO to skip events when building sequences (higher noise tolerance).
  • MSC2: Sensorimotor reasoning only, see https://github.com/opennars/OpenNARS-for-Applications/tree/MSC2
  • Curiosity: Motorbabbling dependent on the confidence of applicable hypotheses, making the system prefer sampling the operations whose consequences are less known in the current context.
  • NegGoals: Support for negative goals to inhibit decisions (as necessary in https://gist.github.com/patham9/47cfd750488a48c57259d049073c5280 ).
  • QLearner: For comparison of Q-learning with ONA.

More to come, potentially.

OpenNARS for Applications v0.8.6

31 Dec 01:17

This version consists of updates and additions for v0.8.5:

  • NAL-8 sequence decomposition support.

  • Some NAL-3 set decomposition rules added and set handling improved.

  • An issue regarding variable elimination for goal derivation is resolved.

  • Procedure knowledge can be used even when it has a variable instead of {SELF} if it eliminates to {SELF}.

  • SpongeBot robot example added.

  • Minor parser fix: 42. :|: now parses as an event instead of "perform 42 inference steps".

  • comparison.py and plot.py for automated procedure learning comparison between branches.

  • narsese_to_english.py supports translation of Narsese into a controlled English form that is easier for beginners.

  • Operator renaming functionality at runtime: *setopname index ^newName (see the sketch after this list).

  • english_to_narsese.py supports conditionals (sentences with an if in them)
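
For example, a hypothetical renaming of the first registered operator (assuming 1-based operator indices):

*setopname 1 ^pick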

OpenNARS for Applications v0.8.5

25 Oct 03:07

This version consists of updates and additions for v0.8.4:

OpenNARS for Applications v0.8.4

17 Jun 02:59
3b599f8

This version consists of fixes and additions for v0.8.3:

  • libbuild.sh to build a shared and static lib, with a C and C++ hello world example included
  • The system compiles and runs with tcc (a simple C99 compiler) as well (not just Clang and GCC)
  • UDPNAR terminates correctly on Mac too (via UDPNAR_Stop)
  • analysis.py to show, plot, and calculate statistics for attention dynamics
  • A new simulated robot test example has been added (an attempt to unify Microworld (direct sensorimotor interaction) and Testchamber (more complex tasks))
  • Structural rule typo fix for intensional set reduction
  • Valgrind is now also completely happy about the UDP part
  • Ability to set the number of buckets of the hashtables
  • A clean Python 3 interface with pexpect
  • A cartpole experiment has been added
  • All scripts run with Python 3 now.
  • Narsese and English can now be mixed in .english files, to provide background knowledge that is too complex to express in English.
  • An Asthma-related declarative reasoning example has been added.