
OpenNARS for Applications v0.9.0

Released by @patham9 · 10 Jan 22:20


This is a major new release with tons of new features:

  • Substantially complete NAL 1-6 support for the first time (NAL 5/6 support was still limited in v0.8.x)
  • More effective handling of acquired NAL-4 relations, including the ability to learn relation properties (symmetry, transitivity, asymmetry) from examples and to use these learned properties to derive new relations (see the first sketch after this list).
  • Further simplified and improved handling of temporal implication
  • The system can derive new temporal implications / contingencies via semantic inference, by utilizing beliefs that are similarity, implication, or equivalence statements.
  • The system can use NAL-6 induction to derive knowledge about contingencies, which for instance allows it to derive a behaviour for a new object (it has a door knob, so maybe it can be pushed to open the door); see the second sketch after this list.
  • Inhibition / subsumption of a general hypothesis in a specific case if it repeatedly fails there. This forces the system to try alternative strategies to achieve a desired outcome.
  • Example Python notebook for using ONA with your favourite scientific computing libraries: https://colab.research.google.com/drive/1YSfquCubY6_YnMAJLu8EJMJNZrkp5CIA?usp=sharing
  • New Narsese examples showing various new reasoning capabilities.
  • Better derivation filter that avoids duplicate derivations by checking whether the result already exists in the corresponding concept. This further improves Q&A performance.
  • Higher term complexity limits and unification depth as required for higher-order semantic reasoning.
  • Support for arguments in motor babbling via *setopargs, and *decisionthreshold for changing the decision threshold at runtime (see the third sketch after this list).
  • Ability to specify in the configuration the maximum NAL level used in semantic inference, as well as term filters.
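
To make the relation-property feature concrete, here is a minimal Narsese sketch that can be piped into `./NAR shell`. The terms (`friend`, `alice`, `bob`) are illustrative, not from the release, and the symmetry belief is stated directly for brevity; as described above, the system can also induce such properties from example pairs (the shipped Narsese examples show the exact form).

```
// Symmetry of an acquired relation, stated as a NAL-6 belief:
<<($1 * $2) --> friend> ==> <($2 * $1) --> friend>>.
// A concrete relation, observed in one direction only:
<(alice * bob) --> friend>.
// The reversed direction is now derivable, so the shell can answer:
<(bob * alice) --> friend>?
```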
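
A second sketch illustrates the contingency generalization described above. The terms and the operator ^push are illustrative (in a real run the operator would be registered first), and the generalized temporal implication is stated here for brevity where the system would normally induce it:

```
// A contingency acquired for one particular door:
<(<door1 --> [knob]> &/ ^push) =/> <door1 --> [open]>>.
// Its NAL-6 generalization (normally induced, stated here for brevity):
<(<$1 --> [knob]> &/ ^push) =/> <$1 --> [open]>>.
// A new object with a knob, and the goal to have it open:
<door2 --> [knob]>. :|:
<door2 --> [open]>! :|:
// Expected: the system decides to execute ^push.
```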
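
Finally, a sketch of the new runtime commands as shell input. The value and the argument syntax of *setopargs are assumptions for illustration; the examples shipped with this release are authoritative:

```
// Change the decision threshold at runtime (value illustrative):
*decisionthreshold=0.55
// Provide arguments for motor babbling of an operator
// (assumed syntax, shown for illustration only):
*setopargs 1 argterm
```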

The robot running ONA can already:

  • Autonomously search for objects and remember their locations when they are seen visually.
  • Return to the remembered location of objects of interest (a form of object permanence).
  • Pick up objects autonomously with its manipulator, taking both visual and servo feedback into account.
  • Learn and improve goal-directed behaviours (such as object avoidance, if desired) from observed outcomes.
  • Answer the user's questions in real time via the ONA shell embedded in transbot.py (it also supports the NLP interface via pipe); a minimal Q&A sketch follows below.
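
As a minimal illustration of the Q&A capability (terms illustrative), the following can be typed into the ONA shell, which replies with an answer carrying the derived truth value:

```
// Background knowledge:
<{garfield} --> cat>.
<cat --> animal>.
// A question answered by deduction:
<{garfield} --> animal>?
```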