I have had the following idea since I started my PhD, and I would like to build something for educational and promotional purposes, as described below:
A framework for hosting competitions in which students, outside researchers, and others have to solve one or more problems by specifying generative models.
Example problems that these generative models should solve are:

- system identification / state estimation of an environment, e.g. of a dynamical system;
- control inference, e.g. (partial) control of an agent acting in an environment to perform some task, such as a robot (drone, car, robot dog) or a team of (or individual) soccer players.
Participants (e.g. students or other people interested in playing around with reactive Bayesian inference / active inference) only have to specify a generative model that takes a certain type of input and produces a certain type of output.
After specifying the generative model, participants upload it to the competition framework and can look up their performance on a (live) leaderboard.
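To make the input/output contract a bit more concrete, here is a minimal sketch of what a submission interface could look like. This is purely an assumption about how the framework might be designed; none of these class or method names exist anywhere yet:

```python
from abc import ABC, abstractmethod
from typing import Any

# Hypothetical submission interface, purely illustrative: the framework would feed
# observations to the model, collect actions or state estimates, and score the run
# (e.g. by accumulated free energy) for the leaderboard.
class GenerativeModelSubmission(ABC):
    @abstractmethod
    def observe(self, observation: Any) -> None:
        """Update beliefs given the latest observation from the environment."""

    @abstractmethod
    def act(self) -> Any:
        """Return the next action (control problems) or the current state estimate."""

    @abstractmethod
    def free_energy(self) -> float:
        """Report the current variational free energy, used for scoring."""
```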
This idea is similar to CTF (capture the flag) competitions or Kaggle competitions, in which participants have to solve a problem to compete for points and/or money.
In CTFs there are two modes:

- Team mode: participants have to 1) build teams, 2) join a local network with multiple identical servers, each with security flaws, where each team is assigned to one server, 3) protect their own server for defense points while 4) infiltrating the servers of the other teams for attack points. This mode is typically "live", in the sense that the competition only runs for a short amount of time, so there is a lot of time pressure.
- "Individual problem mode": participants typically compete solo, or in duos at most. The competition hosts a variety of problems of varying difficulty, which participants can solve for points depending on the difficulty. These competitions can run over multiple days (typically 3), so there is less time pressure.
In Kaggle, you typically have to solve a problem by specifying a model, which is then ranked by some metric.
All of these competitions have a (live) leaderboard that shows the best-performing teams/participants, ranked by different metrics, which motivates other participants to beat them by coming up with better solutions.
The natural metric for AIF/generative model competitions would then be the total free energy (or individual terms of it, such as accuracy and complexity).
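For reference, a standard way to write this metric: for observations $y$, latent states $x$, a generative model $p(y, x)$ and a variational posterior $q(x)$, the variational free energy decomposes into a complexity term minus an accuracy term:

$$
F[q] = \underbrace{D_{\mathrm{KL}}\big[q(x)\,\|\,p(x)\big]}_{\text{complexity}} \;-\; \underbrace{\mathbb{E}_{q(x)}\big[\ln p(y \mid x)\big]}_{\text{accuracy}}
$$

Ranking submissions by total $F$ (or by the individual terms) would reward models that explain the data well without being overly complex.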
There are at least three advantages to hosting these kinds of competitions:

- Educational improvement by gamification: students compete against each other to become the best in the course/competition, which requires learning about generative models, Bayesian inference, system dynamics, etc. This has worked well in the cyber security world, and I assume it will work well in the active inference / intelligent agents world too.
- Possibilities to collaborate with other universities, and free advertisement by word of mouth amongst the students / active inference researchers that participate (who will spread the gospel about AIF competitions if it was a good experience). In CTFs or Kaggle, companies associated with the field typically give out prizes to the top N participants, with the hope of recruiting those people in the future.
- Quick development of generative models and documentation of how they work, and discovery of interesting solutions/generative models. For most CTFs, people started writing up their solutions (writeups), especially when there was a unique / ingenious solution that almost nobody thought of. I can see the same thing happening with generative models, e.g. ones that can minimize free energy for a specific problem further than other models.
If you want to work on this idea, or think it is great or could be improved, let me know.