Neither, actually.  The connection strengths are dynamic and change
based on which Goals are active, the number of competency modules
affected by a given environmental state, and so on.  I see the
players coding up their own competency modules and writing new sensor
code if necessary rather than working with pre-built networks.
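A minimal sketch of what a player-written module might look like in
Scheme, assuming SRFI-1's `filter'; `make-module', `drive-toward',
and `nearest-cover' are illustrative names I've made up, not a fixed
API:

    ;; A competency module as a closure: it carries a base strength
    ;; and recomputes its effective activation from whichever Goals
    ;; are active right now.
    (define (make-module name base-strength relevant-goals behaviour)
      (lambda (active-goals world-state)
        ;; Dynamic connection strength: scale by how many of this
        ;; module's Goals are currently active.
        (let* ((hits (length (filter (lambda (g) (memq g active-goals))
                                     relevant-goals)))
               (activation (* base-strength (+ 1 hits))))
          (list name activation behaviour))))

    ;; A player-written module, with its own (hypothetical) sensors.
    (define seek-cover
      (make-module 'seek-cover 0.4 '(survive ambush)
                   (lambda (world) (drive-toward (nearest-cover world)))))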
> Do the players collaborate on one robot tank or compete against each
> other, one player per tank?
I'm sure it could go either way.  There are plenty of sub-tasks to be
figured out, even sub-sub-tasks, like `do we navigate to specific
waypoints, do we drive the tank based on the current local terrain,
or both?'
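Both answers could even coexist as competing competency modules, with
the network left to arbitrate; reusing `make-module' from the sketch
above (the steering helpers are hypothetical):

    ;; Waypoint navigation and local-terrain driving as two modules
    ;; feeding the same actuators; whichever wins activation steers.
    (define navigate-waypoints
      (make-module 'navigate-waypoints 0.6 '(go-to-point)
                   (lambda (world) (steer-toward (next-waypoint world)))))

    (define follow-terrain
      (make-module 'follow-terrain 0.5 '(go-to-point avoid-obstacles)
                   (lambda (world) (steer-along (local-gradient world)))))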
> Have you seen a game called crobots?
Ayup, and played it many times back in my childhood (along with
P-ROBOTS and Omega).  This would be a similar beast, though written
in Scheme in a somewhat more object-oriented mode.
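By `object-oriented mode' I'm picturing the usual Scheme idiom of a
closure over state that dispatches on messages; purely illustrative:

    ;; A tank as a closure over its own state.
    (define (make-tank name)
      (let ((heading 0) (armour 100))
        (lambda (msg . args)
          (case msg
            ((name)    name)
            ((heading) heading)
            ((armour)  armour)
            ((turn!)   (set! heading (modulo (+ heading (car args)) 360)))
            ((hit!)    (set! armour (- armour (car args))))
            (else      (error "unknown message" msg))))))

    (define t (make-tank 'rover))
    (t 'turn! 90)
    (t 'heading)   ; => 90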
> I'm not sure why you call these memes.  It sounds more like you are
> referring to agents as described in Minsky's Society of Mind, or
> Rodney Brooks's papers on subsumption architecture.
It /is/ a series of interconnected agents; that doesn't, however,
keep the meme abstraction from being applicable, especially if you
keep the actual mechanism of the system black-boxed.  For instance,
let's say that we have a robotank with the Goals `Go to Point A' and
`Go to Point B' both in the network.  Depending on what the
activation threshold is set to, it may sit there a bit and then grind
its way toward either Point A or Point B, but once on the way it's
unlikely to change course unless an environmental state changes the
situation.
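That threshold-then-commit behaviour, as a sketch (the numbers and
`activation-of' are made up; activation is assumed to build up tick
by tick from the sensors):

    (define threshold 1.0)
    (define persistence-bonus 0.3)

    ;; Pick the element of lst maximising f.
    (define (argmax f lst)
      (let loop ((best (car lst)) (rest (cdr lst)))
        (if (null? rest)
            best
            (loop (if (> (f (car rest)) (f best)) (car rest) best)
                  (cdr rest)))))

    (define (select-goal goals current)
      ;; The incumbent goal gets a persistence bonus, so once the
      ;; tank is grinding toward Point A it won't flip to Point B
      ;; unless the environment shifts activations enough to win.
      (define (score g)
        (+ (activation-of g)
           (if (eq? g current) persistence-bonus 0)))
      (let ((best (argmax score goals)))
        ;; Below threshold: keep sitting there (current may be #f).
        (if (>= (score best) threshold) best current)))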
We /can/ look at this memetically, saying that the robotank knows
certain things about its environment and has certain memes that
influence its behaviour.  Ah, look, there it's decided to go to Point
A.  Depending on what its sensors pick up, it'll generate different
memes, which might change which meme about the destination gets more
mental resources.  See?  Because it's lost line of sight to Point A,
it's headed to the other Point.
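The same observable behaviour restated in the memetic vocabulary,
with no peek inside the box (the alist-of-strengths representation is
my own invention; `argmax' as in the earlier sketch):

    ;; Memes as (name . strength) pairs competing for mental
    ;; resources; sensor events reinforce or weaken them, and the
    ;; strongest destination meme wins.
    (define (reinforce memes name delta)
      (map (lambda (m)
             (if (eq? (car m) name)
                 (cons name (+ (cdr m) delta))
                 m))
           memes))

    (define (dominant-meme memes)
      (argmax cdr memes))

    ;; Losing line of sight to Point A weakens the goto-a meme:
    (dominant-meme
     (reinforce '((goto-a . 0.9) (goto-b . 0.6)) 'goto-a -0.5))
    ; => (goto-b . 0.6)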
The point, at least of this little bit, is that barring knowledge of
the actual mechanism, a memetic model works just as well and, in
fact, models the behaviour we see.  That's a useful defence if
someone attacks memetics for making claims about how thinking is
actually done: it needn't make any.  The mechanisms could be /vastly/
different, but as long as the model fits the data and is predictive,
it works just as well.