Reactions to my post Building the Beast
There have been several interesting reactions to my post from yesterday, showing that there is interest in where Gaia is heading. I thank all of the participants.
I didn't anticipate that, but I should have.
As my answers were in general quite long, I think it is useful to group them together in this post (it will be easier to read).
The only thing I modified: in the comments the glyph for She cannot be used, so I used "null-She" instead. In this post it has been replaced by the glyph for improved readability.
The topics are grouped into a few categories so you can more easily find the ones you're interested in.
If I understand correctly Gaia learns from evaluating her own output. The Experience Center looks very promising. If I understand correctly she will be able to forget.
My answer:
You're right. Learning means having feedback in some way or another. As there are no humans in the loop for the moment, she uses the evaluation as a feedback loop. She will forget memory patterns if they are not used for a long period; they sort of fade out over time. I didn't plan to apply the same technique to what is stored in the Experience Center.
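A minimal sketch of such fade-out (the class name, decay rule and numbers are my assumptions, not Gaia's actual implementation): each pattern carries a strength that decays over time and is refreshed on use, and patterns below a threshold are forgotten.

```python
import time

class MemoryPattern:
    """A stored byte pattern whose strength fades unless reinforced."""
    def __init__(self, pattern: bytes, half_life: float = 3600.0):
        self.pattern = pattern
        self.half_life = half_life          # seconds until strength halves
        self.strength = 1.0
        self.last_used = time.monotonic()

    def current_strength(self, now: float) -> float:
        elapsed = now - self.last_used
        return self.strength * 0.5 ** (elapsed / self.half_life)

    def touch(self, now: float) -> None:
        """Using the pattern resets the clock and reinforces it."""
        self.strength = min(2.0, self.current_strength(now) + 1.0)
        self.last_used = now

def forget(patterns, now, threshold=0.1):
    """Drop patterns whose decayed strength fell below the threshold."""
    return [p for p in patterns if p.current_strength(now) >= threshold]
```

The exponential form is just one plausible choice; any monotone decay refreshed by use gives the same qualitative fade-out.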
Interesting approach - my one question is whether frequency makes any 'sense' as a base capability? What I mean is that 'she' considers frequency a high value, and computers/machines are good at that - but humans generally are not - for frequent things we push that to 'muscle memory', but few of us would know how often things occur in any environment.
I wonder if it makes more sense to use frequency in places we would care about - like if nouns are used many times (places, people etc.). And, quite oddly, if they occur very infrequently (we often recollect rare and odd singular experiences). Dunno. Interesting work though.
I fully agree that the frequency aspect (or rather the tools that might use it) is something I expect her not to use at a more mature level.
At this stage, however, it seems an acceptable solution for making a first sift through the byte garbage she is receiving.
We'll see what will be used when there are more tools available (even at this level).
As humans we handle a lot of the high-frequency stimulations at a sub- or unconscious level. This allows us to drive a car without being exhausted after a few minutes. Infrequent stimuli will often induce special attention.
When handling video I expect Gaia to rapidly focus on patterns that represent moving objects against a more or less static background: i.e. important low-frequency patterns against a high-frequency pattern background.
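As an illustration of that expectation (classical frame differencing, shown for comparison only; Gaia is meant to discover her own tools): the static background cancels out between two frames, leaving only the moving object.

```python
def frame_difference(prev, curr, threshold=30):
    """Mark pixels whose value changed notably between two grayscale frames.

    Frames are lists of rows of 0-255 ints. Unchanging background pixels
    cancel out, so the surviving 1s trace the moving object.
    """
    return [
        [1 if abs(a - b) > threshold else 0 for a, b in zip(row_p, row_c)]
        for row_p, row_c in zip(prev, curr)
    ]
```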
How do you plan Gaia's communication with its surroundings in order to connect meanings to plain words? In that sense, what internal structures do you plan to redefine her importance classification by relying on something other than frequency?
My answer:
While I was working on the semantic network technologies one of the important findings was related to meaning.
We can only attribute a meaning to something if that something is associated with some context. If it isn't, that "something" remains meaningless.
Contexts can be provided by various means: your memory, a reality situation, a virtual reality situation, ...
When memory is the context provider, the initially provided context will probably not be unique: a whole range of potentially correct contexts will pop up. Further information (later in time, or in parallel) will hopefully allow you to narrow down to one context. If this is not the case there is an ambiguity in meaning (which is not necessarily a problem).
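As a toy illustration of that narrowing (names and example are mine): represent each candidate meaning by the contexts it fits, and intersect with the contexts suggested by each new piece of information.

```python
def narrow_meaning(candidates: set, evidence: set) -> set:
    """Keep only the candidate contexts compatible with new information.

    More than one survivor means the meaning is still ambiguous
    (not necessarily a problem); exactly one resolves it.
    """
    return candidates & evidence

# "bank" could mean a river bank or a financial institution ...
bank_contexts = {"river-side", "finance"}
# ... until later information (e.g. about money) arrives:
resolved = narrow_meaning(bank_contexts, {"finance", "trade"})
```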
Gaia's infant memory has no associations. Her next level memory has. In the infant memory, besides the identifying byte patterns, frequency and time are about the only things that can be associated with these elements.
For the moment I don't see the need for absolute time, so I do not use it. Talking about relative time implies association (later, sooner, at the same time), which is one of the kinds of association she will make in her next level of memory.
A written word, a spoken word and an image (three different memory elements) will be associated with links that carry a meaning like <is an alternate representation of>. The parallel channel handling is one of the important ways (but not the only one) of creating these associations.
Compare this to a baby's early perceptions of "mama": a sound, a face, and a range of emotions probably representing well being or happiness.
If you're familiar with the holarchy paradigm you can see these three associated memory elements as the smallest possible (useful) holon.
These three kinds of elements, even associated, are still meaningless because they are not related to any context.
Her next level of memory will extend this first initial kind of association with a whole bunch of other kinds of associations providing context to these elements.
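To make the association idea concrete, here is a toy sketch (all names are hypothetical, mine, not Gaia's internals) of memory elements connected by typed links such as <is an alternate representation of>:

```python
from collections import defaultdict

class AssociativeMemory:
    """Memory elements connected by typed links."""
    def __init__(self):
        self.links = defaultdict(list)   # element -> [(relation, element)]

    def associate(self, a, relation, b):
        # Store the link in both directions so either element recalls the other.
        self.links[a].append((relation, b))
        self.links[b].append((relation, a))

    def related(self, element, relation):
        return [other for rel, other in self.links[element] if rel == relation]

# The "mama" holon: a written word, a spoken word and an image.
mem = AssociativeMemory()
ALT = "is an alternate representation of"
mem.associate("written:mama", ALT, "spoken:mama")
mem.associate("written:mama", ALT, "image:mama-face")
```

Adding further relation types (later-than, part-of, occurs-in, ...) to the same structure is what providing context would look like in this sketch.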
I'm inclined to associate the words importance and classification in your question not so much with building the memory but more with using the memory.
I owe you the answer later on (please remind me if I forget).
+Jochen Oekonomopulos asked:
Do you have any plans for a 'brain part' that handles some kind of rewards and feelings? Might be useful for Gaia to develop a kind of will to learn and answer.
My answer:
In the terms I use to describe how Gaia is built, the (almost) equivalents are:
rewards -> positive evaluation
feelings -> an internal state (change)
will to learn -> goal(s)
These three are all closely related to the Experience Center and do occur there already.
Language
+Gustav Olav Lundby asked:
I see! You are trying to make an AI learn from scratch. Some philosophers think that is how the human child's brain works: tabula rasa, the empty slate. No innate knowledge. Innate knowledge is reserved for the animals.
The question is whether this is the most effective approach. Chomsky presupposes that a meta grammar is present initially (an idea popularized by Pinker in his book The Language Instinct), on which the child builds the specific grammar it detects in its language environment. Maybe such innate tools should be implemented to jumpstart the AI learning process?
My answer:
I have no reason to believe that we are so different from animals that they have innate knowledge and we don't. Having said that, I have no idea what innate knowledge humans might have.
I'm far from convinced that Chomsky is on the right track with his idea of a meta grammar.
It simply doesn't feel right: it is too much an artificial model, and as with all models it doesn't express what's really going on.
One of the expectations I have for Gaia is that she will be able, after a while (that may take some time though), to really understand a language. And more than one. But not from texts alone.
One of the fundamental ideas behind the concept of Gaia comes from the way languages can be learned.
In my opinion there are only two important ones: either by translation or anchored in reality.
The first is how we learned languages at school (although not entirely). Google Translate is also based on those foundations (but they do use, if I'm right, some anchoring in reality with the help of image tags).
The second is based on association: combining the written and spoken word with a visualization of the context and the objects in that context.
In another post on my blog I talk about an application for learning languages with the help of an AI mentor. This is one of the things Gaia can be used for.
Vision
+Gustav Olav Lundby asked:
When you connect video cameras (two for 3D?), will you then do likewise, and let it stare at the world (or garbage TV?) until it makes sense of it? Or will you let it start with innate knowledge of geometry, colors, surfaces, methods for inferring objects and such? The last would be equivalent to having a meta grammar for the language part.
My answer:
If I can maintain the general aspect of the Sensor Output Consumer, i.e. not specific for any kind of byte stream, the plan is as follows.
In the beginning she will not even know how to detect a pixel (3- or 4-byte patterns). After she has done that she will have to discover the width of an image (in terms of number of pixels).
Both steps will require a similarity function. I'm not sure yet whether these two properties will be discovered one after another or at the same time.
For the detection of what a pixel is, the axiom is that similar adjacent pixels are more frequent than adjacent pixels that change completely.
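A minimal sketch of how that axiom could be exploited (my own illustration, not Gaia's implementation): try several candidate pixel sizes and keep the stride at which neighbouring byte values are most similar, since bytes one whole pixel apart belong to the same colour channel of adjacent, usually similar, pixels.

```python
def discover_pixel_size(data: bytes, max_size: int = 6) -> int:
    """Guess how many bytes make one pixel: the stride at which
    values are most similar to the value that many bytes later."""
    def avg_diff(stride):
        diffs = [abs(data[i] - data[i + stride]) for i in range(len(data) - stride)]
        return sum(diffs) / len(diffs)
    return min(range(1, max_size + 1), key=avg_diff)
```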
For the width detection she will have to detect that certain ranges of similar pixels recur at a certain distance (the width) over a certain period (number of lines), and that these recurring patterns will overlap (right and left side of an image) within that distance.
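A hedged sketch of that width-detection step (function name and method are my illustration): the width is the offset at which values best match the values directly below them, because vertically adjacent pixels also tend to be similar.

```python
def discover_width(data: bytes, max_width: int = 64) -> int:
    """Guess the image width (in bytes, for 1-byte grayscale pixels):
    the offset at which each value best matches the value one row below."""
    def avg_diff(width):
        diffs = [abs(data[i] - data[i + width]) for i in range(len(data) - width)]
        return sum(diffs) / len(diffs)
    return min(range(2, max_width + 1), key=avg_diff)
```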
In the next stage she will have to shift her center of interest to the break points in runs of successive similar pixels. Here we enter a category of algorithms that will probably produce the same output as the classical contour detection algorithms.
I'm not sure yet if the algorithms will be the same though.
This will need some hands-on experiments.
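A toy version of such a break-point detector (my hypothetical sketch; in Gaia's case the threshold would come from a discovered similarity function rather than a hard-coded constant):

```python
def find_break_points(row, min_jump=40):
    """Indices where a run of similar pixels ends: candidate contour points.

    `row` is one line of grayscale values; `min_jump` stands in for a
    similarity function deciding when two neighbours stop being 'similar'.
    """
    return [i for i in range(1, len(row)) if abs(row[i] - row[i - 1]) >= min_jump]
```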
The idea is that from elementary pattern-detection building blocks, including disruption detection (what I call tools), she will come up with her own suite of tools to apply: sort of making her own algorithm.
In terms of the Learning Center, she actually uses one tool with different settings.
The image handling will require, first, the discovery of tool suites (some of which will have parameters, e.g. the similarity functions) and, second, applying them in cycles.
This is Gaia's next challenge, and mine for building it.
Newborns can only see sharply a few inches from their eyes; the background is blurred. The reason for this might very well be the need for a progressive evolution/training of our vision system, limiting initial pattern detection to a small area with relatively big objects.
If the process described above doesn't work well with fully sharp images, I have the fallback option to scale down to the way young babies start using their vision system: not by providing smaller images but by blurring the outside. The presence of the blurred area is necessary for the contour detection.
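A sketch of that fallback (the box blur, the square window and all names are my simplifications, not the actual plan): keep a central window sharp and blur everything outside it.

```python
def box_avg(img, x, y):
    """3x3 box average around (x, y), clipped at the image borders."""
    h, w = len(img), len(img[0])
    vals = [img[j][i]
            for j in range(max(0, y - 1), min(h, y + 2))
            for i in range(max(0, x - 1), min(w, x + 2))]
    return sum(vals) // len(vals)

def peripheral_blur(img, sharp_radius):
    """Blur everything outside a central window, mimicking a newborn's
    narrow zone of sharp vision."""
    h, w = len(img), len(img[0])
    cy, cx = h // 2, w // 2
    return [[img[y][x] if max(abs(x - cx), abs(y - cy)) <= sharp_radius
             else box_avg(img, x, y)
             for x in range(w)]
            for y in range(h)]
```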
I don't know exactly how much a baby's ability to interact with the objects within its vision range influences the evolution of the vision system itself.
It might be very important.
What I do know is that Gaia will have no actuators to start with. I know (some of) the experiments with robots where the interaction with the environment is learned through experience, but I'm not aware of any research where this is done in combination with the development of a vision system.
Traditional Machine learning
+Romain Beaumont asked:
It's not very convincing. Do you know about http://en.wikipedia.org/wiki/Tf%E2%80%93idf ?
My answer:
The term frequency–inverse document frequency requires the notion of a document and the notion of a term.
So please consider the following.
For Gaia (at this stage) the concept of a document doesn't exist. In fact she is receiving one single stream of bytes that originates from several documents; there are no special "patterns" marking the beginning or end of a document.
Second, she only deals with patterns of bytes (again, at this stage). Such a pattern might correspond to what we would call a term - sometimes it will, but often it will not, as you can see in the memory dump.
Furthermore, "terms" only exist through the notion of identified term separators. And she is not there yet.
But in the memory dump you can also see that she is not far from identifying at least three of them.
In fact I did test the algorithm by providing the most frequent 1-byte pattern, and the results were much better. But that was in a unit test of the algorithm; I want her to discover that on her own.
Finally, and perhaps not least, consider the goal of Gaia versus the purpose of the tf-idf algorithm. The algorithm targets document classification, ranking and so forth. The actual goal of Gaia is (sort of) trying to make sense of the data stream. These two perspectives are not compatible.
I certainly do not exclude that she will use such an algorithm if it provides constructive results for one of her goals at that time.
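For reference, the standard tf-idf weighting, written out to show exactly where the notions of term and document enter (the toy corpus is mine):

```python
import math

def tf_idf(term, doc, corpus):
    """Classic tf-idf: needs a tokenized document AND a corpus of documents --
    precisely the two notions a raw byte stream does not yet provide."""
    tf = doc.count(term) / len(doc)                  # term frequency in this doc
    df = sum(1 for d in corpus if term in d)         # documents containing the term
    idf = math.log(len(corpus) / df) if df else 0.0  # inverse document frequency
    return tf * idf

# A toy corpus -- exactly the pre-segmented structure Gaia does not have yet.
docs = [["gaia", "learns", "patterns"],
        ["patterns", "of", "bytes"],
        ["gaia", "reads", "bytes"]]
```

Every quantity above presupposes the segmentation into terms and documents; without discovered separators there is nothing to count.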
+Romain Beaumont asked:
I'm not sure how using this kind of method would help that AI understand anything faster than usual natural language processing methods, for example syntactic analysis to begin with, or named entity recognition, ...
I think you would need an incredibly good algorithm for the AI to be able to understand any stream of bytes. You'd need something like the brain. That seems a bit too hard an objective to me.
My answer:
I've worked with NLP tools for the past decade, in what I call my Semantic Network Age, which lasted two decades.
I've been there, got the T-shirt.
I know the things Google is doing, in particular the views of , and the DNN of .
Although continuous progress is made and the current techniques are far better than those of 3 years ago, something is missing.
In my blog series "A leap in AI?" (notice the question mark) I try to pinpoint what might be necessary to improve our current AI not by 10% but by a factor of 10: so-called moonshots.
So I fully agree that we'd need something like the brain.
It will be hard, but that's the case with all moonshots.
That's what it is all about.
+Romain Beaumont asked:
Ok I see.
What I didn't understand is what kind of algorithm you are using. Are you using machine learning?
Good luck with that objective though.
My answer:
Gaia is learning, and even learning to learn. And as she is software, it's machine learning.
If you're referring to "classical" ML techniques, the answer is: not at this moment.
I do not reject traditional ML techniques at all, and if useful I'll provide Gaia access to them. DNNs, for example, are however out of reach with my current hardware resources.
If you'd like to support the development of this new kind of AI you can donate Bitcoins (or fractions of one) at
When Gaia is more mature and able to interact with her environment she might also need to spend money. She is not there yet, but Bitcoin donations for her can already be made at: