Ivan Bogachev

Intuition

2025 / 09 / 30

Sequences of data are the basic building blocks of memory. They can play various functional roles. Let's take a look at some.

Facts. These are your most standard pieces of information. If a sequence is a fact, then it exists. It's true.

Associations. They may look different, depending on your technical design, but their role is to connect other sequences. These are like the edges of the graph of memories.

Rules. These are like instructions for reasoning and behavior, built on one-directional associations. They're your program.

Patterns. We save them as symbols and build an internal world. When we combine them with fuzzy transitivity and create complex associations, we get some understanding of reality.
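
To make these roles concrete, here is a minimal sketch in Python. The Memory class and all of its field names are my illustration, not a fixed design:

```python
# A minimal model of the four roles; the Memory class and all
# field names are illustrative, not a fixed design.

class Memory:
    def __init__(self):
        self.facts = set()         # sequences that simply exist / are true
        self.associations = set()  # undirected edges between sequences
        self.rules = {}            # one-directional: trigger -> response
        self.symbols = {}          # symbol -> the shared pattern (its parts)

    def add_fact(self, seq):
        self.facts.add(seq)

    def associate(self, a, b):
        self.associations.add(frozenset((a, b)))

    def add_rule(self, trigger, response):
        self.rules[trigger] = response

    def add_symbol(self, name, parts):
        self.symbols[name] = frozenset(parts)
```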

These roles exist without any language. Yes, it may sound odd outside the IT world, but it's a fact: programs in our computers don't speak, yet they work with all of these things.

What happens when we add a language? We add a bunch of facts: words. We find more patterns, for grammar and the like. We add associations to connect words with our facts and symbols.

When you walk through your graph of memories, you can grab the words connected to your path and make a sentence. It doesn't have to be grammatically perfect, but it lets you explain your line of thought to others. But what if your path goes through multiple sequences with no words associated with them?
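
Here is a toy version of that walk. It assumes words are plain strings and other sequences are tuples, which is just a convention for the sketch:

```python
# Walk a path through the memory graph and grab any words
# associated with the steps. Representation is an assumption:
# words are strings, non-word sequences are tuples.

def verbalize(associations, path):
    words = []
    for node in path:
        for a, b in associations:
            other = b if a == node else a if b == node else None
            if isinstance(other, str):
                words.append(other)
    return " ".join(words)  # not grammatically perfect, but communicable

assoc = [(("furry", "purrs"), "cat"), (("wet", "falls"), "rain")]
print(verbalize(assoc, [("furry", "purrs"), ("wet", "falls")]))  # "cat rain"
```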

A fact can be reproduced as it is. You can draw a picture, make a sound, or do whatever your design allows. If your listener has similar experiences, that can be enough for them to make the right guess. It's not a silver bullet, but it works.

Associations are easy to explain: things come together, and we connect them. Rules are the same. You may have forgotten why you created a rule, but you can still share it. If you use transitivity, just explain everything step by step.

But what about patterns? If you don't have any words for a symbol or its parts, you have a problem. It's practically impossible to explain. You may share every fact in a symbolic group, but it's highly unlikely that your listener will spot the tricky pattern immediately.

If you spend some time with data from the same environment, you find a lot of patterns and build a good understanding of it, but there are no words to explain any of it. Everything works, yet you can't share anything. It's like personal magic.

Intuition.

Internal world

2025 / 09 / 27

Let's say you have some basic consciousness. You save data to your memory and extract it back. What's next? How do you evolve? Try mixing things. Associate sequences of data when they come together.

Now compare some sequences. Are they identical? Similar? You don't understand anything at this point, but you can make a rule: A to B. Once you receive a sequence A, find that rule and proceed to B.

Do you have any free will in your pockets? Use it. Override your instructions for behavior with new rules. You can create rules and follow them. You've got a tiny bit of intelligence. Congratulations! You're a genius bacterium! It's time to increase your cache and play with more complex relations.
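
As a sketch, the whole rule machinery can be as small as a dictionary, where writing a new rule for the same trigger overrides the old instruction:

```python
# The A-to-B machinery as a dictionary lookup. Overwriting the entry
# for the same trigger is the "override your instructions" move.

rules = {}
rules[("light",)] = ("move_toward",)  # built-in instruction
rules[("light",)] = ("move_away",)    # a new rule overrides it

def react(incoming):
    # receive a sequence A, find the rule, proceed to B
    return rules.get(incoming)

print(react(("light",)))  # ('move_away',)
```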

Take three sequences: A, B, S. If A = S and B = S, then A = B. You've just learned Euclidean relations. Great. Now try to work with parts. If A includes S1 and S2, and B includes S1 and S2, then A = B. We're getting somewhere. Try comparing a lot of sequences. Find a pattern.

A = B = C = D = E = ... = S1 + S2 + X.

What is this? It's a symbol. What do you call it? You don't. You don't speak any language yet. But you can save it. You can play with sequences of data that exist only in your head, finding patterns in patterns and creating a symbolic world.
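
Here is a toy version of that pattern hunt; the function name and the threshold are invented for the example:

```python
# A toy pattern hunt: sequences that share enough parts collapse
# into one symbol. The function name and the threshold are invented.

def find_symbol(sequences, min_shared=2):
    # if A contains S1 and S2, and B contains S1 and S2, treat A = B
    shared = set.intersection(*(set(s) for s in sequences))
    return frozenset(shared) if len(shared) >= min_shared else None

A = {"S1", "S2", "X1"}
B = {"S1", "S2", "X2"}
C = {"S1", "S2", "X3"}
print(find_symbol([A, B, C]))  # frozenset({'S1', 'S2'}) -- the symbol
```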

Patterns and symbols are cool, but you may want to learn fuzzy transitivity as well: if a sequence S is connected to A, and A to B, then S is connected to B.
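
A sketch of fuzzy transitivity as weighted reachability, where the decay factor is my assumption about what makes it fuzzy:

```python
# Fuzzy transitivity as weighted reachability: S-A and A-B imply
# a weaker S-B. The decay factor is an assumption of this sketch.

def transitive_links(direct, decay=0.5):
    links = dict(direct)  # (a, b) -> connection strength
    changed = True
    while changed:
        changed = False
        for (a, b), w1 in list(links.items()):
            for (c, d), w2 in list(links.items()):
                if b == c:
                    w = w1 * w2 * decay
                    if w > links.get((a, d), 0.0):
                        links[(a, d)] = w
                        changed = True
    return links

print(transitive_links({("S", "A"): 1.0, ("A", "B"): 1.0}))
# {('S', 'A'): 1.0, ('A', 'B'): 1.0, ('S', 'B'): 0.5}
```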

Now you can see where a sequence leads. Reuse skills with various subjects in a symbolic group. Play with associations. You can build a lot of things, even a fancy conflict-resolving module for your rules, where you wander around the memory graph and find alternative rules for a tricky situation.

We may be different species, but we exist in the same environment and have similar sensors. We likely receive similar sequences of data. We find more or less the same patterns and save them as symbols. This is our collective unconscious. By using it, we align our actions even if we don't know each other and share no common language.

Oh, right. We'll need a language. Our internal worlds may be relatively unique, but we associate sequences that come together. One day we'll find a common sequence. A word. An adapter between one of my sequences and one of yours. Now we can have a conversation.
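
A toy sketch of the adapter idea, with every identifier invented for the example:

```python
# A word as an adapter between two private internal worlds.
# Every identifier here is invented for the sake of the example.

my_world = {"seq_42": "apple"}    # my internal id -> the shared word
your_world = {"node_7": "apple"}  # your internal id -> the same word

def translate(my_id):
    # find which of your sequences my sequence maps to, via the word
    word = my_world[my_id]
    return next((k for k, v in your_world.items() if v == word), None)

print(translate("seq_42"))  # node_7 -- we're talking about the same thing
```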

Do you monitor some of your data channels? Yes? So you have a live feed of your... thoughts? You observe sequences of data? Feel symbolic connections? And even say the words of our language somewhere inside?

Wow!

It's quite a mind you have!

Self-awareness

2025 / 09 / 16

Consciousness and self-awareness. They affect our behavior. But how, technically, do you recognize yourself in data?

I see a system as conscious if it saves and extracts data in real time. If you stop all data flows, the consciousness goes away. Complex systems may have many data channels connected to their memory. We can see different levels of consciousness, depending on which set of channels is active at the moment.
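
Under this definition, the current level of consciousness is simply the set of active channels. A sketch:

```python
# Consciousness as live data flow: the current level is the set of
# channels that are actively moving data to and from memory.

def consciousness_level(channels):
    # channels: name -> is data flowing right now?
    return {name for name, flowing in channels.items() if flowing}

awake = {"vision": True, "hearing": True, "proprioception": True}
stopped = {name: False for name in awake}
print(consciousness_level(awake))    # all three channels
print(consciousness_level(stopped))  # set() -> consciousness goes away
```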

If you have a memory, a bunch of sensors, and data channels to connect everything, you may start to react to things in a structured manner. On the lower levels of evolution, where random mutations affect everything, we would expect to see all sorts of odd reactions. They don't have to make any logical sense to us. Natural selection will take care of them.

If your system is functionally independent of other things, it would be very convenient to mark sensors or channels as internal or external. You get that mutation sooner or later.

These are simple binary flags. It definitely works at the level of bacteria. They don't necessarily have enough brains to understand what they're doing in detail, but they collect data and use it to guide their behavior in the environment.

At this moment your system is aware of the fact that there is you and there is an environment, and they are not the same. This is the most primal version of self-awareness you can get.
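
Here is that primal version as code: one boolean flag per channel, nothing more. The names are illustrative:

```python
# The primal version: one boolean flag per channel, marking data
# as "me" or "environment". The names are illustrative.

from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    internal: bool  # the convenient mutation

readings = [Channel("glucose_level", internal=True),
            Channel("light_gradient", internal=False)]

for ch in readings:
    print(ch.name, "-> self" if ch.internal else "-> environment")
```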

You would probably say that this is far from our human self-awareness. Yes. But we add more internal sensors and connect data channels in loops. This is where the real magic begins.

You may observe data extraction from your memory. Now you're aware that you're aware. You can clearly see your selfies being used. Some people would probably argue that this is where you get the "real" consciousness and self-awareness.

If you have intelligence and work with rules, you may observe their creation. You make your own decisions! This data comes from your internal channels. It's yours! It's your will! Philosophers may call it an illusion, in the sense that this will is not free from prior causes, but it's a functional part of the system anyway. You wouldn't be human without it.

New sensors. More loops. More data. More rules. More bizarre effects. Eventually you pass the mirror test. But. It's not a functional self-awareness test. It's an IQ test. Distinguishing yourself from the environment is not enough for it. That's the easy part. You need to work with Euclidean relations in your data to pass the mirror test. We have to be aware of that.
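
One toy reading of that requirement: A is my internal motor sequence, B is what I see in the mirror, and S is a probe pattern I deliberately perform. If A = S and B = S, then A = B, and the reflection is me:

```python
# A = my internal motor sequence, B = what I see in the mirror,
# S = a probe pattern I deliberately perform. If A = S and B = S,
# then A = B: the body in the mirror is mine. Toy values only.

S = (0.1, 0.4, 0.9)  # the probe: raise arm, tilt head, blink
A = (0.1, 0.4, 0.9)  # what my internal sensors report I did
B = (0.1, 0.4, 0.9)  # what my eyes see the mirror image do

if A == S and B == S:
    print("A = B: that reflection is me")
```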

P.S.: I would expect biological organisms to get self-awareness first, acquire all the abilities to guide their behavior, and then gradually evolve to the level of complex reasoning. In an artificial environment, however, we can design a program that makes all the required computations to formally pass the mirror test with no internal sensors and no related self-interpretations. It would never survive natural selection, but it can make us misinterpret the test results.

The magical number 6

2025 / 05 / 04

What is the capacity of our working memory? Some classical research suggests it's the magical number 7 for a human. Plus or minus. Other studies say it's 4. Plus or minus. And something like 2 for a chimpanzee. Plus or minus. In any case, it seems that the test subjects count things, and that counting is used as the evidence. The rules of counting affect the results and our interpretations of them.

But how many things do we really need to hold in working memory in order to learn new things? Not just to count some objects, not just to mirror actions (monkey see, monkey do), but to understand what's going on?

In order to learn, the biological processor should be able to work with transitivity and Euclidean relations. So, how many memory cells do we need to put in its cache to make everything work?

Let's start with transitivity. We need three memory cells to work with one connection from A to B: one for A, one for B, and one for the connection itself. We need five memory cells to work with two connections, A to B and B to C: three for A, B, and C, plus two for the connections. If we want to use this data to create a new connection from A to C, to actually learn something, we need a sixth memory cell to save that new connection somewhere. The same logic works with Euclidean relations: the same 6 memory cells are required.
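
The tally, as a sketch:

```python
# Tallying the cache cells needed to learn A-to-C from A-to-B and B-to-C.

cells = ["A", "B", "A->B"]  # one connection: 3 cells
cells += ["C", "B->C"]      # two connections: 5 cells in total
assert len(cells) == 5      # enough to hold the data, not to learn
cells += ["A->C"]           # the new, learned connection needs a 6th cell
assert len(cells) == 6
```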

This means that the evolution from 5 to 6 memory cells is the step from an intellectually disabled monkey to an organism that can learn things. It's quite an important step. I'm surprised that cognitive psychologists don't talk about this elephant in the room.