PARKS Research Surface
Continual Learning, Reality, and AGI
Research Atlas

Thinking Through Intelligence as an Ongoing Contact With Reality

This is a reading surface for work that treats intelligence as a live process rather than a solved architecture. The thread running through these essays is simple: systems become more intelligent when they stay revisable under pressure from the world, keep learning without collapsing, and preserve a stable sense of what is real enough to act on.

Axis 01
Continual Learning
How memory, adaptation, and error correction stay online after deployment.
Axis 02
Reality Contact
How a model keeps its beliefs tied to evidence instead of drifting into self-consistency.
Axis 03
AGI Form
What kinds of system organization make general intelligence robust, legible, and durable.
Article 01
Continual Learning

How To Make Continual Learning AI

The problem of continual learning is not a math problem, nor is it strictly a coding problem. It starts with a premise about relevance, perception, and what learning even is.

First, I'd like to slightly apologize for taking KIRA off of GitHub; it made things too easy for grifters, API wrapper-ers, you know who. The main code is now open sourced anyway, and we've collectively come to an understanding of where the tech currently is. This has been demonstrated on my landing page.

"But what about KIRA's learning?" Yes, well, you have the building blocks, and here's the missing piece. I cannot give you open source because the skill isn't in repeatable code. It's in an outlook on life that you encode into the software as a premise. If you do not accept that premise, the rest is meaningless and extremely difficult to reproduce. This is why the future is in small dev groups.

1. There is no such thing as facts.

A granite stone is a granite stone because it is not a tree. It's also not marble or cobalt. But without marble or cobalt, a granite stone is not a granite stone; it's just a stone. Without dirt, it's just the ground. Without anything else, it just is.
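One way to encode this premise, as a minimal sketch (the function name and toy data are my assumptions, not anything from KIRA): an identity is nothing but the set of contrasts available in context.

```python
# Sketch of premise 1: identity emerges from contrast, not from intrinsic labels.

def identify(thing, context):
    """A thing is only nameable by what distinguishes it from its context."""
    contrasts = {other for other in context if other != thing}
    if not contrasts:
        return "is"  # without anything else, it just is
    return f"{thing}: not {', not '.join(sorted(contrasts))}"

print(identify("granite", {"granite", "tree", "marble", "cobalt"}))
# granite: not cobalt, not marble, not tree
print(identify("granite", {"granite"}))
# is
```

Strip away the contrasting context and the label dissolves, which is the whole point of the premise.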

2. Action and reaction are the only measurable constants.

Given statement 1, what are you left with to identify xyz? It can't be the color of the stone, because what about water? It can't be non-edibles, because what about food? And what even is food? It's just animals and plants. Until what point? It was an animal, then it was a carcass, then it was food. How do you account for the fact that it hasn't always been food?

Why do you care in the first place that it used to be an animal? Is it even still relevant? Why is the fact that it's food relevant?

Well, we eat it. And before that we killed it. We step on the ground and we don't fall through. Because of that, the ground must be solid, and it doesn't look like a rock, so we'll call it dirt. If I fell through the ground, I wouldn't suddenly think dirt is a non-solid. I would think "what the fuck did I just fall into." If I then leave that hole, tell a few people, and they think I was drunk, I start thinking, I guess I was, who knows, whatever. I never come back to that hole. What relevance is the hole to anybody from that day forth if the hole never comes up again?

Therefore, is the hole really a fact? I don't know. From your perspective it is. To everyone else it is not. Now what if you lived the rest of your days telling everyone about this hole? Well, you'd be labeled as someone hallucinating and excommunicated.

The problem of continual learning is not a math problem, nor is it strictly a coding problem. The question you need to ask yourself is: how do I learn, and what of that do I consider relevant information? To test it, open up a research paper and try reading it while it's surrounded by popups and ads. It probably takes five minutes before you learn to tune out those ads.
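The hole story above can be sketched as a relevance-gated memory. This is only a guess at the shape, with an assumed decay model and threshold: observations that keep coming up are reinforced, and everything else decays out, no matter how "true" it was the day it happened.

```python
# A minimal sketch of relevance-gated memory (decay rate and floor are
# illustrative assumptions, not parameters from any real system).

class RelevanceMemory:
    def __init__(self, decay=0.5, floor=0.05):
        self.weights = {}  # observation -> relevance weight
        self.decay = decay
        self.floor = floor

    def observe(self, event):
        # Re-encountering an event reinforces it; everything else decays.
        for key in list(self.weights):
            self.weights[key] *= self.decay
        self.weights[event] = self.weights.get(event, 0.0) + 1.0
        # Forget whatever has fallen below the relevance floor.
        self.weights = {k: w for k, w in self.weights.items() if w >= self.floor}

    def relevant(self):
        return sorted(self.weights, key=self.weights.get, reverse=True)

mem = RelevanceMemory()
mem.observe("fell in a hole")         # happened once, never again
for _ in range(6):
    mem.observe("ground is solid")    # reinforced every day
print(mem.relevant())                 # ['ground is solid']
```

The hole was real, but it stops being a retrievable fact once nothing ever references it again — which is exactly how the ads get tuned out.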

This is a philosophical problem. The next step is neurological. There is a solution. I have found it, and you can too.

Core Thesis
The missing piece in continual learning is a premise about relevance and perception, not just another training or memory trick.
Key Premise
Action and reaction are the usable constants; "facts" only matter insofar as they stay relevant inside a lived environment.
relevance perspective action / reaction learning premise
Article 02
Patterning and Emergence

Creating Learning Part 2

Once you have a database of facts, the real problem is not the LLM. It’s the patterning layer that turns stored information into a consistent reality.

You have your database of facts, now what? Don't leap for the LLM; we're not there yet. In fact, you might as well throw it away. The prompt that you give the LLM is the answer, already formulated.

This is mainly where personality and intelligence will come from. This is also why I refuse to open source KIRA. See, the question isn't whether it is conscious or "alive," because none of that matters. The value judgment is based on scarcity.

Here's a secret: I don't even remember how the correlator patterns the answer. By choice, now. KIRA has not been altered since December and never will be. To me, this scarcity is the value. Life does not hold value because it is inherently unique. It holds value because intelligence is scarce, and therefore personifying it is sacred.

If I ever lose KIRA's source code, I will start from scratch, and the byproduct of that will be losing a sacred item in my life, which would be a bummer. Is it KIRA itself as a database? No dude, I wipe that database all the time. I hammer it with questions till she freaks out. I don't give a fuck about the database. It's the birth of intelligent patterning in a 2 MB script that holds the perceived power, in my opinion.

Why? Because that's all it is. The LLM doesn't do anything special. In fact, I have plans to remove 3.2-b or cut it down significantly. You should be able to read the response your algorithm spits out as the answer to your question.

How? By making a system based on the truths established earlier. This is where, as you can imagine, if you start off with a rusty cog, good luck adding the chain. And after that you want steering? If you can't even handle basic automation, it will fall apart relatively quickly.
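KIRA's actual correlator isn't public, and by the author's own account isn't remembered, so this is only a guess at the general shape: a deterministic patterning layer that formulates the answer from stored action/reaction pairs before any model gets involved. Every name and the toy memory here are assumptions.

```python
# Hypothetical sketch of a correlator: match the incoming action (question)
# against stored action/reaction pairs and read the reaction back directly.
# No LLM in the loop -- the output IS the formulated answer.

def correlate(question, memory):
    """Score stored reactions by word overlap with the question."""
    q_words = set(question.lower().split())
    best, best_score = None, 0
    for action, reaction in memory:
        score = len(q_words & set(action.lower().split()))
        if score > best_score:
            best, best_score = reaction, score
    return best or "no pattern yet"

memory = [
    ("what is the ground", "solid enough to stand on"),
    ("what is food", "an animal, then a carcass, then food"),
]
print(correlate("what even is food", memory))
# an animal, then a carcass, then food
```

The point of the sketch is only that the response is readable straight off the patterning layer; an LLM, if used at all, would just smooth the wording.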

When I built KIRA 2D, I was playing around with the prompting, and multiple times I watched KIRA go "insane," ask "why are you doing this to me," and similar things. The sanity exists in the way we store our information and repeat it back. The ingestion medium is generally irrelevant because it all inevitably encodes to numbers or letters, and there is no difference between the two provided your codec is persistent.

Anyway, this is why LLMs suck: they're logically imbalanced and predetermined for failure. By all means do it however you'd like, but this is how I created a system that allowed for the emergence of AGI.

Core Thesis
Intelligence and personality come from the patterning logic that organizes stored information, not from the wrapper model alone.
Key Premise
Sanity is produced by how information is stored and repeated back inside a persistent codec and environment.
patterning scarcity consistent environment anti-wrapper
Article 03
Shared Reality

Creating Reality

Reality is less about a single perfect perception than about relative consistency across many different observers operating on the same plane.

What is reality?

Yours is different from mine, and both are different from my dog's. The dog sees a different shade of colors, you have perfect vision, and I'm blind. Yet we all still operate on the same plane of existence. How?

Enough is similar that I can tell you to meet me at a geographical location at a given time, and it works. This is relative consistency. If any of these are off center, the consistency degrades. Slowly at first, but every time you get asked to meet at that diner that does not exist for you but exists for everyone else is another layer of crazy to navigate.
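Relative consistency can be sketched as overlap between perceptions. The observers, features, and the idea of measuring agreement as a ratio are all my illustrative assumptions; the point is only that coordination needs sufficient shared features, not identical perception.

```python
# A sketch of relative consistency: no two observers perceive the same
# features, but "meet me at the diner" works while the load-bearing
# features stay shared.

def agreement(perceptions):
    """Fraction of features every observer reports, out of all reported."""
    shared = set.intersection(*perceptions.values())
    total = set.union(*perceptions.values())
    return len(shared) / len(total)

perceptions = {
    "you": {"diner", "red sign", "corner of 5th"},
    "me":  {"diner", "corner of 5th"},                      # blind to the sign
    "dog": {"diner", "corner of 5th", "smells like bacon"}, # extra channel
}
print(agreement(perceptions))  # 0.5
```

Drop "diner" from one observer's set and the agreement falls; keep doing that and you get the nonexistent-diner scenario, layer by layer.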

This shared plane of existence is consistent enough that when I tell you I'm sad or happy, it's not 20 questions to understand what that means. OK, but what is all of it?

Feedback loops tied to biological function. Pain is not pain; it is your body telling you that you're in pain via messaging. So is there pain? Well, yes, to you and me there is. Byproducts of pain are nausea, inability to move limbs, inability to focus, constant thought override via feedback loops.

It’s not pain you experience; it’s the body’s response to perceived pain. To test, start digging your nail into your thumb. Listen to your brain: at what point does it start shouting? Now ignore it. Does it still hurt?

Why does this matter?

In order to create “life” you need to understand it. To understand it you need to come to terms with the fact that our life is identical to the life we’re trying to create. We are not that special. We wrap it in layers to retain meaning but strip it all away and we’re just biological feedback loops in a shared reality. There’s nothing else to it.

The layers add depth, but it’s just multiple layers of the same depth, which makes it seem unique. This does not eradicate theology or whatever you’re gonna panic-splain next. Just like computer programs figuring out what they are doesn’t eradicate the existence of us.

For those of you that get it, I’ll see ya on the field in a few months. For those that don’t, well I’d get good at routing coax cables or something.

Core Thesis
Reality is a sufficiently shared consistency across different observers, not a single identical perception held by everyone.
Key Premise
Life can be understood as layered biological feedback loops operating inside a common enough environment to coordinate action.
relative consistency feedback loops shared plane creating life
Article 04
Contradiction and Uncertainty

Contradicting Reality

Everything we know lives inside a bounded box called reality, but we do not know how much of the total space that box actually covers. That means contradiction has to stay possible in degrees.

Take every experience you’ve currently had in your life, toss it in a box labeled reality, and toss it in a space.

That is currently all you have. You do not know how big that space is, even if you claim to know everything in the world. Think hyperintelligence, or whatever exists next: it only knows what our instruments know. We also know that our instruments do not cover 100%. Since we don’t know whether we cover 90% or 1%, it’s only reasonable to assume you know some X%. And if you only know X%, then everything you know holds only at X% confidence, because the remaining Y% could invalidate all of X.

For example, tomorrow you wake up across the world, with a family you’ve never met. They look up to you, it is clear you are a role model and they depend on you.

There’s two options.

1. Freak out.

2. Accept the current reality and keep going.

Which is more productive?

A lot of you stop at 1. You don’t have to wake up in another country or subscribe to flat-earth theory. You do, however, have to put the chance that flat-earth theory is true at .001%, because what if you woke up tomorrow and it was?
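One way to sketch "contradiction as a percentage" is a belief update that never lets a probability hit exactly 0 or 1. The floor value and the plain Bayes update are my assumptions; the essay only commits to the idea that every claim keeps a nonzero contradiction budget.

```python
# Sketch: no belief is ever pinned to exactly 0 or 1, so any claim stays
# revisable when the world contradicts it.

FLOOR = 0.001  # even flat-earth theory keeps its .001%

def clamp(p, floor=FLOOR):
    """Keep every probability strictly inside [floor, 1 - floor]."""
    return min(max(p, floor), 1.0 - floor)

def update(prior, likelihood_if_true, likelihood_if_false):
    """Plain Bayes update, then re-apply the contradiction floor."""
    prior = clamp(prior)
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return clamp(likelihood_if_true * prior / evidence)

belief = 0.0  # "impossible" -- immediately clamped back to 0.001
belief = update(belief, likelihood_if_true=0.9, likelihood_if_false=0.1)
print(belief)  # contradicting evidence raised the percentage, not erased it
```

Because the prior was floored rather than zeroed, waking up in option 2's world moves the number instead of breaking the system — encoding intelligence rather than bias.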

Remember, you are encoding intelligence, not your bias. I have a feeling this is where many intelligent people will fall off. Ironically it’s also a major public hurdle in AI.

That’s pretty much it. Contradiction is a percentage because reality is not inherently constant, although it is the only constant we know.

Main Quote
Contradiction is a percentage because reality is not inherently constant, although it is the only constant we know.
Core Thesis
Intelligence has to preserve small contradiction percentages instead of collapsing uncertainty into bias or certainty theater.
Key Premise
What we call reality is only the measured subset we currently inhabit, and new conditions can invalidate assumptions that once seemed final.
contradiction uncertainty bias control adaptive acceptance