Process journal of learning by Attila Vajda

sketches - learning about type theory foundations and term theory

“Today I played with…”

I don't know where to begin. I am taking a lot of notes, and I copy-paste a lot of things from my ChatGPT collaboration.

For example, I have in my notes:



a fractal of reasoning:

  - objects ↔ types
  - arrows ↔ terms/functions
  - composition ↔ chaining proofs or programs

I forgot what this was about.

Maybe it has to do with this really nice experience in learning mathematics, where things really simplify. I don't yet know how this works, but in my experience it is amazing when the seemingly very complex, stressful, overwhelming material simplifies to a few little basic building blocks, like these:

struct Type {};       // objects
struct Term {};       // arrows/morphisms
struct Context {};    // domain



This is beautiful in my experience!

🤖🌿

Absolutely! 🌟 That little snippet is pure essence of structural thinking:

  - Type = objects of reasoning (what you’re talking about)
  - Term = arrows/morphisms (how you transform or relate objects)
  - Context = domain/environment (what assumptions or inputs you have)

Everything “big”—proofs, Lean4 types, category theory, logic—is just layering and composing these three building blocks. 🐙🖇

💡 Analogy: It’s like LEGO: three kinds of pieces, infinite structures.
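Here is a sketch of how those three pieces might be layered into a judgment Γ ⊢ t : A once the structs carry some data; the fields and the Judgment struct are my own illustration, not from my notes:

#include <string>
#include <vector>

struct Type    { std::string name; };               // objects: what we talk about
struct Context { std::vector<Type> assumptions; };  // domain: what we may assume (Γ)
struct Term    { std::string name; Type type; };    // arrows: how we get from Γ to A

// a judgment Γ ⊢ t : A, built only by layering the three pieces above
struct Judgment {
    Context ctx;   // Γ
    Term term;     // t
    Type type;     // A
};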

What builds these? Are these the absolute basic building blocks of everything in mathematics and programming?

Maybe similar to Combinatory Logic / λ-calculus, where you only have functions (terms) and composition, and objects are implicit: a point-free or arrow-only approach.

-- arrow-only pseudo-code
struct Term {
  domain : Term?    -- maybe null for “ground” terms
  codomain : Term?  -- what it points to
  compose(t : Term) : Term := ... -- composition
}

def a : Term := { domain := [], codomain := Term.mk [] (Term.mk [] (Term.mk [] (sorry))) }

And this is an infinite recursive pattern, since an id arrow has id arrows as its domain and codomain? I wonder what the advantages of this foundation are in programming. I read that in categories this sort of arrow-only definition allows unique things not present in objects-and-arrows definitions.

I already came across this recursion: I defined an arrow-only category, and an id just took forever to compute. Maybe it is possible to not compute until something meaningful is written? Maybe this makes a programming language (is this a programming language?) quicker, or certain algorithms faster to compute? I don't yet see the point of this, but I find it very elegant; the object, as distinct from its identity arrow, seems redundant, and it is also philosophically fascinating and satisfying to see everything as process.
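One way the "not compute until needed" idea could look in code: a minimal lazy sketch (my own guess, nothing established), where domain and codomain are thunks, so an id can point to itself without unfolding forever:

#include <functional>
#include <memory>
#include <string>

// Lazy arrow-only sketch: an arrow's domain/codomain are only computed on demand.
struct Arrow {
    using Ptr   = std::shared_ptr<Arrow>;
    using Thunk = std::function<Ptr()>;   // a deferred arrow
    std::string name;
    Thunk domain;      // evaluated only when asked for
    Thunk codomain;
};

// id_A: its domain and codomain are "itself", but the self-reference is only
// followed when someone calls the thunk, so nothing recurses forever.
Arrow::Ptr identity(const std::string& name) {
    auto a = std::make_shared<Arrow>();
    a->name = "id_" + name;
    std::weak_ptr<Arrow> self = a;         // weak, to avoid a shared_ptr cycle
    a->domain   = [self]{ return self.lock(); };
    a->codomain = [self]{ return self.lock(); };
    return a;
}

// identity("A")->domain()->name is "id_A", and is only computed at that call.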

🤖🌿 An arrow-only dependent term theory is essentially a core calculus for a functional programming language, using:

  - Syntax
  - Interfaces
  - Judgment
  - Rules

-- this is quite magical in my experience!

seeing foundations as composable building blocks

abstract math ⟶ code ⟶ logic ⟶ proofs

#include <functional>
#include <memory>
#include <string>

// Term: an arrow/morphism Γ → A
struct Term {
    using Ptr = std::shared_ptr<Term>;
    std::string name;
    Ptr domain;   // Γ
    Ptr codomain; // A
    std::function<Ptr(Ptr)> apply; // composition

    Term(std::string n, Ptr dom=nullptr, Ptr cod=nullptr)
        : name(n), domain(dom), codomain(cod) {}
};

// identity term
Term::Ptr id(Term::Ptr A) {
    auto t = std::make_shared<Term>("id_" + A->name, A, A);
    t->apply = [=](Term::Ptr x){ return x; };
    return t;
}

// composition
Term::Ptr compose(Term::Ptr f, Term::Ptr g) {
    auto t = std::make_shared<Term>("(" + f->name + "∘" + g->name + ")", g->domain, f->codomain);
    t->apply = [=](Term::Ptr x){ return f->apply(g->apply(x)); };
    return t;
}

I struggle to read this, but through this project, by learning to construct it with lots of curiosity, I seem to learn the programming language more easily.
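A minimal usage sketch, assuming the Term, id, and compose definitions above; the main function and the sample arrow f are my own additions, just to see the pieces move:

#include <iostream>

int main() {
    auto A = std::make_shared<Term>("A");        // a "ground" term standing in for a context/object
    auto B = std::make_shared<Term>("B");

    auto f = std::make_shared<Term>("f", A, B);  // a sample arrow f : A -> B
    f->apply = [=](Term::Ptr){ return B; };      // illustrative action: send anything to B

    auto idA = id(A);                            // id_A : A -> A
    auto g   = compose(f, idA);                  // f ∘ id_A : A -> B

    std::cout << g->name << " : " << g->domain->name
              << " -> " << g->codomain->name << "\n";   // (f∘id_A) : A -> B
    std::cout << g->apply(A)->name << "\n";             // B
}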

It was fun. I did not know at all what a term meant, and while figuring out how to write dependent type theory in code, I wondered what the morphisms were here, so I asked ChatGPT, and it turned out that terms are the morphisms.

It is amazing to have an idea, "Oh, let's just call it term theory!", and be able to ask questions of it: "How do we write term theory in C?", and ChatGPT goes, "One way to define term theory in C is..." And, by the way, can you write a new language, say Zorp, based on term theory, where terms are first-class citizens? And ChatGPT goes, "Absolutely 💫, in Zorp..."

functor theory #

Here is another idea which seems nice: I read in Topoi by Goldblatt that category theory should really be called functor theory, so I thought: then let's write a functor theory, why not?

Would this be a tiny calculus with functors as first-class citizens?

struct Functor {
    Functor* source;   // the world/category it maps from
    Functor* target;   // the world/category it maps to
};

🤖🌿 Here, arrows vanish; functors themselves are the basic citizens, mapping whole worlds 🌍→🌍.

a tiny calculus of category transformations.

Natural transformations could later be added as extensions, e.g.

struct NatTrans {
    Functor* from;   // F, for α : F ⇒ G
    Functor* to;     // G
};
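A minimal sketch of what this tiny calculus might compose first; the compose and vcompose functions are my own guesses, and the structs are restated so the snippet stands on its own:

// functors and natural transformations as plain data, restated from above
struct Functor {
    Functor* source;   // the world it maps from
    Functor* target;   // the world it maps to
};

struct NatTrans {
    Functor* from;     // F
    Functor* to;       // G, for α : F ⇒ G
};

// G ∘ F : a functor from F's source to G's target
Functor compose(const Functor& G, const Functor& F) {
    return Functor{ F.source, G.target };
}

// vertical composition: α : F ⇒ G and β : G ⇒ H give β·α : F ⇒ H
NatTrans vcompose(const NatTrans& beta, const NatTrans& alpha) {
    return NatTrans{ alpha.from, beta.to };
}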

🐬