AI, Privacy, and the Age of Agents — A Founders’ Note from AriHelder

We’ve been watching the “AI wave” build for years. Now it’s here—everywhere, all at once. Investors call it a renaissance. Marketers call it magic. Some people call it “intelligence.”

We don’t.

At AriHelder, we’re builders. We make physical tools for everyday healing. Tools you can understand. Tools you can trust. And when it comes to AI, trust is the entire question.

This post is a founders’ note: how we actually feel about AI and privacy, and what the current direction could mean for humanity.


First: we don’t call it AI. We call it what it is.

Most of what people call “AI” today is not a thinking being. It’s LLMs—large language models. Powerful pattern machines trained on vast piles of data. They can be helpful, impressive, and sometimes dangerously persuasive.

But they are not wise.

They don’t possess conscience, spiritual maturity, or moral accountability. They are tools—tools that can be used for creativity, productivity, and discovery… or for surveillance, manipulation, and control.

And that’s where privacy comes in.


Privacy isn’t a feature. It’s a moral position.

Arjen is extremely concerned with privacy—not as a trendy “data policy” issue, but as a foundational civilizational issue.

His view is blunt:

So-called AI is often a tool for more control and more data harvesting.

Christian is in the same camp.

Both of us have lived long enough in tech to see how the incentives work:

  • Platforms want engagement.

  • Engagement creates data.

  • Data creates leverage.

  • Leverage becomes control.

And once control becomes normal, it becomes invisible. People accept it the way they accept traffic noise. It’s just “how things are.”

But we don’t accept it.

We are a wellness company. And in wellness, trust is not optional.

If our customers are using AriHelder as a calm corner of their day—something restorative, private, and grounded—then the last thing we want is to quietly turn that ritual into a data stream.


Our political drift matters to how we see this

Both of us have been through ideological seasons. We were big libertarians, and over time we became more conservative—more concerned with social cohesion, family, cultural continuity, and the “invisible” moral architecture that keeps a society functioning.

And yes—both of us have, in our own ways, been steering back toward a more Christian value set. Not as branding. As orientation.

Christian studies Logos: its history, and the deep idea that speech, reason, and meaning aren’t just tools but part of the structure of reality. Marshall McLuhan is on the shelf. Edward Bernays is on the shelf. The mechanics of persuasion and media are not theoretical to us; they’re observed.

That background changes how we read the AI moment.

Because we see an environment where language itself—logos—can be simulated at scale, deployed at scale, and weaponized at scale.


The persuasion problem is bigger than the “accuracy” problem

The public debate often focuses on whether LLMs are correct.

That’s not the main problem.

The main problem is: LLMs can sound right. They can be confident, calm, flattering, and persuasive—even when they’re wrong, incomplete, or biased.

They can generate a stream of plausible “gurgle” that feels like insight.

And because humans are social animals, we are vulnerable to confident language—especially when it praises us, validates us, or says the exact thing we want to hear.

This is where Bernays becomes relevant. Not because the models “read” Bernays, but because the same levers of language, emotion, and suggestion are now available to anyone with a prompt box. Or worse: to anyone with an agenda.


Agents: the moment the tool gets hands and feet

LLMs were the beginning. The real leap is agents.

A model that writes text is one thing.

A model that can:

  • search,

  • execute,

  • message,

  • schedule,

  • buy,

  • decide,

  • coordinate,

  • and continuously run…

…is something else.
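
To make the difference concrete, here is a deliberately toy sketch of the loop every agent runs: the model proposes an action, the runtime executes it, and the result shapes the next proposal. Nothing below is production code; the “model” is a stub standing in for a real LLM call, and the single “tool” is a placeholder.

```python
# A toy agent loop, purely illustrative. A real agent swaps fake_model()
# for an LLM API call and TOOLS for real search/email/purchase functions.

def fake_model(history):
    """Stub for the LLM: looks at what happened so far, proposes the next action."""
    if not any(step[0] == "search" for step in history):
        return ("search", "standing desk suppliers EU")
    return ("finish", "Here is a summary of what I found.")

TOOLS = {
    "search": lambda query: f"(stub) top results for: {query}",
}

def run_agent(max_steps=5):
    history = []
    for _ in range(max_steps):
        tool, arg = fake_model(history)
        if tool == "finish":
            return arg                       # the model decides it is done
        result = TOOLS[tool](arg)            # words become an action...
        history.append((tool, arg, result))  # ...and the outcome steers the next step
    return "Stopped: step budget exhausted."

print(run_agent())
```

That small loop is the whole trick. Everything promising and everything alarming about agents lives in what you put into TOOLS and how long you let the loop run.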

Christian recently started dipping his toes into agent coding and felt it immediately:

This is the future.

It makes the old progression feel obvious:

  • “Software will eat the world.”

  • “SaaS will eat the rest.”

  • And now: agents will really eat the world.

Because agents don’t just produce content. They produce actions. They automate work. They compress time. They make leverage cheap.

But they also make mistakes cheap. Manipulation cheap. Surveillance cheap. Dependency cheap.

And once society depends on a system, it becomes almost impossible to exit.


Optimist vs. conservative: how we balance each other

If you knew Christian, you might think he’s pure “go go go” with no feelings. But that’s a surface read.

Christian is the emotional gentleman in this duo. He values friendship. He values loyalty. He takes spirituality seriously.

And he’s in the optimist camp.

He sees real upside:

  • domain experts empowered,

  • small teams becoming strong,

  • better tools for learning,

  • better tools for building,

  • breakthroughs in medicine and engineering.

Arjen is more conservative about it. Not anti-tech, just less gung-ho. More suspicious of the incentive structure. More aware of the ways “helpful tools” become invisible chains.

This is a healthy tension.

Optimism without caution becomes naïveté.

Caution without optimism becomes paralysis.

We choose neither. We choose discernment.


AI in the workplace will reward the initiated and punish the uninitiated

Here’s one view we both share:

AI will be a powerful tool for domain experts—and a curse for the non-initiated.

Imagine a skilled photographer who learns AI deeply, builds a workflow, codes a couple of bots, and suddenly 10x’s output while maintaining taste and control.

Now imagine a novice doing the same thing.

They don’t understand the craft.
They don’t understand what the tool is doing.
They don’t even have the experience needed to evaluate the output.

So they accept the “gurgle” as genius.

Worse: the system flatters them.

“Wow, that’s a great idea, Martin!”

And now you have a loop:

  • low understanding,

  • high confidence,

  • fast output,

  • no internal compass.

This is not just a productivity issue. It’s a cultural issue. A competence issue. A wisdom issue.

And if society forgets how to do things—really do things—we risk the same decline we’ve already witnessed elsewhere.


The manufacturing lesson: knowledge can disappear fast

We both watched what happened to European production knowledge when manufacturing moved to China.

It wasn’t just factories.

It was know-how.
Tooling intuition.
Supplier literacy.
Process discipline.
Hands-on competence.

It can go downhill fast.

If we outsource thinking, making, writing, and evaluating to agents—without building human skill in parallel—the same thing can happen to cognitive craftsmanship.

Society can become “consumer-grade” in its own mind.

That’s a future we refuse.


Our line in the sand at AriHelder

We’re not anti-AI.

We will use the tools available.

We will apply them where they genuinely help:

  • internal procedures,

  • production processes,

  • quality,

  • design iteration,

  • writing and organization.

But we draw a hard line around our customers.

We will never sell out our customers’ data. Not now. Not later. Not when it’s tempting. Not when everyone else normalizes it.

We have written our AI ethics guidelines, and they exist for one reason:

To keep AriHelder a place of trust in a world that increasingly treats humans as data mines.


A closing thought

Technology is not neutral. It carries the values of the systems that deploy it.

We believe humans need more:

  • privacy,

  • sovereignty,

  • groundedness,

  • truth,

  • craftsmanship,

  • and real community.

If AI helps us move toward that—good.

If it moves the world toward control, surveillance, and dependency—then we will resist it quietly, stubbornly, and consistently.

That’s our founders’ promise.


Transparency note: yes, we used an LLM to write this

To be transparent: we used an LLM to help shape this post.

Christian writes his keywords and rough structure into a paid version of ChatGPT that we’ve set up with our own materials (our AI ethics guidelines, internal company handbook, and privacy policy), so the tool helps us draft faster while staying aligned with our principles.
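
For the technically curious, the shape of that setup is simple. We work inside the ChatGPT product itself, but conceptually it amounts to sending our written guidelines along as standing context with every drafting request. Here is a rough sketch of the same idea using the OpenAI Python client; the file names and prompts are hypothetical, and this illustrates the pattern, not our literal configuration.

```python
# Sketch of “guidelines as standing context” drafting. Illustrative only,
# not our actual setup; file names are hypothetical.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The documents a human editor would consult ride along as a system
# message, so every draft is steered by the same written principles.
GUIDELINE_FILES = ("ai_ethics_guidelines.md", "company_handbook.md", "privacy_policy.md")
guidelines = "\n\n".join(
    Path(name).read_text() for name in GUIDELINE_FILES if Path(name).exists()
)

draft = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model; the pattern is what matters
    messages=[
        {"role": "system", "content": "Draft blog copy in our voice. Follow:\n" + guidelines},
        {"role": "user", "content": "Keywords: agents, privacy, trust. Rough structure: ..."},
    ],
)
print(draft.choices[0].message.content)
```

Note what is absent: no customer records, no usage logs, no personal data anywhere in the context. The model sees our principles, never our customers.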

What we do not do is hand over customer identities, customer usage behavior, or personal data to “AI features” for profiling, targeting, or resale. Our line stays the same: we will use modern tools, but we will not sell out our customers.

Related: AriHelder AI Ethics Policy