AI

Feb 28, 2025 | Autism, Thoughts

We are surrounded by this buzzword. It makes everything so much easier, better, faster!

Today, it seems impossible to move forward without artificial intelligence, and vast amounts of natural intelligence are being employed to make artificial intelligence as natural as possible.

But why am I so fascinated by this topic, especially when I usually write about experiencing life with autism?

Because the more I hear about AI and experiment with it myself, the more I see myself reflected in it.


Without a clear purpose, AI is useless.

Basically, this is true for any computer program. It is created for a specific reason, and every routine has a purpose—at least in the beginning.

Input and output interfaces are defined, and software only works properly with a well-defined and useful query. The same applies to AI. And the same applies to me.

If I don’t have a reason or motivation to do something specific, or anything at all, I just lie there. Sure, some hardware triggers are built in, but even those are less reliably activated by their software methods than I would like.


Without basic programming, AI cannot function.

We imagine AI as something that “thinks on its own”. But even its very existence is based on a clear objective and purpose.
We even get creeped out when AI seems to defy that purpose or acts beyond our understanding. However, it would never do so without some kind of trigger or prior preparation and training that enabled it to adopt such behaviour.

My programming was shaped by my parents, my family, school with its teachers and students, workplaces, friends, travels, and the resulting incremental accumulation of knowledge.
This also includes learning how to learn independently, constantly refining my own software to ensure compatibility with both my own hardware and external interfaces. It often feels like walking a tightrope, using paradoxes to keep my balance!


Without predefined rules, AI will always take the path of least resistance.

AI can be guided by certain guardrails, either set by the programmer or by the user (if programmed to allow such input), to align its solutions with human preferences as closely as possible.

Otherwise, an AI chat might give you either overly terse responses or endless dissertations, rather than friendly answers in the tone humans prefer.
It seems that the form of a response often carries more weight than its bare content. Of course: it's how humans understand things, and that's what matters most.

Also, no AI would lie on its own; lying is a purely human concept. Yet despite the extra computational effort lying requires, it somehow seems essential in society. And this is where things become contradictory: the rules of this rule-breaking are often intangible.


Without sources, AI would have no foundation. Every piece of knowledge is based on something.

We’re familiar with certain AI services that directly cite their sources when presenting information.

I’m often asked questions like “How do you see the world?” or “How does your daily life work?” or “Why do you do things this way?” For these (and essentially any other question), I’ve been able to provide comprehensive answers for as long as I can remember—because questions beginning with “Why” have always occupied me deeply.

I rely on my internal encyclopedia, which I’ve been feeding throughout my life. Over time, I’ve become increasingly selective about my sources of knowledge, so that I prefer collecting “reliable” information (like car models, cell phones, or geographical facts) over consulting the ever-changing and emotionally warped world of human information.

But I still harbour an undiminished fascination with this great unknown and with the premier league of human social skills, from which I've managed to build up a range of social abilities of my own, thanks to Knigge, Carnegie, and plenty of observation.

And yet, my statements often receive feedback like “not helpful,” prompting me to revise my algorithm. When I get feedback like “everything’s fine” (which happens far more often overall), there’s no reason for further improvement and I can redirect my energy elsewhere.
This explains the annoying focus on “the negative”: positive feedback requires no optimization because everything already works as intended. You see this pattern in many people too.


Something entirely different: human identity and personality

Does AI know how a human feels? How does AI know what kind of response will resonate with someone? How does AI “see” its user?
Can we expect AI to give answers beyond what its sources and communication guidelines dictate?

The programmer feeds the AI, but can a user ever demand that an AI be “itself”?