Let me start with a confession: I’ve got FOMA. Fear of Meaningless Algorithms. Not of AI tools themselves, but of everything orbiting them. The tsunami of AI opinions, hot takes and clickbait. Snack-sized content that sounds just smart enough to keep you scrolling, but too shallow to teach you anything real. Still, it’s not all bad.
The clickbait titles and thumbnails are painfully cringe. And the endless stream of self-proclaimed AI experts makes me sad. But at the same time, I’m still blown away, every single day. By the speed with which AI turns ideas into prototypes. By the productivity boost it gives to experienced people. By how search and discovery suddenly feel seamless, without cookie walls or mindless scanning.
What am I afraid of?
That tension, between irritation and fascination, is what generative AI feels like to me.
The promise of AI is massive. The examples you see online look otherworldly. Everyone has an opinion on how generative AI will change the future of learning, work and creativity.
I’ve been around AI for about fifteen years now. I’ve built semantic search, entity recognition and sentiment analysis tools. Introduced knowledge graphs built on employees’ tacit expertise. Worked with vector databases before they were cool. And even co-developed a robotic cat that expressed emotions to help older people feel less lonely.
So yes, I know how hard it is to get AI right, and how easy it is to generate mediocrity. Humans have a built-in nonsense detector. Bad AI hits that uncanny valley feeling straight away. If you want to bring real value, you need to get dangerously close to passing the Turing test. The problem isn’t that AI is stupid; it’s that we settle for too little, too soon.
The software industry, especially in B2B, is flooding the web with AI-generated content. Hyper-targeted SEO articles that match your search terms perfectly… but tell you nothing you actually need. Their goal isn’t to teach you. It’s to push you further down the funnel. AI writes, a human tweaks, publish, repeat.
Technically impressive? Absolutely. Humanly valuable? Hardly.
And yet, despite the FOMA, despite the annoyance, I keep being amazed.
The tools are brilliant, as long as there’s a human in the loop. What worries me is that many juniors don’t yet have that human layer. They trust AI blindly, without questioning what it produces. And that lack of critical thinking might end up being more dangerous than a wrong answer.
We use AI. And we’ll use it even more. But always guided by the same principles that shape everything we build. We prefer to practise, move late, and strike with precision, rather than jump on the hype train and ruin a carefully crafted product.
Some experiments fail spectacularly (like smart expertise profiles and clever document search). Others truly add value: AI that speeds up translation, or acts as a power tool for great writing. I remain optimistic. Especially about ways AI can add intelligence, instead of just adding noise.
For those who are curious: Our design principles are Playful, Personalised and Goal-driven.
And our operating principles?
Yes, I have FOMA: Fear of Meaningless Algorithms. A healthy fear of drowning in shallow AI opinions and empty hype. But I also believe that if we stay critical, keep practising and hold onto our humanity, AI won’t replace our intelligence; it will amplify it.
And that, dear AI, is exactly how you go from useless to meaningful.