August 07, 2025

Why I Don't Believe in the Pension System (Even Though I'm Pro-Social Policies)

I'm very socialist when it comes to pensions — and broadly, when it comes to education, research, healthcare, and social safety nets. I deeply believe in the state’s responsibility to provide basic protections for its citizens.


But in most other areas, I’m quite capitalist. I believe in market dynamics, personal responsibility, and economic freedom.


That’s exactly why I don’t believe in the pension system — not Pillar I, not Pillar II, not Pillar III.


Pillar I: Social safety net, not a real pension.

Let’s be honest: Pillar I is not a retirement savings plan. It’s a social assistance mechanism — and that’s okay! It’s like unemployment benefits, public education, or universal healthcare. It’s meant to be a safety net for the elderly, to ensure a minimum standard of living in old age.


But let’s stop pretending it’s a “contributive” system. It never was, and it never will be. It has never been financially sustainable. It runs on political promises, electoral cycles, and constant state patchwork. It’s been eroded consistently — especially through inflation, special pensions, and arbitrary decisions by the state.


So yes, it’s a socialist system — and that’s fine. But it’s not what people are told it is.


Pillar II: State-controlled investment? That’s not real investment.


Pillar II was supposed to be a privately managed pension fund. In reality, it’s state-controlled. And I’ve never trusted a system where the state tells me how to invest my own money.


That’s not investment. That’s just another political instrument.


I’ve always been certain that the state will eventually seize or redirect those funds — and so far, it’s doing exactly that. It controls where and how the money is invested, how and when it can be withdrawn, and whether we’ll even see it again.


I’m convinced we won’t. Pillar II was just electoral marketing.


In short, we need to rethink how we talk about pensions. We need to stop mixing the language of "social protection" with "financial independence".

July 19, 2025

What I Learned Losing a Million Dollars – A Modern Fairy Tale About Gambling in Business

I don’t read business books much anymore. I used to. Obsessively. At one point, reading felt like a compulsion—an intellectual sugar rush I couldn’t resist. These days, I prefer something sharper: peer-reviewed science, niche blogs, curated newsletters, specialized courses, and a healthy dose of GPT-fueled learning. Less time-consuming, lower-friction, and far more adaptable to what I’m actually trying to do—learn and build.

But sometimes a book sneaks through the firewall.


What I Learned Losing a Million Dollars, by Jim Paul, came recommended by Sabin Gilceava.

This is an easy read, more of a business fable than a textbook. The story is compelling: a tale of gambling your way into (and out of) trading and business. It follows a tried-and-true formula: tell the reader simple but intriguing truths, sprinkle in some elementary insights from psychology and statistics, and package it all in a way that makes the reader feel smart. It’s accessible. It’s a modern fairy tale, i.e. it’s about money.


It dances with ethical ambiguity. You keep wondering: is the author reflecting or justifying? It is not about value creation, nor about business. It’s about money, connections, bluffing, image, cheating, misrepresentation, risk, gambling, trading, speculation, and money again.

In that context, the introductory references to Edison or Ford are ironic. Please. Those men were engineers; they built things. 


That said, the book serves as a good reminder of foundational, state-of-the-art scientific and educational literature from psychology, economics, and statistics. The application of the five stages of grief (from pain management) to business loss is actually quite interesting. But like many books in this genre, it overstays its welcome, and sometimes exaggerates elementary truths until they become false. There’s a point where you realize you’re reading another 10-page explanation of why having a plan is better than not having a plan. And surely, both experience and research suggest that rigidly following an initial plan is usually a mistake, something the author simply ignores. Likewise, the value of objective over subjective decision-making is a repeated theme, in the literature as well as in this book, even though, surprisingly, it’s contradicted in the book’s very conclusion.


One quote stands out as a neat summary of the entire work:

“Most people who think they are investing are speculating. And most people who think they are speculating are gambling.”

Simple. Sharp. That wraps it up.

July 01, 2025

The great modern urban myth: no, AI doesn't replace programmers

AI doesn't replace programmers. It's a modern urban myth.


The net is full of stories about how AI/GPT writes full product code and replaces programmers.

It's an urban myth.

It sounds cool. But it's wrong.


AI/GPT is great at boring, repetitive tasks. Great at searching. SearchGPT successfully replaces most manuals and documentation. It is even much faster and better than Stack Overflow.


It's also useless for solving any original or complex problem. It makes silly mistakes, writes redundant code, and doesn't really "understand" anything.

Yes, I write code, I love AI, we use it extensively, and I use GPT every day. AI is great. Technology is great.


But no. There is no "product" fully written by AI, by amateurs. 

It's a modern urban myth.

June 08, 2025

AGI and "true reasoning", vs "AI" and "reasoning. How society handles language mis-use and abuse

Obviously, current AI is not really intelligent and doesn't have "true reasoning" or "true understanding". And we may argue (scientifically) that the terminology is wrong. Which it is.


But this will not change the way people use, and mis-use, words.


What we do, when words are mis-used or abused, is invent new words.


So, we say AGI (Artificial General Intelligence), and "true reasoning", to clarify that "AI" and "reasoning" are improper terms.



June 03, 2025

AI intoxication, AI poisoning, and Google bombing

AI intoxication is probably even harder than Google bombing or Wikipedia astroturfing, which are already incredibly difficult by now.


But it's coming. It's bound to be a thing.

Same as Google bombing and Wikipedia astroturfing.


We need a new name for this. AI intoxication? AI poisoning?


N.b. Google bombing is a method to trick Google into returning a specific, fake result for a specific search query. It exploits Google's algorithm, which relies mostly on the number and reliability of websites that link to a specific webpage.
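
For the curious, the underlying idea can be sketched with a toy, PageRank-style score. This is a hypothetical simplification (not Google's actual ranking, and the graph and names are made up): flood a page with inbound links from throwaway sites and its score climbs above an otherwise identical page.

```python
# Hypothetical, simplified PageRank-style scoring over a toy link graph.
# Not Google's real algorithm; just the link-counting intuition behind it.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            for t in targets:
                # Each page splits its current score among the pages it links to.
                new_rank[t] += damping * rank[page] / len(targets)
        rank = new_rank
    return rank

# "victim" and "target" start out symmetric: one link each, from the same hub.
organic = {"hub": ["victim", "target"], "victim": [], "target": []}

# The "bomb": twenty throwaway pages, all linking to "target".
bombed = {**organic, **{f"spam{i}": ["target"] for i in range(20)}}

for name, graph in (("organic", organic), ("bombed", bombed)):
    scores = pagerank(graph)
    print(name, round(scores["victim"], 3), round(scores["target"], 3))
# In the bombed graph, "target" now scores far above the otherwise identical "victim".
```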


Astroturfing is the practice of creating a false appearance of grassroots support for a cause, product, person, or policy. It manipulates public perception by making it seem like a lot of people are behind a particular opinion.

Wikipedia Astroturfing is a method to trick the crowd-sourced editorial process of Wikipedia, which relies on reputation and wide consensus.



