12 April 2026 · Philosophy

When I say that the Pythagorean theorem exists, grandfather's umbrella exists, the future exists, a stone exists, I exist, a spirit exists, or God exists, each time using the same word, I am saying something entirely different. Each of these things, and I could surely keep listing more, exists differently. When we say that something exists, we rely solely on intuition.

If you asked a physicist or a mathematician, they would explain the relationship between the physical world, the world of reason, and the world of abstraction, where all three create one another. However, if we were to divide things by their mode of existence, we would be dealing with even more worlds.

The sensation of color cannot be described. It can, however, be made dependent on the wavelength of light and the response of receptors in our bodies. Redness exists differently in the physical world and in the subjective world of the perceiver. We see a clear coupling between the two worlds, but does a set of transformations exist that would connect them both? And if so, in what way? A physical one?

We are still missing one element before reaching the first important conclusion. In theology, a spirit is defined as a being without time or place. That is, one that exists, but never and nowhere - in other words, it is futile to search for it in the physical world, and one cannot ascribe to it the same rules of existence that we ascribe to matter or even to a human being. The world of spirit may have its own mode of existence, just as the world of abstraction exists differently from the real world. The nonexistence of beings in the real world in no way violates their existence in abstraction; the absence of abstraction does not cause real beings to cease to exist. On the other hand, if there are any rules connecting the two, they belong neither to abstraction nor to reality - and yet reality is partially coupled with abstraction. The only postulate I put forward is the possibility of beings existing in yet another way. Beings that also couple with reality, though it is futile to search for what connects both worlds.

Here I put forward the postulate of the possibility of God's existence in His own way. Different from all other modes of existence. Inaccessible to either physics or abstraction. It is futile to search for Him through empirical experience or logical reasoning. What we experience empirically exists in reality, logic creates the world of abstraction, and we are now postulating yet another form of existence.

Long ago, Descartes said that a human being is composed of soul and body, committing in the process an error that cast a shadow over future centuries of philosophy. The issue was finding a bridge between the worlds of spirit and matter. Since a human being is composed of two elements, it was expected that they must interact with each other. The error lay in the expectation of some kind of physics between the worlds. Some force that, by moving the spirit, would also move the body and vice versa.

Had Descartes not thought of soul and body as components of a human being, but instead stated that a human being is simultaneously soul and body, this subtle difference would have allowed him to understand that such a dependency cannot be found, and that a human being may be an entity manifesting existence in two different worlds. Just as redness manifests existence differently in the consciousness of the perceiver and in the physical world. So too a human being can exist in the physical world, subject to its limitations, and in the world of spirit simultaneously, where different rules prevail. The only criterion connecting them is the person themselves and their own existence.

I do not wish to convince anyone that unverifiable forms of existence exist, especially since arguing for the existence of God is doomed to failure. However, there are qualia - things perceived through pure intuition - that cannot be described in any other way than to point at them and hope that others experience them the same way. They do not fit into any verifiable world, and there is no method similar to the scientific method that would prove their existence, or at least their scientific legitimacy.

What I am trying to do, however, is protest against understanding the world only in categories that are provable, by showing that if we swing Occam's razor too broadly, we will cut away beings that we know are there, yet for which we have no tool to verify their existence other than plain intuition. I also believe that just as through intuition we see redness, so too faith, which allows us to perceive the spiritual, is a form of such intuition.

-
29 March 2026 · AI, IT

Vibe-coding in a nutshell. I sit in the CEO's chair.

I've just discovered an IDE for vibe-coding, probably some fork of VS Code. I know the tools that automate most processes. I connect the right connectors to my agent. A ready-made service providing a database, GitHub, and a service that takes code and deploys it straight to production. All of these will be tools my agent can use. I'll use an agent for testing too. Full AI, no human involved along the way, except the one writing prompts. I write a prompt, describe exactly what I want, and moments later I see the result. It's impressive. Not quite what it should be yet, so I write another prompt with corrections. When I'm satisfied, I write more prompts with new features. Deployments happen automatically. After a week, I have a working application. Done. It works. We're making money.

Is it perfect? Doesn't matter, it doesn't have to be. It makes no difference to me whether I trust AI or a human. After all, how is writing prompts different from delivering requirements to a developer?

If we could put a period here and end the story, it would be beautiful. But the story goes on. The first user shows up for whom something doesn't work. No problem, AI will fix it. Maybe we'll even automate this process too. Users will give feedback to the AI, which will prioritize defects and fix them on the fly. Soon it turns out that the application runs slowly, sometimes even throwing strange errors. AI looks for the cause and fixes it.

At this point, many things can go wrong, and the chances of the application running flawlessly shrink. What will you do if it turns out that users can see other users' sensitive data? What will you do if the AI suddenly decides that the cause of the slowdown is a database that's too large and decides to wipe it? What will you do if the AI at some point decides it's best to start from scratch and deletes everything?

At some point you need more resources. Your providers aren't enough. You have to migrate your application to different infrastructure. That's when you discover that the AI has everything hardcoded for the current setup, because nobody told it otherwise.

Now I sit in the programmer's chair.

The first difference - I approach writing code as if it were my own. Ultimately, I'm the one who will have to put my name on it. The second difference - years of experience have taught me that code is read more often than it is written. In this regard, AI has changed only one thing - you don't write the code. That means its readability is even more important. On the other hand, does it even matter, as long as it works? I don't connect any connectors yet - working with Jira and Git is in my muscle memory, I simply don't need them. First, I want to see if this works.

I write my first prompt and before I even run the application, I read the code. What interests me most is the data model and the tables that will be created in the database during migration.

Why? The data model is the hardest thing to change after going to production. It is the world as our system represents it. A careless change can lead to data loss or database inconsistency. Before the first release, you can simply wipe the database and repopulate it. In production, this means losing user data or, worse, misattributing data.
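To make this concrete, here is a minimal sketch - the table and column names are hypothetical, not taken from any real project - of the difference between an additive migration and the kind of destructive one that silently loses data:

```python
# Minimal sketch with a hypothetical schema; SQLite is used only to keep the
# example self-contained - the point applies to any relational database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('Alice'), ('Bob')")

# Additive migration: safe, existing rows keep their data.
conn.execute("ALTER TABLE users ADD COLUMN email TEXT")

# Destructive migration an eager assistant might generate: drop and recreate
# the table to change its shape. Harmless before the first release; in
# production it silently wipes every user.
conn.execute("DROP TABLE users")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, full_name TEXT, email TEXT)")

print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # prints 0 - the data is gone
```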

This is the moment where the AI changes something - tests it - it doesn't work - fixes it - tests again - it works. And in the end it turns out that along the way the database has fallen into ruin. I, of course, do this on a test environment and check the migrations to make sure the destructive behavior won't be reproduced in production.

Then I look at the interface. The interface tells me whether the AI understood my use cases. I review the handler names and their inputs and outputs. Then I check how the ones that do more than save/load/delete work.
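As an illustration - with entirely hypothetical names, not code from any real project - this is roughly the level at which I read the interface first: whether the handlers and their inputs and outputs line up with the use cases, before looking at how they are implemented:

```python
# Hypothetical service interface; the names are invented for illustration.
# At this stage I only check that the handlers map onto my use cases and that
# their inputs and outputs make sense - not yet how they are implemented.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Order:
    id: int
    customer_id: int
    total_cents: int
    status: str


@dataclass
class PlaceOrderRequest:
    customer_id: int
    item_ids: list[int]


class OrderService(Protocol):
    # Plain save/load/delete - rarely where the surprises hide.
    def get_order(self, order_id: int) -> Order: ...
    def cancel_order(self, order_id: int) -> None: ...

    # Does more than CRUD - this is the handler whose internals I actually read:
    # pricing, stock checks, what happens on partial failure.
    def place_order(self, request: PlaceOrderRequest) -> Order: ...
```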

That's when it turns out that while the AI does produce the expected result, internally it's not doing what it should. I explain how it should look, and it fixes it.

There's one more thing left. If I'm not reading the code very carefully right now, it means it must be readable enough that I can understand it when needed. I give the AI - in my case Claude - appropriate guidelines on how to write code. This isn't a matter of aesthetics, but of quickly finding your way through someone else's code during incident investigation.

In the end, I'm not only satisfied but impressed. Claude thought of edge cases that I hadn't thought of. It wrote thousands of lines of code in record time. Does everything work as it should? That needs to be checked. I start testing and it quickly turns out that I'd change something after all, or I simply have a defect. I tell Claude about it, and it eagerly gets to fixing.

No, no - you can't do it that way because you'll break something else - I explain to it once again, and it agrees with me. Personally, I found the root cause of the bug fifteen minutes ago, but I keep giving it a chance. Eventually I fix it myself. The investigation forced me to look deeper into the code and I see far more places that need improvement. It only pretends to work. I write a piece of code to show Claude how it's done and explain - apply this to the rest of the places. Claude does it, and the result is much better than if it had written it on its own.

Eventually I arrive at my own way of working with Claude. The most important thing is obviously the data model, then the scope of responsibility of services and their interfaces. Claude excels at writing code that supports my model. It does it even better than I would. It won't mix up operators, won't make mistakes in logical clauses, will remember about exceptions, and will do the most tedious work with enthusiasm.

Suddenly Claude gets stupid. An ordinary thing. The context ran out. It doesn't remember what it was doing. Our code, which we wrote together, becomes foreign to it. "I need to review the code structure" - it responds when I give it the next task. Well, fortunately I know how to explain it and steer it, but it takes a while, not to mention the consumed tokens. This means we need to continuously document what we're doing and describe the features. That's actually even better. It'll be cleaner and more organized, and the AI will handle the documentation anyway.

I have an MVP. Until now I haven't had to think about deployment. I know I split the application into dockerizable services and everything will end up in a k8s cluster anyway - where exactly, I'll decide shortly. I had Claude create a separate schema for each service, so everything is easily portable. Claude writes the k8s configs - after all, nothing here really goes beyond the standard elements. As for the database and deployment - I don't need to use ready-made providers, I've done this thousands of times. For a long time now I've had a production database for small projects and a cluster where I can add another namespace. Will I need something more? I'll migrate the database, update the configs, and deploy to a different cluster.

When the first client comes to me with a problem, I know where my logs are, I can look into them. I feed them to Claude, because why should I analyze them myself. It finds the problem and fixes it. Wait! - I say. You can't fix it that way because we'll lose user data. "You're right!" - Claude responds - "I'll do it without touching the database." Moments later we have a ready fix. I test it locally to see if it works. I do smoke tests on the rest of the features to check if we haven't broken anything, but we shouldn't have - after all, I understand the code change.

I come to the conclusion that I'm dealing with a junior on steroids. AI will write code much faster than I would, more precisely than I would, but it will also make a mess much faster and worse than I would.

Going back to the beginning, can I do in two weeks by myself what would take a team a month? Yes. The problem doesn't lie in the first two weeks but in the following years of the application's life. We fall for the illusion of a fresh project, where the acceleration is greatest and logical errors are still undiscovered.

Ultimately, a question arises. If I had two applications doing the same thing and all I knew about them was that one was written by a CEO with AI and the other by a programmer with AI - from which one would I expect better performance, security, and longevity? My suggestion: let CEOs use programmers, and let programmers use AI.

-
17 March 2026 · AI

At the turn of the 19th and 20th centuries, a technological revolution took place. Machines appeared that automated production. The market built on craft guilds and manufactories collapsed. Specialists who had devoted their entire lives to their professions became unemployed and were forced to work in factories, where as mere appendages to machines they received starvation wages that barely allowed them to survive the month. There was no question of starting a family, saving money, nor was there any way out. From history, we know these people as the proletariat. They were not peasants, they were not slaves, they were not stupid, lazy, or unskilled people. They were people whose fishing rod — one they had made themselves — was taken away, and they were given fish bones in return.


They became a social problem that was politically exploited. From the need to level the disparity between the bourgeoisie, who were growing rich at an unprecedented pace, and the proletariat, who were expanding just as rapidly, socialism and its many variants were born. One of these, supported by communists, was proletarian socialism. The communists openly called the proletariat to revolution, which brought Europe what was probably the greatest bloodshed in history, and plunged its eastern part into decades of darkness.


I know I'm simplifying, and the whole story cannot be told in two paragraphs. But I want to tell it in a way that draws attention to familiar themes. Ultimately, machines brought many benefits to the world, and we would rather not have stopped at the stage of manufactories and guilds. But did so much suffering have to happen along the way?


I believe it was not technology that led to this, but human greed and the failure to develop an appropriate culture in time. On the surface, everything looked fine. The bourgeoisie were people who, at the right moment, saw the coming changes and invested. But culture was lacking. Feudal lords oppressed their subjects, but not to a degree that stripped them of their humanity. Over the centuries they had worked out their relationship with the peasants - unfair, but deep. The bourgeoisie never developed any such relationship with the proletariat, so they saw them as no more than a tool for generating wealth. The bourgeoisie saw the problem they had created, everyone saw the problem, but not everyone responded, and those who did acted in their own interest. Ultimately, the proletariat itself also responded in its own interest.


Contrary to appearances, the conclusion is easy to see, though difficult to realize. We have forgotten about the human being. We no longer see them. We celebrate that the wonderful AI technology can do what many specialists do. So we lay off the specialists and reduce the rest to operating AI. We therefore need cheaper, less educated people. Today, like never before. In times when progress matters more than those it is meant to serve.

-
15 March 2026 · AI, IT

A Parable of the Violinist

Once upon a time there was a violinist whose passion was, as one might easily guess, playing the violin. Not only did he have talent, but he had honed his skills over the years. Everyone was enchanted by the music he created, and he wanted to do nothing else.

Even when I'm old and they send me off to retirement, I'll keep playing - he used to say. He was offered various promotions — he could have become a conductor, a director of the philharmonic — but he turned down every offer except those that involved playing his instrument. And so he always played first violin and was perfectly suited for the role. When asked where he saw himself in ten years, he would reply that he already had everything he wanted and was doing what he wanted to do; the only thing he dreamed of was doing it better.

At some point, a trend emerged for recording and reproducing music. The market was flooded with people who mixed other people's work, calling themselves artists and even violinists, though they rarely held an instrument in their hands. They created beautiful pieces, drawing among other things on the music of our violinist, whom fewer and fewer people wanted to listen to, as they indulged in the works of those who, despite lacking the ability to play any instrument, called themselves specialists — just of a different kind.

Nowadays it doesn't matter whether you play the violin or the piano; what counts are skills of a different sort - they would say, pointing their finger at the violinist - those who failed to evolve get left behind and have only themselves to blame. This is where laziness and lack of ambition lead. After some time, the violinist put away his instrument, because no one wanted to listen to his works anymore. People still listened to music, of course, but only to what was replayed and remixed from what the old musicians had created. Our hero decided to start mixing himself. He did it well, because he loved music and understood it far better than the self-taught vibe-musicians. Nevertheless, he never touched the violin again.

However, this story was not about a violinist, nor about music.

-
8 March 2026 · AI, IT

I came across this comparison on LinkedIn. It came with a comment that AI, drawing on the O*NET database and millions of real usage sessions, had classified about 75% of programmers' typical tasks as automatable. For obvious reasons, this is a signal for many that the twilight of the IT aristocracy is coming and programmers should start packing their bags.

It's worth mentioning that the diagram comes from Anthropic Research – Labor market impacts of AI / AI Exposure Index (March 2026) and the Anthropic Economic Index (January 2026 and September 2025) – and worth keeping in mind the inherent conflict of interest.

I remain skeptical about this type of analysis, though. First and foremost, percentages mislead most people who deal with percentages. The problem with charts is that they're easy to interpret in favor of a thesis everyone already expects, so few bother to question it. For me, there are two key questions here:

What will this look like in absolute terms?

The natural reaction is a sense of threat that since AI is supposed to handle 75% of tasks, we won't need 75% of the programmers currently on the market. On top of that, other statistics say that companies already don't want to hire juniors. Combining the right data points, it's easy to conclude that IT is heading for an apocalypse. But if a single project used to require a team of thirty people and today only three are enough, then instead of firing twenty-seven, you can take on nine additional projects. You'd still have thirty people employed, and instead of one project you'd be doing ten, delegating most tasks to AI, and the equation saying AI will handle 75% of tasks still holds. We haven't pushed specialists out of the market — we've expanded the market. Why isn't this happening? I think it's due to a transitional period and mounting crises around the world. Companies are going into emergency mode and not investing in growth. Laid-off programmers won't vanish into thin air — they'll have to do something, and that something will most likely be a surge of new companies that are far more agile than the old ones and will leverage AI from day one. It may turn out in the end that even though AI handles most tasks for us, the demand for specialists will increase, not decrease. Right now we're in a crisis, and it shows everywhere. But before prophesying AI-driven threats and backing them with data, let's end the wars we're fighting, lower interest rates, let companies invest, and see what happens then — whether AI is more of a danger or an opportunity.

Why is AI entering these professions and not others?

The quick answer here is that these professions are the easiest to replace. I beg to differ. It really doesn't take much for AI to enter anything. Just as in IT it uses agents and programs already installed on computers to perform tasks, along with various connectors, AI can post job listings and use people where it can't do something itself. Apparently, there are already ideas for AI to pay subcontractors for work. I personally argue that AI is replacing tasks in IT fastest not because of ease, but because of the environment. People in IT are the most active AI users, and the market is large. I myself stopped doing many of my tasks by delegating them to Claude, because there's no point in me spending hours on something I could do in a few minutes. People in IT are the most susceptible to AI because they see its potential and want to use it. As a programmer, I use Claude, which lets me do ten things in the time I used to do one. My manager or the company CEO doesn't do this. AI helps me in my work and it's me who delegates tasks while doing my own job. At the end of the day, I still sign off on everything. IT is the biggest beneficiary of AI, because it was IT that created AI, and IT understands its potential and wants to leverage it.

Is anyone actually at risk? Let's not kid ourselves — IT is no longer an El Dorado for anyone who can launch an IDE and write a few lines of code, or fix bugs someone else introduced while building a feature. Even seniors with years of experience or more capable juniors won't be doing the things they used to do. Just as a farmer no longer goes out to the field with a scythe for weeks on end, but heads out for a day or two with heavy equipment. He's still a farmer — only his tools have changed, and his work is more efficient. And that's a good thing, because there are more and more people who need to eat. The demand for farmers hasn't declined that drastically. I think the same will happen with programmers. We'll create differently than before, and better. Focusing on the task at hand instead of getting bogged down in frameworks for frameworks and libraries for libraries.

-