Perception of Artificial Reality, Version 2.0
- Sarnav
Loading New Simulation
Everyone is talking about artificial intelligence, and we are making rapid progress in the field. But is that progress always positive? As the downsides come to light, we realise that this technology is both a blessing and a curse.
Talking about whatever happens to be on the agenda is usually tedious. To be honest, I can hardly bear to read another article about artificial intelligence. But when I read personal accounts, or the little-known experiences others have had with it, the subject regains its force and its realism, and I want to talk about it again and again.
Surely you have experienced something similar: the social media posts forwarded by parents and relatives. I mean the pictures of cats, dogs, perhaps other animals, a cute baby, a pleasant landscape. They are obviously generated by artificial intelligence. Yet people who are not digitally literate, or who are simply less familiar with the technology than we are (and older people, of course), accept these artificial images as real, most of the time without question.
The strangeness of the generated images does not always bother them much. To be fair, until about a year ago this was less of a problem. As you know, the technology has advanced enormously over the last few years, and its believability has grown with it. So, too, has scepticism.
I can't help thinking back to the days of limited internet use, when videos could still cause such amazement. Of course, we had our share of being fooled too, and when we look back on those days now we feel a little foolish. The difference is that the illusions back then were usually handmade, or shot at a resolution low enough to hide the manipulation. Still, perhaps that is just an excuse; the interpretation is yours.
Today, as I said, we are living through something similar. But I can't be angry with those who can't keep up with this technology, because even though I am careful, I am sometimes fooled myself.
Some of the pictures a friend sends me feel off, but most of the time I can't tell why at a glance; I have to examine them. I make a point of reading the comments as well, and I see that others share the same uncertainty. What bothers me is that this gives me a sense of relief. It means that, collectively, our intuition is failing us.
As I said, scepticism is coming to the fore more than ever. So much so that people now act on prejudice or preconceived acceptance and do not hesitate to declare that almost every post that looks too good was made by artificial intelligence, even when it was not.

Photo by Cash Macanaya on Unsplash
That is the visual dimension. Such moments are precisely where the illusions we experience through our eyes, through one of our senses, begin, and where the seeds of later suspicion are sown. Then there is the other, less tangible side of the matter, the one we cannot so easily pin down.
Yes, I am talking about AI chatbots, which have begun to profoundly change our idea of the search engine and have wasted no time entering our vocabulary. I am sure almost all of us have tried at least one of them. Isn't this technology, which I have used since it first appeared, though how often varies, strange?
I am not going to explain what this technology, straight out of science fiction, actually is; you already know, more or less. I don't know in what form you have used it, but we are gradually changing each other. I am not sure we are evolving, but we are definitely changing. It has clearly entered our daily conversations and our lives, with the way it seeps into our thinking, its oddly repetitive phrasing, and the occasional meaningless answer.
Its development takes time; that is inevitable. But we, the users who give it feedback, are the ones growing it, and in doing so we can actually fall behind it. We often have no idea what it can do. I don't know whether you have ever said something like "I could have done that with it, I just never thought of it", but it is a phrase I find myself repeating whenever I read the news and the tips on how people use it.
Which is to say: everyone has their own uses for artificial intelligence, and we set our priorities accordingly. We might ask it for a recipe, want book recommendations, or happily let it put together a travel guide. We act according to our needs and expect it to do the same. We trust it, at least in part. But remember, it is good to be sceptical.
I stress this because of the news I read and the series I watch. Yes, I accept that series dealing with such technologies are generally negative and far removed from everyday use. But haven't there been strange developments you have experienced yourself or read about in the news? Remember the phrase that even the founders of well-known AI companies and experts in the field keep repeating: "We cannot foresee where this technology may lead."
There is an AI assistant called Grok built into X, a site I rarely use, which you can also use on its own site. Because it is built into the platform, it can scan the posts there and answer questions based on them. It is, in fact, an impressive system. Users feed this assistant with whatever they share, whether they intend to or not. When it is asked about the accuracy or source of a piece of news (at least when that news is shared on the platform, which its popularity makes almost inevitable), it can give the "right" answer. Or at least we assume most of its answers are right.
When users interact with this assistant, I find myself reading what it answers; I read almost every reply I come across. Most of the time, unsurprisingly, I see answers I agree with or would support. But some of the questions, while seemingly reasonable, are leading, and leading questions produce biased answers, even if the billionaire who owns the platform insists the assistant is unbiased. Let's face it: that is impossible.
Of course, we can choose to steer clear of all this, but there is another point that gives me pause.
It is the question of the impartiality of artificial intelligence, and of who owns it. This is what really makes me think. There is a famous saying: "History is written by the victors." We all know, and the billionaires at the top of the world's financial heap know it best, that the development dominating the present, and set to dominate the future even more, is artificial intelligence. That is why these people, normally so fond of frugality, do not hesitate to spend astronomical sums on it.
This is what I fear most (or let's just say I am afraid, for now). We are living through decades of noticeable decline in literacy, awareness, scepticism and questioning across the global community, and that will have enormous consequences. Yes, other people's ignorance really does affect our lives.
I know the cat and dog videos created by artificial intelligence may not look like a direct contribution to this decline. But I see them the way I see pop music. Technological breakthroughs that reach the public, become part of it, demand no extra knowledge and are just good enough are already enough to numb it. And you know that technology always has two layers: the layer the public gets to use, and the layer that sits above the public. Whoever holds the power secures their own position before distributing the rest, right?
That is how I read the development of artificial intelligence. Remember, we know that Google, the search engine every one of us knows and uses, collects our data and uses it however suits them. It is absurd to think they will not do the same with artificial intelligence. And in a web of technologies that can evolve instantly and in real time, that is frightening now and will only become more so.
Let's go back to my earlier example. We blindly trust an AI assistant, which we know is not alive and does not truly understand the material it processes, to vouch for the information shared on the platform. Now suppose we are someone with an audience, with visibility, with the power to rally communities behind us. An issue comes up that we would rather keep off the agenda, or a historical reference touches our sacred values. A single article in our favour on that subject is the beginning of shaping the situation the way we want.
You know that the more a subject is talked about, the more it sticks in people's minds. Truth is beside the point. If I repeat a lie often enough, some people will believe it, and those who accept it unquestioningly as a new reality will spread it. As it spreads, it inevitably clouds other minds too. This, as you know, is what we call black propaganda, because sometimes not even the source is clear.
"What is the big deal? Hasn't this happened in every period of history?" you may ask. It is true, it is a despicable behaviour that I believe has been possible since the beginning of mass communication. Newspapers spread lies the next day, radios immediately, but at certain hours. A plate of food was taken to tell the neighbour or the chatty friend in the office. It was told so that it would multiply and grow. But with the media developing at the speed of light, especially with the help of the Internet, it is so difficult to see where things are going today. I am reminded of a saying that is repeated in every new era: "It is now very difficult to add anything new to the list of technological breakthroughs". But it never ceases to surprise us, and to spread the “evils” that lie behind its benefits.
Of course, this is not a "technology and artificial intelligence are bad" article. People should make the best use of the resources available to them, though in my view always within a virtuous framework. Still, who knows how many years ago we, the people, unconsciously chose to be slaves of these developments rather than have a say in them.

Photo by Toa Heftiba on Unsplash
The number of people who expect artificial intelligence to solve their psychological problems, to do their work for them, or at least to finish it quickly, is growing. We have begun to set aside what we would otherwise have had to say to another person, what we would have learned over time, the mistakes we would have made. We have begun to become artificial ourselves.
Bottom line: the more humanity shifts its burden onto the artificial, the further it drifts from its own process of becoming. The line between using technology and surrendering to it has never been more blurred.
What's more, at a single word from those in power, we are all too likely to accept the artificiality we trust so much as reality. The rewriting of history, the glorification and demonisation of individuals, the erasure of truths, even the crossing of sacred red lines: these are the harbingers of the age we are drifting into. I wonder whether tomorrow we will see advertisements like 'Your new artificial intelligence friend, delivering only the truth from the archived database, just like in the old days!'
Let the technology develop by all means, but the effort to utopianise our dystopian lives is neither as innocent nor as cheerful as promised.