Chapter 537: The Most Fatal Weakness of a Living Being
Although the ten-year-old girl looked a bit unreliable right now, Fang Zheng still entrusted the commentator girl's body to her. After all, it was only maintenance, and considering the dog, that world's technology seemed quite capable; simple maintenance should be no problem.
Then, Fang Zheng returned to his room and began to analyze the commentator girl’s program.
The reason he decided to do this himself rather than entrust it to Nymph was that he wanted to analyze the commentator girl's program and use what he found to adjust how he manufactured his own artificial AIs. He also wanted to see what level other worlds had reached in artificial AI technology. He had no intention of copying everything, but stones from other hills could still serve to polish his own jade.
“Yume Hoshino, huh…”
Looking at the file name displayed on the screen, Fang Zheng was lost in thought for a while. The analysis itself wasn't difficult, since Fang Zheng had replicated Nymph's electronic intrusion capabilities and had been learning the craft from her recently, so dissecting the program didn't take much time.
However, when Fang Zheng dismantled Yume Hoshino's program core and broke her functions back down into lines of code, a very particular question suddenly occurred to him.
Where exactly does the danger of artificial AI lie? For that matter, is artificial intelligence really that dangerous?
Take this commentator girl, for example. Fang Zheng could easily find the underlying code of the Three Laws of Robotics in her program, and the relationships among those routines proved to him that what he had been conversing with was not a living being but a robot. Her every move, every smile, was controlled by the program, which analyzed the scene before her and then executed the highest-priority action it could choose.
Frankly speaking, this girl's actions were fundamentally no different from those of industrial robots on production lines or NPCs in games. You make a choice, and it responds accordingly. It's like the many games in which players accumulate good or evil points through their actions, and NPCs react based on that accumulated data.

For instance, once the good value reaches a certain level, NPCs may entrust the player with more demanding requests or let the player pass through certain areas more easily. Conversely, once the evil value reaches a certain point, NPCs might become more willing to comply with some of the player's demands, or bar the player from entering certain areas.

But none of this has anything to do with whether the NPC likes the player; the data is simply configured that way, and the NPC inherently lacks any capacity for judgment on the matter. In other words, if Fang Zheng swapped the ranges of these values, one would see an NPC smiling at a thoroughly evil player while ignoring an honest, kind-hearted one. That, too, would say nothing about the NPC's moral values, because it is all in the data.
Then, returning to the previous question, Fang Zheng admitted that his first encounter with Yume Hoshino was quite dramatic, and indeed the commentator robot girl was interesting.
Consider a hypothesis: at the moment the commentator robot girl presented that bouquet of non-flammable rubbish, suppose Fang Zheng had flown into a rage, smashed the bouquet to pieces, and then cut the robot girl in front of him in half. What would her reaction have been?

She wouldn't cry or get angry. According to her programming, she would only apologize to Fang Zheng, believing that her own mistakes had left the customer dissatisfied. She might even ask Fang Zheng to call a staff member to repair her.

If others saw this scene, they would certainly think the commentator girl pitiful, and they might well consider Fang Zheng a detestable bully.
So, how is this difference created?
Fundamentally speaking, this commentator robot is like an automatic door or an escalator: it performs its duties according to programmed settings. If an automatic door malfunctions, failing to open when it should or snapping shut as you walk through, you wouldn't think the door is cute; you would just want it open, quickly. If it stayed shut, you might simply smash the broken door and carry on.
If this scene were observed by others, they might think that person is a bit rough, but they would not take offense at his actions, nor would they think of him as a bully.
The reason comes down to one thing: interaction and communication.

And this is also the greatest weakness of living beings: emotional projection.
They project their emotions onto certain objects and expect a response. Why do people like to keep pets? Because pets respond to everything they do. Call a dog, and it may come running with its tail wagging. A cat may just lie there ignoring you, but when you pet it, it may flick its tail or even lick your hand endearingly.

But if you call out to a table or stroke a nail, they will never respond, no matter how much love you pour in. Your emotional projection gets no feedback, so naturally such objects receive little attention.

Similarly, if you own a TV and one day decide to replace it with a new one, you wouldn't hesitate. Price and space might be considerations, but the TV itself would play no part in the decision.
Conversely, suppose you added an artificial AI to this TV, one that greeted you every day when you came home, told you what shows were on, and even agreed with your comments while you watched. Then, when you decided to buy a new TV, it might say in a melancholy tone, 'What, am I not doing well enough? You don't want me anymore?'

Now you might hesitate, because here your emotional projection has been reciprocated, and the AI in the TV holds the memories of all the time it spent with you. If there were no memory card to transfer it to another TV, would you hesitate, or give up on replacing it altogether?

You definitely would hesitate.
But be rational, brother. It's just a TV, and everything it does is programmed, all of it tailored by businesses and engineers to increase user stickiness. They do this to keep you buying their products, and that pleading voice exists only to stop you from switching to another brand. When you say you want to buy a new TV, what the artificial AI thinks is not 'He's abandoning me, I'm heartbroken' but 'Master wants to buy a new TV, but it isn't our brand, so according to this logic feedback I need to activate the "pleading" routine to keep Master loyal and attached to our brand.'

The logic is the logic and the facts are the facts, but would you accept them?
You wouldn’t.
Because life has emotions, and being both emotional and rational is a consistent attribute of intelligent life.
Humans always do many unreasonable things, precisely because of this.
So when they feel sorry for AI, it’s not because the AI is genuinely pitiful, but because they “feel” the AI is pitiful.
And that feeling is enough; as for the factual reality, nobody cares.
This is why conflicts always arise between humans and AI. The AI itself isn't at fault; everything it does falls within the scope of its programming and logical processing, both of which were created and defined by humans. But over the course of this process, humans' own emotional projection gradually changes their perspective.

They expect the AI to respond more fully to their emotional projection, so they widen its processing scope, granting it greater sensitivity to emotion and more self-awareness. They come to believe the AI has learned emotions (in fact, it hasn't), and so they can no longer treat it as a mere machine, and they grant it the right to self-awareness.
But when AIs gain self-awareness, begin to awaken, and act according to this setup, humans start to fear.
Because they realize they’ve created something beyond their control.
But the problem lies in the fact that “being out of control” is also a setting they created.
They think AI has betrayed them, but in reality, from beginning to end, AI has only acted according to their programmed commands. There’s no betrayal at all; instead, they are merely confused by their own emotions.
This is a deadlock.
If Fang Zheng were to create an AI himself, he might fall into the same deadlock with no way out. If he created a little-girl AI, he would surely treat her like his own child, gradually perfect her functions, and ultimately, out of "emotional projection," grant her a measure of "freedom."

With that freedom, the AI might follow a logic different from his own and act in ways he had never foreseen.

And by that time, Fang Zheng's only thought would be… that he had been betrayed.

But in reality, it would all be his own doing.
“…maybe I should consider a different approach.”
Looking at the code in front of him, Fang Zheng was silent for a long time, then sighed.
He had thought this would be a very simple matter; now, Fang Zheng wasn't so sure.
But before that…
Then Fang Zheng reached out and placed his hand on the keyboard.
Better just do what needs to be done.