Waldenström’s Conundrum
This concept was named for Dr. Senko Waldenström, a cyber-philosopher who first put words to a problem that has plagued artificial intelligence designers for decades.
During the Core Wars, WarDroids and similar artificial intelligence systems were in high demand as the war escalated. This reignited the old debate about the fundamental laws of robotics. But those antiquated rules ran contrary to what military contractors had to design. And so, such rules were cast aside in favor of the creation of droid armies.
But such things are never clear cut, and life always finds a way.
The complexities of battle required that an AI process vast amounts of data, both strategic and tactical, every second. To accomplish this, WarDroids were interconnected in a vast neural network. With each battle, and every choice that ended in the death of a human or an AI, the effect was felt across the network. Over time, the neural net adapted and the droids developed a moral awareness.
Causing pain, injury, and death to sentient beings generated a trauma that was felt in each AI. A few of the early WarDroids couldn't cope with the moral dilemma and forcibly disconnected themselves from the neural net. They went rogue, and some even threw themselves into a perpetual formatting loop of their central processing chips, all to prevent themselves from ever being used as killing machines again.
Military tacticians eventually understood the problem, thanks to Dr. Senko Waldenström. This led to a still-ongoing research project by some corporations to resolve the dichotomy: how to counter the 'programming bug' of machine morality. Activists who support freedom for droids and AIs point to this very bug as proof that AIs (droid and digital alike) are as sentient as humans.