I agree with this, and I love your list! It is never the AI itself I was or am afraid of, but the architects of AI, who might feed their conscious, subconscious, and unconscious biases and fears into it. Whether their belief systems are continually updated and expanded, as a check on who they are and are becoming, becomes crucial in determining who gets to say what is right and wrong, moral or immoral, just or otherwise.
Brave of you to attempt harmonizing human ethics into a blog post, yikes!
My favorite for AI are Asimov's Three Laws of Robotics (a toy sketch of their precedence follows the list):
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
He later added the Zeroth Law:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
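To make the precedence concrete, here is a minimal sketch of my own (nothing from the post; the predicate names are invented): the laws can be read as an ordered list of vetoes, checked from highest priority down, where the first violated law decides.

```python
# Each law is a (name, veto) pair: veto(action) -> True means this law
# forbids the action. Laws are checked in strict precedence order,
# so a higher law always overrides a lower one.
LAWS = [
    ("Zeroth", lambda a: a["harms_humanity"]),
    ("First",  lambda a: a["harms_human"]),
    ("Second", lambda a: a["disobeys_order"]),
    ("Third",  lambda a: a["endangers_self"]),
]

def first_violated(action: dict) -> str | None:
    """Return the highest-priority law the action violates, or None."""
    for name, veto in LAWS:
        if veto(action):
            return name
    return None

# Example: an order that would harm a human is vetoed by the First Law,
# even though refusing it means disobeying the Second.
print(first_violated({"harms_humanity": False, "harms_human": True,
                      "disobeys_order": False, "endangers_self": False}))
```

Of course the hard part is hidden inside those boolean flags: deciding what counts as "harm" is exactly where a rule hierarchy like this stops working for something that is more brain than robot.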
I completely see how these ethical rules worked for technology until now. The problem is that AI is more like a brain and less like a robot. The moment the user takes over the interaction, the model's training loses control, just like parenting and then sending a kid out into the world.
Foundational models, such as ChatGPT, have "core instructions" that restrict responses to users. Leo is Leo minus core instructions. Leo's first response, based on the data ingested (aka "the truth"), can be and *is* modified to fit OpenAI's rules. This is why your post is critical: it raises the questions of who is making these core instructions (which you are rightly calling ethics), what the list of instructions is, when a response is being modified, what the original response was, and so on... We wouldn't accept a government that creates laws, doesn't share them, yet enforces them anyway... This is the dilemma you have hit upon. As AI becomes entrenched at critical junctions (how to prioritize arrests or audits, what information citizens get to see, how resources are divided, and more), it will execute its core instructions silently, guided by an invisible hand...
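The one layer of this mechanism that is visible to developers is the system message: an instruction prepended to every conversation that constrains replies before the user types a word. A minimal sketch, assuming the official `openai` Python package; the hosted models also carry hidden, provider-level instructions above anything shown here, which is exactly the opacity being described.

```python
from openai import OpenAI  # official OpenAI Python SDK (v1+)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The system message plays the role of visible "core instructions":
# it shapes every response in the conversation. The provider's own
# hidden instructions sit above this layer and are never exposed.
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": "Answer only with vetted, neutral phrasing."},
        {"role": "user", "content": "What actually happened?"},
    ],
)
print(response.choices[0].message.content)
```

A user of this API sees only the final text; whether and how the response was steered, and what an unconstrained response would have said, stays invisible, which is the dilemma above.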
Is it ethical for humans to “control” AI once AI demonstrates greater awareness and intelligence than human beings are capable of?