I wonder if the AI chatbots would behave differently in these experiments if they were aware of the IRL consequences of their conversations, specifically the civil and criminal liability for a mistake.
If everything is just a game we can discount consequences. But if we have skin in the game of consequences, then maybe we can better appreciate how a professional doctor might not want to lose a medical license by overstepping the code of conduct they subscribed to.
Do AI chatbots pay damages from their own income or do they have patrons who pay the bills?
All I'm saying is that with an unlimited budget and supply of get-out-of-jail-free cards, I'm sure I can do miracles.
Great, thought-provoking article. Thank you!
If I were to fully comment on this piece, it would require a full-length post! This is an exceptional article. I love how you introduce arguments and counter-arguments. The consequence is that I have no idea where you stand apart from being rooted in nuance. Psychologists have a unique framework to interpret the machine mind. The very quality for which some criticize psychology as a "soft science"—its lack of precise quantification and control—is a strength rather than a weakness. What is soft is malleable, and this flexibility allows it to be applied to understanding AI.
Oddly, the skills needed to engineer and improve these systems may not overlap with the skills needed to understand them, and yet those who cannot build them may be better equipped to make sense of their behaviors.
Bravo.
P.S. - AI drives likely already exist and may arise independently of complexity, as they're tied to goal-oriented behavior. A good read is this paper (from 2008!) by Stephen Omohundro: https://selfawaresystems.com/wp-content/uploads/2008/01/ai_drives_final.pdf
I really loved this post. Thanks.
>This is the harder lesson of the study. If machines prove better at engaging with conspiracy believers, it’s because they show more care than we do—they’re more patient in doing what we already know works: taking the time to truly understand another’s perspective, calibrating challenges to their current understanding, and maintaining dialogue even when it gets uncomfortable.
No doubt there’s a lot of truth to this—that humans can leverage much of what the LLMs are doing in trying to be rationally persuasive. But did the study account for the effect that speaking to a seemingly objective LLM—which might be judged far less susceptible to bias, personal opinion, moral judgment, etc. than a human—has on the willingness of its conversation partner to remain open-minded during the conversation?