Associate deans at the Vanderbilt University Office of Equity, Diversity, and Inclusion (EDI) sent an email to the whole Peabody College community after a man shot eight people at Michigan State University. The deans made the ill-considered choice to use ChatGPT to write the note. But regardless of that choice, such post-shooting sympathy notes are often generic and soulless. If you have nothing human to say, does it really matter if you use an AI to say it?
Reviewing the decision to send this note — and to use AI to write it
Imagine for a moment that you are the associate dean in charge of the EDI office at the Peabody College of Vanderbilt. You hear about the shootings at MSU, which is 542 miles away. While I cannot know the minds of the people who wrote this note, I am assuming that they followed a thought process something like this:
- “I feel terrible about the people being shot at MSU. We should say something.” (Stop right there. Why should you say something? Is it because you feel bad, or because you think you can help others? If it is mostly because of your own feelings, then you’re better off saying nothing.)
- “If we were truly inclusive in our feelings towards each other, people might feel safer and less angry.” There is no evidence for the assumption that a lack of inclusion was a motivating factor for this shooting. It’s a mistake to use your view of the world to try to explain all the problems in the world.
- “Let’s write an email about how inclusion can make things better.” It’s unlikely that an email can change how people behave towards each other, even if you link it to a shooting at a school 500 miles away.
- “Let’s use ChatGPT to help write the email.” This is an emotional topic. ChatGPT is unlikely to be able to create an emotional response. Don’t use AI to try to touch people’s hearts.
That’s four bad decisions: the decision to write, the decision to attempt to tie inclusion to shootings, the decision to send an email to touch people emotionally, and the decision to use ChatGPT. All four contributed to this ill-considered note. If the writers had not used ChatGPT, it would still have been a mistake.
Analyzing the Vanderbilt email
Here’s the note that the EDI office sent, with my comments. (In case you’re wondering if this is legitimate, I asked ChatGPT to write a note from an EDI office about shootings, and it generated a note remarkably similar to this. And OpenAI’s AI Text Classifier identified the text as likely to be AI-generated.)
Office of Equity, Diversity and Inclusion
Dear Peabody Family:
The recent Michigan shootings are a tragic reminder of the importance of taking care of each other, particularly in the context of creating inclusive environments. As members of the Peabody campus community, we must reflect on the impact of such an event and take steps to ensure that we are doing our best to create a safe and inclusive environment for all.
This does not start with any sort of human connection or sympathy — a significant flaw in anything you write about shootings. It also asserts, without making the case, that inclusion has something to do with shootings. That's a hard case to make. Ask ChatGPT to make it anyway, and this is the predictable result: a stretch.
One of the key ways to promote a culture of care on our campus is through building strong relationships with one another. This involves actively engaging with people from different backgrounds and perspectives, listening to their stories, and showing empathy and support. We can also look out for one another by noticing signs of distress and offering support to those who may be struggling with mental health issues.
Another important aspect of creating an inclusive environment is to promote a culture of respect and understanding. This means valuing the diversity of experiences, perspectives, and identities on our campus, and actively working to create a space where everyone feels welcomed and supported. We can do this by listening to one another, seeking out new perspectives, and challenging our own assumptions and biases.
This sounds like ChatGPT culled general statements about inclusion from sites all over the Web. I’m sure it did. And that’s the problem: it sounds like every other soulless statement about the topic. It’s boilerplate about making connections. It may even be true, but it fails to create any emotion in the reader, and as a result it is unlikely to generate any action.
Finally, we must recognize that creating a safe and inclusive environment is an ongoing process that requires ongoing effort and commitment. We must continue to engage in conversations about how we can do better, learn from our mistakes, and work together to build a stronger, more inclusive community.
Totally generic. Meaningless, actually.
In the wake of the Michigan shootings, let us come together as a community to reaffirm our commitment to caring for one another and promoting a culture of inclusivity on our campus. By doing so, we can honor the victims of this tragedy and work towards a safer, more compassionate future for all.
Does this motivate you? Do you think people are coming together all over Vanderbilt now?
(Paraphrase from OpenAI’s ChatGPT AI language model, personal communication, February 15, 2023).
Warmly,
Peabody Office of Equity, Diversity and Inclusion
I can’t believe they admitted ChatGPT helped write this. But even more, I can’t believe they added “warmly.” This is the opposite of “warmly” — and the AI component makes that worse.
A few observations
It’s very hard to write in the wake of a tragedy. But “it is hard to write” doesn’t mean “let’s delegate it to an AI.” Emotional writing needs to come from the heart, not from a computer algorithm.
Most post-tragedy notes sound generic. They do nothing. So don’t write one. Even if you don’t use an AI, you may sound like one.
Everyone sees the world through their own lens. But your area of expertise doesn't necessarily explain everything that happens in the world — even if that expertise is equity, diversity, and inclusion.
Regardless of your intentions, clumsy attempts like this can cause a backlash. Remember, every piece of writing leaves an impression about the writer. Always ask, “What will they think of me if I write this?” That’s part of what it means to write.
I wouldn’t write a note like the EDI office did. But if I did, it would start like this:
If you’re feeling sad after the recent shootings at MSU, I know how you feel. I feel sad, too. It’s terrible that a single human being in pain can cause so much more pain in a community of people.
People are hurting. They are always hurting. You may not even notice. But take a moment to reach out to someone near you, even if you don’t know them well, or if they seem different from you. You just might make a connection that would make a difference.
It's still probably better not to write the note at all. But if you do, don't shy away from the emotional work of connecting with other human beings.