Estimated reading time: 3 minutes
When it comes to artificial intelligence (AI), one of the first things I think about is the future. We talk about AI being the future and how it will help us shape the future. But an AI experiment I did recently reminded me that when we talk about the future, we can’t forget the past.
I was working on some revisions to a training program recently and I felt the content needed a new icebreaker. I thought, “Hey! I’ll ask AI to give me an icebreaker suggestion.” So, I asked GPT-4o (developed by OpenAI) and Gemini (developed by Google) the same question:
Tell me an icebreaker exercise to use with 20 professional adults in a virtual training program.
- GPT-4o suggested “Two Truths and a Lie”. This activity asks participants to come up with three statements about themselves. Obviously, two of them are true and one is … a lie. The rest of the group takes turns guessing which statement is the lie.
- Gemini suggested “Would You Rather?”. In this activity, the facilitator comes up with questions that ask, “Would you rather do A or B?” and participants share their responses. The group can discuss their choices.
Both AI programs provided instructions for conducting the icebreaker activity. If you’re interested, we’ve put them in a single-page PDF that you can check out here.
It was interesting to see how each program responded to the prompt. Gemini included a list of materials, including a mention of Google Docs. It also highlighted the benefits of using this activity – it’s engaging, it’s low pressure, and it reveals preferences. GPT-4o included timing for each step in the activity and added some tips for working with a large audience.
But neither of them created anything new. Both icebreaker activities have been around for decades. They are not original. I’m not saying either one of them is bad. I am saying that I was surprised that AI suggested something so … well, old. It was an educational experience for me for a few reasons.
- AI is still learning. One of the things we continue to hear about AI is that it needs to learn. That’s why we see organizations changing their terms of service and/or asking us to opt in when it comes to their AI tools and how those tools learn.
- We need to have the right expectations. Maybe things that were developed in the past are classics and exactly the right things to suggest. I can also see how some things that were developed years ago might be old and outdated, so they may not get recommended.
- We need to evaluate the information provided by AI. This is where humans and AI will work together. AI doesn’t know everything – like who’s in your training audience. These icebreaker suggestions could be great, or they could be a disaster depending on your participants.
Just in case you’re wondering … while we appreciated the AI suggestions, for this particular training program we stuck with the classic “interview your partner” icebreaker. Both AI-suggested icebreakers did include a reminder about keeping comments professional, light, and positive to avoid any discomfort.
This AI experiment made me realize that AI and I are learning together. And because we are, there could come times when I have to adjust my expectations about what AI can and cannot do. That’s a good thing for both of us.
Image captured by Sharlyn Lauby while exploring the streets of Chicago, IL