We know, we know - almost everyone at a University or College feels they already have a chatbot for engaging prospective students, alumni, and partners. It might even connect to a CRM, and it has likely had hours invested in structured pathways that people can “follow.” But here’s the reality - we have tested these chatbots with a LOT of prospective students, and almost every time they come away underwhelmed, if not resentful.
How does this happen? A few factors came out of our work at SkillBuilder.io with leading Universities like Carnegie Mellon.
Internal Prioritization Over External Experience: University staff often prioritize solutions that make their own jobs easier or fit seamlessly into existing workflows, even at the expense of the prospective student experience. It’s likely not you, dear reader, but if you got stuck with an ancient chatbot, this is the simplest explanation.
The "Curse of Knowledge": Your admissions team knows their processes inside out. They assume that prospective students have the same understanding, leading to a choice of a chatbot that reflects internal jargon or processes instead of simplifying the experience.
Here’s an example we discovered early on while benchmarking against other chat experiences: a student asks, “What’s the deadline for applying without test scores?” The chatbot responds, “Applications are due March 15.” It never clarifies whether test scores are required, because it was designed around internal deadlines, not the nuances of student needs.
Groupthink and Echo Chambers: Departments often make decisions in silos, with limited feedback from actual users. In many universities, the choice of a chatbot might involve IT, marketing, and admissions staff, but not the prospective students themselves. For example, a chatbot is selected because it integrates easily with the university’s legacy CRM system, even though it lacks critical features like contextual responses or a mobile-friendly interface.
Satisfaction Anchoring: When admissions staff see a chatbot handling basic tasks like answering FAQs, they anchor their satisfaction to that baseline performance. Because they aren’t interacting with it the way applicants do, they may not realize the chatbot frustrates prospective students. This creates a false sense of success. For example, an admissions officer may say, “We’ve reduced email inquiries by 30%, so the chatbot must be working well!” Meanwhile, students are abandoning inquiries midway because they can’t get meaningful answers.
Misaligned Metrics: University admissions teams often measure chatbot success with metrics that don’t reflect user satisfaction. For example, they might track only how many questions the chatbot answers without escalation, not whether users are happy with the interaction or whether their questions were actually resolved. Research shows that 70% of organizations prioritize operational efficiency metrics over user satisfaction when evaluating AI tools (Gartner, 2024).
How SkillBuilder.io avoids these traps:
Ready to explore the change? SkillBuilder.io can be up and running in less than 15 days (if not faster) and can be tested in under 5 hours. Say Hello@SkillBuilder.io