We know, we know - nearly everyone at a university or college already has a chatbot for engaging prospective students, alumni, and partners. It might even connect to a CRM, and hours have likely been invested in structured pathways for people to “follow.” But here’s the reality - we have tested with a LOT of prospective students, and almost every time they are underwhelmed, if not resentful.

How does this happen? A few factors emerged from our work at SkillBuilder.io with leading universities like Carnegie Mellon.

Internal Prioritization Over External Experience: University staff often prioritize solutions that make their own jobs easier or fit seamlessly into existing workflows, even at the expense of the prospective student experience. It’s probably not you, dear reader, but if you got stuck with an ancient chatbot, this is usually the simplest explanation.

The "Curse of Knowledge": Your admissions team knows their processes inside out. They assume that prospective students have the same understanding, leading to a choice of a chatbot that reflects internal jargon or processes instead of simplifying the experience. 

An example we discovered early on while benchmarking and baking off against other chat experiences: a student asks, “What’s the deadline for applying without test scores?” The chatbot responds, “Applications are due March 15.” It never clarifies whether test scores are required, because it was designed around internal deadlines, not the nuances of student needs.

Groupthink and Echo Chambers: Departments often make decisions in silos, with limited feedback from actual users. In many universities, the choice of a chatbot might involve IT, marketing, and admissions staff, but not the prospective students themselves. For example, a chatbot is selected because it integrates easily with the university’s legacy CRM system, even though it lacks critical features like contextual responses or a mobile-friendly interface.

Satisfaction Anchoring: When admissions staff see a chatbot handling basic tasks like answering FAQs, they anchor their satisfaction to that baseline performance. They may not realize the chatbot frustrates prospective students because they aren’t interacting with it the way applicants do. This creates a false sense of success. For example, an admissions officer may say, “We’ve reduced email inquiries by 30%, so the chatbot must be working well!” Meanwhile, students are abandoning inquiries midway because they can’t get meaningful answers.

Misaligned Metrics: University admissions teams often measure chatbot success with metrics that don’t reflect user satisfaction. For example, they might track only how many questions the chatbot answers without escalation, not whether users are happy with the interaction or whether their questions were resolved. Research shows that 70% of organizations prioritize operational efficiency metrics over user satisfaction when evaluating AI tools (Gartner, 2024).
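To make that concrete, here is a minimal sketch of two ways to read the same conversation log. The field names (escalated, abandoned, satisfaction) are hypothetical, not any particular vendor’s schema, but they show how a bot can post a healthy deflection rate while half its students walk away:

```python
# Hypothetical conversation log entries; field names are illustrative,
# not any vendor's actual schema.
conversations = [
    {"escalated": False, "abandoned": True,  "satisfaction": 2},  # "answered," but the student gave up
    {"escalated": False, "abandoned": False, "satisfaction": 5},
    {"escalated": True,  "abandoned": False, "satisfaction": 4},
    {"escalated": False, "abandoned": True,  "satisfaction": 1},
]

total = len(conversations)

# The operational metric most teams report: share of chats handled without escalation.
deflection_rate = sum(not c["escalated"] for c in conversations) / total

# The experience metrics that track whether students actually got what they came for.
abandonment_rate = sum(c["abandoned"] for c in conversations) / total
avg_satisfaction = sum(c["satisfaction"] for c in conversations) / total

print(f"Deflection rate:   {deflection_rate:.0%}")   # looks great: 75%
print(f"Abandonment rate:  {abandonment_rate:.0%}")  # tells the real story: 50%
print(f"Avg. satisfaction: {avg_satisfaction:.1f}/5")
```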

How SkillBuilder.io avoids these traps:

  1. Involve Prospective Students in the Evaluation Process: Conduct usability testing with real applicants before committing to a plan. We use a very specific benchmarking protocol covering key areas like financial aid, campus life, and of course program curriculum (a simplified sketch follows this list). This is painful at times, but you would be amazed at the gap between your “best in class” website and the detail people require when making decisions worth thousands of dollars per year.
  2. Focus on Experience Metrics: We obsess over satisfaction scores, completion rates, and follow-up inquiries rather than just internal metrics like cost savings (which we of course measure too).
  3. Leverage AI Agents: Unlike traditional chatbots, conversational AI agents like those from SkillBuilder.io adapt to student needs, provide nuanced, personalized responses in any language, and serve richer, more dynamic answers by seamlessly connecting related but often siloed topics like filling out your I-9, financial aid, and housing expenses.
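For the curious, here is a simplified sketch of what a benchmarking pass like the one in step 1 can look like. The question set, run_chatbot, and score_answer below are illustrative stand-ins, not our actual protocol or any real API:

```python
# A simplified benchmark harness: run the same student questions against a
# chatbot and score each answer by topic area. Everything here is a stand-in.
BENCHMARK_QUESTIONS = {
    "financial_aid": [
        "What's the deadline for applying without test scores?",
        "Can I combine an outside scholarship with need-based aid?",
    ],
    "campus_life": [
        "Is first-year housing guaranteed, and what does it cost?",
    ],
    "curriculum": [
        "Can I double major across two colleges?",
    ],
}

def run_chatbot(question: str) -> str:
    """Stand-in for the chatbot under test; replace with a real API call."""
    return "Applications are due March 15."

def score_answer(question: str, answer: str) -> int:
    """Stand-in for a human rater: 0 = wrong/missing, 1 = partial, 2 = fully resolved."""
    return 1

def benchmark() -> dict[str, float]:
    results = {}
    for area, questions in BENCHMARK_QUESTIONS.items():
        scores = [score_answer(q, run_chatbot(q)) for q in questions]
        results[area] = sum(scores) / (2 * len(scores))  # normalize to 0..1
    return results

if __name__ == "__main__":
    for area, coverage in benchmark().items():
        print(f"{area:14s} {coverage:.0%} resolved")
```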

Ready to explore the change? SkillBuilder.io can be up and running in less than 15 days (often faster) and ready to test in less than 5 hours. Say hello at Hello@SkillBuilder.io.
