  • By Franz Inc.
  • 7 January, 2026

From ALICE to Neuro-Symbolic AI: A Conversation with Dr. Richard Wallace

In a recent interview on Infinite Tech, Dr. Richard Wallace—an AI Scientist at Franz Inc. and one of the most influential figures in the history of conversational AI—walks through the long arc of chatbots: from the earliest rule-based systems to today’s large language models, and why the future is increasingly hybrid.

A pioneer’s origin story

Wallace traces his path into AI back to the era when ELIZA was the best-known “AI application,” and uses it as a lens for understanding why people attribute intelligence to language—sometimes too easily.

That history becomes a useful contrast to the current moment: modern systems can sound astonishingly fluent, but the core question remains the same—what do they actually understand, and how do we validate it?

What made ALICE different—and why AIML scaled

The interview highlights the practical breakthrough of ALICE and the value of a structured, symbolic approach to dialogue authoring. Wallace explains why AIML (Artificial Intelligence Markup Language) mattered: it made it possible to build conversational behavior in a modular, inspectable way—an early lesson in “explainability by design.”
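The core of that idea—dialogue authored as small, inspectable pattern-to-template rules—can be sketched outside of AIML itself. The toy below is a minimal Python illustration of the concept, not real AIML (which is XML-based); the patterns and replies are invented for the example.

```python
# Illustrative sketch of AIML's core idea: conversational behavior as a
# flat, inspectable list of (pattern, template) rules. "*" is a wildcard
# that matches one or more words, as in AIML patterns.
# The categories below are made up for this example.

CATEGORIES = [
    ("HELLO *", "Hi there! How can I help you today?"),
    ("WHAT IS YOUR NAME", "My name is ALICE."),
    ("*", "Interesting. Tell me more."),  # catch-all default
]

def match(pattern: str, words: list[str]) -> bool:
    """Return True if the word sequence matches the pattern;
    '*' consumes one or more words."""
    def rec(p: list[str], w: list[str]) -> bool:
        if not p:
            return not w
        if p[0] == "*":
            return any(rec(p[1:], w[i:]) for i in range(1, len(w) + 1))
        return bool(w) and p[0] == w[0] and rec(p[1:], w[1:])
    return rec(pattern.split(), words)

def respond(user_input: str) -> str:
    """First matching category wins; every rule is readable on its own."""
    words = user_input.upper().split()
    for pattern, template in CATEGORIES:
        if match(pattern, words):
            return template
    return ""

print(respond("hello bot"))          # greeting rule fires
print(respond("what is your name"))  # exact-pattern rule fires
```

Because every behavior is a discrete, human-readable rule, you can point to exactly why the system said what it said—the “explainability by design” property Wallace attributes to the AIML approach.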

That thread continues throughout the conversation: systems that can be understood and steered tend to be the ones that can be trusted and operationalized.

The Turing Test, the Loebner Prize, and measuring the wrong thing

Wallace offers a nuanced critique of the Turing Test—grounding it in Alan Turing’s original “imitation game” framing—and discusses how the Loebner Prize operationalized a simplified version of that idea (and why the “silver medal” was never awarded).

The takeaway isn’t that benchmarks are useless—it’s that evaluation needs to match reality: reliability, grounding, and the ability to explain “why” matter more than pure imitation.

Why black-box intelligence hits a wall in real applications

The discussion moves into modern AI and the interpretability gap: even as models improve, their decision-making can remain hard to inspect.

Wallace connects this to a practical need in enterprise settings: systems must be auditable, controllable, and aligned to domain rules—not just persuasive.

The next chapter: neural + symbolic, together

One of the most forward-looking parts of the interview is the explicit call for combining approaches—using neural methods where they excel (pattern learning, similarity, generalization) while retaining symbolic structures for logic, constraints, and transparent reasoning.
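One common shape for this hybrid is “neural proposes, symbolic disposes”: a statistical model scores candidate outputs, and explicit rules veto any candidate that violates a domain constraint, leaving an audit trail. The sketch below is a hypothetical illustration of that pattern only—the candidates, rule, and scores are invented, not taken from the interview.

```python
# Illustrative "neural proposes, symbolic disposes" sketch.
# A stand-in score plays the role of a neural model's confidence;
# explicit, inspectable rules constrain which candidate is allowed.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Candidate:
    text: str
    score: float  # stand-in for a neural model's confidence

# A rule returns (ok, reason) so every rejection is explainable.
Rule = Callable[[Candidate], tuple[bool, str]]

def no_unverified_claims(c: Candidate) -> tuple[bool, str]:
    """Toy symbolic constraint: reject hedged, unverifiable wording."""
    ok = "probably" not in c.text.lower()
    return ok, "ok" if ok else "contains an unverified claim"

def select(candidates: list[Candidate],
           rules: list[Rule]) -> tuple[Optional[Candidate], list[str]]:
    """Return the best-scoring candidate that passes every rule,
    plus an audit trail of why others were rejected."""
    audit: list[str] = []
    for c in sorted(candidates, key=lambda c: c.score, reverse=True):
        failures = [reason for ok, reason in (r(c) for r in rules) if not ok]
        if not failures:
            return c, audit
        audit.append(f"rejected {c.text!r}: {', '.join(failures)}")
    return None, audit

# Hypothetical candidates: the higher-scoring one violates the rule.
cands = [
    Candidate("It is probably safe to proceed.", 0.92),
    Candidate("Approved: the request meets policy.", 0.85),
]
best, audit = select(cands, [no_unverified_claims])
print(best.text)  # the constrained candidate wins despite a lower score
print(audit)      # the rejection is recorded and auditable
```

The design point is that the statistical component never has the last word: the symbolic layer is where domain rules live, and it produces the auditable “why” that pure black-box ranking cannot.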

Wallace closes by pointing to real-world healthcare examples where predictions and decisions benefit from combining statistical learning with domain-grounded reasoning—exactly the kind of hybrid, neuro-symbolic direction the industry is moving toward.

