The travel industry’s current infatuation with Artificial Intelligence (AI) has reached a fever pitch, but as the noise increases, so does the strategic obfuscation. On April 14, 2026, Expedia Group released its latest “Consumer Trend Report,” introducing a concept they’ve dubbed the “AI Trust Gap.” According to their survey of 5,700 travelers, while roughly half of respondents are happy to let AI plan a trip, 66% refuse to let it book one. On the surface, it’s a study in human hesitation. In reality, it is a masterclass in protectionist engineering.
Architecture of the Glass Ceiling: Lobotomy by Design
To understand the “Trust Gap,” one must look past the marketing and into the code. Expedia’s flagship AI agent, Romie, is marketed as an “agentic” companion, yet a technical audit reveals it is programmatically tethered. While Romie can scrape group chats and suggest hotels, the actual transactional handshake is barred from the interface. This is not a psychological barrier; it is a programmatic one. By forcing the final transaction back to a legacy web checkout, Expedia isn’t observing a lack of trust—they are enforcing a lack of utility. If the AI is not permitted to execute, the user has no choice but to return to the “Trusted Brand.”
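The “programmatic tether” described above is, at bottom, an intent-routing policy: discovery intents are served in-chat, transactional intents are bounced to the legacy checkout. The following is a minimal, purely hypothetical sketch of that pattern; none of these names (`route_intent`, `AgentResponse`, the intent sets, the placeholder URL) come from Expedia’s actual codebase.

```python
# Hypothetical sketch of a "discovery yes, transaction no" guardrail.
# All names and endpoints are illustrative, not Expedia's code.
from dataclasses import dataclass

CHECKOUT_URL = "https://example-ota.test/checkout"  # placeholder, not a real endpoint

ALLOWED_INTENTS = {"search", "compare", "suggest", "summarize_group_chat"}
BLOCKED_INTENTS = {"book", "pay", "cancel", "modify_reservation"}

@dataclass
class AgentResponse:
    handled_by_agent: bool
    message: str

def route_intent(intent: str) -> AgentResponse:
    """Serve discovery intents conversationally; bounce transactional
    intents back to the legacy web funnel."""
    if intent in ALLOWED_INTENTS:
        return AgentResponse(True, f"Agent handles '{intent}' conversationally.")
    if intent in BLOCKED_INTENTS:
        # The "glass ceiling": the agent is technically capable of executing,
        # but policy forces the user back to the checkout page.
        return AgentResponse(False, f"Please complete '{intent}' at {CHECKOUT_URL}.")
    return AgentResponse(False, "Intent not recognized.")
```

Note that the block list is pure policy: deleting two lines would make the agent transactional, which is why the barrier reads as a business decision rather than a technical limitation.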
This technical ceiling is reinforced by the “2 AM Fallacy.” Expedia’s leadership, including Chief AI Officer Xavi Amatriain, has argued that AI cannot be trusted because it lacks the “human-centric foundation” to handle service crises—like a booking error in the middle of the night. This narrative intentionally ignores the fundamental shift toward agentic commerce. In a decentralized, API-first travel ecosystem, an AI agent doesn’t need to “call” a front desk; it interacts with real-time service ledgers to resolve issues instantly. By insisting on a 20th-century solution (human intervention) for a 21st-century problem, Expedia justifies its 15–25% commission as a “Security Tax.” They are selling insurance for a friction they have intentionally preserved.
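To make the “2 AM Fallacy” concrete: in an API-first ecosystem, a booking error is a state mismatch in a ledger, not a phone call. The sketch below imagines an agent reconciling an oversold booking against a hotel’s real-time service ledger with no human in the loop. Everything here is an assumption for illustration: the ledger shape, the status values, and the `resolve_booking_error` flow are invented, not a real hotel API.

```python
# Hedged sketch: an agent resolving a 2 AM booking error against a
# real-time service ledger. The ledger schema and flow are invented.

def fetch_ledger_record(ledger: dict, booking_id: str) -> dict:
    """Stand-in for a GET against a hotel's (hypothetical) service-ledger API."""
    return ledger[booking_id]

def resolve_booking_error(ledger: dict, booking_id: str) -> str:
    record = fetch_ledger_record(ledger, booking_id)
    if record["status"] == "confirmed":
        return "no_action"
    if record["status"] == "oversold":
        # Rebook against live inventory instead of phoning a front desk.
        alternative = next(
            room for room in record["alternatives"] if room["available"]
        )
        record["status"] = "confirmed"
        record["room"] = alternative["room_id"]
        return f"rebooked:{alternative['room_id']}"
    return "escalate"

# Simulated ledger state at 2 AM:
ledger = {
    "BK-1001": {
        "status": "oversold",
        "alternatives": [
            {"room_id": "R-204", "available": False},
            {"room_id": "R-310", "available": True},
        ],
    }
}
```

The point of the sketch is the absence of a human step: the same incident Expedia frames as requiring “human-centric” intervention resolves in one read-modify-write cycle.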

Commission Guardrail: Economics of the Click
The underlying motive for the “Trust Gap” isn’t psychological; it’s financial. In the current OTA model, the click is the currency. When a traveler plans via AI but is routed back to a legacy interface to book, the data remains proprietary, the cookie is set, and the commission is locked. To these gatekeepers, a truly autonomous, “trustworthy” AI is a direct threat to their balance sheets. They are in a fundamental conflict of interest with their own technology: a “Helpful but Limited AI” is a marketing tool, but a “Trustworthy, Agentic AI” is a bypass that cuts them out of the transaction.
This protectionist architecture is being deployed across the industry in two distinct ways:
- I. The Expedia “Waiting Room”: By programmatically tethering Romie, Expedia keeps the transactional handshake barred and the AI “lobotomized,” forcing the user back into the funnel. They aren’t building a bridge to the future; they are building a more sophisticated waiting room designed to collect a “Security Tax” on every arrival.
- II. The Booking.com “End-to-End” Trap: While Expedia builds a waiting room, Booking.com constructs a walled garden. Their “AI Trip Planner” is a strategic effort to prevent the traveler from ever leaving their ecosystem to book directly. They weaponize their scale to convince travelers that any transaction outside their “Accountability Shield” is a high-risk gamble, ensuring the “Agentic Future” remains a closed loop they control.
Expert Verdict: Cognitive Darwinism and the Agentic Enterprise
To understand why this “Trust Gap” is a terminal symptom of legacy thinking, we turn to futurist Brian Solis, Head of Global Innovation at ServiceNow. For nearly three decades, Solis has charted the intersection of technology and human behavior, and his 2026 framework on “Cognitive Darwinism” provides the exact lens for viewing Expedia’s current trajectory. Solis argues that we are currently in an era where AI is being misused to “optimize yesterday”—speeding up old, broken workflows rather than reimagining the value chain entirely.
When we apply the Solis lens to the OTA model, the “Trust Gap” is revealed as a failure of leadership to achieve what he calls the Agentic Enterprise. True AI agents, in Solis’s definition, don’t just “chat”; they are onboarded and managed like digital employees that take action across tools and make decisions. By programmatically barring their AI from the transaction, Expedia is “scaling dysfunction.” They are iterating a 20th-century middleman model in a world moving toward autonomous orchestration. As Solis famously notes, “AI is not the strategy; it is the capability.” Expedia has adopted a protectionist strategy, using AI as a capability to enforce the status quo.
The Signaling Pincer: Fear and the ‘Alliance’
The timing of this “Trust Gap” report is not incidental. It arrived just as a wave of new “Industry Alliances” and “Educational Foundations” launched, promising to help hoteliers navigate the “chaos” of AI. The pattern is clear: one side of the industry manufactures the fear (The Trust Gap), while the other sells the cure (Membership Dues and Certifications).
The independent hotelier is being squeezed. On one side, the OTAs tell the public that direct AI booking is “dangerous.” On the other, “Experts” tell hotels they are too far behind to survive without a middleman’s guidance. Both paths lead away from the direct guest relationship and back into the commission loop. An examination of Expedia’s partner-facing communications reveals the three primary gears of this gaslighting machine:
- The “Accountability” Red Herring: Expedia’s partner blog explicitly links AI to a lack of support, claiming travelers fear being “on their own” if a chatbot makes a mistake. By framing AI as inherently “unaccountable,” they position their commission of up to 25% not as a fee, but as a luxury insurance policy. They aren’t solving a support problem; they are marketing friction as a security feature.
- The “Discovery vs. Transaction” Firewall: The report admits travelers love using AI for “inspiration” but “rely on brands” for the booking. This is the smoking gun of their strategy. Expedia is happy to let AI do the “hard work” of top-of-funnel research because it drives traffic to their platform. However, they’ve built a programmatic firewall at the point of purchase. They want AI to be a free intern for discovery, but never a partner in the transaction.
- The “Human-Centric” Pivot: Notice how they use “Human-Centric” as a euphemism for “Manual and Expensive.” By taking their lack of automation and calling it a “feature” of their brand’s empathy, they are attempting to freeze the industry in a 20th-century service model that requires their specific intervention.
- The YouGov Framing: By surveying people about “AI chatbots” in general—rather than about a specific, secure, and persistent personal companion—they are intentionally polling the fear of a 2023-era hallucinating bot to justify their 2026 protectionism.

Beyond the Ad-Man: The Rise of the Companion
The real threat to the legacy model isn’t a chatbot on a website; it is the persistent Personal AI Companion. A HAL-style engine that builds its profile through daily interaction doesn’t rely on “Brand Trust”; it relies on verified outcomes. It parses local permits to detect construction noise, monitors sentiment across un-curated social channels, and executes bookings via the most efficient technical path—usually a direct API handshake with the hotel.
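The companion’s routing logic described above can be sketched as a simple scoring function over candidate booking paths, weighing commission drag and verified-outcome signals rather than “Brand Trust.” This is an illustrative toy, not any shipping product: the route data, the field names, and the scoring weights are all invented for the example.

```python
# Illustrative sketch: a personal AI companion scoring candidate booking
# routes on verified signals. All data, field names, and weights are invented.

def score_route(route: dict) -> float:
    """Prefer routes with lower fees and fewer verified risk signals."""
    score = 100.0 - route["commission_pct"] * 2   # commission drag
    if route["direct_api"]:
        score += 10                               # fewer hops, faster confirmation
    score -= route["unresolved_risks"] * 5        # e.g. construction-permit noise risk
    return score

routes = [
    {"name": "ota_checkout", "commission_pct": 20,
     "direct_api": False, "unresolved_risks": 0},
    {"name": "hotel_direct_api", "commission_pct": 0,
     "direct_api": True, "unresolved_risks": 0},
]

best = max(routes, key=score_route)
```

Under any weighting that penalizes commission, the direct API handshake wins on arithmetic alone, which is the structural threat the article describes: the companion optimizes for the outcome, and the click never happens.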
Expedia isn’t fighting a technology; they are fighting for the relevance of the “Click.” In a world of agentic outcomes, the click becomes obsolete, and with it, the multi-billion-dollar advertising machine that sustains the gatekeepers.
The “Trust Gap” is not a bridge that needs to be built. It is a moat that needs to be crossed. For the seekers in this industry, the goal isn’t to wait for legacy brands to “fix” AI—it’s to build the systems that make their permission unnecessary.