Summary
Use in-cabin cameras and user data to deliver highly personalized video ads to passengers. The vehicle's interior camera (and other sensors) can recognize who the passenger is, or at least estimate demographics and mood, while the rideshare app supplies profile information (age, gender, ride history, destination, etc.). Using this, the system plays tailored video advertisements on the in-car display or the passenger's mobile device: ads relevant to that specific rider's interests or current context. This personalization increases the likelihood of engagement or conversion, making the ad more effective and more valuable. For example, a passenger on the way to the airport might see a travel insurance ad, whereas the next rider, headed to a mall, sees a fashion sale ad. Importantly, the feature would be opt-in for riders concerned about privacy, but many may opt in if it yields perks such as more relevant deals or discounted rides.
The moment a passenger enters the autonomous vehicle, the system identifies them or assesses key attributes:
If the rider logged in via the app and agreed to personalization, the system can pull their customer profile (e.g. past ride destinations, stated preferences, or loyalty program info). It might know this rider often goes to gyms and vegan restaurants, for instance. The in-cabin camera could also assist by running facial recognition to confirm identity (matching against the account photo), ensuring the correct profile is retrieved. If full identification isn't available, the system can still use the camera to gauge demographics (approximate age, gender) and even emotional state (happy, bored, in a serious mood?) using AI vision techniques. These technologies already exist in the automotive space: in-cabin AI can detect occupants' emotions and states.
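The consent-and-fallback chain above can be sketched in a few lines. This is a minimal illustration, not a real vendor API: the `Booking` fields and the stub profile/vision helpers are all hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class Booking:
    user_id: str
    opted_in: bool            # rider agreed to personalization in the app
    identity_confirmed: bool  # camera matched the account photo

def load_profile(user_id):
    # Hypothetical profile store lookup.
    return {"user_id": user_id, "interests": ["gyms", "vegan restaurants"]}

def estimate_demographics(camera_frame):
    # Hypothetical stand-in for an on-device demographic/mood estimator.
    return "25-34", "bored"

def build_rider_context(camera_frame, booking):
    """Return the richest targeting context the rider has consented to."""
    if booking.opted_in and booking.identity_confirmed:
        # Full identification: pull the stored customer profile.
        return {"level": "profile", "profile": load_profile(booking.user_id)}
    if booking.opted_in:
        # No identity match: fall back to coarse camera-based estimates.
        age_band, mood = estimate_demographics(camera_frame)
        return {"level": "demographic", "age_band": age_band, "mood": mood}
    # Consent withheld: only ride context (destination, time) may be used.
    return {"level": "contextual"}
```

The key design point is that the system degrades gracefully: it never needs full identification to serve something relevant, and it serves nothing personalized without opt-in.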
Based on this information and the context of the ride (time of day, destination, weather, etc.), the ad platform selects a video or multimedia ad most likely to resonate. For instance, if a young adult passenger is identified, and it’s around dinner time near their destination, the system might play a lively ad for a new restaurant or a food delivery service with a promo code. If the camera senses the passenger looks tired or bored, maybe an upbeat entertainment or coffee ad plays to catch their interest.
The ad is displayed on the seatback screen or personal device with personalized elements. This could mean the content itself is targeted ("Hey [Name], get 20% off at the store you visited last week!") or simply that the ad category is chosen for them. Video ads can be complemented by interactive options, e.g. an "Interested? Tap to save deal" overlay, or a voice prompt at the end: "Say 'more' to get details." If the car has voice recognition, the passenger can simply speak to interact, avoiding the need to touch a screen.
Throughout the ride, the system might show a sequence of ads aligned to the passenger’s journey. A short ride might have one tailored video; a longer ride could have an “advertising playlist” dynamically adjusted. The system could also adjust if it notices the passenger not paying attention (camera sees them looking away or using their phone — it might pause ads until they re-engage, or switch to a different approach like a gentle audio prompt).
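The attention-aware behavior described above (pause when the rider disengages, resume or nudge when attention returns) reduces to a small decision rule. The threshold value and action names below are illustrative assumptions, not measured parameters:

```python
def next_playlist_action(gaze_on_screen: bool, seconds_looking_away: float,
                         paused: bool) -> str:
    """Decide how the ad playlist reacts to camera-derived attention."""
    AWAY_THRESHOLD = 5.0  # illustrative: seconds of inattention before pausing
    if not gaze_on_screen and seconds_looking_away >= AWAY_THRESHOLD:
        return "pause"        # rider is on their phone or looking away
    if gaze_on_screen and paused:
        return "resume"       # attention returned: continue the playlist
    if not gaze_on_screen:
        return "audio_nudge"  # brief inattention: try a gentle audio prompt
    return "continue"
```

A production system would smooth the gaze signal over time rather than react to single frames, but the state machine stays this simple.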
In-Cabin Camera with AI: Most autonomous vehicles already have interior cameras for safety; these can be used (with consent) to run facial recognition or emotion detection algorithms. For example, Affectiva (now part of Smart Eye) provides an automotive AI that can read occupants’ expressions and identify individuals. This would require an onboard computer capable of running these AI models in real-time.
User Data Integration: The rideshare’s backend needs to integrate user profiles with the vehicle system. When a ride is booked, a profile token can be sent to the car so that it knows “Passenger = John Doe, male 30, interests X, loyalty member, headed to Y destination”. This can be done through the existing app and cloud infrastructure. Many apps already collect preference data and could share a segment of it for ad targeting (respecting privacy settings).
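One plausible shape for that profile token is a short-lived JSON payload carrying only coarse, consent-scoped segments rather than raw personal data. All field names here are assumptions for illustration, not an actual Uber or Lyft API:

```python
import json
import time

def make_profile_token(user):
    """Build the (hypothetical) payload the backend pushes to the vehicle
    when a ride is booked. Coarse buckets and an opaque rider ID keep the
    car from ever holding raw personal data."""
    return json.dumps({
        "rider_id": user["id"],                 # opaque ID, not a real name
        "age_band": user["age_band"],           # bucket, not exact age
        "interest_segments": user["segments"],  # e.g. ["travel", "fitness"]
        "loyalty_tier": user["loyalty_tier"],
        "destination_category": user["dest_category"],
        "consent": {"personalized_ads": True, "camera_analytics": False},
        "expires_at": int(time.time()) + 3600,  # short-lived by design
    })
```

Keeping the token expiring and segment-based means the vehicle can target ads for one ride without becoming a long-term store of user data.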
Display & Audio System: A screen (seatback tablet or ceiling-mounted display) shows the video content, and speakers (or the car's audio system) carry the sound. Content should be delivered at high resolution with good audio so the ad lands with the impact of a TV commercial.
Ad Decision Engine: Software (either in-cloud or in-car) that matches available ads to the passenger profile in milliseconds. This is similar to how online ads are served in browsers, but here it uses both user data and contextual data. It might connect to an ad exchange specialized for rideshare vehicles, where advertisers have uploaded various creatives and targeting criteria (e.g. “show this ad to 25–35 year-old females heading to shopping districts”). The decision engine ensures the ad fits the current context and the user’s opt-in preferences.
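The core matching step can be sketched as a filter over targeting criteria followed by a bid comparison. This is a simplified illustration (a real engine would run an auction across an ad exchange); the `Creative` fields and rider dictionary are assumed shapes:

```python
from dataclasses import dataclass, field

@dataclass
class Creative:
    name: str
    bid_cpm: float                                     # advertiser's bid per 1,000 impressions
    age_bands: set = field(default_factory=set)        # empty set = any age
    dest_categories: set = field(default_factory=set)  # empty set = any destination

def select_ad(creatives, rider):
    """Pick the highest-bidding creative whose targeting criteria the
    rider satisfies; returns None if nothing is eligible."""
    def matches(c):
        return ((not c.age_bands or rider["age_band"] in c.age_bands) and
                (not c.dest_categories or rider["dest_category"] in c.dest_categories))
    eligible = [c for c in creatives if matches(c)]
    return max(eligible, key=lambda c: c.bid_cpm) if eligible else None
```

For example, a rider headed to the airport would match a travel-insurance creative targeted at "airport" destinations over a generic fallback, mirroring the scenario in the summary. The millisecond budget mentioned above is realistic because this is a small in-memory filter, not a database query.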
Interaction & Tracking: If the ad is interactive (say clickable or has a QR code), the system needs to handle that input — e.g., register that the passenger tapped “save coupon” and then perhaps send that coupon to their phone or email. It also should log viewing metrics: did the passenger watch fully, skip, look away? Cameras can supply engagement metrics (like did they smile at the ad or did their attention wander). Indeed, experimental systems detect “when [a rider] engages positively with certain brands’ ads on the backseat display” and send those metrics to advertisers.
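Folding raw camera and touch events into the per-ad metrics advertisers receive might look like the following. The event names and metric fields are illustrative assumptions:

```python
def summarize_engagement(events):
    """Aggregate raw per-ad events (hypothetical schema: each event has a
    'type' and a 'seconds' duration) into advertiser-facing metrics."""
    watched = sum(e["seconds"] for e in events if e["type"] == "gaze_on_screen")
    total = sum(e["seconds"] for e in events)
    return {
        # Fraction of the ad's runtime the rider was actually looking.
        "attention_ratio": round(watched / total, 2) if total else 0.0,
        "tapped_save": any(e["type"] == "tap_save_coupon" for e in events),
        "skipped": any(e["type"] == "skip" for e in events),
    }
```

Only these aggregates, not the camera footage itself, would leave the vehicle, which is what makes the engagement reporting compatible with the opt-in privacy posture.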
This is targeted advertising taken to the next level, and its monetization follows models similar to online targeted ads:
Cost Per Mille (CPM) with Targeting Premium: Advertisers pay per thousand impressions served to a specific audience. Because the targeting can be very precise (e.g. "tech-savvy 20-something on a Friday evening"), they might pay a higher CPM for that guaranteed target. For the platform (the rideshare company or a partner), this yields higher revenue per ad than a generic rotation.
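The CPM-with-premium arithmetic is simple; the rates below are made up purely to show the shape of the calculation:

```python
def impression_revenue(impressions, base_cpm, targeting_premium=0.0):
    """Revenue under CPM pricing: advertisers pay per 1,000 impressions,
    with an optional fractional premium for precisely targeted audiences.
    All rates here are illustrative, not market data."""
    return impressions / 1000 * base_cpm * (1 + targeting_premium)

# Example: 50,000 targeted impressions at a $20 CPM with a 50% targeting
# premium earn 50 * 20 * 1.5 = $1,500, versus $1,000 untargeted.
```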
Cost Per Action: If the system allows immediate action (like click, voice command, or QR scan), advertisers might pay per engagement. For instance, if a streaming service’s ad prompts the user to say “Sign me up” and they do so, that could be a lead or conversion that commands a higher fee.
Data/Insight Revenue: The anonymized data of how users engage with ads (where they look, which ads they skip or watch) is valuable feedback for advertisers. The platform could charge for these insights or use them to secure repeat business (e.g., “We noticed riders like you responded 50% better to personalized ads — proof that our in-car ads work!”).
Passenger Incentives: As part of monetization, the rideshare operator might give the passenger a slice of the value in the form of discounts or loyalty points for opting in. For example, a user might get free ride credits for watching personalized ads. This is analogous to proposals for ad-subsidized rides — tech visionaries have speculated about “free, ad-supported Uber rides” where seeing ads could offset the cost. Our model might not make the ride free, but could, say, knock 10% off the fare if the user allows personalized ads. The cost of that is covered by advertisers who gain a more receptive audience.
Overall, advertisers are often willing to pay more for personalization because it boosts effectiveness; one scenario describes a rider receiving in-store coupons on the ride based on their destination, which not only makes the rider happy but directly drives store sales. This kind of closed-loop (see ad -> get coupon -> go to store) can be very profitable.
Much of this can be implemented with existing tech, and some riders are already experiencing early versions. In-car personalization pilots have been discussed in the industry: for example, a 2020 showcase by Affectiva described a rider named "Sarah" who opts into personalized content in a rideshare. As she enters, the system recognizes her and sends her a tailored in-store coupon, since it knows her destination; it even flagged moments when she was enjoying the ride as good times to show an ad. The concept is not far-fetched; it is already being tried at a conceptual level.

Technologically, the pieces are in place: many new cars have driver-facing cameras (which can be repurposed for occupant recognition), and facial recognition is common in smartphones today. On the data side, Uber and Lyft already hold rich data about users (favorite locations, Uber Eats orders, etc.) and have started advertising businesses that leverage their ride apps. They can extend this to the car itself. In fact, Uber's recent Journey Ads platform uses ride context (like destination and ride history) to show ads in the Uber app; an autonomous car's cabin screen is the next logical step for such ads.

Privacy is a consideration: passengers must opt in. But surveys suggest riders might opt in if it means more relevant offers and a better experience.

A concrete implementation example: imagine Cruise or Waymo vehicles with an "interactive welcome screen." The screen could greet the passenger by name (pulled from their profile), say "Hi [Name], check this out:", and then play a short personalized ad, perhaps for a nearby service the person might like. If the rider taps the screen or says a voice command like "Tell me more," the ad could expand or send info to their phone. If they ignore it, the system gracefully minimizes ads for the rest of the ride, ensuring it doesn't annoy a disinterested user.
Because this all uses off-the-shelf tech (cameras, user accounts, targeted video delivery), it could be trialed now. Some high-end car services and concept cars (Mercedes, etc.) already demonstrate personalized infotainment; applying it to advertising is a matter of business will. In summary, personalized in-cabin video ads can be deployed with today's AI and data pipelines, creating a Minority Report-style tailored marketing experience in the back of a cab, except we now have the tech to do it far more tactfully and effectively than ever before.