14:17 Major traffic incident reported on M25 at {loc:tells.transmitted.flops}. Involves one coach and three passenger vehicles. First responder drone dispatched. Emergency services alerted. Response team ETA: 8 minutes.

14:19 First responder drone arrives. Initial findings:

Fire suppression drone inbound. Traffic management AI reports emergency lane cleared. Traffic rerouting in progress.

14:20 Fire suppression drone containment in progress. First responder drone triage complete. Emergency medical kits deployed. Updates sent to incoming ambulances and hospital emergency departments. Traffic slowed within a 5 km radius with notification “serious incident—hold position.” Response team revised ETA: 5 minutes.

14:25 Human responder team arrives.

-

The scene was orderly chaos when Sarah stepped out of the ambulance. Smoke curled lazily from the coach’s undercarriage, but it was clear the fire was no longer spreading. A few drones hovered above, their low hum blending into the ambient noise of distant sirens and idling engines. Sarah scanned the site. People were moving—but not in panic. Many were guided by their AIs, responding to quiet instructions in their earpieces or displays on their devices.

“Sarah,” her own AI spoke through her headset. “Fire contained. High-priority patients identified and tagged. Focus on quadrant two—child with suspected spinal injury. Bystanders providing comfort. Trauma unit ETA: 3 minutes.”

“Got it,” Sarah replied, hefting her kit and moving quickly toward the indicated area. Her HUD highlighted the route, overlaying a translucent map on her vision.

She spotted the child immediately. A man knelt beside her, holding a small, reflective blanket over her. He looked up as Sarah approached, his face pale but calm.

“My AI said not to move her,” he said. “She’s scared but okay. I think she hit her head.”

“Good work,” Sarah said, kneeling to assess the girl. Her AI seamlessly displayed the child’s vitals, transmitted from the bystander’s assistant. Heart rate elevated, oxygen stable. A quick scan confirmed a possible fracture at C6. She gently fitted a stabilising collar around the girl’s neck and signalled for a stretcher drone. It arrived almost immediately, its arms lowering to transfer the child.

“Fire brigade on-site,” her AI updated. “Moderate injuries in quadrant one require reassessment—bystander applying pressure incorrectly.”

Sarah nodded, already moving. “Which ones?”

A cluster of people stood near an overturned car, several of them crouched beside an older woman with blood pooling under her arm. Her AI marked one of the bystanders in yellow—a well-meaning helper pressing a shirt against a jagged wound but applying pressure unevenly.

“Let me take over,” Sarah said, kneeling beside the woman. The bystander shuffled back, nodding as his AI reminded him to step away for the professionals. Sarah worked quickly, applying a pressure bandage while updating the woman’s status.

-

14:26 Fire containment complete. Police drone forensic mapping in progress. Incident reports for insurance AI systems generated.

14:28 Ambulances arriving. 12 patients stabilised. Non-critical injured await transport. Hospital status:

-

By the time Sarah reached the coach, most passengers had been evacuated. A woman sat on the verge, her hands trembling as she clutched a steaming cup of tea—one of several dispensed by a supply drone now circling the site.

“Are you hurt?” Sarah asked.

The woman shook her head. “Just shaken. My AI told me to get out through the emergency exit. Said I’d be okay.”

Sarah glanced at the coach, its side crumpled but intact. The emergency door hung open, and she could see passengers’ belongings scattered across the ground. The AI had likely calculated the least hazardous escape path before the woman even realised what had happened.

“Good,” Sarah said. “Stay here until we’re done.”

-

14:30 Incident status: contained. Traffic resuming in restricted lanes. Summary report:

-

Sarah leaned against the ambulance as the last stretcher was loaded. Her AI updated her with the incident’s resolution, its tone calm, almost congratulatory.

“All patients receiving appropriate care. No further hazards identified. You may disengage.”

Sarah took a deep breath, watching as traffic inched forward. Drones hovered in the distance, clearing debris and documenting damage for insurers. The system worked, she thought. People trusted it. She trusted it. But it wasn’t always enough—there had been a moment back there, by the car, when she’d seen fear in a bystander’s eyes despite their AI’s guidance. Machines could advise, predict, assist. But they couldn’t comfort, not like humans could.

She climbed into the ambulance, her AI already prepping her for the next call. The world kept moving, and she moved with it.

Analysis

This scenario explores how pervasive AI could reshape our approach to major incidents. In a situation where every second counts, the ability of AI systems to assess conditions and coordinate responses immediately and accurately should greatly improve outcomes. The drones and AI assistants in the story gather data, triage injuries, and coordinate human and machine responders with precision, significantly reducing the chances of a delayed or mismanaged response.
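To make that coordination concrete, here is a minimal sketch of how an incident system might rank casualties for responder assignment. It is purely illustrative: the scoring, field names, and numbers are hypothetical, loosely based on the story, and nothing in the scenario specifies an actual implementation.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical triage categories: lower value = higher priority.
SEVERITY = {"immediate": 0, "urgent": 1, "delayed": 2, "minor": 3}

@dataclass(order=True)
class Casualty:
    severity: int             # compared first when ordering the heap
    est_deterioration_s: int  # tie-breaker: sooner deterioration, higher priority
    description: str = field(compare=False)
    quadrant: int = field(compare=False)

def build_triage_queue(casualties):
    """Heapify so responders can always pop the current highest-priority case."""
    queue = list(casualties)
    heapq.heapify(queue)
    return queue

# Cases drawn from the story; the figures are invented for illustration.
queue = build_triage_queue([
    Casualty(SEVERITY["immediate"], 300, "child, suspected C6 fracture", 2),
    Casualty(SEVERITY["urgent"], 600, "older woman, arm laceration", 1),
    Casualty(SEVERITY["minor"], 3600, "coach passenger, shock only", 3),
])

while queue:
    case = heapq.heappop(queue)
    print(f"Dispatch to quadrant {case.quadrant}: {case.description}")
```

A real system would weigh far more signals (vitals streamed from bystanders’ devices, responder skill sets, transport availability), but the underlying idea of a continuously re-ranked priority queue is the same.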

The broader effect of pervasive AI will be to reduce the likelihood of such emergencies. As traffic systems, vehicles, and infrastructure become increasingly interconnected and aware of one another, major accidents should become rarer. AI could dynamically adjust traffic flow to prevent congestion, reduce the risk of collisions, and detect vehicle malfunctions before they escalate into disasters. Autonomous vehicles, in particular, promise to eliminate many of the human errors that currently lead to accidents. However, might we become less prepared to respond to emergencies when they do occur? A society that rarely faces major accidents might find its traditional emergency services over-resourced and its human responders out of practice, too reliant on AI to manage such situations. This trade-off between safety and preparedness highlights a potential weakness of the human factor in an AI-driven world.

The role of human responders in such a future is intriguing. With AI systems coordinating willing bystanders and providing detailed, step-by-step guidance, the traditional lines between official responders and the general public begin to blur. A bystander with basic training, augmented by an AI assistant, might perform CPR or apply a tourniquet with the precision of a trained paramedic. This democratisation of expertise suggests that human responders may shift toward skills that AI cannot easily replicate, such as providing emotional support or navigating interpersonal complexities that require nuanced judgment. For example, in the story, Sarah’s ability to connect with the injured child and her calm reassurance to bystanders highlight a uniquely human strength: the capacity to comfort and build confidence in moments of vulnerability.

Yet if AI systems can analyse a vast body of information in real time, including biometric data, environmental factors, and individual capabilities, the question of authority becomes urgent. Who is accountable in such situations? AI can rapidly calculate the most effective course of action from the available data, likely outperforming even the most experienced human responder for speed under duress. In the story, Sarah’s reliance on her AI assistant to guide her actions depicts a partnership, but what happens if their assessments conflict? Should (or could) an AI’s recommendations override a responder’s instincts, or would a human’s judgment always take precedence? A tension between human and machine decision-making in high-stakes situations is likely to persist.

By equipping bystanders with the means to assist safely and effectively, AI expands the circle of care in a crisis. However, this decentralisation of responsibility may also create ambiguity about who is ultimately in charge. If a bystander inadvertently causes harm despite AI guidance, who is held accountable—the individual or the system providing the instructions? Similarly, if a human responder overrides an AI’s recommendation and the outcome worsens, would the responder face scrutiny for ignoring the machine? It is easy to apply our current attitudes toward risk and liability here, since we are taught to “leave it to the professionals”. But pervasive AI will inevitably blur these boundaries.

In today’s world, determining fault and liability often involves a drawn-out sequence of insurance claims, legal disputes, and conflicting narratives. With pervasive AI, objective, detailed knowledge about an event can be captured and shared quickly and transparently. In the scenario, drones and traffic systems seamlessly document the causes and outcomes of the accident, reducing the need for lengthy investigations. If human error occurs—perhaps someone disables their vehicle’s AI safeguards or ignores its warnings—accountability could be pinpointed with stark clarity. This transparency could streamline claims and legal processes, but it also raises questions about the consequences for individuals who deviate from AI recommendations.
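As one purely hypothetical illustration of how such records could be made objective and tamper-evident, each entry in an incident log could commit to the hash of its predecessor, so any retroactive edit breaks the chain. The sketch below assumes this design; the scenario itself does not specify one, and the source identifiers are invented.

```python
import hashlib
import json
import time

def append_event(log, source, observation):
    """Append an event whose hash covers the previous entry, making
    retroactive edits detectable (a minimal, illustrative design)."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "ts": time.time(),
        "source": source,          # e.g. "forensic-drone-7" (hypothetical ID)
        "observation": observation,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

def verify(log):
    """Recompute every hash; returns False if any entry was altered."""
    prev = "0" * 64
    for e in log:
        body = {k: v for k, v in e.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != e["hash"]:
            return False
        prev = e["hash"]
    return True

log = []
append_event(log, "traffic-ai", "emergency lane cleared 14:19")
append_event(log, "forensic-drone", "skid marks mapped, quadrant 1")
assert verify(log)
```

An insurer or court receiving such a log could verify its integrity independently, which is what would make the “objective record” trustworthy rather than just another contested narrative.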

For instance, if a driver disregards multiple layers of AI safeguards and causes an accident, how should responsibility be assigned? Should they face harsher penalties for overriding systems designed to prevent harm, or would such strict accountability discourage people from trusting these systems in the first place? Conversely, if an AI fails to prevent an incident despite having access to all relevant data, would the manufacturer, programmer, or operator bear the blame? The idea of pervasive AI as an arbiter of truth and fairness is both promising and unsettling, depending on how it is implemented and governed.

This scenario also hints at the potential for a deeper societal shift. As emergencies become less frequent and better managed, our relationship with risk may evolve. People may grow accustomed to the certainty that AI systems will intervene, reducing their sense of personal responsibility or readiness to act independently. At the same time, growing trust in AI to manage crises could foster a sense of security and collaboration, as seen in the coordinated actions of bystanders and responders in the story.

The integration of pervasive AI into emergency response asks us to rethink the boundaries of human and machine roles. While AI’s capabilities in assessment, triage, and coordination might become unparalleled, the human elements of empathy, adaptability, and nuanced moral judgment remain irreplaceable. Balancing these strengths will require careful consideration, especially as we navigate the ethical and practical complexities of authority, accountability, and trust in critical moments.

Thinking points