Shipping’s new chokepoints are organisational
March 19, 2026 https://splash247.com/shippings-new-chokepoints-are-organisational/
Regular columnist Wolfgang Lehmacher sets out a different AI agenda for shipping.
What people say they “trust” and how they actually rely on systems often diverge, and organisations rarely design for recalibrating that reliance when conditions change. The danger is that the model’s view of the world quietly becomes the organisation’s default worldview.
In the age of disruption, tools risk proposing “optimal” solutions for a world that’s already gone. Boardroom conversations about AI in shipping still orbit the same questions: are the forecasts accurate, are the optimisers sound? Those questions matter. But a new risk is gaining force: what happens to human judgment once a system is treated as the brain of the operation?
Planning engines now fuse demand forecasts, production plans and sailing schedules into precise outputs. StormGeo chief Kim Sørensen believes that AI could become a standard part of daily voyage decision-making and optimisation. Risk dashboards promise real-time visibility of war-risk areas, canal draught limits and congestion. No human team, however seasoned, can carry today’s tangle of alliances, rotations, feeder links and hinterland bottlenecks in their heads. Hence, the instinct to lean on smarter tools makes sense.
As systems take over the heavy lifting, though, the muscles that matter in a crisis – building scenarios, challenging assumptions, improvising under deep uncertainty – start to weaken. A review of 74 studies on automation bias finds that when automated systems generally perform well, users become more likely to accept their advice at face value, search less for conflicting information and miss rare but critical errors, especially under pressure.
Picture a shipper with an AI engine set to minimise cost and lead time on Asia-Europe flows. Over time, it learns that the cheapest pattern is to push volume through a few hyper-efficient corridors and a tight circle of low-cost, high-frequency carriers. Flows concentrate. When a chokepoint is threatened, the tool keeps recommending routes that are no longer safe, insurable or attractive. This example is obvious, but many are not. Planners, taught that overrides must be defended, hesitate. By the time judgment cuts in, ships are in the wrong queues and the scramble for capacity has already begun.
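To make that failure mode concrete, here is a deliberately simplified sketch in Python. The corridor names, costs and risk scores are hypothetical, and no vendor’s system works this crudely; the point is only to show how an objective that prices cost but not risk keeps recommending the concentrated route after conditions change.

```python
# Toy illustration (all numbers hypothetical): a cost-only optimiser
# concentrates flow on the cheapest corridor and keeps recommending
# it even after its risk spikes, until risk enters the objective.

corridors = {
    # name: (cost per TEU in USD, risk score 0..1)
    "Suez":        (1200, 0.05),
    "Cape":        (1650, 0.02),
    "Rail-bridge": (2100, 0.10),
}

def recommend(corridors, risk_weight=0.0):
    """Pick the corridor minimising cost plus a crude risk penalty."""
    def score(item):
        cost, risk = item[1]
        return cost + risk_weight * risk * 10_000  # rough risk pricing
    return min(corridors.items(), key=score)[0]

# Business as usual: cost alone picks Suez, and flows concentrate there.
print(recommend(corridors))                   # -> Suez

# A chokepoint is threatened: risk jumps, but the cost-only objective
# is blind to it, so the recommendation does not change.
corridors["Suez"] = (1200, 0.90)
print(recommend(corridors))                   # -> still Suez

# Only when risk is priced into the objective does the tool reroute.
print(recommend(corridors, risk_weight=1.0))  # -> Cape
```

Even in this toy version, the fix is not cleverer code: someone has to decide that risk belongs in the objective at all, what weight it carries, and who may override the output when the numbers lag reality. Those are organisational choices, not technical ones.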
Nothing evil is happening inside the code. The vulnerability is organisational. AI drifts from tool to oracle. We are about to allow a self-inflicted single point of failure to grow in the digital layer of our supply chain networks. The more tightly platforms stitch planning, performance and risk into a single interface, the easier it becomes for the AI to turn from adviser into the system itself, and the more deliberate boards must be about where humans can still interrupt, override or switch it off.
Shipping has been schooled on chokepoints: Suez, Panama, Hormuz, Malacca. Boards now know to ask which canals or terminals would bring them to their knees if they failed. The truth is that the next chokepoints are being written in code, not carved into geography.
That calls for a different AI agenda. The goal should not be to automate as much as possible, but to build systems that are powerful yet resistible: tools that sharpen, rather than numb, the judgment of people on the bridge, in the fleet centre and in the boardroom. As SmartSea’s Kris Vedat notes, “integration matters more than feature lists, and execution matters more than vision statements” – exactly the mindset boards need when they decide where and how AI is allowed to sit in the decision chain.
Three questions are vital. Does the system show what has changed and the available options, or just a single “recommendation”? Are overrides examined for what they reveal, or buried? And when the next shock hits, will dissent still have a secure place on the bridge?