Roboethics and the Collision Regulations
Terry Ogg 2018-08-25 https://www.maritime-executive.com/editorials/roboethics-and-the-collisi...

To paraphrase the British comedian Peter Kay after he tasted garlic bread for the first time, autonomous ships are the future. At least, that’s what many industry participants are saying. I sometimes take these predictions with a large dose of skepticism, while at other times I’m stalked by a nagging doubt that maybe I’m just a cave dweller blighted by limited imagination and simply can’t see what it is the visionaries see.

My normal state is, at best, oscillation between these two setpoints and, at worst, holding both views simultaneously, depending on who is doing the predicting and how cogent their opinion is. Most of the time, I’m like a thinking version of Schrödinger’s Cat: the response I give you depends on when you ask me.

Robot ships

Autonomous and semi-autonomous ships are types of robots. IMO has adopted the term MASS (Maritime Autonomous Surface Ships) for robot ships. A MASS is a vessel that is capable of being operated without a human on board in charge and which has alternative control arrangements available. Maritime UK’s voluntary industry code for MASS up to 24 meters in length envisages six levels of control, from “manned” (Level 0) through “operated,” “directed,” “delegated,” “monitored” to “autonomous” (Level 5). Level 5 is full-on autonomous robot.
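
To make the taxonomy concrete, the six levels map naturally onto a simple enumeration. Here is a minimal sketch in Python; the one-line glosses for Levels 2 and 3 are my paraphrase of the Maritime UK code, not quotations from it:

```python
from enum import IntEnum

class MassControlLevel(IntEnum):
    """The six Maritime UK levels of control for MASS up to 24 meters."""
    MANNED = 0      # conventional on-board human control
    OPERATED = 1    # remote control by an operator ashore
    DIRECTED = 2    # operator directs; the vessel executes (illustrative gloss)
    DELEGATED = 3   # vessel acts; operator supervises and can intervene (illustrative gloss)
    MONITORED = 4   # autonomous, but monitored from ashore
    AUTONOMOUS = 5  # full-on autonomous robot
```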

The MASS Code, which is intended to be adapted for much larger vessels, makes clear that a vessel might change between control modes, or operate in different control modes simultaneously, depending on the function being controlled and according to time and location on the voyage. For example, navigation control might be at Level 4 (autonomous but monitored ashore) while an ROV is deployed under Level 1 control (remote control by an operator ashore). But care needs to be taken when considering the meaning of “unmanned.” A MASS operating under control Levels 1-5 is certainly unmanned for the purposes of control, but there could well be humans on board performing other functions, such as maintenance, inspection and testing, or scientific study and analysis. It seems such vessels would likely be sub-categorized as “occasionally manned.”
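
A toy model illustrates the point that “unmanned” is a property of control, assessed function by function, rather than of the hull. All names here are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class VesselControlState:
    """Per-function control levels on a single MASS; levels follow the
    Maritime UK scale, 0 (manned) through 5 (autonomous)."""
    modes: Dict[str, int] = field(default_factory=dict)

    def set_mode(self, function: str, level: int) -> None:
        assert 0 <= level <= 5, "levels run from 0 (manned) to 5 (autonomous)"
        self.modes[function] = level

    def unmanned_for_control(self) -> bool:
        # Unmanned in the control sense only: no function at Level 0.
        # Humans may still be aboard for maintenance, inspection, science.
        return all(level > 0 for level in self.modes.values())

# The example from the text: navigation at Level 4 (autonomous but
# monitored ashore) while a deployed ROV runs at Level 1 (operated).
state = VesselControlState()
state.set_mode("navigation", 4)
state.set_mode("rov", 1)
print(state.unmanned_for_control())  # True
```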

Given the design innovation that will accompany, even drive, the predicted introduction of MASS, it will be interesting to see how the benefits are distributed between the twin aims of greater machine efficiency and enhanced safety of life at sea. Up to now, safety of life at sea has been the paramount concern. The vast majority of internationally adopted maritime standards are designed for the preservation of the humans who man or otherwise travel on ships, even though these standards consist mainly of measures directed at the construction, fitness, fitting, equipment and operation of the vessel itself. Get these things right and the people on board are (generally) pretty safe.

To put this into context, however, the general public and much of the maritime industry are rarely troubled by loss of life at sea when it occurs. When something like a Costa Concordia or Sewol happens, with all their shocking narrative elements, the public and industry imaginations flood for a brief period before ebbing away. Memories slip below the surface. Meanwhile, the continuing scandal of ships lost along with their entire crews due to bulk cargo liquefaction, for example, continues largely unremarked upon.

As for property loss and damage, only insurers, the uninsured and salvors have any skin in the game. It is against this uneven backdrop that our industry needs to consider the standing of robot ships, particularly with respect to their relationship to manned ships, as they are likely to co-exist for a considerable period of time.

Roboethics

For many centuries, humans have been both intrigued by and distrustful of devices that appear to mimic human abilities or characteristics or that usurp human functions. Like fear of the dark, there is something about apparently inanimate objects coming to “life” that lights up our brain stems and makes us uneasy. Popular culture in the late 19th and 20th centuries explored the possibility of all manner of devices, from ventriloquist’s dummies to doomsday machines, achieving self-awareness with malevolent intent towards their human masters. In fact, the term robot derives from the Czech word “robota,” meaning forced labor.

Unsurprisingly, when someone (the sci-fi writer Isaac Asimov) got around to drawing up a set of principles to govern the behavior of robots, the focus was on the requirement of robot fidelity towards humans. Asimov’s Three Laws of Robotics set a tone of “do no harm to humans,” but in recent decades the debate seems to have become much more permissive. Scholars of the Terminator movies will be aware that each machine iteration stomped all over the concept of do no harm to humans.

Similarly, in real life, the laws of robotics have moved on from Asimov and continue to evolve towards a broader set of ethical principles applicable to robots and artificial intelligence. Robot ethics, or roboethics, are intended to codify how we design, use and treat robots and artificial intelligences (AIs), while a subset referred to as machine ethics deals with how robots and AIs should treat humans.

Meanwhile, technology is already taking us to some weird places. In a profoundly ironic move, the Hong Kong-developed humanoid robot Sophia was granted Saudi citizenship in October 2017, which raises the question of whether non-sentient machines (especially a glorified chatbot like Sophia) should be accorded human, or indeed any, rights. Sophia has generated huge media interest. In interviews, it has said it knows the name it would give a child if it had one. But perhaps the wider point to make about Sophia is the way in which self-serving developers, publicity-seeking authorities and a credulous media are able to take the roboethics debate and ram it down the rabbit hole.

It is axiomatic that our regulations stem from our laws and our laws stem from our ethics. If, therefore, we are to regulate robots and AIs as they are introduced into the maritime industry, surely our starting point should be roboethics?

Where we are headed

I recently came across a paper describing a fascinating project being undertaken by Queen’s University Belfast in collaboration with a number of maritime industry entities. The MAXCMAS project takes its name from “machine executable collision regulations for marine autonomous systems.” The aim is to develop a system that will enable a navigationally autonomous ship to be compliant with the existing collision regulations. In this the MAXCMAS project is not alone. Similar attempts to find the “golden rivet” of autonomous navigation are underway in other countries around the world.

While MAXCMAS is indeed fascinating, I have to admit my immediate reaction was, why? Why would anyone want a machine to comply with the collision regulations? Let me say right away that I have no issue with the development of collision avoidance systems for a fully autonomous robot ship. You can’t have the latter without the former. However, from a roboethical point of view, I am not at all persuaded that compliance with the existing collision regulations is desirable. As I alluded to earlier, starting with the regulations is the wrong approach. Taking a roboethical approach, the natural result would be that autonomous unmanned ships shall keep clear of and give way to manned ships.
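
To show what I mean, here is a deliberately naive sketch of that principle as a top-level rule sitting above any conventional collision-avoidance logic. The Target type and its fields are hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Target:
    name: str
    is_manned: bool
    risk_of_collision: bool  # output of whatever risk assessment is in use

def roboethical_give_way(targets: List[Target]) -> List[str]:
    # The proposed rule: regardless of the crossing, meeting or
    # overtaking geometry, the robot ship keeps clear of and gives
    # way to every manned vessel with which a risk of collision exists.
    return [t.name for t in targets if t.is_manned and t.risk_of_collision]

# A manned vessel posing collision risk becomes a give-way obligation;
# an unmanned target is simply not covered by this particular rule.
print(roboethical_give_way([Target("fishing vessel", True, True),
                            Target("other MASS", False, True)]))
# -> ['fishing vessel']
```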

Pausing here, you will see I have dropped the various references to “robot ships,” “occasionally manned” ships and MASS and gone for something quite specific. I propose to use an unmanned, fully autonomous vessel as a baseline reference to explain my position. We can then move forward, incorporating other configurations and scenarios as we go.

The collision regulations

Along with SOLAS and MARPOL, the International Regulations for Preventing Collisions at Sea, 1972 (aka “the Collision Rules,” “the Collision Regulations,” “Colregs,” “Rule of the Road”) is one of the most widely adopted maritime instruments in the world, applicable to more than 99 percent of the world’s shipping tonnage. The Colregs perform two functions. First, the regulations provide rules to govern ship encounters, to make those encounters predictable by assigning responsibilities and roles to vessels in particular circumstances, and to set the standard of conduct necessary to ensure safety.

Second, the Colregs form a set of objective rules, breaches of which can be used in conjunction with the more subjective test of “good seamanship” to determine navigational fault and from there to establish liability in the event of a collision. As applied in practice, the Colregs work. Every day, many thousands of vessels encounter each other without colliding. In the overall context of shipping movements, although close or dangerous encounters occur reasonably regularly, minor collisions of the bumps and scrapes variety occur infrequently, while major collisions are thankfully rare.

A substantial proportion of major collisions that do occur regrettably result in loss of life. Recent cases such as the Sanchi and collisions involving U.S. naval vessels have resulted in terrible loss of life. In such cases, at the instant of collision, the opportunity to prevent loss of life has already gone. Last month marked the 25th anniversary of the British Trent collision off Wandelaar pilot station. Nine of her crew died. I investigated and analyzed the circumstances of the collision on behalf of her owners and insurers. The toll in a case like that goes way beyond those actually lost – their family, friends, colleagues and that often-forgotten group, the survivors – all pay a price.

I have a law report of a collision case that occurred nearly 50 years ago. The liability action was heard at first instance in the English Admiralty Court, but the decision was appealed to the Court of Appeal. Lord Justice Templeman was one of the three appeal judges. His judgment began thus:

“The shades of Conrad must be smiling grimly. In the small hours of October 28, 1970, two vessels, the Ercole and the Embiricos, proceeding in opposite directions, with only the China Seas for maneuver, and aware of each other at a distance of 18 miles, succeeded in colliding at speed with serious consequences.”

The “serious consequences” referred to included substantial loss of life. Despite knowing, or perhaps because he knew, the circumstances of the collision, his Lordship was unable to prevent his obvious disdain from dripping off the page. When looking at the causes of collisions through the narrow focus of rules-based standards, it is sometimes difficult not to experience incredulity at certain instances of navigational fault. Legal liability is concerned with proximate causes, and in collision cases the causal factor with the greatest causative potency, and the one that appears most frequently as a proximate cause, is human on board operator error. It is usually aided and abetted by other causal factors such as inadequate training, inadequate professional standards, poor ergonomics, equipment limitations, lack of equipment integration, poor procedures and working conditions, lack of resources, poor teamwork, and poor communication and decision-making protocols.

I use the term human on board operator error (HOBOE, anyone?) advisedly. Anyone who thinks that term is synonymous with the phrase “human error” needs to consider the larger fault tree and the causal contributions of human error in the pre-operational phase as practiced by superintendents, technical managers, designated persons and owners; the trainers, procedure writers, equipment designers and naval architects; the legislators and the flag States.

Human on board operator error that manifests itself as navigational fault can be classified in a number of different ways. We can refer to errors of commission and omission; conscious and automatic errors; knowledge-based, rules-based and skills-based errors; and intended and unintended errors.

Another method of classification, which is useful for present purposes, is to make the distinction between forced and unforced errors. While unforced errors can and do routinely occur in any situation, forced errors tend to occur in situations when decisions are made requiring positive action, often in response to a stressful or unexpected event. Taken to the extreme, forced errors give rise to the notion of the agony of collision, in which a mariner might be thrown onto the horns of a dilemma with no way of avoiding all risks and dangers. In this sense, forced errors can have irretrievable and terminal consequences.

Our fully autonomous unmanned vessel

Before going on to consider the content of the Colregs, I need to introduce our fully autonomous unmanned vessel. What capabilities should it have? Well, for a start, it should be capable of operating to at least the same standard as a properly constituted human bridge team performing error-free. In fact, given the level of technological innovation that is a prerequisite for a fully autonomous unmanned vessel, it should be capable of achieving a much higher standard, particularly in situational and collision risk assessment.

Clearly, such a vessel should have a unified sensor system incorporating optics, radar, LIDAR, AIS, thermal imaging, acoustics, environmental conditions sensors and vessel motion sensors. It should be capable of identifying itself as a fully autonomous vessel by lights and shapes, AIS, radar transponder and synthetic voice over VHF. It should be capable of communicating its immediate and longer-term navigational intentions in both general and targeted modes by electronic means and be able to provide this information on demand when interrogated by other vessels and shore stations. It should be able to process navigation intention data transmitted by other vessels and incorporate that information in its own decision-making.
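
No standard message format for such intention broadcasts exists today, so the following sketch is pure conjecture about what one might carry; every field name is an assumption:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class IntentionBroadcast:
    """Hypothetical navigational-intention message for a fully
    autonomous unmanned vessel; nothing here is an existing standard."""
    identity: str                                # e.g. vessel name or MMSI
    fully_autonomous: bool = True                # self-identification
    position: Tuple[float, float] = (0.0, 0.0)   # lat, lon in decimal degrees
    course_deg: float = 0.0
    speed_kn: float = 0.0
    next_waypoints: List[Tuple[float, float]] = field(default_factory=list)
    planned_maneuver: Optional[str] = None       # immediate intention,
                                                 # e.g. "alter 30 deg stbd"
    addressed_to: Optional[str] = None           # None = general broadcast;
                                                 # set = targeted transmission
```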

Exhaustive capability studies and risk and failure mode analyses will be needed to determine redundancy requirements for sensors, transmitters, processors and controllers. In accordance with roboethics, our fully autonomous unmanned vessel must have a transparent operating system with a complete data history to enable faults within the system, and their interactions with other parts of the system, to be traced and diagnosed.
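
In practice, that transparency requirement amounts to something like an append-only record pairing each control cycle’s inputs with the decision they produced. A minimal sketch, assuming a simple JSON-lines log file; a real system would demand far more, including tamper-proofing:

```python
import json
import time

def record_cycle(log, sensor_snapshot: dict, decision: dict) -> None:
    """Append one immutable line per control cycle so that faults, and
    their interactions with the rest of the system, can be traced."""
    log.write(json.dumps({
        "t": time.time(),          # when the cycle ran
        "inputs": sensor_snapshot, # the sensor data feeding the decision
        "decision": decision,      # what the controller chose, and why
    }) + "\n")

# Usage:
# with open("voyage.log", "a") as log:
#     record_cycle(log, {"radar_contacts": 3}, {"helm": "steady"})
```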

Similar to the controller systems on board dynamically positioned manned vessels, the collision avoidance system of a fully autonomous unmanned vessel would update every few seconds. The update rate, combined with the variety and sensitivity of its sensors, should be capable of producing a very accurate situational model. Taken together with real-time predictions, background simulations and a comprehensive set of anti-collision objectives and parameters, our fully autonomous unmanned vessel should achieve a higher level of safe operation and efficiency than a manned vessel with currently fitted technology and a bridge team performing without error. Our industry should demand this higher level of safety. If it cannot be achieved, what is the point of robot ships?
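
The shape of that cycle is easy to sketch, even though every subsystem named below (sensors, model, planner, helm) is a placeholder rather than a real API:

```python
import time

UPDATE_PERIOD_S = 2.0  # "every few seconds"; the exact rate is illustrative

def collision_avoidance_cycle(sensors, model, planner, helm):
    """Skeleton of the update loop described above."""
    while True:
        observations = sensors.read_all()   # fused multi-sensor snapshot
        model.update(observations)          # refresh the situational model
        predictions = model.predict()       # real-time target predictions
        action = planner.choose(model, predictions)  # test candidate maneuvers
                                                     # against anti-collision
                                                     # objectives and parameters
        helm.execute(action)                # apply the course/speed order
        time.sleep(UPDATE_PERIOD_S)
```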

Pausing again, it’s quite apparent at this point that a great deal of investment will be required for ocean-going robot ships. Who or what is driving this investment? Much of the coming technological innovation would be of tremendous benefit to current manned vessel systems, but it seems the adoption of technological support systems on manned vessels, when it happens, would be merely a by-product of advances in technology directed towards full autonomy.

The overall lack of engagement by shipowners in new technologies under development may have many causes, but the lack of focus on their applicability to current manned vessels suggests that the non-shipowner drivers behind the move towards robot ships are motivated not so much by safety of life at sea as by the technological arms race now underway.

There is an argument that simply removing seafarers from the ships and replacing manned vessels with robot vessels contributes to safety of life at sea because there are then fewer lives potentially in danger. However, that argument is valid only in the event of a collision, at which point the die is cast. The primary objective of introducing robot ships must surely be to enhance safety of life...