Safety Management

Getting to Grips with Human Factors in Drilling Operations

Looking after the human factors in daily operations is good business because it increases employee safety and improves work performance.


Since the events at Montara in late 2009 and Macondo in 2010, the exploration and production industry has undergone a substantial change in culture and regulation. However, investigations still tend to focus on technological and managerial deficiencies rather than probing deeper into the causes of human error. Nonetheless, there is a growing desire to understand human and organizational factors that influence potential process safety incidents during drilling and completion operations.

The topic of human factors as a contributor to well-control incidents was first discussed in 1992 by Paul Sonneman at an International Association of Drilling Contractors well-control conference in Houston (Sonneman 1992). The subject subsequently received attention from the UK Health and Safety Executive (Wilson and Stanton 2004), which assessed five human factors techniques relating to offshore personnel in combination with interviews of key personnel. Both of these studies were preceded by a 1990 analysis of the effect of human factors in stuck-pipe incidents in the US Gulf of Mexico.

More recently, the topic has forced its way firmly onto the agenda in the wake of the 2010 Macondo accident, and as a result, industry organizations have published a number of relevant texts. In 2014, SPE published a white paper (SPE 2014) covering a wide spectrum of human factors issues and making a number of recommendations to politicians and the industry at large.

The International Association of Oil and Gas Producers (OGP) published Report 460 (OGP 2012) on the cognitive issues relating to process safety, Report 501 (OGP 2014a) on recommendations for the implementation of training in nontechnical skills, and Report 502 (OGP 2014b), which proposed recommended practices for such training programs.

The Energy Institute in the UK published its guidance on crew resource management, which made the case for such training based on the experience in other high-hazard industries, such as aviation, rail, nuclear, and mining (Energy Institute 2014). The North Sea Offshore Authorities Forum (NSOAF) carried out a joint human and organizational survey of the factors relating to well-control preparedness (NSOAF 2013).

The common thread throughout all of these publications and activities is that they are directed toward a narrow audience of human factors specialists who are concerned with researching human-factors-related issues or setting up training programs. However, there has been little attempt to communicate the central ideas to those who deal with the problems of human error in all its forms on a daily basis; in particular, the drilling operations community where, in the recent past, some of the major accidents have occurred.

To address this, SPE held a workshop titled “Getting to Grips with Human Factors on Drilling Operations” in London in October 2014 to define and clarify the underlying ideas about human factors as they relate to the drilling community. The purpose of the workshop was primarily to expose these ideas to, and engage with, a more hands-on and operational audience of drilling operations professionals.

The main points from the presentations and ensuing discussions are summarized here and references are provided for further reading. Practical tools and insight that can be applied in day-to-day operations are also presented.

Relevance of Human Factors in Drilling

Human factors is a very broad term that encapsulates different subdisciplines, from ergonomics and human–machine interfaces to psychological aspects of how people perceive risk, make decisions, and deal with stress. The field has largely evolved through the analysis of accidents, which provide an important window into understanding how people perform their work and how things might go wrong.

Recounting the stories behind accidents and near misses is an important element of collective learning. It honors those who died and keeps the memory of such events alive. Adopting a narrative rather than a technical approach, Lillian Espinoza-Gala told the personal stories leading up to the Macondo blowout. Her story illustrated the importance of nontechnical skills, such as communication, situation awareness, and decision making in operational safety.

John Thorogood observed that human factors such as decision-making processes play an important role in drilling operations and contribute to almost every nonproductive time event. He proposed that the drilling community needs to think about the influence of human factors during routine operations. By paying attention to human factors in incident investigations, investigators can avoid simplistic conclusions such as “follow procedures, dodgy equipment, incompetent crew, bad engineering, and ‘try harder, don’t let it happen again.’”

Kristina Lauche introduced some key terms from the psychology of decision making and organizational factors. An important recognition is that humans are skillful, but often imperfect, decision makers. Our cognitive system evolved to deal with life as hunter-gatherers in small groups, in which intuition and team coherence formed a suitable survival strategy. Lauche argued that in the complex socio-technical setting of drilling, we can no longer rely on intuitive decision making, but need to deliberately design and organize workplace practices to take human factors into account.

When analyzing human factors after things have gone wrong, we can sometimes be surprised that people did not spot the problems earlier. However, the elements contributing to accidents are only clear in hindsight. Dekker (2011) comments that complex systems have no system architect and hence no one has a complete overview: What worked well yesterday can fail tomorrow. Organizations can therefore drift into failure without noticing. What gets us there is the “normalization of deviance.” This phenomenon is particularly likely to occur when production pressures and resource limitations are institutionalized; when there is a widespread belief that safety is assured and risk is under control; when signals of potential danger come to be accepted as normal; and when an attitude prevails that, because nothing has gone wrong so far, operations can continue.

Rather than trying to eliminate human error, organizational resilience involves designing systems and organizations that can detect, contain, and recover from the errors that will inevitably occur. This requires well-trained staff, authority at the local level, and slack resources. High-reliability organizations focus on mindful organizing, whereby they spend sufficient time on training and practice (Thorogood 2012; Weick and Sutcliffe 2007).

In summary, there are three approaches to mitigate the human contribution to accidents:

  • Fixing the person: While the “person model” (finding the guilty individual to blame and shame) is insufficient to understand accident causation, it is still possible and necessary to prepare people for their role in high-hazard environments. This can be done through selection, training, preparation on the job, and protection against stress and fatigue.
  • Fixing the technical design: Equally, technology is not the sole cause of failure but can contribute to how well humans are able to do their job. Efforts can be made to reduce risk through technology design and to support the human user rather than to attempt to design the person out of the process.
  • Fixing the organization: Ultimately, attempts to deal with human factors must also involve addressing the organizational context in which people work.

Nontechnical Skills

A question frequently asked after incidents is “How can people who are technically proficient make errors?”

Margaret Crichton proposed one explanation during her presentation: the current lack of focus on nontechnical skills in drilling (Flin et al. 2008). Nontechnical skills are the part of human factors that focuses on the individual, covering their social and cognitive skills and personal limitations. These skills are relevant not only to people at the sharp end, i.e., operations on the drill floor, but also to middle and senior management. The six key nontechnical skills are situation awareness, decision making, communication, teamwork, leadership, and stress management.

Crichton discussed the relevance of these nontechnical skills to drilling and described how including human factors in the analysis of incidents led to a greater understanding of how errors occurred and helped to identify ways to reduce the potential for errors. Crew resource management, a training course directed toward enhancing nontechnical skills, has been successfully implemented in other industries, and has undergone six iterations since its introduction in the 1970s, leading to the current focus on reducing human error and optimizing human performance.

Capability in nontechnical skills in drilling may be expanded by

  • Raising awareness of human factors through publications, journal articles, and focused events
  • Introducing common terminology: Agree on a singular definition of what human factors and nontechnical skills are.
  • Developing processes and methods: Think about human factors in all aspects of well operations, the design phase, meetings, etc.
  • Integrating human factors wherever there is human interaction.
  • Analyzing incidents and near misses to identify the contribution of human factors and to begin to see what went wrong and why.
  • Training in nontechnical skills and integrating them with technical training courses
  • Observing and assessing skills using a competency framework to ensure continued enhancement of nontechnical skills
  • Integrating human factors methods into routine workplace practices: Use existing tools such as Stop the Job or Toolbox Talks to capture weak signals and consider implications of ambiguous data. Encourage constructive challenge as a norm.
  • Recognizing the impact of human factors on difficult decisions: Stopping the job is the Holy Grail, but people are reluctant to shut down a million-dollar-a-day operation. Perhaps reframing the problem, not as a loss or the cost of wasted precautions, but as a prudent measure to insure against a bad outcome, might help.

Seeing Human Factors in Everyday Operations

Human factors can seem a bit abstract; engineers and operations people often cannot see how human factors relate to their day-to-day work. Incident investigation is one area in which the human contribution to events can be observed; to the alert observer, its effects are visible even in apparently routine events. To highlight the relevance of human factors to drilling operations, Thorogood and Crichton described a case study showing how the effect of simple biases could result in a classic nonproductive time event (Thorogood et al. 2014).

A land well being drilled under difficult conditions in midwinter suffered a series of technical failures that resulted in an unintended deviation of 14° from the vertical before the problem was detected and corrected. There were a number of obvious contributing factors, such as faulty inclinometer readings, a less than optimum bottomhole drilling assembly, and possibly a failure to follow procedures. However, it was clear that the determination to achieve the goal of drilling the well on time, a risk aversion that discouraged spending the time and money to investigate the anomalies, and the ambiguity of the survey data meant that those involved were looking for readings that confirmed their belief that the hole was vertical. The extent of the problem was only revealed when repeat readings with a reliable unit emphatically confirmed the bad news.
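
To put a 14° inclination in perspective, a simple tangent-section approximation (an illustrative calculation, not a figure taken from the case study) shows how quickly such a deviation moves the hole away from vertical:

\[
\Delta x \approx \Delta \mathrm{MD} \cdot \sin I = 100\ \mathrm{m} \times \sin 14^{\circ} \approx 24\ \mathrm{m}
\]

That is roughly 24 m of lateral offset for every 100 m drilled, which is why a deviation of this size matters in a well intended to be vertical.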

The human factors insights helped to explain why those involved let a problem that was abundantly obvious in hindsight develop to the point it did. The case study illustrates how the effects of human factors can be seen even in apparently trivial events. Engaging a person with knowledge of human factors in reviewing even minor operational events can bring the subject alive to drilling engineers and supervisors and show them just how pervasive the effects can be on decision making and team behavior.

However, over the years, the term “human error” has been used and misused to the point where it has become almost meaningless. Koen van der Merwe, senior human factors consultant at DNV GL, presented the Energy Institute human error taxonomy, which provides more depth and breadth to the term (Fig. 1). This model provides a structure that enables apparent human errors to be classified, but as emphasized earlier, the effects of hindsight bias must be factored into any conclusion as to cause. When performing incident investigation, one should aim to find the root causes of accidents, and therefore go beyond the term “human error.”

Fig. 1—Human error taxonomy. Source: Energy Institute.

Workshop participants performed an exercise based on a case study of an oil spill during well testing. Participants received a description of the event, followed by a set of questions to answer in groups, focusing on what might go wrong, how such situations could be averted, and how they could be mitigated. The case study aimed to demonstrate that, although human error was among the causes of the spill, a number of performance-shaping and nontechnical factors could be identified as contributing to the incident and would merit further investigation in a deeper analysis.

Practical Examples of Human Factors

Subsequent presentations at the workshop described a variety of good practices in design, training, exercises, and techniques that are either being implemented in the drilling sector or being used successfully in other high-hazard environments.

Good Practice in Interface and System Design. Erland Engum of NOV described how the operator environment on drilling rigs is improving. Equipment manufacturers, shipyards, and rig owners are paying attention to the capabilities of the humans involved in controlling drilling operations. However, there is still a lack of formal research in the area.

The design of the operator station, cabin, and rig floor can be improved using human factors concepts to simplify and optimize processes such as tripping, stand building, and drilling connections. Making incremental changes to the operator station improves the line of sight and ergonomics, and facilitates teamwork and communications.

Rig system automation is developing in many areas, driven by a desire for increased efficiency and a reduction of human error. The interaction between people and the automation, however, remains a challenge. Equipment designers cannot control the humans carrying out the drilling process, but they can provide interlocks so that, where necessary, safe access is assured. On the other hand, automation should not try to design people out of the process or degrade their skills, and it cannot be allowed to reduce situation awareness. The trap of “the designer who tries to eliminate the operator still leaves the operator to do the tasks that the designer cannot think how to automate” has to be avoided.

Good Practice in Training. Raphael Waxin of Total described a training course put in place by a major operator after the Macondo disaster to improve operational reliability by raising awareness across the drilling and wells chain of command. The 5-day training course focuses on sharing industry information, reminds participants of the company’s main blowout-prevention procedures, and introduces human and organizational factors. Participants are challenged through a progressive tactical decision game (TDG) called Bosig (blowout simulation game), which takes place twice a day throughout the course. The trainees are forced to make time-limited decisions on the best course of action, often with limited or inaccurate information. The aim is to put the participants under pressure and re-create the essence of a real-life incident. Interestingly, the “Chatham House Rule” applies: Any apparent technical mistakes made by the participants that could reflect adversely on an individual’s competence remain within the room. The goal is to provide participants with training in the human factors aspects and to show how these emerge in difficult situations.

George Galloway, business development director of the Well Academy, presented an overview of a scenario-based well-control training course developed in the Netherlands in response to Macondo. The Dutch trade association NOGEPA recognized that well-control training had become very repetitive and that people were often trained only to pass the test rather than to improve their competence. The focus of the course is on the rig team: driller, assistant driller, and toolpusher; ideally, it should also include the operator’s drilling supervisor.

Morten Kaiser, chief instructor at Maersk Training, described a team-based scenario training program developed by a major drilling contractor for drilling teams using a high-fidelity simulator environment. While many training courses focused on technical aspects before Macondo, a working group in his company examined the Macondo investigation report and developed team-based well-control training, focusing on whole teams rather than individuals. The drilling contractor also uses the training to highlight some of the challenges that contractors face with their clients. Embedding the client in the training also helps to develop trust between the two parties.

The training consists of a simulator component and offshore classroom introductions. The simulator session lasts about 3 hours, with 1 hour of debriefing. An important finding from Macondo was that it is easy to get drillers to act, but they typically do not reflect on the alternatives to their chosen course of action.

Good Practice in Exercising. Lars Bagger Hviid and Crichton led a session providing practical experience of TDGs, which are low-fidelity facilitated simulations using scenarios that vary in complexity and technical detail.

Complexity in TDG scenarios does not have to come from technical information alone; it can also come from the inclusion of other simultaneous events (which may or may not be relevant), or from leaving out information so that participants fill the gaps with assumptions. There is no right answer; the scenario culminates in a dilemma and leads to a debate among participants about what they would do and why. Stressors are also imposed during the exercise, such as withholding pieces of information, creating distractions during the decision-making period, or reducing the time allowed for making a decision. By having a mixture of levels of experience in a TDG session, participants learn from each other and have the opportunity to delve into the effects of experience and training.

During the workshop, participants were split into four groups of six to eight members and were presented with a generic scenario in which they, as individuals, took on the role of leading a party of teenagers on a walk in the Scottish hills. Each participant had a limited amount of time to decide what they would do and why, and then described their decision and reasoning. The facilitator in each group then led a discussion about the similarities and differences between the participants’ decisions.

After each group reached consensus on a course of action, the facilitator debriefed the exercise by asking participants what the most influential pieces of information were; what, if anything, made the decision challenging; and what they would do differently in a similar situation in the future. When all the TDG groups reconvened, the decisions agreed by each group were discussed, and a vigorous discussion ensued about the influences on the different groups’ decisions.

Three types of observation/assessment tool were debated: detailed behavioral markers, dichotomous behavior (observed/not observed), and rated behavior. A brief discussion took place about the advantages and disadvantages of the three different tools.
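
As a concrete illustration of the difference between two of these tool types, the minimal sketch below (in Python, with hypothetical behavior names and a hypothetical rating scale that are not taken from the workshop materials) shows how an observer might record a dichotomous marker alongside a rated marker during a simulated scenario.

```python
# Minimal, hypothetical sketch contrasting two assessment-tool types:
# a dichotomous marker (observed / not observed) and a rated marker
# (e.g., a 1-5 scale with supporting evidence). Names are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DichotomousMarker:
    behavior: str       # e.g., "Driller calls a flow check on ambiguous returns"
    observed: bool      # simply whether the behavior was seen during the scenario

@dataclass
class RatedMarker:
    behavior: str
    rating: int                     # e.g., 1 (poor) to 5 (very good)
    evidence: Optional[str] = None  # observer's note supporting the rating

# Example records an observer might capture during a simulated well-control scenario
observations = [
    DichotomousMarker("Toolpusher briefs the team before tripping", observed=True),
    RatedMarker("Communication of pit-gain information", rating=3,
                evidence="Gain reported, but volume and trend not stated"),
]
```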

Threat and Error Management

Next, Thorogood and Crichton described threat and error management, which is a core practice of modern civil aviation crew resource management (Thorogood and Crichton 2014). It is one of several important nontechnical elements that contribute to the high level of airline safety that we accept as normal today.

To check its relevance to drilling operations, they carried out a series of interviews with both operators and contractors to examine current risk-management practices. Although broad similarities were identified in the types of workplace practices employed, there was, in oil and gas, an almost unanimous and strong reliance on supervisors to ensure that these practices were applied rigorously. Interviewees were adamant that industry standard practices are used, but follow-up revealed that these are often a box-ticking exercise, turning safety into a numbers game. Pockets of change were identified, but these were dispersed across and within organizations.

The interviewees were emphatic that new ideas, such as threat and error management, must avoid becoming “just another initiative.” Ideally, they should strengthen existing workplace practices such as toolbox talks, briefings, and shift handovers by prompting questions such as “How could I make a mistake?,” “What could turn today into a bad day?,” “Are the barriers secure?,” and “What is taking your mind off the job?” It is important to identify the rituals (e.g., safety meetings) and their frequency, and to see how such questioning can instill a sense of chronic unease in the rig team.

By making such an approach an integral part of everyday operations, effective nontechnical skills can be observed, coached, and assessed on a routine basis in the workplace, ensuring that they will more readily come to the fore when called upon in an emergency.

Opportunities, Barriers, and Possible Ways Forward

There was a consensus among workshop participants that people involved in drilling must have appropriate training in human factors and the underlying psychological principles.

It is important to accept that separate technical and human factors training will probably not suffice. Training in technical skills in isolation could give a misleading impression of competence while ignoring the skills needed to deal with demanding situations under stress. Nontechnical skills training in isolation would probably seem meaningless and too abstract, and participants would not connect these skills to the challenges of their work environment. Similarly, training assessors must have both technical and human factors knowledge to be able to properly assess the skill set of the participants.

Human factors awareness must be integrated into the entire process and at all levels, from planning and design to the actual operations. The influence of human factors and the underlying psychology must be recognized throughout the chain of command, from the drill floor to the board of directors. Considering the role of human factors in incident causation can greatly assist the investigation process, illuminating not only the technical lessons but also improving our understanding of how such incidents arise.

Contribution From Organizations and Industry. To begin with, there must be a broad awareness in organizations and the industry about the importance of human factors. This can be achieved through training and communication.

Secondly, there must be a common understanding of the underlying concepts, principles, terminology, and applications. Standardization creates a common language for talking about human factors, as well as a uniform methodology for assessing and training nontechnical skills. This requires senior management support, industry involvement, and a clear view of what benefits industry leaders and workers want from the study of human factors.

Currently, human factors are put on the agenda following an incident, but as time goes on, people forget about them or leave, and critical memory is lost. Over time, the urgency decreases and it is back to business as usual. While top management must buy in, engineers on the ground can play a key role in their own environment by leading through example and can use their insight and experience to “sell” the relevance of human factors to their leaders.

Thirdly, using techniques such as threat and error management, the what-if mentality must be ingrained in all aspects of the process, from planning and design to the actual operation. In all situations, we need to contemplate the potential impact of our decisions and be aware of alternative choices and their effect.

Finally, we need a learning culture. Currently, the industry still does not share information to the extent it could and should, particularly around well-control incidents. Transparency is important as it allows other companies to learn from incidents without having to incur the costs. Additionally, we should also encourage learning from other industries.

Conclusion

Given that change comes from the top, one essential step that could contribute to the engagement of top management is to frame the question differently. Top managers will demand safety and continuous improvement in terms of operational efficiency, costs, and investments. The goal of any human factors activity is therefore to support the high-level intent. Human factors affect these business performance issues, so we must develop a human-factors-sensitive business case.

Nonproductive time is extremely costly, so developing the attributes of a high-reliability organization becomes a competitive necessity. It applies not only to the operators, but also to drilling contractors and service providers. Senior management engagement is essential.

Human factors are good for business and good for safety: It is a win-win situation. There is also a parallel with quality management, which was created because clients demanded that suppliers have a management system in place to assure the quality of delivered products. High-reliability organizations are safe, and this may allow them to gain permission to drill in difficult or sensitive environments.

It is noticeable that people are increasingly experiencing “Macondo-fatigue.” Human factors are still not firmly at the top of the agenda; the business case has yet to be made.

Organizational change is not just led from the top but must be embraced by the community at large. Practitioners must influence this process by promoting the idea of change from the bottom up. But to make this work, visibility is required. So, in addition to the business case, there was a consensus in the workshop that we have to roll out the message that looking after the human factors is good business; it increases safety and improves operational performance. We can do this by sharing our experiences at meetings and workshops, by publishing papers, and by paying attention to human factors during routine operations.

References

Dekker, S. 2011. Drift Into Failure: From Hunting Broken Components to Understanding Complex Systems. Ashgate Publishing.

Energy Institute. 2014. Guidance on Crew Resource Management. The Energy Institute, London.

Flin, R., O’Connor, P., and Crichton, M. 2008. Safety at the Sharp End: A Guide to Non-Technical Skills. Ashgate Publishing.

NSOAF. 2013. Multi-National Audit: Human and Organisational Factors in Well Control. North Sea Offshore Authorities Forum. http://www.hse.gov.uk/offshore/nsoaf.pdf.

OGP. 2012. Cognitive Issues Associated With Process Safety and Environmental Incidents. Report No. 460. Human Factors Subcommittee, International Association of Oil and Gas Producers. www.ogp.org.uk.

OGP. 2014a. Crew Resource Management for Well Operations Teams. Report No. 501. Human Factors Subcommittee, International Association of Oil and Gas Producers. www.ogp.org.uk.

OGP. 2014b. Well Operations Crew Resource Management Recommended Practice. Report No. 502. Human Factors Subcommittee, International Association of Oil and Gas Producers. www.ogp.org.uk.

Sonneman, P. 1992. The Psychology of Well Control. Paper presented at IADC Well Control Conference of the Americas, Houston, Texas, 18–19 November.

SPE. 2014. The Human Factor: Process Safety and Culture. Paper SPE 170575-TR, Society of Petroleum Engineers.

Thorogood, J.L. 2012. Is There a Place for High Reliability Organisations in Drilling? Paper SPE 151338-PA presented at the SPE/IADC Drilling Conference and Exhibition.

Thorogood, J.L. and Crichton M.T. 2014. Threat and Error Management: The Connection Between Process Safety and Practical Action at the Worksite. Paper SPE 167967-PA presented at the IADC/SPE Drilling Conference and Exhibition, Fort Worth, 4–6 March.

Thorogood, J.L., Crichton, M.T., and Bahamondes, A. 2014. Case Study of Weak Signals and Confirmation Bias in Drilling Operations. Paper SPE 168047-PA presented at IADC/SPE Drilling Conference, Fort Worth, March.

Weick, K. and Sutcliffe, K.M. 2007. Managing the Unexpected: Resilient Performance in an Age of Uncertainty. San Francisco, California: Jossey-Bass.

Wilson, J.A. and Stanton, N.A. 2004. Safety and Performance Enhancement in Drilling Operations by Human Factors. UK Health and Safety Executive Research Report 264.

This article is a summary of paper SPE/IADC 173104, prepared for presentation at the SPE/IADC Drilling Conference and Exhibition held in London, United Kingdom, 17–19 March 2015.