A Technical Analysis of Ergonomics and Human Factors in Modern Flight Deck Design
I. Introduction
Since the dawn of the aviation era, cockpit design has become
increasingly complicated owing to the advent of new technologies enabling
aircraft to fly farther, faster, and more efficiently than ever before. With
greater workloads imposed on pilots as fleets modernize, the risk of a pilot
exceeding his or her workload limit has become real. Because of the
unpredictable nature of man, this problem is impossible to eliminate completely.
However, the frequency of such occurrences can be drastically reduced by examining the
nature of man, how he operates in the cockpit, and what must be done by
engineers to design a system in which man and machine are ideally interfaced.
The latter point involves an in-depth analysis of system design with an emphasis
on human factors, biomechanics, cockpit controls, and display systems. By
analyzing these components of cockpit design, and determining which variables of
each will yield the fewest errors, a system can be designed in which the
Liveware-Hardware interface can promote safety and reduce mishap frequency.
II. The History Of Human Factors in Cockpit Design
The history of cockpit design can be traced as far back as the first
balloon flights, where a barometer was used to measure altitude. The Wright
brothers incorporated a string attached to the aircraft to indicate slips and
skids (Hawkins, 241). However, the first real efforts towards human factors
implementation in cockpit design began in the early 1930’s. During this time,
the United States Postal Service began flying aircraft in all-weather missions
(Kane, 4:9). The greater reliance on instrumentation raised the question of
where to put each display and control. However, not much attention was being
focused on this area as engineers cared more about getting the instrument in the
cockpit than about how it would interface with the pilot (Sanders & McCormick,
739).
In the mid- to late 1930’s, the development of the first gyroscopic
instruments forced engineers to make their first major human factors-related
decision. Rudimentary situation indicators raised concern about whether the
displays should reflect the view as seen from inside the cockpit, having the
horizon move behind a fixed miniature airplane, or as it would be seen from
outside the aircraft. Until the end of World War II, aircraft were manufactured
using both types of display. This caused confusion among pilots who were
familiar with one type of display and were flying an aircraft with the other.
Several incidents were attributed to this confusion, none of which were
fatal (Fitts, 20-21).
Shortly after World War II, aircraft cockpits were standardized to the
'six-pack' configuration, a collection of the six critical flight instruments
arranged in two rows of three directly in front of the pilot. Reading across
from the upper left, they were the airspeed indicator, artificial horizon, and
altimeter in the top row, with the turn coordinator, heading indicator, and
vertical speed indicator below. This arrangement of instruments provided easy transition
training for pilots going from one aircraft to another. In addition, instrument
scanning was enhanced, because the instruments were strategically placed so the
pilot could reference each instrument against the artificial horizon in a hub
and spoke method (Fitts, 26-30).
Since then, the bulk of human factors development in the cockpit has
been driven largely by technological achievements. The dramatic increase in the
complexity of aircraft after the dawn of the jet age brought with it a greater
need than ever for automation that exceeded a simple autopilot. Human factors
studies in other industries and within the military paved the way for some of
the most recent technological innovations, such as the glass cockpit, the Head-Up
Display (HUD), and other advanced panel displays. Although these systems are on
the cutting edge of technology, they too are susceptible to design problems,
some of which have contributed to the incidents and accidents discussed in later chapters.
They will be discussed in further detail in another chapter (Hawkins, 249-54).
III. System Design
A design team should support the concept that the pilot’s interface with
the system, including task needs, decision needs, feedback requirements, and
responsibilities, must be a primary consideration in defining the system's
functions and logic, as opposed to the system concept coming first and the user
interface coming later, after the system’s functionality is fully defined.
There are numerous examples where human-centered design principles and
processes could be better applied to improve the design process
and final product. Although manufacturers utilize human factors specialists to
varying degrees, they are typically brought into the design effort in limited
roles or late in the process, after the operational and functional requirements
have been defined (Sanders & McCormick, 727-8). When joining the design process
late, the ability of the human factors specialist to influence the final design
and facilitate incorporation of human-centered design principles is severely
compromised. Human factors should be considered on par with other disciplines
involved in the design process.
The design process can be seen as a six-step process: determining the
objectives and performance specifications, defining the system, basic system
design, interface design, facilitator design, and testing and evaluation of the
system. This model is theoretical; in practice, few design efforts follow every
step completely. Each step directly involves input from human factors
data, and incorporates it in the design philosophy (Bailey, 192-5).
Determining the objectives and performance specifications includes
defining a fundamental purpose of the system, and evaluating what the system
must do to achieve that purpose. This also includes identifying the intended
users of the system and what skills those operators will have. Fundamentally,
this first step provides a broad definition of the activity-based needs the
system must address. The second step, definition of the system, determines the
functions the system must perform to achieve the performance specifications (unlike
the broader purpose-based evaluation in the first step). Here, the human
factors specialists will ensure that functions match the needs of the operator.
During this step, functional flow diagrams can be drafted, but the design team
must keep in mind that only general functions can be listed. More specific
system characteristics are covered in step three, basic system design (Sanders &
McCormick, 728-9).
The basic system design phase determines a number of variables, one of
which is the allocation of functions to Liveware, Hardware, and Software. A
sample allocation model considers five methods: mandatory, balance of value,
utilitarian, affective and cognitive support, and dynamic. Mandatory allocation
is the distribution of tasks based on limitations. There are some tasks which
Liveware is incapable of handling, and likewise with Hardware. Other
considerations with mandatory allocation are laws and environmental restraints.
Balance of value allocation is the theory that each task is evaluated on whether
it is incapable of being done by Liveware or Hardware, is better done by one or
the other, or can be done only by one of them. Utilitarian allocation is based
on economic restraints. With the avionics package in many commercial jets
costing as much as 15% of the overall aircraft price (Hawkins, 243), it would be
very easy for design teams to allocate as many tasks to the operator as possible.
This, in fact, was standard practice before the advent of automation as it
exists today. The antithesis to that philosophy is to automate as many tasks as
possible to relieve pressure on the pilot. Affective and cognitive support
allocation recognizes the unique needs of the Liveware component and assigns
tasks to Hardware to provide as much information and decision-making support as
possible. It also takes into account limitations, such as emotions and stress,
which can impede Liveware performance. Finally, dynamic allocation refers to an
operator-controlled process where the pilot can determine which functions should
be delegated to the machine, and which he or she should control at any time.
Again, this allocation model is only theoretical, and often a design process
will encompass all, or sometimes none of these philosophies (Sanders & McCormick,
730-4).
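To make the allocation framework concrete, the following minimal Python sketch encodes the five methods as labels and applies only the mandatory rule; the task names, capability flags, and fallback behavior are hypothetical illustrations rather than part of any cited model.

    from enum import Enum, auto

    class Component(Enum):
        LIVEWARE = auto()   # the human operator
        HARDWARE = auto()   # the machine or automation

    class AllocationMethod(Enum):
        MANDATORY = auto()                     # dictated by hard limits, law, environment
        BALANCE_OF_VALUE = auto()              # whichever component does the task better
        UTILITARIAN = auto()                   # driven by economic constraints
        AFFECTIVE_COGNITIVE_SUPPORT = auto()   # hardware supports human decision making
        DYNAMIC = auto()                       # operator reallocates tasks in flight

    def allocate(task, method, pilot_capable, machine_capable):
        """Toy allocation rule: mandatory allocation follows capability alone;
        the other methods would need richer inputs (cost, value, workload)."""
        if method is AllocationMethod.MANDATORY:
            if machine_capable and not pilot_capable:
                return Component.HARDWARE
            if pilot_capable and not machine_capable:
                return Component.LIVEWARE
        # Default: leave the task with the operator, as in dynamic allocation.
        return Component.LIVEWARE

    # Hypothetical example: continuous engine-limit monitoring exceeds human
    # vigilance limits, so mandatory allocation assigns it to Hardware.
    print(allocate("monitor engine limits", AllocationMethod.MANDATORY,
                   pilot_capable=False, machine_capable=True))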
Basic system design also delegates Liveware performance requirements,
characteristics that the operator must possess for the system to meet design
specifications (such as accuracy, speed, training, proficiency). Once that is
determined, an in-depth task description and analysis is created. This phase is
essential to the human factors interface, because it analyzes the nature of the
task and breaks it down into every step necessary to complete that task. The
steps are further broken down to determine the following criteria: stimulus
required to initiate the step, decision making which must be accomplished (if
any), actions required, information needed, feedback, potential sources of error
and what needs to be done to accomplish successful step completion. Task
analysis is the foremost method of defining the Liveware-Hardware interface. It
is imperative that a cockpit be designed using a process similar to this if it
is to maintain effective communication between the operator and machine (Bailey,
202-6). It is widely accepted that the equipment determines the job. Based on
that assumption, operator participation in this design phase can greatly enhance
job enlargement and enrichment (Sanders & McCormick, 737; Hawkins, 143-4).
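As an illustration of how such a task description might be recorded, the following sketch is a hypothetical Python structure, not drawn from Bailey, that captures the step criteria listed above for a single, made-up gear-retraction step.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TaskStep:
        stimulus: str                                   # what initiates the step
        decisions: List[str] = field(default_factory=list)
        actions: List[str] = field(default_factory=list)
        information_needed: List[str] = field(default_factory=list)
        feedback: str = ""
        error_sources: List[str] = field(default_factory=list)
        completion_requirement: str = ""

    # Hypothetical example step for illustration only.
    gear_retraction = TaskStep(
        stimulus="positive rate of climb confirmed",
        decisions=["is there sufficient runway remaining to land if needed?"],
        actions=["move the gear selector to UP"],
        information_needed=["vertical speed indication", "altimeter trend"],
        feedback="gear position lights extinguish after the transit cycle",
        error_sources=["confusing the gear selector with the flap selector"],
        completion_requirement="gear indicates up and locked",
    )
    print(gear_retraction.error_sources)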
Interface design, the fourth process in the design model, analyzes the
interfaces between all components of the SHEL model, with an emphasis on the
human factors role in gathering and interpreting data. During this stage,
evaluations are made of suggested designs, human factors data is gathered (such
as statistical data on body dimensions), and any gathered data is applied. Any
application of data goes through a sub-process that determines the data’s
practical significance, its interface with the environment, the risks of
implementation, and any trade-offs involved. The last item in this
phase is conducting Liveware performance studies to determine the capabilities
and limitations of that component in the suggested design. The fifth step in
the design stage is facilitator design. Facilitators are basically Software
elements that enhance the Liveware-Hardware interface, such as operating manuals, placards,
and graphs. Finally, the last design step is to conduct testing of the proposed
design and evaluate the human factors input and interfaces between all
components involved. An application of this process to each system design will
enhance the operator's ability to control the system within desired
specifications. Some of the specific design characteristics can be found in
subsequent chapters.
IV. Biomechanics
In December of 1981, a Piper Comanche aircraft temporarily lost
directional control in gusty conditions that were within the performance limits
of the aircraft. The pilot later reported that with the control column full aft,
he was unable to maintain adequate aileron control because his knees were
interfering with proper control movement (NTSB database). Although this is a
small incident, it should alert engineers to a potential problem area. Probably
the most fundamental, and easiest to quantify, interface in the cockpit is that
between the physical dimensions of the Liveware component and the Hardware
designs that must accommodate them. The design of the workspace has long been
known to alleviate or perpetuate fatigue over long periods of time (Hawkins, 282-3).
These facts indicate a need to discuss the factors involved in workspace design.
When designing a cockpit, the engineer should determine the physical
dimensions of the operator. Given the variable dimensions of the human body, it
is naturally impossible to design a system that will accommodate all users. An
industry standard is to design for the central 95% of the population by
discarding the top and bottom 2.5% of any data set. From this, general design can
be accomplished by incorporating the reach and strength limitations of smaller
people, and the clearance limitations of larger people. Three basic design
philosophies must be adhered to when designing around physical dimensions: reach
and clearance envelopes, user position with respect to the display area, and the
position of the body (Bailey, 273).
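The percentile convention above can be expressed numerically; the short Python sketch below uses a hypothetical, normally distributed reach measurement to show how the 2.5th and 97.5th percentile design limits would be extracted.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical functional-reach measurements for a pilot population, in cm.
    reach_cm = rng.normal(loc=80.0, scale=4.0, size=10_000)

    # Discard the top and bottom 2.5% by taking the central 95% of the data.
    lower, upper = np.percentile(reach_cm, [2.5, 97.5])

    # Reach-limited items (e.g., overhead switches) are placed for the smaller
    # limit; clearances (e.g., knee room) are sized for the larger one.
    print(f"design reach limit:     {lower:.1f} cm (2.5th percentile)")
    print(f"design clearance limit: {upper:.1f} cm (97.5th percentile)")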
Other differences must be taken into account when designing a system,
such as ethnic and gender differences. It is known, for example, that women are,
on average, 7% shorter than men (Pheasant, 44). If the 95th percentile convention
is used, the question arises: on which gender do we base it? One way to express
the comparison is as a ratio, the average female characteristic divided by the
average male characteristic. Although this ratio does not take into account the
possibility of overlap (i.e., the shortest men are likely to be shorter than the
tallest women), that is not an issue in cockpit design (Pheasant, 44). The other
variable, ethnicity, must also be evaluated in system design. Some Asian
populations, for example, have a sitting height almost ten centimeters lower
than that of Europeans (Pheasant, 50). This can raise a potential problem when
designing an instrument panel or windshield.
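A worked example of the female-to-male ratio mentioned above, using hypothetical stature figures rather than Pheasant's data, might look like this:

    # Hypothetical mean statures chosen only to illustrate the 7% difference.
    male_mean_stature_cm = 175.0
    female_mean_stature_cm = male_mean_stature_cm * (1.0 - 0.07)

    ratio = female_mean_stature_cm / male_mean_stature_cm
    print(f"female/male stature ratio: {ratio:.2f}")   # about 0.93

    # One common resolution of the "which gender?" question is to take reach
    # limits from the smaller (female) end of the combined population and
    # clearance limits from the larger (male) end, so both extremes are covered.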
Some design guides have been established to help the engineer with
conceptual problems such as these, but for the most part, systems designers are
limited to data gathered from human factors research (Tillman & Tillman, 80-7).
As one story went, during the final design phase of the Boeing 777, the chairman
of United Airlines was invited to preview it. When he stood up at his first class
seat, his head collided with an overhead baggage rack. Boeing officials were
apologetic, but the engineers were grinning inside. A few months later, the
first 777 to enter service had overhead baggage racks mounted much higher and
less likely to be struck. Unlike this
experience, designing clearances and reach envelopes for a cockpit is too
expensive to be a trial and error venture.
V. Controls
In early 1974, the NTSB released a recommendation to the FAA regarding
control inconsistencies:
“A-74-39. Amend 14 CFR 23 to include specifications for standardizing fuel
selection valve handle designs, displays, and modes of operation” (NTSB
database).
A series of accidents occurred during transition training when pilots moving to
the Beechcraft Bonanza and Baron aircraft confused the flap and gear handles:
“As part of a recently completed special investigation, the safety board
reviewed its files for every inadvertent landing gear retraction accident
between 1975 and 1978. These accidents typically happened because the pilot was
attempting to put the flaps control up after landing, and moved the landing gear
control instead. This inadvertent movement of the landing gear control was often
attributed to the pilot’s being under stress or distracted, and being more
accustomed to flying aircraft in which these two controls were in exactly
opposite locations. Two popular light aircraft, the Beech Bonanza and Baron,
were involved in the majority of these accidents. The Bonanza constituted only
about 30 percent of the active light single-engine aircraft fleet with
retractable landing gear, but was involved in 16 of the 24 accidents suffered by
this category of aircraft. Similarly, the Baron constituted only 16 percent of the
light twin fleet, yet suffered 21 of the 39 such accidents occurring to these
aircraft” (NTSB database).
Like biomechanics, the design of controls is the study of physical relationships
within the Liveware-Hardware interface. However, control design philosophy
tends to be more subtle, and there is slightly more emphasis on psychological
components. A designer determines what kind of control to use in a system only
after the purpose of the system has been established and the operator's needs
and limitations are understood. In general, controls serve one of four actions:
activation, discrete setting, quantitative setting, and continuous control.
Activation controls are those that toggle a system on or off, like a light
switch. Discrete setting switches are variable position switches with three or
more options, such as a fuel selector switch with three settings. Quantitative
setting switches are usually knobs that control a system along a predefined
quantitative dimension, such as a radio tuner or volume control. Continuous
controls are those that require constant operator input, such as a
steering wheel. A control is itself a system, and therefore follows the same
guidelines for system design described above. In general, there are a few
guidelines to control design that are unique to that system. Controls should be
easily identified by color coding, labeling, size and shape coding and location
(Bailey, 258-64). When designing controls, some general principles apply.
Normal requirements for control operation should not exceed the maximum
limitations of the least capable operator. More important controls should be
given placement priority. The neutral position of the controls should
correspond with the operator’s most comfortable position, and full control
deflection should not require an extreme body position (locked legs, or arms).
Controls should be arranged in the most biomechanically efficient manner.
The number of controls should be kept to a minimum to reduce workload, or when
that is not possible, combining activation controls into discrete controls is
preferable. When designing a system, it should be noted that foot control is
stronger, but less accurate than hand control. Continuous control operation
should be distributed around the body, instead of focused on one particular part,
and should be kept as short as possible (Damon, 291-2). Detailed studies have
been conducted on control design, addressing concerns such as the operator's
ability to distinguish one control from another, the size and spacing of
controls, and population stereotypes. It was found that even with vision
available, controls intended to be easily distinguishable were mistaken for one
another (Fitts, 898; Adams, 276).
A study by Jenkins identified a set of control knob shapes that were much less
prone to such errors (Adams, 276-7). Some of these have
been incorporated in aircraft designs as recent as the Boeing 777. Another
study, conducted by Bradley in 1969, revealed that the size and spacing of knobs
were directly related to inadvertent operation: knobs that were too large, too
small, too far apart, or too close together made errors more likely. Bradley
concluded that an edge-to-edge spacing of one inch between half-inch knobs would
yield the fewest inadvertent operations (see the sketch at the end of this
chapter) (Fitts, 901-2; Adams, 278). Population
stereotypes address the issue of how a control should be operated (should a
light switch be moved up, to the left, to the right, or down to turn it on?).
Four advantages follow from a control-display relationship that matches the
population stereotype: decreased reaction time, fewer errors, faster knob
adjustment, and faster learning (Van Cott & Kinkade, 349). When a design
violates these stereotypes, it becomes a great source of error for an operator
who is unfamiliar with the aircraft and under stress. During periods of high
workload, one characteristic of the Liveware component is to revert to what was
first learned (Adams, 279-80). This reversion is what led the Bonanza and Baron
pilots to mistake the gear and flap switches.
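The sketch below, referenced earlier in this chapter, illustrates Bradley's spacing guideline as a simple check; the knob names and panel positions are hypothetical.

    KNOB_DIAMETER_IN = 0.5
    MIN_EDGE_GAP_IN = 1.0   # Bradley's recommended edge-to-edge spacing

    def edge_gap(center_a_in, center_b_in):
        """Edge-to-edge gap between two equal-diameter knobs on one panel row."""
        return abs(center_a_in - center_b_in) - KNOB_DIAMETER_IN

    # Hypothetical panel layout: knob centers measured along the panel, in inches.
    layout = {
        ("COM volume", "NAV volume"): (0.0, 1.2),
        ("NAV volume", "ADF volume"): (1.2, 2.8),
    }

    for (name_a, name_b), (center_a, center_b) in layout.items():
        gap = edge_gap(center_a, center_b)
        verdict = "acceptable" if gap >= MIN_EDGE_GAP_IN else "too close"
        print(f"{name_a} / {name_b}: edge gap {gap:.1f} in -> {verdict}")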
VI. Displays
In late 1986, the NTSB released the following recommendation to the FAA
based on three accidents that had occurred within the preceding two years:
“A-86-105. Issue an Air Carrier Operations Bulletin-Part 135, directing
Principal Operations Inspectors to ensure that commuter air carrier training
programs specifically emphasize the differences existing in cockpit
instrumentation and equipment in the fleet of their commuter operators and that
these training programs cover the human engineering aspects of these differences
and the human performance problems associated with these differences” (NTSB
database).
The instrumentation in a cockpit environment provides the only source of
feedback to the pilot in instrument flying conditions. It is therefore a
critical design area, and special attention must be paid to sound engineering.
There are two basic kinds of instruments that accomplish this
task: symbolic and pictorial instruments. All instruments are coded
representations of what can be found in the real world, but some are more
abstract than others. Symbolic instrumentation is usually more abstract than
pictorial (Adams, 195-6). When designing a cockpit, the first consideration
involves the choice between these two types of instruments. This decision is
based directly on the operational requirements of the system, and the purpose of
the system. Once this has been determined, the next step is to decide what sort
of data is going to be displayed by the system, and choose a specific instrument
accordingly.
Symbolic instrumentation usually displays a combination of four types of
information: quantitative, qualitative, comparison, and check reading (Adams,
197). Quantitative instruments display the numerical value of a variable, which
is best shown using counters or dials with a low degree of curvature. The
preferable orientation of a straight dial would be horizontal, similar to the
heading indicator found in glass cockpits. However, conflicting research has
shown that no loss of accuracy could be noted with high-curvature dials (Murrell,
162). Another experiment showed that moving index displays with a fixed pointer
are more accurate than a moving pointer on a fixed index (Adams, 200-1).
Qualitative reading is the judgment of approximate values, trends, directions,
or rates of change in a variable. This information is displayed when a high level of
accuracy is not required for successful task completion (Adams, 197). A study
conducted by Grether and Connell in 1948 suggested that vertical straight dials
are superior to circular dials because an increase in needle deflection will
always indicate a positive change. However, studies conducted a few years later
argued that a circular dial introduces no ambiguity provided no control inputs
are made. It has also
been suggested that moving pointers along a fixed background are superior to
fixed pointers, but the few errors in reading a directional gyro seem to
disagree with this supposition (Murrell, 163). Comparisons of two readings are
best shown on circular dials with no markings, but if they are necessary, the
markings should not be closer than 10 degrees to each other (Murrell, 163).
Check reading involves verifying whether a change has occurred from the desired
value (Adams, 197). The most efficient instrumentation for this kind of task is
any display with a moving pointer. However, the studies concerning this type of
informational display have only been conducted with a single instrument. It is
not known if this is the most efficient instrument type when the operator is
involved in a quick scan (Murrell, 163-4).
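The findings above can be summarized as a simple lookup; the following sketch is only a condensation of the studies cited in this chapter, not a rule from any single source.

    # Display formats that the cited studies favored for each information type.
    PREFERRED_DISPLAY = {
        "quantitative": "counter, or dial with a low degree of curvature",
        "qualitative": "vertical straight dial or circular dial for trends",
        "comparison": "circular dial with few or no markings (>= 10 degrees apart)",
        "check reading": "any display with a moving pointer",
    }

    def suggest_display(information_type):
        """Return the display format the studies above favored for a reading task."""
        return PREFERRED_DISPLAY.get(information_type.lower(), "unknown information type")

    print(suggest_display("check reading"))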
The pictorial instrument is most efficiently used in situation displays,
such as the attitude indicator or air traffic control radar. In one experiment,
pilots were allowed to use various kinds of situation instruments to tackle a
navigational problem. Their performance was recorded, and the procedure was
repeated using different pilots with only symbolic instruments. Interestingly,
the pilots given the pictorial instrumentation made no navigation errors,
whereas those given the symbolic displays made errors almost ten percent of the
time (Adams, 208-209). Regardless of these results, it has long been known that
the most efficient navigational methods are accomplished by combining the
advantages of these two types of instruments.
VII. Summary
The preceding chapters illustrate design-side techniques that can be
incorporated by engineers to reduce the occurrence of mishaps due to Liveware-
Hardware interface problems. The system design model presented is ideal and
theoretical; to practice it fully would cost corporations more money than they
would save compared with less rigorous methods. However, today's
society seems to be moving towards a global consensus to take safety more
seriously, and perhaps in the future, total human factors optimization will
become manifest. The discussion of biomechanics in chapter four was purposely
broad, because it is such a wide and diverse field. The concepts touched upon
indicate the areas of concern that a designer must address before creating a
cockpit that is ergonomically friendly in the physical sense. Controls and
displays hold a little more relevance, because they are the fundamental control
and feedback devices involved in controlling the aircraft. These were discussed
in greater detail because many of those concepts never reach the conscious mind
of the operator. Although awareness of these factors is not critical to safe
aircraft operation, they do play a vital role in the subconscious mind of the
pilot during critical operational phases under high stress. Because of the
unpredictable nature of man, it would be foolish to assume an environment with
zero tolerance for errors like these, but further investigation into the design
process, biomechanics, and control and display devices may yield greater
insight into causal factors. Armed with this knowledge,
engineers can set out to build aircraft not only to transport people and
material, but also to save lives.