
Journal of Law, Information and Science (JLIS)

Hallinan, Jennifer --- "Human Factors in Computer Security: A Review" [1993] JlLawInfoSci 8; (1993) 4(1) Journal of Law, Information and Science 94

Human Factors in Computer Security:
A Review

by

Jennifer Hallinan[*]

Abstract

With the increasing use of computers, both large and small, in businesses of all kinds, the topic of computer security is becoming of increasing importance to most organisations. Although there is a considerable body of literature on the technical aspects of computer security, for most organisations the main risk to security, and the main defence against data compromise, lies in the people who interact with the computers. In this paper, a number of categories of computer user are considered and assessed in terms of the risk they pose to computer security and their potential role in protection of data.

We also consider the importance of psychological factors related to computer use. At the individual level, factors such as human cognitive function and user interface design combine to affect the way in which people use computers, and their enjoyment or otherwise of the experience. Similar factors also come into play at the organisational level, where subtle influences such as the "psychological climate" of the organisation and its informal behavioural norms affect the way in which its members act. The process of organisational change management, such as the introduction of new security policies, requires that such factors be taken into consideration if it is to be effective.

The third area reviewed here is that of security and user education policy formulation. While it is generally agreed that such policies should exist, in a formal and readily accessible format, their usefulness will be affected by the balance they achieve between the needs of individual workers, and the organisation's requirement for computer security.

___________________________

What is Computer Security?

Mention computer security and the average, television-trained imagination immediately visualises spotty-faced teenage "hackers" starting World War III, or industrial spies in fedora hats with briefcases full of secret codes. In the more mundane world of the computer security literature, however, there are a number of recognised threats to the security of a computer system which are both less glamorous, and potentially far more serious. Computer security, in its broadest sense, can be taken to mean any measure aimed at ensuring that information is protected from unauthorised access, disclosure, modification, and/or destruction. Statistics in both Australia and overseas have highlighted disgruntled employees and common everyday mistakes as the factors contributing to the largest business losses in this area (Dickie 1991).

Other threats discussed in the literature as significant include: unauthorised actions by authorised persons, and abuse of special privileges by systems programmers and facility operators (Clark & Wilson 1987); the manipulation of input or, less often, output by people working within their level of authority (Chalmers 1986); problems with physical security or personnel policy rather than with computer security per se (Brand 1990); inadequate knowledge of computer-related practices, leading to equipment damage and loss of computer hardware, software or data; and fraud (Adams 1991).

Are these threats significant in the current Australian context? The Australian Computer Abuse Research Bureau, at the RMIT Faculty of Business' Department of Business Information Systems, has collected statistics on computer crime in Australia since 1987. Table 1 depicts the results of their research into the incidence of various categories of "computer abuse" and the cost of each type of incident.

Table 1.

Types of Computer Abuse - Total as of Oct 1991

Category             N   Value¹   $ Value of Loss   Percent of Value²   Average $ Loss³

Fraud              111       80       $13,660,543               80.8%          $170,757
Unauthorised Use    52       17           $57,600                0.3%            $3,388
Hacking             31        4           $18,300                0.1%            $4,575
Information Theft   70       23          $292,490                1.7%           $12,736
Sabotage            17        6          $903,900                5.3%          $150,650
Virus Attack       166        3           $18,770                0.1%            $6,275
Equipment Theft     51       47        $1,955,976               11.6%           $41,616

Total              497      180       $16,908,029                100%           $93,933

Source: Adams (1991)

Notes:

1. Number of cases in which value of loss is known

2. Percentage of total value of all known cases

3. Average amount of loss per case

The most frequent incident was viral attack. This figure has increased considerably in the recent past, as viruses have become more widespread and users have become more competent at detecting infections. However, although viruses account for 33% of incidents, the known cost of virus attack was only 0.1% of the total known costs. This implies that viruses, while widespread, are not a major threat, despite media reports to the contrary. These figures must be regarded as somewhat "woolly", however, as the cost could be estimated in only three of the 166 incidents.

The other media bugbear, "hacking", also appears in practice to be a relatively minor problem. Hacking incidents comprised only 6% of all incidents, and their known cost was 0.1% of total known costs.

In contrast, cases of fraud made up 22% of incidents. Cost was obviously easier to estimate in this category, with 80 of the 111 incidents being valued. The known cost of fraud made up 80.8% of the total known costs.
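These proportions follow directly from the figures in Table 1, and can be recomputed as a check. A minimal sketch in Python (the figures, including the stated totals, are transcribed from the table above; small differences from the quoted percentages are rounding):

```python
# Recompute the shares quoted in the text from Table 1 (ACARB, Oct 1991).
abuse = {
    # category: (incidents, cases_valued, known_loss_in_dollars)
    "Fraud":             (111, 80, 13_660_543),
    "Unauthorised Use":  ( 52, 17,     57_600),
    "Hacking":           ( 31,  4,     18_300),
    "Information Theft": ( 70, 23,    292_490),
    "Sabotage":          ( 17,  6,    903_900),
    "Virus Attack":      (166,  3,     18_770),
    "Equipment Theft":   ( 51, 47,  1_955_976),
}

TOTAL_INCIDENTS = 497          # stated total from the table
TOTAL_KNOWN_LOSS = 16_908_029  # stated total from the table

for name, (n, valued, loss) in abuse.items():
    print(f"{name:18} {n / TOTAL_INCIDENTS:5.1%} of incidents, "
          f"{loss / TOTAL_KNOWN_LOSS:5.1%} of known loss, "
          f"average ${loss / valued:,.0f} over {valued} valued cases")
```

Running this reproduces, for example, fraud's 22% share of incidents against its 80.8% share of known losses, and viral attack's 33% share of incidents against 0.1% of losses.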

Another category which has shown a marked increase in the immediate past is information theft. It is possible that some, at least, of this increase is accounted for by increasing awareness on the part of business managers that information is a resource which can be stolen. It appears to be relatively difficult to assess the cost of information theft, with only 23 of 70 cases being valued. However, Adams (1991) believes that "the strategic significance of this category of computer crime is undoubtedly great".

It is also interesting to note that equipment theft cost companies the second largest amount, after fraud. Again, Brand (1990) believes that "there is no substitute for physical security, and proper separation will require an attacker to compromise physical security in order to penetrate the system".

The major threats, ranked in terms of financial loss, thus appear to be:

1. Fraud

2. Equipment theft

3. Sabotage

4. Information theft

5. Unauthorised use

6. Hacking / Viruses

Who are the Perpetrators?

Table 2 shows the major categories of computer abuse perpetrator in the ACARB dataset, and the estimated value of the losses for which they are responsible.

Table 2.

Major Perpetrator of Computer Abuse - Total to Oct 1991

Category        Count   Value   $ Value of Loss   Percentage of Value   Average $ Loss

EDP Employee       77      44        $2,346,676                 13.9%          $53,333
User Employee      95      39        $9,851,467                 58.3%         $252,602
Consultant          3       1          $994,000                  5.9%         $994,000
Outsider          135      31        $2,362,651                 14.0%          $76,214
Unidentified      187      65        $1,353,236                  8.0%          $20,819

Total             497     180       $16,908,030                  100%          $93,933

Source: Adams (1991)

According to Adams (1991), the high number of Outsider/Unidentified perpetrators is a relatively recent phenomenon, which can be attributed to the increase in virus activity mentioned above. Of those perpetrators identified as user employees, none were programmers, customers, or operators. Employees, both user and EDP, were responsible for just over a third of all incidents recorded, and nearly three-quarters of the identifiable cost of the incidents reported. These figures support the assessments of risk made by authors such as Chalmers (1986) and Brand (1990), who assert that it is users acting within their level of authority who pose the greatest risk to computer security.

Adams (1991) concludes that "from our research we might deduce - the major threat to organisations comes from their employees - principally users". The main risk seems to come from non-technical employees, followed by technical employees, with outsiders a relatively remote threat.

It appears, then, that the major threats to computer security lie within the organisation; specifically, within the personnel of the organisation. The attitude of staff towards computer security is thus, apparently, the key to data protection.

User Perceptions of Risk

The picture painted in the media of threats to computer security tends to be somewhat different from that described above. Some risks, such as system penetration by crackers and the spread of computer viruses, have been widely publicised, and have caused considerable concern among non-technical users. A classic recent example is that of the Michelangelo virus, whose trigger date is March 6th. The ill effects of this program were emphasised world-wide, with the result that many PC owners were afraid to turn their machines on at all on the fateful day in 1992. The resulting loss of work time may well have cost more than the potential damage from the virus; where a machine was not even infected, the lost time was pure cost. Interestingly, there was no corresponding media "blitz" in the weeks before March 6th, 1993, and no subsequent reports of damage caused by the virus.

Very little appears to have been published concerning user perceptions of computer security risks. It is tempting to think that such opinions, particularly among non-technical computer users, must be shaped by media representations, but there is currently little basis for this belief. If this is indeed the case, this would be a significant security risk in itself, since the very people responsible for the security of a system may believe their contribution to be irrelevant.

The People in the Picture

Humans interact with computers in a business environment in a wide variety of ways. Since the role of a computer user in a business has implications for factors such as his degree of technical knowledge, interest in and enjoyment of computers, and attitudes to security issues, I have divided "people" into a number of categories for consideration below.

Computer Users

The vast majority of humans in a business can be classified as end users. While they may well be knowledgeable and enthusiastic about computers, there is usually no requirement for them to know more than how to start and run one or more canned applications. The majority of users today probably use desktop PCs; some may use terminals to a mainframe or minicomputer.

It is widely accepted that non-technical users in general tend to be unenthusiastic about computer security practices. For example, Chalmers (1986) asserts that "most users are not sufficiently motivated to go to the trouble of selecting and remembering good passwords"; Armstrong (1992) believes that "the most serious threat to company information comes from untrained or careless users"; and Menkus (1989) tells us that "computer security managers report consistently that employees are reluctant to treat the organisation's information holdings as assets that should be protected". (See also Zajac 1988; Fites et al. 1989; and Pfleeger 1989). Furthermore, Peltier (1992) asserts that there is evidence of a marked decrease in company loyalty among employees (in the US) in the last several years. If true, this implies that such disaffected employees are unlikely to value their employers' data highly.

One reason for this may be the widely-held view of small computers as vehicles for empowerment of the worker. The desktop PC is often seen as liberating workers from the control of a bureaucratic corporate computer centre and placing them in charge of their own work. As Frank et al. (1991) put it "this decentralisation represents a view of job redesign and enhanced quality of working life that advocates increasing worker autonomy as a way of enhancing workers' sense of responsibility, involvement and intrinsic motivation."

This vision has two corollaries. Firstly, users tend to believe that if they create and control a dataset, it is their property, rather than that of the organisation. They may therefore consider that they have the right to share the data as they see fit, and even to take it with them upon leaving a company, as the basis for further work (Menkus 1989).

Secondly, however possessive they may be about their own data, many users feel that anyone who works for an organisation should have unlimited access to all the information that it possesses. This is believed to "open up" the computing environment and improve job performance (Menkus 1989; Horey 1992). Both of these beliefs may lead to lax attitudes towards data security.

Another factor which may affect users' attitudes is the nature of information itself. Information is rarely perceived as an asset belonging to a company because it is intangible, hard to value, and does not appear on the profit and loss statement. In addition, use of a company's data is not generally restricted to employees. "Outsiders" such as technicians, vendors and contract employees are frequently given unrestricted access to a company's computers and thus to the data they contain (Menkus 1989). On a more prosaic level, the majority of computer users today are unlikely to be trained typists. Awkwardness with the keyboard may well lead such users to be impatient with security practices, particularly if they are already seen as unnecessary or restrictive.

Given that end users are the primary users of a company's information store, the benefits of having a security-conscious end-user population become apparent. These benefits may include:

• reducing the probability of theft of information by both insiders and outsiders;

• reducing the probability of embezzlement;

• reducing the probability of theft of goods, again by both insiders and outsiders;

• reducing the probability of fraud by vendors, suppliers and contractors;

• reducing the probability of errors and omissions.

(Zajac 1988)

How are these benefits to be realised for a given organisation? Most authors write vaguely of "a good security awareness program", "fostering employee involvement", or similar (Menkus 1989; Zajac 1988; Hollins 1992). Among the more positive suggestions that have been made are:

• support from top management;

• the use of written and signed data security policies, given to employees upon commencing work, and renewed annually;

• actions such as prosecution, termination of employment, or some type of internal disciplinary actions taken against transgressors;

• screening of potential employees.

Since these measures involve specific security policies, they will be discussed in more detail below.

In addition to their role as protectors of data, however reluctant, end users may also pose a threat to data security. With respect to computer crime, Zajac (1990) states that "generally it is people within your organisation and they are doing it for one of three main reasons - money, revenge, or 'intellectual challenge'." Peltier (1992) adds: "The typical computer crook is an employee who is a legitimate and non-technical user of the system." It should be borne in mind that accident and incompetence are probably at least as important as conscious criminal behaviour in causing loss and disclosure of data (Peltier 1992).

Detecting users who are a threat or are likely to become one is not an easy task. This is not a problem unique to computer security, however; employee fraud has existed as long as there have been employees, and the use of computers does not significantly affect the issue in most cases.

Managers

In this context, "managers" refers to the managerial and executive staff of the organisation, rather than to computer managers. Managers come from the ranks of end users and share many of their characteristics. However, managers have added responsibilities with respect to computer security, being ultimately responsible for protecting the information that is generated by or held in their department. Because of their seniority within the organisation, managers may pose security risks different from those of ordinary users.

Managers who are not aware of security issues may pose additional risks to data security (Adams 1991). On multi-user systems managers are often provided with privileged accounts as a sign of status, but they may not be familiar with the security implications of such access. Managers also frequently give the passwords to such accounts to subordinates such as their secretaries, in order for them to perform tasks such as screening electronic mail (Brand 1990). In addition, because of their seniority they can safely ignore recommendations from Information Technology staff pertaining to security issues such as the changing of passwords or file protections (Brand 1990).

There are a number of additional reasons why management needs to be aware of the requirements for computer security. For one thing, it appears that computer security is taken more seriously by the workers in an organisation if the motivation for this comes from the top (Fites et al. 1989; Hollins 1992). No security manager can be expected to operate effectively without an approved policy (Bound 1988). However, management may well be under pressure to reduce costs while maintaining or increasing service levels. This leads to increased dependence upon information technology, and thus to potential problems with audit controls: as fewer workers are required to handle more tasks, adequate separation of responsibilities becomes difficult (Armstrong 1992).

Another consideration for managers is the possibility of corporate or even personal liability for inadequate security (Bloombecker 1990). It has been predicted that, at least in the USA, computer and business managers may be held civilly liable for failure to provide adequate security and controls (Dickie 1991; Peltier 1992). Employees of a company are commonly held to share in the responsibility for asset protection, and evidence that this extends to intangible assets such as information has been provided by a number of United States court cases and labour arbitration proceedings (Menkus 1989).

Programmers / Analysts

Programmers occupy a separate category because of their greater technical knowledge. In a small-to-medium sized business there may be few programmers, with a wide range of duties and responsibilities. Programmers would appear to be well placed, in terms of knowledge and authority, to carry out fraud. Analysts, in addition, are actually required to be familiar with both the computer system and the operation of the company, and thus may pose an even greater risk.

Specific ways in which programmers may be a security risk include practices such as the use of a single password for multiple accounts, the sharing of passwords with other, non-authorised computer professionals, and carelessness with security due to busyness and familiarity with the system. For example, system programmers have been known to create privileged programs as needed and then forget to disable or delete them (Brand 1990; Weinberg 1971).

Programmers and analysts may also be positive factors for security. Security must be designed into computer systems, and this is generally the responsibility of the analyst (Lane 1985). Particularly in a multi-user system, computer staff should also have the knowledge to pick up indications that there is a problem with security in their system, such as the existence of unauthorised accounts or logins, or unusual usage of system resources.
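As a simple illustration of the first of these warning signs, the sketch below compares the accounts present on a system against an authorised roster. It is a minimal sketch only: the account names are hypothetical, and a real check would parse the system's actual account database (e.g. /etc/passwd on a UNIX system).

```python
# Flag accounts present on the system but absent from the
# administrator's authorised roster. Names are hypothetical.
authorised = {"root", "jsmith", "akhan", "payroll"}
on_system = {"root", "jsmith", "akhan", "payroll", "gamez"}

for account in sorted(on_system - authorised):
    print(f"unauthorised account: {account}")
```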

Systems Administrators

In any company, someone must have ultimate responsibility for information technology issues. This may be the systems administrator, computer centre manager, or IT Manager, depending upon the size of the organisation and the type of system installed. He or she will probably be responsible for data security, and possibly for user training; the system administrator is also well placed to carry out fraud.

Many system administrators, particularly of academic mainframes, are not primarily employed in that capacity, and security may not be one of their priorities. ACARB security surveys conducted in Australia in 1986 and 1989 showed that only a small percentage of companies reached what was described as "a minimum objective level of computer security" (10 out of 179 in 1989). However, 50% of computer centre managers believed that their security was "good to excellent" (Adams 1991).

Why does this discrepancy exist? System administrators tend to be busy people, and may be either ignorant of risks, or consider that the risk of attack is not worth the effort of maintaining a secure installation. This effort may be non-trivial. Security must be considered both day-to-day and when installing and upgrading software and operating systems (Barlow 1992). Measures to be taken may include:

• ensuring the physical security of hardware and software;

• the installation of a shadow password scheme to protect encrypted passwords from being copied to a remote site;

• enforcing the choice of hard-to-guess passwords;

• changing passwords on a regular basis;

• disabling unused accounts;

• keeping track of and installing "patches" for security holes in the OS as these become available;

• preparing contingency and disaster recovery plans;

• performing backups.
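As an illustration of one item on this list, the sketch below flags accounts for disabling when they have not been used for a set period. It is a minimal sketch only: the 90-day threshold is an assumed site policy, the records are hypothetical, and a real implementation would draw last-login times from the system's accounting logs.

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=90)   # site policy threshold (assumed)

# Hypothetical last-login records; in practice, parsed from system logs.
last_login = {
    "jsmith":  date(1993, 2, 14),
    "old_tmp": date(1992, 6, 30),  # e.g. an account left over from a contractor
    "payroll": date(1993, 3, 1),
}

today = date(1993, 3, 15)
for account, seen in sorted(last_login.items()):
    if today - seen > STALE_AFTER:
        print(f"disable candidate: {account} (last login {seen})")
```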

On the other hand, administrators who are overly concerned about security may interfere with the actions of legitimate users to a degree which is not warranted by the actual level of security risk. For example, the relatively common practice of disabling login after a set number of failed attempts may cause problems for unskilled users, or lead to a form of denial of service attack if exploited by 'outsiders' wishing merely to cause trouble (Barlow 1992).
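The trade-off Barlow describes can be seen in a minimal sketch of such a lockout policy; the threshold is an assumed policy choice:

```python
# Lock an account after a fixed number of consecutive failed logins.
# The same mechanism lets an outsider deny a legitimate user access
# simply by guessing wrongly on purpose.
MAX_FAILURES = 3  # assumed policy threshold

failures: dict[str, int] = {}
locked: set[str] = set()

def attempt_login(user: str, password_ok: bool) -> str:
    if user in locked:
        return "locked"          # even the correct password is now refused
    if password_ok:
        failures[user] = 0
        return "ok"
    failures[user] = failures.get(user, 0) + 1
    if failures[user] >= MAX_FAILURES:
        locked.add(user)
        return "locked"
    return "retry"

# Three deliberate failures by anyone -- not necessarily the account
# holder -- are enough to shut the real user out:
for _ in range(3):
    attempt_login("jsmith", password_ok=False)
print(attempt_login("jsmith", password_ok=True))   # prints "locked"
```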

Although training is not generally part of the IT function, in a small-to-medium sized organisation the systems administrator is also frequently responsible for enforcing end-user adherence to the company security policy, if such exists. He must balance the benefits of user compliance against the costs of enforcing such compliance and the true extent of the vulnerability (which should previously have been estimated as part of the risk assessment process) (Barlow 1992).

Outsiders

"The stereotype of the computer criminal is a young, bright, white male who commits crimes because he is challenged by difficult computer problems. This represents a very small part of the security problem." (Bloombecker 1989).

However, outsiders may pose a threat to an organisation's computer security in a number of ways. Outsiders may "crack" a system across a network; they may write viruses which affect an organisation's hardware; they may conspire with staff to obtain sensitive information; they may break into a building to steal hardware, software or data. Particularly in the area of computer crime, outsiders may consider themselves 'immune' from retaliation. According to Zajac (1988), a large number of computer criminals still think they will not get caught and even if they are caught, nothing will happen.

Outsiders can thus pose a real risk to computer security. However, for most organisations, this risk is relatively small, and can largely be countered by adherence to physical security measures such as locking offices, backing up data, and restricting access to hardware.

Psychological Factors in Computer Use
General

The psychology of human-computer interaction is currently an active field of study (eg Frese et al. 1987). The experience of using a computer seems to be different from that of using any other machine. The computer, particularly the desktop PC, which has an immediate presence, tends to appear "intelligent" or "intimidating". We are constantly reminded by "experts" that the computer is "just a tool", as if that fact was somehow surprising, and not self-evident.

In my opinion, it is not possible to understand the types of user behaviour described above without considering what is understood about the psychology of computer users, both as individuals and as part of the human-computer duality. Frank et al. (1991) suggested a model of factors influencing PC security-related behaviour. This model postulates a number of factors, both organisational and individual (psychological). Of all these variables, they found that those correlating most positively with security-related behaviour were PC user knowledge and informal organisational norms (part of the psychological climate). Their results must, however, be interpreted with caution, since their sample comprised mainly managerial staff.

In the following sections we will consider firstly components of the psychological climate of an organisation, then elements of the psychology of its employees, both as individuals and in communication with a computer. Lastly, we will briefly consider organisational change management, an entire field of study in itself, and its relation to the implementation of computer security policies.

The Psychological Climate

A new information system which fails to take account of the behavioural factors of its users may be regarded as disappointing, or may even be discarded, despite its technical adequacy. If the needs of the users are ranked below considerations of "efficiency" by the system's designers, the result may be maximally inefficient, in that it is used as little as possible. A widely used but not optimally efficient system is more effective than an elegant, yet unused one (Ahituv & Neumann 1986).

Ahituv and Neumann (1986) discuss the concept of the "psychological climate" of an organisation, which they define as the "psychological factors related to group behaviour which develop out of continuous interactions among the parties involved". They suggest that any attempt to disregard these intangible but influential forces in an organisation is doomed to failure; understanding and proper treatment of psychological climate are vital if information systems are to be useful.

The psychological climate is made up of contributions from all the groups described above. For example, if the end-users consider a system to be awkward to use, these attitudes will quickly spread throughout the organisation. Similarly, if management is dissatisfied with the system, perhaps because too much was expected from it, this attitude will be passed on. IT staff may contribute to a negative psychological climate either by over-enthusiasm about the system, which runs the risk of creating unrealistic expectations among the users, or in contrast by being negative about the system, either through personal dislike of the system, or because of a low opinion of the users' abilities. This may frustrate users who are not encouraged to use the full potential of a system. Ahituv and Neumann (1986) also make the point that the IT staff who are implementing the new system may be relatively new to the organisation, since IT professionals tend to change jobs relatively frequently. Through ignorance they may cause further ill-feeling by being unaware of the political subtleties of a company's operations.

All of this implies that it is difficult to maintain a favourable psychological climate. However, such "informal norms" may be an essential factor in the motivation of staff towards security. Fites et al. (1989) make the point that a key element in security and control is to make sure that the organisational culture includes norms which lead to professional behaviour. This is supported by the research of Frank et al. (1991), who found that, particularly among users with little knowledge of computers, informal behavioural norms are strongly related to security-related behaviour. Furthermore, according to their research, informal norms are even more important when formal policies also exist in an organisation, implying that formal policies can provide a basis for the development of more effective norms.

Human-Computer Interactions
Cognitive Factors

Parasuraman & Igbaria (1990) identify a number of personality variables which affect the manner in which humans interact with computers. These include:

• trait anxiety, a chronic predisposition to be anxious or nervous;

• math anxiety;

• locus of control, perceptions about whether they themselves influence events and outcomes in their lives, or whether they are influenced by factors such as luck, fate, chance, or significant others;

• cognitive style, the characteristic processes individuals exhibit in the acquisition, analysis, evaluation and interpretation of data used in decision making.

Trait anxiety and math anxiety are fairly clear-cut concepts. Locus of control, however, has been the subject of a considerable amount of research (Lefcourt 1981). Locus of control can be depicted as a continuum, with "internals", who believe that their own traits determine what happens in a given situation, at one end, and "externals", who feel that they are at the mercy of outside forces, at the other. Generally, internals have been found to perform better at tasks which require complex information processing and complex learning (Miner 1988). Not surprisingly, Parasuraman & Igbaria (1990) found that individuals who demonstrate a high level of trait anxiety or math anxiety, or an external locus of control, are likely to feel uncomfortable with computers.

Cognitive style is more difficult to quantify than the other traits, and its implications for computer use are less clear-cut. Amongst their subjects, these researchers identified a condition they call "computer anxiety", which they define as "the tendency of individuals to be uneasy, apprehensive or fearful of current or future use of computers" (Parasuraman & Igbaria 1990). They looked at the correlation of sex with computer anxiety in a sample of managers, and found that men and women in managerial positions do not differ in the level of computer anxiety reported, and are very similar in their attitudes towards microcomputers.

Previous studies have found gender differences in several facets of involvement with microcomputers, with men generally having more positive attitudes towards using computers both at work and as a recreation. Parasuraman & Igbaria (1990) suggest that these findings may be affected by the fact that the women in such studies tend to be less well educated and have less status than the men, thus predisposing the female sample to factors such as math anxiety.

Computer anxiety among managers is likely to contribute in a significant manner towards a negative psychological climate towards computers in an organisation.

As mentioned above, the subject of individual cognitive style and its influence on human-computer interaction has been the topic of considerable research (eg Hoang 1990; Kitajima 1989; Card et al. 1983). A number of approaches have been tried in an attempt to identify and quantify significant factors. Such research normally involves the calculation of the effect of human factors in existing systems through the use of observation techniques and action control mechanisms (Hoang 1990).

Such research, while interesting, has at present little practical application to the problem of increasing computer security. Studies of computer users may eventually lead to further insights both into the nature of human-computer interactions, and into the cognitive function of the users themselves, but at present this is a relatively new and theoretical field of research.

Interface Factors

Another active area of research is that of user interface design. The design of the human-computer interface has been found to significantly affect the response of users to a computer system (Card et al. 1983). This work has implications for the acceptance by users of security measures: the ease of use of the human-computer interface may well affect the user's willingness to comply with policies as laid down. In this section I will first look at the work that has been done in this area in general terms, and then attempt to relate it to the issue of user compliance with security policy.

Ahituv and Neumann (1986) identify user-friendly software as that which uses prevailing terms, does not require the keying-in of obscure characters in a rigid sequence, allows the user to select routines by means of a menu, contains help options and provides for many sorts of output. This definition comprises a number of factors which have been identified as important in the human-computer interface. Many of these factors are directly related to increasing the user's sense of self-confidence as he uses the system. It is worth noting, however, that Frank et al. (1991) believe that self-confidence in PC users may lead them to "rationalise away cautionary actions."

One of the most significant factors in the human-computer interface is feedback: users feel more comfortable if their actions provoke a response. A feedback mechanism reinforces the user's previous understanding of an event, and increases the user's confidence in the system. That is why online information systems terminals generally display a confirmation signal upon completion of the keying of an input record. No new information is being provided - the user already knows that he has entered the information - but now the user is confident that the computer has "understood" his intentions (Frese et al. 1987).

It is easier for people to absorb and understand data when it is presented in a familiar context and format. This means that conventions should be followed in, for example, the use of prevailing terms and jargon and common practices of writing (Ahituv & Neumann 1986). Some of these conventions may be society-wide (for example, the division of text into paragraphs and sections), while others may be organisation-specific (for example the layout and format of those paragraphs). Some degree of customisation for a particular environment may thus be useful in increasing user acceptance of a particular system.

For users experienced in the use of computers, this principle of familiarity also implies the use of consistent and conventional keystroke assignments. This is generally more difficult to achieve, since it is decided by the original designer of the product (at least in PC-based products). In this context, the increasing popularity of Graphical User Interface products such as Microsoft Windows and the Apple Macintosh interface may be related, in part, to the standardisation of interface which they provide.

Users also gain self-confidence from having a measure of control over the information presented to them by a system. "Information overload" is a well-known phenomenon in psychology, and has been shown to reduce user productivity, while increasing fatigue and dissatisfaction with the system (Ahituv & Neumann 1986). Many techniques are available for attracting the user's attention to important information - colouring, asterisks, blinks, reverse video, and sound are all widely used. However, a balance must be struck between attracting the user's attention and overwhelming him. The over-use of such techniques is merely confusing.

Acceptance of a system is also increased if the user can customise, as far as possible, the type of information presented to him, and the format in which it is displayed. In current systems this sort of facility is generally only available in "top-end" products, such as executive information systems, some of which make a selling-point of catering to the individual cognitive style of the user.

Users evaluate a system according to the direct benefits they gain from it. If they feel that the system contributes to their competence in task performance, they will tend to use it more often, and even experiment with its capabilities, while a difficult system will be used as little as possible (Ahituv & Neumann 1986). However, as discussed above, such judgements tend to be highly subjective. Unfortunately, the benefits arising from a security package are likely to be underestimated by users who base their assessment only on interface factors. The risk of not using the package may seem small in the light of the generally inflexible and time-consuming interface provided.

Furthermore, with security systems the objective of giving a user as much control over the interface as possible may not be feasible. The amount and type of information presented will be relatively inflexible. However, such information will also be relatively small in amount, so information overload is unlikely to be a problem for computer security systems.

Change Management

The implementation of security systems in an organisation which has previously not had stated policies in this area will require careful change management. Robbins (1986) identifies two factors which make people likely to resist change: loss of the known and tried, or concern over personal loss. In the area of information systems both of these factors are likely to come into play. Familiar systems are to be replaced by an unknown quantity, and the users may well lose both a degree of control, and in the case of security systems, convenience. Resistance to change is a dominant factor in nearly all information systems implementation. It is detected among IT professionals as well, particularly if they feel their status is being eroded (Ahituv & Neumann 1986).

Change management involves three phases:

• Unfreezing: disturbing the current equilibrium and introducing the concept of the need for change;

• Moving: presenting the new system and conducting a learning process until the relevant material is completely understood;

• Refreezing: integrating the change with existing behavioural frameworks to recreate a whole, natural entity.

Resistance to change reflects an incomplete or unsuccessful unfreezing (Ahituv & Neumann 1986).

The best way to deal with resistance is to avert it rather than to fight it after it has arisen. Ahituv and Neumann (1986) suggest a number of methods for anticipating and defusing resistance to change. They include:

• pointing out problems with the existing system;

• explanations of the negative consequences of not changing the system;

• full explanation of the operation and advantages of the new system;

• maintaining two-way communication with users and incorporation of their ideas so that users feel they have contributed to the success of the new system;

• frequent consultation with the users.

The key to these changes is obviously user education. This may be relatively straightforward in a situation where the benefits of the change are reasonably obvious, such as the introduction of a new computerised system to replace a current manual one. Although users may be apprehensive about their ability to handle the new technology, and reluctant to make the effort to change from a familiar, working system, the benefits of the change are quantifiable and explainable.

In contrast, the benefits of introducing security policies may be harder to demonstrate, particularly in an organisation where there have not previously been any incidents of data loss to motivate users who will be inconvenienced by the introduction of the new system. Security-related activities are preventative in nature, and generally do not produce any visible, rewarding, or positive results (Frank et al. 1991). In addition, it will probably not be permissible to follow Ahituv and Neumann's recommendation of "intense user involvement during system design".

The Importance of Security Policy

Having considered some of the factors motivating humans with regard to computer use, we are in a position to place these factors in the context of stated organisational security policy. As has previously been mentioned, policy must be made at the highest level of the organisation in order for it to carry authority at the lower levels. What types of security policy are likely to be decided upon, and how are users likely to react to them?

General

The fundamental rule of security is that the level of security should match the degree of risk faced by the organisation (Armstrong 1992). The degree of risk depends upon both the potential exposure of the organisation and the value of the data it is protecting. For most businesses the degree of risk will be moderate: the data to be protected is more likely to be confidential than highly sensitive (eg. more likely to be personnel records than international funds transfer records). Obviously, organisations such as banks and military establishments will have highly specific security requirements, which often require users' behaviour to be strictly controlled.
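One common way of making this matching concrete - an illustration assumed here, not a method put forward in the sources above - is an annualised loss expectancy calculation: the expected yearly loss from a threat is its estimated frequency multiplied by the cost per incident, and a control is only justified if the loss it prevents exceeds its own annual cost. In the sketch below, the per-incident figure reuses the average fraud loss from Table 1; the frequency estimates and control cost are hypothetical.

```python
# Annualised loss expectancy (ALE): expected yearly loss from a threat.
def ale(incidents_per_year: float, loss_per_incident: float) -> float:
    return incidents_per_year * loss_per_incident

loss_per_incident = 170_757           # average fraud loss, Table 1
before = ale(0.5, loss_per_incident)  # est. one incident every two years
after = ale(0.1, loss_per_incident)   # est. frequency with the control
control_cost = 20_000                 # yearly cost of the control (assumed)

saving = before - after
print(f"expected saving ${saving:,.0f}/yr against cost ${control_cost:,}/yr")
print("control justified" if saving > control_cost else "control not justified")
```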

It has been suggested (Frank et al. 1991) that it should be possible to remotely monitor employees' use of their personal computers using appropriate software across a LAN, or by the use of "identification" software on a stand-alone PC. Such software would recognise a user by biometric or behavioural means - for example, some work has been done on the identification of individuals by their typing rhythms (eg. Newberry 1990). It has been reported that such surveillance has been implemented in large American companies. However, in a small or moderate-sized organisation this would appear to be overkill. The cost of such a policy in terms of money, time spent in surveillance, and loss of employee trust would be difficult to justify.
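The flavour of such identification can be conveyed by a deliberately naive sketch: characterise a user by the mean and spread of their inter-keystroke intervals, then flag a session whose timing falls too far from the stored profile. Real systems, including the work cited above, use richer features such as per-digraph timings; the thresholds and sample data here are hypothetical.

```python
import statistics

# Enrol a user from a sample of inter-keystroke intervals (milliseconds).
def profile(intervals_ms: list[float]) -> tuple[float, float]:
    return statistics.mean(intervals_ms), statistics.stdev(intervals_ms)

enrolled_mean, enrolled_sd = profile([142, 150, 138, 160, 149, 151, 144])

# Accept a session if its mean interval lies within a tolerance band
# around the enrolled profile (2 standard deviations, assumed).
def matches(session_ms: list[float], tolerance_sd: float = 2.0) -> bool:
    return abs(statistics.mean(session_ms) - enrolled_mean) <= tolerance_sd * enrolled_sd

print(matches([148, 143, 155, 150, 146]))   # similar rhythm -> True
print(matches([85, 92, 80, 88, 95]))        # much faster typist -> False
```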

Pre-Employment Screening

Criminal behaviour involves three basic elements - dishonesty, opportunity, and motive. (Fites et al. 1989). Employee selection aims to control the former, administrative controls the latter two. Since these cannot be fully controlled it is also necessary to have controls for preventing, detecting, and reacting to incidents.

Pre-employment screening, in terms of background checks plus psychological testing for honesty and emotional stability, has not been widely practised in Australia by private companies. It has been more popular in the USA and Europe, but its efficacy and cost-effectiveness are under question (Bologna 1988). For a company whose data is not highly sensitive, exhaustive background checks are often too expensive and unreliable to be warranted. Interestingly, Fites et al. (1989) estimated that 10-30% of resumes contain misrepresentations or actual falsehoods, most commonly about certificates and degrees. They do not, however, indicate the type of population from which their sample was drawn.

Bologna (1988) suggests that the development (education) of personnel, rather than selection, will become the primary goal of employers over time. More time and money will be spent on training and development rather than on recruitment and selection. He believes that employers are becoming concerned about the validity, reliability and legality of psychological testing tools in a time of litigiousness and equal employment rights (Bologna 1988).

Written Security Policies

One suggestion frequently put forward with regard to computer security is the incorporation, into the contract or agreement governing the employee-employer relationship, of an obligation to comply with the organisation's policy statements on data security maintenance and non-disclosure of confidential information (Menkus 1989; Fites et al. 1989). Upon commencement of work with a company following this policy, a new employee is provided with a written security policy. He is required to read it and sign a statement to the effect that he has understood and will comply with the policy. This may be reaffirmed annually.

As well as ensuring that employees understand the organisation's attitude towards data security, this document may help to guard the company against liability for misuse of information. Several model data security policies have been published, ranging from extremely comprehensive and severe (eg. Fites et al. 1989) to gentle suggestion.

Employee Termination

The question of management of employee termination is another source of controversy. Authors such as Fites et al. (1989) suggest that all company ID (business cards, etc) should be collected, codes and passwords collected, locks and codes to which the employee had access changed immediately, all the employee's financial accounts settled, and other staff members informed. Should the termination be involuntary, all access to sensitive resources should be denied immediately and the employee escorted off the premises. While such measures make sense in terms of the departing employee, the authors do not comment on the effect of such measures upon the psychological climate of the organisation, which is presumably being cultivated so carefully.

Training in Computer Security

Training should not be a primary responsibility of either the security or the Information Technology departments of an organisation (Fites et al. 1989). However, user practices are closely allied to training, so the security manager must, perforce, take an interest in training. In addition, a small to medium-sized business may not be able to afford a full-time staff training officer, in which case the task will undoubtedly fall to whoever has the required expertise. This may well be the IT staff.

From the security point of view, the aim of a training program must be to raise user awareness of security issues, policies and practices. Dickie (1991) suggests that the prerequisites for awareness can include:

• a sound data security policy;

• support from senior management;

• a well-organised awareness program to sell security;

• a practical security service;

• listening to the users.

There are two kinds of training which are significant from the security standpoint: orientation training and skill development. Orientation is training whose intent is to set a climate, convey initial information about a situation, and so on, and is probably more relevant to security implementation than skills training (Fites et al. 1989). After initial training is complete, security and control principles suggest continual reinforcement of the message already conveyed. In the context of computer security, training is probably most important in relation to change management and employee orientation, as has already been discussed.

Problems In Practice - The Password Issue

The above discussion may be illustrated by a consideration of the problem of password control. This topic is the source of considerable controversy among the security community. Menkus (1988) claims that "a password shares one thing in common with the conventional lock - it is only effective with honest people", while Chalmers (1986) points out that passwords alone are not enough to ensure a secure system. Despite this, passwords have been described as "the single most important topic in incident prevention" (Brand 1990), and are widely used in computer security systems, particularly on multi-user systems. They are becoming more common on PC-based stand-alone or networked systems, as security packages for such systems become more widely available.

If users are allowed to choose their own passwords, a significant proportion will choose easily guessed (weak) combinations (Spafford 1992). An account where the user name is the same as the password is said to be the single most common cause of password problems (Brand 1990). If regular password ageing is enforced, many people will alternate between a small number of easily-remembered passwords. For many users the inconvenience of using a password outweighs what they perceive as the minor risk of their system being cracked, and giving such users greater control over password choice will result in less security, rather than more.
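The danger of such choices is easy to demonstrate. In the sketch below, an attacker tries each user name as its own password before anything else; SHA-256 stands in for the period's crypt(3) hashing purely for illustration, and the accounts are hypothetical.

```python
import hashlib

def h(pw: str) -> str:
    # Stand-in hash for illustration; real systems of the period used crypt(3).
    return hashlib.sha256(pw.encode()).hexdigest()

stored = {                        # user -> stored password hash (hypothetical)
    "jsmith": h("jsmith"),        # password identical to the user name
    "akhan":  h("Tr4vel!ogue"),
}

guesses = ["password", "secret"]
for user, stored_hash in stored.items():
    for guess in [user] + guesses:   # try the user name itself first
        if h(guess) == stored_hash:
            print(f"cracked: {user!r} -> {guess!r}")
            break
```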

To avoid the selection of easily-guessed passwords such as English words and names, it is sometimes suggested that users be allowed to choose their own passwords, but that an algorithm be suggested for generating them. However, if a single algorithm is used throughout an organisation, possession of this information by an outsider significantly reduces the effort needed to crack the system. It is generally conceded that the use of password algorithms is not a good idea (Spafford 1992).

If control over the password is completely taken from the user, and a machine-generated random string of characters is assigned as a password, many users will write it down, and keep it close to their terminal. This is traditionally regarded as a significant breach of security. Brand (1990) considers that machine generated passwords are generally a bad solution to the password problem. However, the same author has suggested that a secure password which is written down and kept somewhere safe, such as a wallet, may be more secure than an easily-guessed password which is not written down.

Password management programs such as npasswd on UNIX systems represent an attempt to leave the user some control over password choice. Passwords chosen by the user are rejected if they do not fulfil criteria such as a minimum number of characters, the inclusion of at least one non-letter character, and so on. By preventing a user from selecting a poor password in the first place, no administrative procedure is needed to get him to change it later; it all happens automatically, with no human intervention and no apparent accountability (Brand 1990). This may help to maintain a good user-administrator relationship, which may in turn lead to users being more receptive to other security directives.
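A minimal sketch of this style of proactive checking is given below; the specific rules are illustrative only, not npasswd's actual rule set.

```python
MIN_LENGTH = 8  # assumed site minimum

def acceptable(candidate: str, username: str) -> tuple[bool, str]:
    """Return (accepted, reason) for a candidate password."""
    if len(candidate) < MIN_LENGTH:
        return False, f"must be at least {MIN_LENGTH} characters"
    if candidate.isalpha():
        return False, "must contain at least one non-letter character"
    if username.lower() in candidate.lower():
        return False, "must not contain the user name"
    return True, "ok"

for pw in ["short", "allletters", "jsmith99!", "r3d-Bicycle"]:
    print(pw, "->", acceptable(pw, "jsmith"))
```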

Password control is a matter of ongoing debate amongst the security community. It is a classic example of the clash between the needs of the company - for data security through secure passwords - and the needs of users - for convenience, ease of use, and room for error.

Conclusions

This paper has reviewed what is known about the types of people who are important to computer security. For convenience, they have been divided into a number of categories - users, managers, programmers/analysts, systems administrators and outsiders - and their roles as security risks and as security assets have been considered individually.

It is apparent from recent Australian research that the primary threat to a company's data security comes from the perpetration of fraud by its own employees. The risks that are widely discussed in the media are in reality relatively minor in most cases, and it seems reasonable that this discrepancy might cause confusion in the minds of users, although little has been published in this area.

It is generally agreed that users are reluctant to implement security measures. The reasons for this may lie in the organisational culture, the cognitive and risk-perception processes of individuals, the design of security software in terms of its human-computer interface or, more probably, combinations of these factors.

There is currently no clear rationale for the selection of security policies by a company. Although the behaviours necessary for adequate security may be easily determined, policy should be framed and implemented in such a manner that these desired behaviours are encouraged. At present, little consideration is given to this aspect of the problem. While it appears that factors such as the computer-literacy of the users, organisational norms and stated policies are important in affecting user behaviour, little work has yet been done in this area.

Computer security is an issue which will continue to grow in importance in the foreseeable future, with the proliferation of low-cost, high-power PCs throughout organisations. The most important factor in computer security for most companies is the humans who work with the computers; an understanding of the behaviours and motivations of these people with regard to security is essential to the formulation of effective security policy.

References

1. Adams, T.(1991). "Information Security, not just Computer Security, is the Real Issue". Conference on Combating Computer Fraud and Improving Information Systems Security. 2&3 December 1991, Sydney.

2. Ahituv, N. & Neumann, S. (1986). "Psychological and Behavioural Aspects of Information Systems." In Principles of Information Systems for Management. Wm. C. Brown.

3. Armstrong, T. (1992). "Computer Security: An IT Issue for the '90s." PC Week, 26 February 1992, p.29

4. Barlow, J. (1992). Personal Communication.

5. Bloombecker, J. J. (1989). Interview. Corporate Crime Reporter; April 10.

6. Bloombecker, J. J. (1990). "People are the Security Problem, not Computers." Interview in Computer Technology Review, June 1990 pp 8-9.

7. Bologna, J. (1988). "Selection Risks in Hiring Information Systems Personnel." Computers and Security 7:353-355.

8. Bound, W. A. J. (1988). "Discussing Security with Top Management." Computers and Security 7:129-138.

9. Brand, R. L. (1990). "Coping with the Threat of Computer Security Incidents: a Primer from Prevention through Recovery." Manuscript.

10. Card, S. K., Moran, T. P. & Newell, A. (1983). The Psychology of Human-Computer Interaction. Lawrence Erlbaum Associates: Hillsdale, NJ.

11. Chalmers, L. (1986). "An Analysis of the Differences Between the Computer Security Practices in the Military and Private Sectors." Proceedings of the 1986 IEEE Symposium on Security and Privacy, April 7-9,1986, Oakland, California.

12. Clark, D. D. & Wilson, D. R. (1987). "A Comparison of Commercial and Military Computer Security Policies." Proceedings of the 1987 IEEE Symposium on Security and Privacy, April 27 - 29, 1987, Oakland California

13. Dickie, J. (1991). "Countering Threats to Mainframe Security." Conference on Combating Computer Fraud and Improving Information Systems Security. 2&3 December 1991, Sydney.

14. Fites, P. E., Kratz, M. P. J. & Brebner, A. F. (1989). Control and Security of Computer Information Systems. Computer Science Press: Rockville MD.

15. Frank, J., Shamir, B. & Briggs, W. (1991). "Security-Related Behaviour of PC Users in Organisations." Information & Management 21:127-135.

16. Frese, M., Ulich, E. & Dzida, W. (eds) (1987). Psychological Issues of Human-Computer Interaction in the Work Place. North-Holland: Amsterdam.

17. Hoang, T. H. (1990). "Human Factors in Decision and Control: Nonstandard Modeling and Information Processing." Information Sciences 51: 13-59.

18. Hollins, J. D. (1992). "Policy Implementation at W H Smith's." Computer Fraud and Security Bulletin, April 1992, 13-16.

19. Horey, J. (1992). "Two Bits Worth." Australian Personal Computer, March 1992, p.50.

20. Kitajima, M. (1989). "A Formal Representation System for the Human-Computer Interaction Process". Int. J. Man-Machine Studies 30:669-696

21. Lane, V. P. (1985). Security of Computer Based Information Systems. MacMillan: London.

22. Lefcourt, H. M. (1981). Research with the Locus of Control Construct. Vol. 1: Assessment Methods. Academic Press: New York.

23. Menkus, B. (1988). "Understanding the Use of Passwords." Computers and Security 7:132-133.

24. Menkus, B. (1989). "The Employee's Role in Protecting Information Assets." Computers and Security, 8:487-492.

25. Miner, J. B. (1988). Organizational Behaviour. Random House: New York.

26. Newberry, M. (1990). The Psychology of Typing and its Application to Computer Security. University College, The University of New South Wales, Department of Computer Science Technical Report CS90/32.

27. Parasuraman, S. & Igbaria, M. (1990). "An Examination of Gender Differences in the Determinants of Computer Anxiety and Attitudes towards Microcomputers among Managers." Int.J.Man-Machine Studies 32:327-340.

28. Peltier, T. R. (1992). "Selling IS to the Employees." Computer Fraud and Security Bulletin, February 1992, 10-17.

29. Pfleeger, C. P. (1989). Security in Computing. Prentice-Hall International: Englewood Cliffs, NJ.

30. Robbins, S. P. (1986). Organizational Behaviour: Concepts, Controversies and Applications. Prentice-Hall International: Englewood Cliffs, NJ.

31. Spafford, E. H. (1992). "OPUS: Preventing Weak Password Choices." Computers and Security 11:273-278.

32. Weinberg, G. M. (1971). The Psychology of Computer Programming. Van Nostrand Reinhold Company Inc: New York.

33. Zajac, B. P. (1988). "Personnel: The Other Half of Data Security." Computers and Security 7:131-132.

34. Zajac, B. P. (1990). "People: The "Other" Half of Computer Security." Computers and Security 9: 301-303.


[*] BSc (Hons) (ANU); Grad Dip Computing (Deakin); MInfSc (UNSW). Formerly Information Technology Officer, Australian Institute of Criminology; currently PhD candidate, Research School of Biological Sciences, Australian National University.

