
Sarre, Rick, "The Evaluation of Criminal Justice Initiatives: Some Observations on Models" [1994] JlLawInfoSci 4; (1994) 5(1) Journal of Law, Information and Science 35

The Evaluation of Criminal Justice Initiatives: Some observations on models

RICK SARRE[*]

Abstract

In this paper the author examines the difficulties of evaluating criminal justice initiatives and focuses upon a range of models which may assist in overcoming these problems.

1. Introduction

The right of people to go about their daily lives in peace and security continues to rank very high in opinion polls which measure voter concerns. Australian governments thus pour billions of dollars annually into criminal justice, crime prevention and policing programs and projects. Many of these projects are never systematically evaluated. Where evaluation does occur, it is very often unsatisfactory. This paper explores the difficulties encountered in evaluation, and examines the range of evaluation models which may help to overcome or, at the very least, to alleviate some of these difficulties.

Some form of evaluation of criminal justice projects is essential. Without evaluation it is impossible even to hazard a guess as to whether existing policies or programs, or changes to them, are having the desired effects, whether additional changes are required, and, if so, what shape these changes should take. In short, implementors and benefactors want to know what 'works'. Furthermore, in the absence of proper evaluation, the public may be duped by commentators who use their own undisciplined and unhelpful interpretations of crime figures, for example, to highlight the 'failures' of governments to address rising crime rates. Public frenzies about law and order issues may then afflict our communities, often with unfortunate results. By way of recent illustration, one need only look to Western Australia, where the government has brought into law a provision for minimum and indeterminate gaol terms for chronic juvenile offenders. Legal and criminological commentators almost universally agree that these measures contravene international civil rights covenants and long-established principles of sentencing (Chappell 1992, p 2). Such changes to the law should be less reactionary and more considered. That will only occur if the proper evaluative studies are available.

Yet there are different approaches one can adopt when faced with a requirement to evaluate. On the one hand, one can say that evaluation should be rigorous, systematic and far-reaching. This view is premised upon the observation that judgments concerning new and existing criminal justice initiatives, and their continuation or abandonment, are often made on the flimsiest of anecdotal evidence. There are concerns that large-scale drains on the public purse need to be carefully monitored and controlled (for example, the implementation of new directions in 'community' policing, the building of new prisons, and government 'crime prevention' strategies such as the 1989 South Australian initiative Together Against Crime). The understanding is that if there is no obvious improvement to any given situation (on, say, a 'cost-benefit' analysis or some other 'effectiveness' scale), then that program or strategy ought to be modified or axed altogether. It is all very well, say the proponents of this view, to have a 'warm inner glow' about the extent to which public funds may be assisting the alleviation of anti-social conduct in a handful of lives, but such feelings do not justify the expenditure of large amounts of money if the effect is neither widespread nor tangible.

On the other hand, there are those who hold the opinion that a thorough evaluative exercise is a luxury we sometimes can ill afford, and one which itself needs to be monitored carefully. Evaluative studies can be extremely costly, particularly if they are long-term and methodologically rigorous. It is not possible to conduct an exacting and long-term appraisal of a justice initiative, for example, without some concern that the time, effort and money expended on the evaluation could have been better spent on the initiative itself. For example, at the moment in South Australia (May 1992) there are more than 20 major State Government and parliamentary inquiries under way (in addition to the Royal Commissions currently being convened). They include 14 separate parliamentary 'select committee' inquiries covering fields as diverse as juvenile justice, death and dying, rural finance, education, drugs and gambling (Advertiser 16/5/92 p 6). The estimated final bill for these quasi-evaluative inquiries is $26 million. A cynic might say that the most we gain from such exercises is a report that is quietly shelved if it proves to be politically unpopular, or a recommendation that further study or on-going evaluation needs to be conducted, for evaluators rarely appear satisfied with their time-frame and terms of reference. Other critics might assert that the state benefits less than those whose incomes depend on undertaking the task of evaluation, or claim that evaluation is becoming an end in itself rather than a means to an end. There is, then, a suspicion amongst critics that we may be doing evaluations principally because we are supposed to be doing evaluations, and that, in the final analysis, they tell us very little. Indeed, Graham argues that many evaluations confuse issues surrounding the cause and effect of strategies, and thus if the choice is between poor evaluations and no evaluation at all, the latter is the better option (HEUNI 1990, p 153).

So our criminal justice planners and administrators appear to be in a no-win situation. They must evaluate, for to omit this component of the implementation process is to leave them open to claims that the program may not be achieving its aims, and is thus an unnecessary drain on the public purse. Yet there is always a suspicion that nothing but cynicism and equivocation will come of the expensive evaluation, which may be manipulated (if it is reviewed at all) by those who wish to turn it to their own political advantage. Faced with this dilemma, the many bodies charged with the administration of justice do not necessarily relish the thought of undertaking the evaluation studies that they are told they have to do.

Of course, the above quandary is merely one of the reasons why evaluation in the criminal justice field is piecemeal at best. There appears to be a general reluctance to evaluate anyway, and a number of reasons have been isolated as follows (Sarre 1991):

• Administrators very often fail to identify their goals prior to commencing a project, and may neglect to specify the different ways they plan to achieve them. Evaluation is then perceived to be too difficult to carry out, ex post facto, even for those with the appropriate expertise and budgets.

• There is often a lack of information, a lack of access to data, or a lack of consistency within data collection, making comparative analyses difficult if not impossible.

• There may be a lack of faith in the selection of the evaluators and in the framing of the terms of reference.

• Evaluators are wary of finding the 'wrong' or unexpected results. Evaluation does sometimes produce disappointing results which are unwelcome to the agency carrying out the program, to governments, and to other sponsoring organisations. The inventors of 'solutions', too, want to be acclaimed, not criticised.

• Evaluators face methodological difficulties. There is a suspicion that evaluation can never achieve anything of significance because of the difficulties entailed in maintaining methodological rigour and credibility. For example, the terms of analysis may be extremely difficult to define. Terms such as 'project effectiveness', 'policy success' and 'program failure' are at once ambiguous and problematic.

• Evaluators are rarely convinced that the time-frame during which the evaluation is conducted is adequate, and there is always the suspicion that at another time, the results may have been different.

One criminal justice initiative which appears to defy the odds against evaluation is the well-known crime prevention/community policing program known as 'Neighbourhood Watch'. Evaluations of Neighbourhood Watch and 'cocoon' neighbourhood watch schemes are legion, and, while some are better than others, they have generally served to highlight the strengths, weaknesses, triumphs and failings of several of these schemes in a variety of jurisdictions (Bennett 1987 and 1990, Mukherjee and Wilson 1987, Rosenbaum 1987, Bottoms 1990, pp 11-15). The evaluations suggest, for the most part, that in some areas, with certain conditions present, and in the short term, some crime rates have fallen as a result of Neighbourhood Watch. It would have been irresponsible not to have carried out such evaluations. By the same token, there is an argument that too great an emphasis has been placed on the plethora of expensive surveys (replete with methodological difficulties) of Neighbourhood Watch which result merely in heavily qualified, politically motivated and sometimes highly equivocal conclusions. A survey released in South Australia in October 1991, for example, found that some property offence rates had actually risen in Neighbourhood Watch areas although they had fallen in non-Watch areas. This discrepancy was reconciled by the observation of the Officer-in-Charge that the 'Watch' area had been selected because of its pre-existing high crime rate, and that the rates had been skewed by the various contemporaneous police 'campaigns' against crime (which had the effect of raising awareness of crime, the consequent reporting of offences, and therefore arrests). In the end one really had to wonder whether the evaluation revealed anything more than the difficulties of 'one-off' measurements of crime. There is always the suspicion that the results of evaluations can be so manipulated by their interpreters that any scheme can be branded a success when crime rates (the usual measurement variable) go down, yet escape being branded a failure when the rates go up, because extraneous matters beyond the control of the program managers are blamed. Is it any wonder that there are cynics?
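The last point can be illustrated with a minimal simulation (the figures below are hypothetical and are not drawn from the South Australian survey). If an area is selected for a program precisely because its recorded crime rate is unusually high, that rate will tend to drift back towards its own long-run average in the following period whether or not the program has any effect at all - the statistical phenomenon of regression to the mean - so a 'one-off' before-and-after comparison can mislead:

```python
import random

random.seed(1)

LONG_RUN_MEAN = 100   # hypothetical long-run annual offence count per area
NOISE = 20            # year-to-year random fluctuation

def recorded_offences():
    """One year's recorded offences: the long-run mean plus random noise."""
    return LONG_RUN_MEAN + random.randint(-NOISE, NOISE)

# Simulate two successive years for many areas, assuming NO program effect,
# then 'select' for the program only those areas recording unusually high
# rates in year one - just as a Watch area might be chosen for its
# pre-existing high crime rate.
areas = [(recorded_offences(), recorded_offences()) for _ in range(1000)]
selected = [pair for pair in areas if pair[0] >= 115]

before = sum(p[0] for p in selected) / len(selected)
after = sum(p[1] for p in selected) / len(selected)

# Rates in the selected areas fall back towards the long-run mean even
# though nothing was done: the before-and-after comparison misleads.
print(f"selected areas, year 1 (before): {before:.1f}")
print(f"selected areas, year 2 (after):  {after:.1f}")
```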

In short, we have to do evaluation, and we have to do it better, while remaining within budgetary constraints. What, then, are the options for improving the current situation?

2. Are there better ways to evaluate criminal justice schemes?

When evaluation is carried out, the key model of evaluation chosen by criminal justice administrators to date has been the standard so-called 'scientific' model. There is some logic in this, for the scientific approach is most apt when any new product is tested prior to being placed on the market. The aim of such evaluation is to determine whether a defined intervention (for example, a new food additive or change in the law) leads to a given result (a desired effect or an unwanted side-effect). This style of evaluation has often suited criminal justice administrators who are anxious to find out immediately whether a new project or change in strategy has had its claimed effect. Typically, for example, evaluations of state initiatives towards decriminalisation concentrate mainly upon the varying rates of certain offences in a target area, whether the changes have been made in accordance with their design, and whether the resources to carry out the changes have been used in the most efficient manner. Two local South Australian evaluations come to mind immediately: the evaluation of the SA cannabis expiation notice scheme (Office of Crime Statistics 1989) and the decriminalisation of drunkenness (Office of Crime Statistics 1986).

These evaluations, however, may suffer from many of the difficulties described above. They were, for instance, financed by the Attorney-General's Department out of state revenue, a resource that is neither unlimited nor, in most cases, available to community groups and other private organisations. They also need to be on-going; in both of the cases mentioned, they are not. For that reason theorists have been exploring other ways of making the task more accessible, more long-term (without being financially impossible) and more meaningful. They have been seeking 'middle' roads down which those charged with the responsibility of future evaluations may go. One of these roads requires adopting a more flexible style of evaluation generally (I will refer to this approach as 'open-ended' evaluation). Another is to pursue the rigour and excellence in evaluation expected of a more 'scientific' approach while restricting the budget in accordance with the importance of the endeavour (I will refer to this approach as 'budget-led' evaluation). While there are some points of direct inconsistency between these approaches, they both attempt to address the concerns expressed above and allow administrators flexibility in choosing their appropriate response.

3. 'Open-ended' evaluation

Only a person who has been a hermit for the past two decades will not have noticed profound changes in the way the world (including scientists, theologians, social scientists and philosophers) now views its 'reality'. Kuhn (1970) argued that we build paradigms - conceptual schemes - around which experiments and observations are organised. From time to time there have occurred what have been referred to as 'paradigm shifts'. There has been a paradigm shift, too, in the way we think about the role of objectivity in the tasks traditionally carried out by social science. We now think it far more valuable to train social scientists who can structure and inform policy choices than to have them attempt to establish truths (Moore 1983, p 290). The ramifications of this for criminal justice evaluations have yet to be fully explored, at least in Australia.

An 'open-ended' style of evaluation does not insist upon the pursuit of value-free objectivity. While it does not spurn the scientific model of evaluation, neither does it insist upon it, because such evaluations are often too difficult to control, and the pursuit of objectivity too difficult to sustain. At the heart of 'open-ended' evaluation, rather, is the notion that there is nothing sacrosanct about the pursuit of objectivity. 'Scientific' evaluators have typically attempted to make detached and dispassionate observations. 'Open-ended' evaluation, however, maintains that less 'objective' evaluations may nevertheless be extremely useful in isolating problems in implementation and suggesting solutions. For example, a methodological purist would shudder at the thought of reviewing a project through the eyes of one of its implementors, or 'stake-holders'. However, a key determinant of the success of a project may be the ability of its workers to get along with one another. In that event a stakeholder would be in an ideal position to gauge the effect of such a variable, where a detached observer might miss it altogether. The important thing, therefore, may not be objectivity as such, but rather that the criteria for determining success or failure be clearly laid out in advance, and that those best placed to observe progress against those criteria be identified and consulted from the outset.

Furthermore, the more 'scientific' evaluations, while being of some assistance to policy-makers, tend to down-play the testing of other key factors which require analysis. A 'scientific' analysis of a new policing strategy, for example, might simply include a review of the crime rates in an area where the initiative had been launched (a suspect variable at best, as debated below). But the 'open-ended' model would, as a matter of necessity, look wider than that. Such an evaluation could include a study of the enthusiasm or fears of the police officers involved in 'beat' policing, their workloads and priorities, their sense of control in the implementation of the policy, and their perception of their relationship with their community and vice versa. It accepts that while people's activities are 'directed' (as a play is directed), there is nevertheless a great deal of improvisation (Shearing and Ericson 1991). These dynamics are important to recognise, isolate and explore. They are not given such weight in the more 'scientific' approaches.

Further reference needs to be made to the measurement of crime rates as a determinant of the success or failure of criminal justice initiatives. They are regularly used, notwithstanding that they are notoriously bad indicators of anything. They can be distorted by a variety of factors, including police practices (such as the targeting of certain areas), methods of reporting, and differences in definition and data collection techniques. More than that, however, there is the difficulty presented by the penchant some evaluators (and politicians) have for using these 'rates' as the sole or key determinant of the success of an initiative. For example, how would one go about evaluating a program that endeavoured to decrease the level of assault in a neighbourhood? If the expressed goal is to reduce assaults, and the numbers of assaults, according to police figures, are not decreasing, then the program will have 'failed' according to the standard 'scientific' measurement. However, other criteria of success, not necessarily set at the outset of the policy implementation, may emerge: the willingness of the community to view police as partners in the endeavour, for example, or an alteration in the perception of police concerning the possibility of change for the better in street confrontations. There may be a heightened community awareness of the problems of assaults in the home, and more reporting of such assaults. There may be an increased availability of women's shelters or legal aid for those seeking restraining orders. From these perspectives, the program might have components which could be regarded as very successful. Conversely, the number of assaults may appear, according to police figures, to have decreased (due, say, to fewer arrests), but unless some of these wider concerns were also measured, the exercise in determining success or failure could be a waste of time and effort. 'Open-ended' evaluation would not fall into such a trap. It does not insist upon any standard criteria of success but leaves that issue, as its name implies, open-ended.
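The point can be made concrete with some simple arithmetic. Recorded crime is, roughly, the number of actual incidents multiplied by the proportion of them that are reported; the sketch below (hypothetical figures only) shows how a program that genuinely reduces assaults while also encouraging victims to come forward can produce a rising recorded rate - a 'failure' on the standard 'scientific' measurement:

```python
# Recorded crime ~= actual incidents x proportion reported.
# All figures below are hypothetical, for illustration only.

actual_before, reporting_before = 500, 0.30   # before the program
actual_after,  reporting_after  = 400, 0.60   # after: fewer assaults,
                                              # but far more reporting

recorded_before = actual_before * reporting_before
recorded_after = actual_after * reporting_after

print(f"actual assaults:   {actual_before} -> {actual_after} (down 20%)")
print(f"recorded assaults: {recorded_before:.0f} -> {recorded_after:.0f} (up 60%)")
```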

Important to open-ended evaluation is the 'performance indicator'. Performance indicators (for example, see Lenne and Wells 1986) may include a broad range of factors typically omitted from the more 'scientific' evaluations. They seek to monitor more amorphous values rather than hard facts. Evaluators seeking to measure the success of, say, a community-based crime prevention strategy might set the following initial questions: Are the people to whom we wish to direct our program being reached? What will we compare our results to? How will we gather the information to make our judgments and form our conclusions? Does this project unify or alienate members of our neighbourhood? How does the community see its power to alter existing trends? How will groups that have previously been alienated from the community now have access to and responsibility for it? The possibilities are endless. They are not dictated by scientific 'rules' or made vulnerable to methodological strictures.
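As a minimal sketch of how such indicators might be recorded in practice - the indicator wording, evidence sources and structure below are invented for illustration and are not drawn from Lenne and Wells - each indicator can be treated as an open-ended question paired with its likely sources of evidence and with qualitative observations accumulated over time, rather than as a single numeric outcome:

```python
from dataclasses import dataclass, field

@dataclass
class PerformanceIndicator:
    """One open-ended indicator: a guiding question, its likely evidence
    sources, and qualitative observations gathered over time - not a
    single numeric pass/fail score."""
    question: str
    evidence_sources: list
    observations: list = field(default_factory=list)

indicators = [
    PerformanceIndicator(
        question="Are the people to whom we direct the program being reached?",
        evidence_sources=["attendance records", "stakeholder interviews"],
    ),
    PerformanceIndicator(
        question="Does the project unify or alienate the neighbourhood?",
        evidence_sources=["community meetings", "resident surveys"],
    ),
]

# Observations accumulate over the life of the program, supporting
# small-scale, on-going review rather than a one-off judgment.
indicators[0].observations.append(
    "June: youth group attendance doubled after the venue change"
)

for indicator in indicators:
    print(indicator.question, "-", len(indicator.observations), "observation(s)")
```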

The advantages of this style of ‘open-ended’ evaluation can be listed as follows:

• It pays heed to the organisational factors in the implementation process itself, that is, it sees as essential to the process the people involved in it, their functions as 'stake-holders', their ability to get along with each other, and the ability of those who evaluate these organisations to negotiate change.

• It places an emphasis upon learning and adaptation as a feature of evaluation.

• It does not see 'failure' as a reason to abandon a project, but rather as an opportunity to alter its course or modify its practices.

• It anticipates the value-laden nature of the policy-making and program implementation processes and works with that in mind rather than fighting against it. "Who any longer doubts that evaluation inescapably shapes (or warps) ostensibly value-free empirical analysis?" (Lindblom 1990, p 145).

• It places an emphasis upon using a variety of types and styles of evaluation depending upon the type of program under scrutiny.

• It recognises the value of incrementalism in dealing with these issues, that is, it favours small-scale, on-going evaluation by the use of performance indicators rather than seeing the evaluative role as only involving large scale, detached, ex post facto evaluations.

• It recognises alternatives to the rule-based conception (which measures congruence with, and adherence to, orders and regulations), such as the approaches of ethnomethodologists, who acknowledge the presence of people as active participants in the 'construction of action' (Shearing and Ericson 1991, p 501).

• It allows for differences in people responding to different situations, for unexpected opportunities to shape changes, for ambiguity, inconsistency and uncertainty.

• It allows on-going studies without the necessity for additional expense.

There are some limits, however, to the 'open-ended' model. Social observers have been criticised for being overly confident that they can successfully and satisfactorily explore the meanings that others attach to their actions, and thus avoid the possibility that evaluators impose their own interpretations (Potts 1991, p 14). It also may imply that anyone can 'have a go' at social explanation. The presentation of, for example, media 'survey results' is rarely differentiated from the presentation of more bona fide studies. The implication is that all research carries the same weight, no matter how well or poorly it is collected and interpreted (Weiss and Singer 1988, p 257). ‘Open-ended’ evaluation, with its emphasis upon subjectively exploring the manner in which a project is organised and run, needs to be viewed as but one vehicle for understanding, not the only vehicle.

These limits have been expressed recently by criminologist Lawrence Sherman (1992), who extols the virtues of encouraging a renaissance in methodologically sound evaluation. He does so with the caveat that evaluation expenditures should be carefully monitored. I will refer to this style as 'budget-led' evaluation.

4. 'Budget-led' evaluation

By implication, Sherman reminds us that too great an emphasis upon the above model may lead to unsatisfactory outcomes.

"Business has a bottom line, but it also has the independent audit. Surgery has the patient's recovery, but it also has the second opinion. Academics write books, but they must also undergo book reviews. Each of these systems is a set of rules for evaluating the results of professional work. The rules may vary in the fairness or accuracy of the assessments they produce, but they all insure that the results are judged by someone independent of those who did the work. Anything else is a conflict of interest." (Sherman 1992, p 693).

Sherman points out that if these principles are not applied, police, for example, may declare their methods a success regardless of results, discouraging both themselves and their administrators from learning the principles of causal inference needed for useful evaluation. If we omit proper principles of evaluation from a management 'blue-print', we run the risk of allowing well-motivated and innovative people to fool both themselves and their financial benefactors into believing that they are successful even when they are not. Furthermore, to be methodologically lax would be to endorse over-blown claims of success and render independent evaluators (whose claims are more modest) vulnerable to charges of 'sour grapes' (Sherman 1992, pp 695-6). Indeed, there may be a hostile reaction to any unwillingness on the part of independent evaluators to accept assertions about the effectiveness of certain projects. Aggressive defensiveness, claims Sherman, is a by-product of poor evaluation (Sherman 1992, p 699).

How much rigour, then, is required in criminal justice evaluation? Sherman is, by implication again, of the view that the more 'open-ended' style of evaluation leaves much to be desired and fails to overcome the difficulties presented to evaluators as described above in the introduction to this paper. While he admits that social scientists do not have a monopoly on the identification and diagnosis of problems and the invention of new solutions to counter those problems, he is adamant that one cannot short-cut rigorous social science principles for evaluating the effectiveness of a solution.

His preferred option is evaluative rigour. He is careful, however, to insist that it be budget-led. While he agrees that small-scale assignments do not warrant elaborate evaluations, he is adamant that big efforts ought to be subjected to big assessments. His remarks focus upon the assessment of police effectiveness, but they may be adopted in a wider context.

"The effort put into assessing results could then vary along several sliding scales. One scale might be the scope of the problem or its solution, with city-wide problems receiving more evaluation effort than solutions aimed at a single address or a single person. Another scale might be the level of ... effort put into solving a problem; the more effort invested, the more intensive evaluation required. A third scale might be the on-going cost of the solution, either to the ... agency or to some external organization. The more costly the solution ... the more probing the evaluation should be for rival hypotheses." (Sherman 1992, p 706).

Such an approach goes some way towards finding a convenient middle path between the difficulties inherent in being so flexible and open as to be meaningless on the one hand, and so rigidly 'scientific' as to be either inaccessible or so 'qualified' as to be unworkable on the other.
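As a rough sketch only, Sherman's three sliding scales might be combined along the following lines. The equal weighting and the cut-off points below are invented for illustration - Sherman proposes the scales, not a formula - but the principle is that evaluation effort should rise with the scope of the problem, the effort invested in solving it, and the on-going cost of the solution:

```python
def evaluation_intensity(scope: float, effort: float, ongoing_cost: float) -> str:
    """Suggest an evaluation tier from Sherman's three sliding scales.
    Each argument is scored from 0.0 (a single address, trivial effort,
    negligible cost) to 1.0 (city-wide, heavy investment, very costly).
    The equal weighting and cut-offs are illustrative assumptions only."""
    score = (scope + effort + ongoing_cost) / 3
    if score < 0.3:
        return "brief internal review"
    if score < 0.7:
        return "structured assessment against performance indicators"
    return "full independent evaluation probing rival hypotheses"

# A fix aimed at a single address, with modest effort and running cost:
print(evaluation_intensity(0.1, 0.2, 0.1))
# A city-wide strategy with heavy investment and a high on-going cost:
print(evaluation_intensity(0.9, 0.8, 0.9))
```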

5. Summary

Most commentators in the business of evaluation agree that there must be a middle ground between, on the one hand, no evaluation at all and, on the other, complex and costly research which is disproportionate to its usefulness. The debate lies in what form that middle ground should take.

If policy-makers adopt 'open-ended' evaluation, in addition to the more standard scientific forms of evaluation, not only will the evaluative process become more accessible, but the 'successes' and 'failures' of the implementation process will become more readily available for interpretation and adaptation. If anything has been discovered in the past two decades of research in this field, it is the wisdom of seeing the world in 'open-ended' terms rather than in terms of 'certainties'. In that light, administrators will be less inclined to look for simple, 'quick-fix' technical conclusions to complex social problems, and more inclined to see their role as part of an on-going learning process.

On the other hand, should administrators reject the valuable tools of more rigorous assessment, these 'open-ended' findings may prove misleading, limited by their esoteric nature. Their results may become obscured by the very rivalries and extraneous hypotheses that the call for evaluation was designed to eradicate. There is a strong argument, then, reiterated recently by criminologist Lawrence Sherman, that there is no substitute for academic and methodological rigour in evaluation. The cost of these endeavours, however, needs to be proportionate to the amount of money being spent on the initiative itself.

Evaluators, then, have more choices than they might previously have thought. One study may endeavour to capitalise upon the breadth of the open-ended approach. Another will seek to avoid compromising methodological rigour. Whatever the approach, evaluators should recognise that any evaluative study which does not first consider the above options as useful and helpful tools may leave itself open to widespread dissatisfaction, if not dismissive criticism.

6. Bibliography

Bennett, Trevor (1987), An Evaluation of Two Neighbourhood Watch Schemes in London, Institute of Criminology, Cambridge University, March 1987

---------- (1990), Evaluating Neighbourhood Watch, Aldershot: Gower Publishing Co.

Bottoms, Anthony (1990), "Crime Prevention: Facing the 1990s", Policing and Society, Volume 1(1), 3-22

Chappell, Duncan (1992), "The Law and Order Debate: Renewed Advice to a 'Mythical Minister'", unpublished paper presented to the 14th Annual National Conference of the Australian Society of Labor Lawyers, 23 May 1992, Melbourne

HEUNI (The Helsinki Institute for Crime Prevention and Control) and John Graham (1990), Crime Prevention Strategies in Europe and North America, Helsinki, Finland: HEUNI

Kuhn, Thomas S (1970), The Structure of Scientific Revolutions, (2nd ed), Chicago: University of Chicago Press

Lenne, Bryan and Carolyn Wells (1986), "Program evaluation and the development of performance indicators", Program Evaluation Bulletin 2/86, Program Evaluation Unit, Public Service Board (New South Wales), March 1986

Lindblom, Charles E (1990), Inquiry and Change: The Troubled Attempt to Understand and Shape Society, New Haven: Yale University Press with the Russell Sage Foundation

Moore, Mark H (1983), "Social Science and Policy Analysis: Some fundamental differences", in Daniel Callahan and Bruce Jennings (eds), Ethics, the Social Sciences and Policy Analysis, New York: Plenum Publishing pp 271-291

Mukherjee, Satyanshu and Paul Wilson (1987), "Neighbourhood Watch: Issues and policy implications" in Trends and Issues in Crime and Criminal Justice, No. 8, November 1987, Australian Institute of Criminology, Canberra

Office of Crime Statistics (1986), "Decriminalising Drunkenness in South Australia", Research Bulletin Series B No. 4, South Australian Government Attorney-General's Department, SA Government Printer

---------- (1989), "Cannabis: The Expiation Notice Approach", Research Bulletin Series C No. 4, South Australian Government Attorney-General's Department, SA Government Printer

Potts, David (1991), "Two Modes of Writing History: The Poverty of Ethnography and the Potential of Narrative", 66-67 Australian Historical Association Bulletin March-June 1991 pp 5-24

Rosenbaum, Dennis P (1987), "The Theory and Research Behind Neighborhood Watch: Is it a Sound Fear and Crime Reduction Strategy?" Crime and Delinquency, Vol. 33 No. 1, January 1987 pp 103-134

Sarre, Rick (1991), "Political Pragmatism versus Informed Policy - Issues in the design, implementation and evaluation of anti-violence research and programs", in Duncan Chappell, Peter Grabosky and Heather Strang (eds), Australian Violence: Contemporary Perspectives, Canberra: Australian Institute of Criminology, pp 263-285

Shearing, Clifford and Richard Ericson (1991), "Culture as Figurative Action" Volume 42(4) The British Journal of Sociology, 481-506

Sherman, Lawrence W (1992), Book Review of Herman Goldstein, Problem-Oriented Policing (New York: McGraw-Hill, 1990), Volume 82(3) The Journal of Criminal Law and Criminology pp 690-707

Weiss, Carol H and Eleanor Singer (1988), Reporting of Social Science in the National Media, New York: Sage


[*] Senior Lecturer, School of Law, Faculty of Business and Management, University of South Australia. The author would like to acknowledge the assistance of the Newhouse Center for Law and Justice at Rutgers, The State University of New Jersey, in providing him with the opportunity to use its resources in preparing the background for this paper.

