With an appendix "Fallstudie Flugzeugunglück Überlingen 1. Juli 2002" (case study of the Überlingen air accident of 1 July 2002) by Simone Reineke

ISSN 1612-5355

Creating Order in Hybrid Systems
Reflexions on the Interaction of Man and Smart Machines

Johannes Weyer

Arbeitspapier (Working Paper) No. 7, 2nd edition (April 2005)

Editors: Prof. Dr. Hartmut Hirsch-Kreinsen, Prof. Dr. Johannes Weyer
Chair of Economic and Industrial Sociology (Lehrstuhl Wirtschafts- und Industriesoziologie) / Sociology of Technology Group (Fachgebiet Techniksoziologie)
is@wiso.uni-dortmund.de, johannes.weyer@uni-dortmund.de
www.wiso.uni-dortmund.de/IS, www.wiso.uni-dortmund.de/TS
Faculty of Economics and Social Sciences, Universität Dortmund, D-44221 Dortmund
Contact: Dipl.-Päd. Martina Höffmann, e-mail: m.hoeffmann@wiso.uni-dortmund.de

The Soziologische Arbeitspapiere (Sociological Working Papers) appear at irregular intervals. They publish essays (often as preprints) as well as project reports and lectures. The working papers are therefore not necessarily final, completed scientific contributions; in every case, however, they are subject to an internal quality-control procedure. The series aims to present sociological work from the Faculty of Economics and Social Sciences of Universität Dortmund to the professional public. Suggestions and critical comments are not only welcome but expressly encouraged.

Contents
1 Introduction: Society in transformation  1
1.1 Society as a laboratory?  1
1.2 Preview of the chapters  2
2 Smart agents and hybrid systems  4
2.1 Pervasive/ubiquitous computing  4
2.2 Intelligent behaviour?  7
2.3 Hybrid systems  9
2.4 Modes of governance of complex systems  13
2.5 Guaranteeing safety by means of self-organization?  14
2.6 Conclusion: The release of hybrid systems as a large-scale societal experiment  17
3 Case Study TCAS  19
3.1 History and operational logic of TCAS  19
3.2 ATC and/or TCAS?  20
3.3 The Mid-Air Collision at Überlingen  24
4 Lessons to be learned  28
4.1 The pitfalls of automation  28
4.2 Coping with the risks of complex hybrid systems  29
5 Final remarks: Creating order in hybrid systems  31
5.1 A new mode of governance?  31
5.2 The double trap  32
6 Annex: Fallstudie Flugzeugunglück Überlingen 1. Juli 2002 (by Simone Reineke)  35
7 References  45

A previous version of this paper has been presented at workshops in Berlin (12 May 2004), Graz (18 June 2004) and Bielefeld (12 September 2004). I would like to thank all discussants for their remarks and critiques.
Additionally I am deeply indebted to Maike Fälker, Simone Reineke, Stephan Cramer, Helge Döring, Tobias Haertel and Un-Seok Han for their assistance and valuable recommendations. 1 Introduction: Society in transformation Modern knowledge societies find themselves in a situation that may turn out as the threshold of a new era, which is constituted by a new relation- ship of man and machine, of technology and society. Smart agents are now being released in large numbers into the real world, which are stupid com- pared to human beings, but can generate "intelligent" behaviour if they are interconnected to large networks. Smart technical agents meet human ac- tors more and more frequently, and they interact and coordinate their ac- tions. Artificial societies emerge and mingle with human societies. Hybrid systems come into being which are constituted of human and non-human decision makers. Except for a small fraction of researchers in the fields of "artificial societies" and "socionics", sociology has not yet reacted to this challenge. Still there is only little understanding of the processes of the destruction of the old social order and the construction of a new socio-technical order. Basic so- ciological questions must be reformulated: How does interaction work, if actors and agents are part of the game? How do they generate mutual expectations or even trust? Can we imagine social integration in hybrid societies the same way we did in the case of human societies? How do norms and institutions emerge? And can we apply our instruments for the governance of social systems to hybrid systems? Or to put it into one single question: How does social order emerge in hybrid systems, consisting of a set of interrelated human and non-human decision makers? 1.1 Society as a laboratory? From experiences with technological innovations in the past we know that the implementation of a new technology is a risk-taking endeavour – not only from the point of view of the participating scientists (because their hypotheses may fail), but also from the point of view of society (because people may be harmed by failed experiments).1 A number of case studies from different fields (aviation, road traffic, military technology), showed that in most cases real-life experiments are unavoidable if society doesn't refrain from technological progress.2 Mostly it takes about 15 years of ex- perimentation within – and with – society until a new (revolutionary) tech- nology works sufficiently. However, the implementation of a new technology cannot simply be re- garded as the introduction of a new item to a given, stable structure. In 1 Cf. the concept "Society as a laboratory", as it has been developed by Wolfgang Krohn and me in the late 1980s (Krohn/Weyer 1994). 2 Cf. Krohn/Weyer 1994, Weyer 1994, 1995, 1997. 2 Weyer, Hybrid Systems most cases experimental implementation has led to a fundamental change of the world and the way people live and work (i.e. the way they interact with machines and/or with each other). Bruno Latour showed convincingly in his famous study on the discovery of the anthrax vaccine by Louis Pas- teur, that the successful implementation of a newly invented technology to the real world requires a transformation of social structures according to the requirements of the new device. The vaccination technology only worked after the practices of agriculture had been adapted and changed fundamentally (cf. Latour 1983). 
Similarly the successful application of the glass cockpit in modern passen- ger aircraft required a new definition of the roles of the pilot and the com- puter (auto-pilot) – as well as an agreement on the distribution of respon- sibility between them (cf. Weyer 1997). To steer a plane safely with "fly- by-wire"-devices and a large number of assistant systems (some of them partly or even totally automatic) is a demanding task which differs funda- mentally from the traditional way of flying a plane manually (with numbers of conventional instruments and displays in the cockpit). Pilots had to learn the new rules of interaction and coordination, which means they had to acquire knowledge about the functioning of the system, which is partly explicit, and partly implicit knowledge, the latter of which can only be gath- ered by experience. Recent research on (and experiments with) the "steer-by-wire"-technology in modern cars indicate, that a similar technological revolution is taking place in the area of road traffic (cf. Vasek 2004). Systems are now being designed which transfer the concept of the "intelligent plane" to the auto- mobile sector, resulting in very similar concepts of the "intelligent" or "smart car", which is part of a telematic network which interconnects a large number of individual cars and integrates them by this way into a large technological system. To sum it up: In almost any sector of society smart agents are gaining ground. Currently the long-term consequences of the new relationship of man and machine can only hardly be assessed, but it is obvious that a ma- jor change in the knowledge society is now taking place. 1.2 Preview of the chapters The paper deals with the topics of the interaction of actors and smart agents as well as the creation of order in hybrid systems. Its aim is to out- line the scope of a sociological framework for the analysis of this new kind of systems. Chapter 2 illustrates the vision of pervasive computing and discusses its potential social impacts (2.1). It takes a brief look at the paradigm shift in Weyer, Hybrid Systems 3 the field of artificial intelligence which led to the construction of multi- agent-systems (2.2), and then discusses the gradual concept of action as a tool of the understanding hybrid systems (2.3). It examines different modes of governance of complex systems (2.4) and finally deals with the question, if decentralized systems might get out of control (2.5). Chapter 3 presents a case study on the Traffic Alert and Collision Avoidance System (TCAS) – one of the first cases of a multi-agent-system that failed and contributed to a catastrophe in aviation in 2002 (maybe the first case at all?). It presents insights into the operational logic of TCAS (3.1), espe- cially into the interplay of human and non-human decision makers (3.2) and then analyses in a Perrow-like style the causes of the mid-air collision near Überlingen (3.3). The chapters 4 and 5 draw some preliminary conclusions and sketch the framework for future, more far-reaching analysis. The failure of TCAS illus- trates the well-known pitfalls of automation (4.1) and provokes the ques- tion if humans can be excluded totally from decision-making in complex technical systems (4.2). Obviously there is no easy way out of the automa- tion dilemma. On the contrary the release of smart agents seems to inten- sify the problems rather than solving them. 
Chapter 5 discusses the option of a new mode of central control of decentralized systems (5.1), and finally puts a "double trap" on the agenda, calling for a more intense debate about these questions in the community of social scientists (5.2).

2 Smart agents and hybrid systems

2.1 Pervasive/ubiquitous computing

In aviation, in road traffic, as well as in many other sectors of society, we can presently observe a new type of technology emerging which differs in some respects from the (automation) technology of past decades. It has been labelled "pervasive" or "ubiquitous computing". The visions of the engineers tell us that in future a large number of smart, embedded agents will monitor and control our actions and help us to manage our everyday life ("smart washing machine") or dangerous situations, e.g. in traffic ("smart assistant systems"). These agents may become more and more autonomous, thus displacing the human decision maker.

Referring to Mark Weiser, who first outlined the vision of ubiquitous computing in 1991, Friedemann Mattern (2003) describes the vision of a world inhabited by miniaturized, sensor-equipped devices that have the capability of context-awareness, i.e. they can observe their environment and react to a change of certain parameters (such as light, temperature, speed, distance etc.). These smart devices are part of almost any object (desk, window, door, coffee machine etc.) and can communicate the collected data to other parts of the system (cf. TA-Swiss 2003b). Given the current state of technology, it is very easy to imagine a smart coffee machine which starts preparing the coffee some minutes after your first visit to the bathroom in the morning, which organizes the supply of your favourite coffee by negotiating special offers on the internet, and which optimises the coffee preparation by downloading the latest software updates and by exchanging experiences with other actors or agents in an internet chat room.3

3 Note that this kind of data link would also allow external actors or agents (e.g. the manufacturer of the machine or a coffee retailer) to generate data (e.g. for direct marketing purposes) and subsequently to intervene in your smart home, unless you restrict this by protection measures such as a firewall.

A large number of pervasive smart objects will observe our movements and actions and check them against predefined patterns. This may be the quantity of alcohol we are allowed by law to drink when we drive, the speed limit at a kindergarten, but also our personal preferences such as the favourite pizza. Everything can be monitored and adapted to the rules which are inscribed in the system.

It is true to say that these smart devices are omnipresent, but they are disappearing at the same time – they become invisible to the user (cf. Weiser 1991). In future no human being will be forced to operate a computer interface any more (like a computer keyboard, a touch pad or any adjusting lever, e.g. for the central heating), because all the smart objects around observe their environment and act according to pre-programmed routines, e.g. switching on the light when you enter the room. Weiser argues that life will become more convenient this way. In future no-one will have to care about computers – similar to the current situation where everyone is used to switching on the light and doesn't have to care about electricity (which is invisibly hidden in the wall).
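The "pre-programmed routine" just described can be made concrete with a minimal sketch: an embedded device that watches a few sensed context parameters and acts without any user-facing interface, so that the computing disappears from the user's view. The sketch is purely illustrative; the sensor names, thresholds and actions are assumptions made up for this example, not taken from Weiser or Mattern.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Illustrative sketch of a context-aware, pre-programmed routine in the
# spirit of ubiquitous computing: the device watches sensed parameters
# (presence, light, temperature) and acts without any user interface.
# Sensor names, thresholds and actions are invented for this example.

@dataclass
class Rule:
    condition: Callable[[Dict[str, float]], bool]  # predicate over the sensed context
    action: str                                    # what the smart object does

RULES: List[Rule] = [
    Rule(lambda ctx: ctx["presence"] > 0 and ctx["ambient_light"] < 0.3,
         "switch on the light"),
    Rule(lambda ctx: ctx["room_temperature"] < 19.0,
         "raise the heating setpoint"),
]

def evaluate(context: Dict[str, float]) -> List[str]:
    """The device never asks the user; it simply checks its pre-programmed rules."""
    return [rule.action for rule in RULES if rule.condition(context)]

# Someone enters a dark room in the evening:
print(evaluate({"presence": 1, "ambient_light": 0.1, "room_temperature": 21.0}))
# -> ['switch on the light']
```

The point of the example is not the trivial rule itself but the absence of an interface: the user never issues a command, the routine fires on context alone.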
To quote Weiser: "The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it." (Weiser 1991: 1)

Smart objects allow for an identification and localisation of mobile objects and people,4 thus raising questions about data protection as well as about the abuse of these systems for surveillance by totalitarian organizations (Mattern 2003: 14). This conflict seems to be irresolvable: if you strengthen data protection by inhibiting the hidden exchange of data, the performance and efficiency of the systems will diminish, and vice versa.

4 The basic underlying technology is Radio Frequency Identification (RFID), i.e. the equipment of people and objects with transponders that transmit all the data needed to identify and track a person; cf. Andres 2004.

Mattern points at a large number of unresolved questions which – in his opinion – have to be solved urgently if modern societies want to exploit the potentials of the new technology. One cause of concern is the possible loss of control if smart objects become disloyal (e.g. the "virtual dad" which monitors the young daughter's driving performance in order to prevent her from risky driving as well as to maintain low insurance rates). Devices of this kind do not obey the user but try to spy on him, to manipulate him, and to transmit information to other agencies, which implies a substantial loss of control. Another question is the degree of autonomy of smart objects. Who has to pay for the highly expensive spring water from the Italian Apennine mountains your coffee machine decided to order independently because it didn't want to get poisoned by the low-quality water you offered it?5 A further serious problem is the erosion of trust, which emerges for example through dynamic pricing on the internet6 or through the dynamic adjustment of your (mostly) virtual environment (cf. Mattern 2003, TA-Swiss 2003b).7 Trust is a central feature of modern societies, because – as Uwe Schimank (1992) argues – we can only execute goal-oriented action if there is at least some sort of certainty of expectation, i.e. a (mutual) expectation, emerging from previous actions and interactions, that other actors will behave in a predictable way. Furthermore, this certainty of expectation allows for individual and organizational learning (see below) and thus is – together with goal-oriented action – one major precondition for societal development. In a world without trust, social relations will erode and societal development will slow down or even grind to a halt.8

5 This example is taken from Mattern 2003: 21.

6 Cf. Skiera 2000, Skiera/Lamprecht 2000. A risk not only from a liberal point of view, but also from the standpoint of big companies: the German railway company "Deutsche Bahn" had to withdraw its innovative dynamic pricing system, introduced in December 2002, as early as July 2003. The customers, who were used to reliable information, now faced prices that changed every few hours in a seemingly arbitrary way. They lost their orientation and refrained in large numbers from travelling by train.

7 Mattern (2003) and others such as Langheinrich (2003) discuss the option of a virtual memory which helps you find your path through the real world by providing information about the objects. If each object has its own website, the "identity" of the objects, as well as the – virtual – reality you are living in, may change frequently and may be manipulated easily.

8 The question is whether in future we can still rely on mechanisms we adopted in a learning process of some hundreds or even thousands of years, and how long it will take to establish new mechanisms of social integration (in a hybrid society).

Consider the following example, which is only slightly futuristic: Yesterday the – electronically generated – speed limit at the kindergarten had been 40 km/h, and some kilometres further on a special offer for a delicious pizza appeared on the monitor of your car. Today – for reasons you don't know9 – your car is electronically slowed down to 30 km/h and you have to drive home hungry because no pizza is on offer at all.

9 Maybe the children presented a play to their parents in the afternoon, and legal regulations require traffic to be slowed down automatically when smart sensors report that there are people in the kindergarten. (Maybe it was only a cat, who knows?)

Isn't that a very strange world, an increasingly virtual world that changes every day and takes different shapes? In a world of smart, adaptive things the objects are no longer stable and resistant, but soft and fleeting (cf. Hubig 2003). This results in a number of severe consequences for human action, because individual (as well as societal?) learning is only possible if you can fail when acting strategically (intentionally).10 Learning is based on previous knowledge (I know there is a pizza restaurant a few kilometres after the kindergarten), a strategy (I will drive there after work and buy a pizza) and the experience you make when you act according to your strategy (the restaurant doesn't prepare take-away pizzas after 7 p.m.). The process of learning is based on a feedback loop: when your strategy has failed once, you can modify your knowledge about the world (i.e. add new information in an ordered way to your memory), adjust your strategies (e.g.: get there tomorrow before 7 p.m.) and try again.

10 It is important to note that learning also requires at least some zones of disorder, e.g. redundancies, opportunities, the chance to meet people with other kinds of interests or knowledge (cf. Nonaka/Takeuchi 1995). UC systems, aiming at optimising performance, would probably try to reduce these opportunities.

In a smart world that constantly changes, people are hindered from acting strategically, because strategic action implies taking into consideration the boundary conditions of action (the world outside as well as the presumptive actions of other actors), calculating the probability of success and failure, and subsequently adjusting the strategy according to the result of the action. Intelligent systems, as described above, prevent human actors from acting strategically and from learning that way, because they try to avoid situations in which the individual can fail (and learn) – by presenting, or rather constructing, a "perfect" world that shows up according to the system's rules, which the user neither knows nor understands. The logic of intelligent systems is the preventive avoidance of learning (by doing or by experience).

2.2 Intelligent behaviour?

The notion "pervasive computing" focuses on networks of embedded computers which are equipped with sensors and communication devices in order to collect data and communicate them to other components of the system.
The singular components of these systems are not intelligent, at least when regarded from the point of view of traditional AI research. They are rather stupid because of lacking computer power as well as missing capabilities to overview the whole situation. The single device doesn't have global information, but the network of "smart" object may develop a re- markable problem-solving capability. We have to say goodbye to a notion of intelligence which mainly focuses on the cognitive capacities and human-like properties of a machine and to put our attention to a new concept of intelligence which focuses on the problem-solving capacities of computer systems or rather multi-agent sys- tems ("practical intelligence"). This paradigm shift has among others been promoted by Rodney Brooks. In his book "Flesh and Machines" (2002) he describes his unsuccessful attempts to program a robot that could ma- noeuvre safely through a building, avoiding collisions with people and ob- jects. All attempts to take into consideration, according to the model of anticipative planning, every possible situation and to supply the machine 8 Weyer, Hybrid Systems with a software-based routine to cope with these situations, failed as Brooks shows very convincingly in his graphic and conveniently readable description. It is the simple changes of light and shadow during the course of the day that raise unsolvable problems to a robot of this type (Brooks 2002: 39pp, 52). Every conceivable constellation had to be put down in the memory of the central processor, and it would take several minutes to cal- culate one single step of the robot. In fact this device would be incapable of solving very simple tasks. Therefore 20 years ago the idea came up to do without a central processor, to distribute "intelligence" among the different parts ("agents") of a ma- chine and to combine these parts into a multi-agent system (MAS). The singular agents only have little computer power, but they can act autono- mously according to their simple internal rules and – the most innovative feature – they can monitor their environment and thus contribute to a be- haviour of the entire system that can be described as adaptive. For exam- ple a MAS robot can very easily move through a building only complying to the simple rule: "If one of your sensors reports an approach to an object, stop your movement, turn around by x degree and continue your move- ment." It is rather the coupling of a number of simple, decentralized agents and not the superior "intelligence" of a centralized brain that makes these systems so powerful. "Intelligence" thus appears as an emergent feature of the network (and its coordinating activities) and not as a given property of the elements (also cf. Weiser 1991: 2, 4). MAS systems can act in real time and they can adapt to their environment. (In principle they can gain the capability to interact, to learn and, subsequently, to evolve.) Brooks cre- ated small insect-like machines which moved around with an astonishing agility and thus contributed to the emergence of the new research field of "Artifical life", because the behaviour of the machines resembled living or- ganisms. Above all: The performance of MAS was much better than the performance of classical robots constructed by referring to the traditional, "cognitive" AI paradigm (also cf. Christaller 2001). 
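The operational principle described here can be illustrated with a minimal sketch of such a reactive agent. Everything in it (class name, sensor range, turn angle) is an illustrative assumption rather than code from Brooks's actual robots; the only thing it implements is the simple rule quoted above.

```python
import math

# Toy sketch of a Brooks-style reactive agent: no world model, no planning,
# just the rule "if a sensor reports an approach to an object, stop the
# current course, turn by x degrees, and continue the movement".

class ReactiveAgent:
    def __init__(self, x: float, y: float, heading_deg: float,
                 sensor_range: float = 1.0, turn_deg: float = 30.0):
        self.x, self.y = x, y
        self.heading = math.radians(heading_deg)
        self.sensor_range = sensor_range        # how far the proximity sensor "sees"
        self.turn = math.radians(turn_deg)      # the fixed avoidance turn ("x degrees")

    def obstacle_ahead(self, obstacles) -> bool:
        """Purely local perception: only the distance to nearby obstacles matters."""
        return any(math.hypot(ox - self.x, oy - self.y) < self.sensor_range
                   for ox, oy in obstacles)

    def step(self, obstacles, speed: float = 0.2) -> None:
        if self.obstacle_ahead(obstacles):
            self.heading += self.turn              # turn away from the obstacle ...
        self.x += speed * math.cos(self.heading)   # ... then simply continue moving
        self.y += speed * math.sin(self.heading)

# Obstacles could be walls, people, or other agents following the same rule.
obstacles = [(2.0, 0.0), (3.0, 1.0)]
agent = ReactiveAgent(x=0.0, y=0.0, heading_deg=0.0)
for t in range(20):
    agent.step(obstacles)
    print(f"t={t:2d}  position=({agent.x:5.2f}, {agent.y:5.2f})")
```

Coupling several such agents, each treating the others as obstacles, already yields the kind of adaptive collective behaviour described above, without any central processor holding a model of the whole scene.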
Excursus on the replacement discourse

Replacement scenarios, mostly driven by the visions of science fiction novels, have dominated the debate on artificial intelligence for decades.11 Although it is difficult to assess future developments, there is only little evidence that the human species will be replaced by a superior race of robots, as Hans Moravec or Ray Kurzweil suggest (cf. Moravec 2000). However, this debate is well suited to distract public attention from developments which may be more fundamental and important for society.

11 The book "Robotik", edited by Thomas Christaller in 2001, which gives a broad view of technical and ethical aspects as well as different application scenarios of the research field, also concentrates heavily on the replacement discourse.

No-one really doubts that robots will become more and more humanlike, and some of them will probably also replace humans, e.g. human workers in the field of health services. But the prominent issue is not shape (do robots resemble human beings?) or brain power (do they have similar cognitive capacities?), but performance (can intelligent systems act in a way that resembles or even goes beyond humans?). If we focus on the wrongly posed question of the replacement of the human race, we will ignore the more fundamental change that is taking place: smart devices, which are stupid in comparison to a human but clever in performing simple tasks, are now being released into society in large numbers. In a very short period of time – before robots have taken over! – they will have created a new social reality that will fundamentally transform the way we live and work (cf. Rochlin 1998: 11, Brooks 2002: 19). Most current debates focus on the question whether robots will become more intelligent than humans, and ignore the more fundamental transformation of the knowledge society, where smart devices are slowly exercising control over – at least parts of – society without becoming human-like.

2.3 Hybrid systems

Promoted by Brooks and others, during the last 20 years a new generation of technology has emerged which can be categorized – according to Werner Rammert and Ingo Schulz-Schaeffer (2002a) – as interactive, intelligent and adaptive. This has a number of important implications for the interaction of man and machine, since our traditional instrumental notion of technology no longer applies if technological devices gain the capability of autonomous decision making (cf. Weyer 2004). The most fascinating question arises if we look at the confrontation of the two species, i.e. of human agents and technological agents. If multi-agent systems (MAS) are released into the real world, a new type of socio-technical system emerges which can be labelled a "hybrid system", since it consists of human and non-human decision makers.

Rammert and Schulz-Schaeffer have made a very convincing attempt at categorizing these new kinds of hybrid systems, thus getting away from the somewhat unproductive debate on actor networks as conceived by Bruno Latour (1988, 1994, 1996, 1998).12 They ask how technology studies shall deal with technical objects which behave intelligently and interact in a way that resembles social interaction (Rammert/Schulz-Schaeffer 2002a: 16).13 If hybrid social constellations emerge which consist of human actors and technical agents, the question of "agency" arises. There are more and more technical systems where intelligent agents assist or substitute the human decision maker (e.g. in modern airplanes, trains or cars, cf. Timpe/Kolrep 2002).14 The behaviour of a car which brakes automatically in case of danger looks very similar to the behaviour of a car where the action has been taken by a human controller. In many instances we are unable (from an outside point of view) to distinguish human action from non-human action, because the system's behaviour is almost identical. If the brake assistant system of a car intervenes to avoid a collision with a child, it has the same effect for an external observer as if the car driver had taken the action. (In many cases they act together in a coordinated way.) We can only say that the socio-technical system, consisting of the driver, the brake assistant and other devices, has made a decision. In this way we attribute a more or less strategic behaviour even to non-human agents. Thereby smart objects gain autonomy and become part of our social life, and the traditional division between strategic action (of human beings) and adaptive behaviour (of technical devices) increasingly erodes (cf. chapter 2.5).

12 There is no space for a detailed discussion of actor network theory (ANT) – a theory which has been refuted since 1992 (cf. Collins/Yearley 1992) but which nevertheless has been actively promoted by its proponents. For a detailed analysis and critique of ANT cf. among others Schulz-Schaeffer 2000 and Christaller/Wehner 2003.

13 It is important to note that Bruno Latour never dealt with smart objects, but only with representatives of a technology that can easily be understood by using the traditional instrumental notion of technology, such as key tags or speed bumps.

14 For a more detailed discussion of assistant vs. substitution systems, cf. Timpe/Kolrep 2002, Weyer 1997.

Rammert's and Schulz-Schaeffer's approach starts with the observation that we tend to attribute agency even to technical objects, and they construct a new model of "distributed agency between people and objects" (2002a: 21) in order to observe the evolution of socio-technical constellations. They suggest a gradual concept of action ("gradualisierter Handlungsbegriff", 43), which distinguishes different levels of "agency" ("Handlungsträgerschaft") that can be achieved either by human actors or by technical agents (43-50):

1. causality: the ability to trigger changes,
2. contingency: the capability to act in a different way (i.e. choosing between alternatives), and
3. intentionality: the ability to control and to give meaning to the given behaviour (48).

At the first level there are almost no differences between men and machines. A dishwasher can do its job at least as well as the well-behaved husband. However, the decision which kind of dishes may be put into the dishwasher (the second level) requires some sort of advanced technology which can, for example, distinguish different materials. Bid assistants at electronic market places or autopilots in modern planes can obviously be subsumed under this category. The third level is the most complicated one. Rammert/Schulz-Schaeffer avoid a final statement (e.g. on ontological issues) and plead for a pragmatic approach which refrains from any attempt to close a debate which simply cannot be terminated now.
Instead they call for an observa- tion/examination of the societal practices of attribution of intentionality to either people or objects (47). Rammert/Schulz-Schaeffer argue that inten- tionality is not a natural ingredient of human action, but mostly a product of processes of interpretation and attribution – be it by external observers, be it by the actor her/himself.15 We are used to suppose that someone who acts in the way s/he acts, has done so because s/he intended to do so. But nobody really knows, if our assumption of rational decision making is true.16 The same could even apply to machines. Today we still are not used to assume that a brake assistant (cf. the example discussed above) acts intentionally, but – according to Rammert/Schulz-Schaeffer – this is a societal practise we should reflect about and not a fixed ontological fact. Rammert/Schulz-Schaeffer have sketched the framework of a new, power- ful paradigm for technology studies which helps to understand the interac- tion of actors and agents in hybrid systems more easily. However, some questions remain. The concept of distributed agency focuses on a broad variety of hybrid systems without imposing any (normative) standard which types of systems are desirable and which ones are not. Any kind of interac- tion seems to be possible. Their model is guided by the concept of an equality in character of human societies and agent societies. The underly- 15 Cf. Grande/Fach 1992 on the topic of ex-post-justification of technology policy. 16 This is the weak spot of the rational choice theory. Modern versions of a subjec- tive rationality avoid to refer decision making to objective standards, but accept the individuality of the respective choices. However the notion of "rationality" is being dissolved this way into accepting that a decision maker did, what s/he actu- ally did – according to her/his specific preferences. The model "subjective expected utility" is a flexible framework that can be adjusted to every specific decision in a way that makes it possible to (re-)construct the decision process as rational behav- iour; cf. Esser 1991, 1993. 12 Weyer, Hybrid Systems ing assumption is that the mechanisms which guide agent societies resem- ble those guiding human societies – in a way that you can easily transfer your knowledge about system's operational logic from one type to an- other.17 And the notion of "distributed behaviour" implies, that the two spe- cies bump into each other at the same level – with no privileged position for either one. If we follow Rammert/Schulz-Schaeffer, we should be curious and explore the broad variety of hybrid systems open-mindedly, without excluding any case e.g. because of its potential consequences (e.g. social risks). We should rather observe how hybrid systems emerge and gain problem- solving capability by negotiating (internally) the distribution of agency and the responsibilities of actors and agents. The model of Rammert/Schulz- Schaeffer thus can be regarded as a very optimistic point of view which doesn't take too much care about possible consequences and risks of hy- brid systems as well as about the desirability of the upcoming develop- ment. In their model the question remains unanswered what will happen, if the two species are confronted with each other.18 Will this encounter only lead to a minor change of social life, or does it – at least in the long run – imply a change of our notion of social actor and, subsequently, lead to a funda- mental transformation of society? 
Can we really construct agent societies according to the rules of human societies (as the "socionics" programme did), taking a similarity of actors and agents for granted? Can we simply transfer our knowledge about social systems to hybrid systems, and finally can we treat agents as quasi-actors (with equal rank with human beings)? The contrary view would argue, that the interaction of the two species – "actors" and "agents" – entails a large amount of uncertainty and probably some risks, the consequences of which we can hardly assess now. To this regard, the release of (artificial) agent societies and their confrontation with human societies is another case of large-scale experimental learning within society. However, the effects which will emerge from the interaction of these two species are mostly unknown, and we need a criterion to as- sess the desirability of certain alternatives in order to prevent a develop- ment that endangers societal achievements such as the freedom to act 17 This conception had also been one of the central premises of the "Sozionik" pro- gramme which aimed at analysing and modelling artificial societies via the coop- eration of sociologists and information scientists; cf. Malsch 1997, 1998, Kron 2002, Meister 2002. 18 Note that – similar to Latour – there is no detailed modelling of this specific kind of interaction in the concept of Rammert/Schulz-Schaeffer. Weyer, Hybrid Systems 13 according to one's own intention ("Willensfreiheit"), the principles of data protection and some more. This more sceptical, pessimistic view is taken by Gene Rochlin, who has written a well-informed analysis about smart technology (1998) and sum- marizes the experiences already made by implementing agent systems in different sectors of society. 2.4 Modes of governance of complex systems In his book "Trapped in the Net" (1998) Rochlin discusses the "unantici- pated consequences of computerization" (subtitle of the book), which is – to start with the conclusion – a fundamental transformation of society. His argumentation is based on a number of case studies on the informating of labour, the management of organizations, computer trading, the automati- zation of high reliability systems and, finally, on different instances of the implementation of smart technology in the field of the military. His main argument is, that in history we can observe a sequence of modes of man- agement of organizations as follows (7p.): 1. Hierarchical, centralized control and rational planning (core technol- ogy: mainframe computers, period of time: 1950s and 1960s), 2. flexible, decentralized self-organization (core technology: personal computer, period of time: 1970s and 1980s), 3. central coordination and control of decentralized structures by means of networking (core technology: networking of heterogene- ous, distributed systems, e.g. via the internet, period of time: since the 1990s). Rochlin's book deals with the apparent paradox that the networking of autonomous elements eventually ends up with a total control and loss of autonomy. In this context he uses the term "computer trap" (217) to point at the unintended and unpredicted consequences of the implementation of smart technical devices. His main concerns are the foreseeable risks of this development such as the deconstruction of social institutions (56, 208p.) and the growing vulnerability and dependency of society on a kind of tech- nology which can hardly be controlled, but entails unpredictable risks (11, 14, 106, 186). 
One would obviously misinterpret Rochlin, if one relates these negative impacts to the character of the technology and nothing else. Rochlin insists that it is the striving for a permanent improvement of efficiency (and the utilization of computer networks to achieve this objective) which causes a development that eventually ends up with a standardization of processes and a subordination of decentralized systems under a central plan. Every- 14 Weyer, Hybrid Systems one who has ever been in touch with inventory or data warehouse systems easily can understand what Rochlin means. The technological foundation of this type of systems, which operates in a mode of central, anticipative con- trol, is the electronic networking and integration of numbers of elements in real time. Rochlin uses the term "micro-management" (149, cf. 63), to indi- cate a strategy of vertical coordination, which intervenes directly at any level with means of IT devices, thus governing every process in the whole organization. Rochlin also hints at the risks of such a development, which are a loss of autonomy and "slack" (63, 213) – i.e. the ability, to react to unexpected situations flexibly – as well as "the potential losses of social means for learning, social capacity for adaptation, and social space for in- novation and creativity" (213). To summarize Rochlin's argument: The new type of smart technology facili- tates the emergence of a new mode of governance of complex systems, which goes far beyond the well-known types of central control and decen- tralised self-coordination. Hybrid systems enable us to execute some kind of central control of decentralised systems, since smart devices can collect large amount of data about the world and the people and integrate them – by networking – into a unique control architecture. If we compare the concept of Rammert/Schulz-Schaffer with the approach of Rochlin, the divergences are obvious. Rammert/Schulz-Schaeffer have constructed a model for an unbiased analysis of hybrid systems (with dis- tributed agency) which still lacks empirical evidence and is open-minded concerning the results of future development.19 Rochlin, on the other hand, has analysed a large number of cases of the informating of society which give evidence to the assumption that networks of pervasive smart devices may lead to totalitarian systems of central control which eventually end up with a fundamental transformation of society.20 Obviously it is impossible to resolve the debate and to conciliate the two approaches. But we can take this controversy as a starting point for the analysis of the case study on TCAS (see chapter 3). Therefore I will con- tinue this debate later. 2.5 Guaranteeing safety by means of self-organization? Rochlin apparently is a promoter of decentralized self-coordination. He ar- gues that systems of this type are able to manage unexpected situations, 19 Different case studies are currently conducted at the Technical University of Berlin, but they still lack a detailed description of the interaction processes in hy- brid systems, cf. Meister 2002. 20 This warning had already been issued in the UC-paper of Weiser (1991: 8). Weyer, Hybrid Systems 15 because the members of the organization are well-trained in crisis manag- ment (cf. Rochlin 1991, LaPorte/Consolini 1991, Krücken/Weyer 1999). 
Every measure that transfers the decision making authority to a central control body therefore entails the risk of a fatal error, because the design of a centralized system may entail errors, which finally inhibit the partici- pants to act according to the concrete operational needs in a given emer- gency situation. The concept of decentralized self-organization argues that endogenous processes within a given system lead to good and stable solutions, which mostly cannot be attained by central control. The external controller obvi- ously doesn't have the knowledge that the participants have. Additionally external influence inhibits the members of the system to get involved and shape the process by their creative contributions. Only the participation of goal-oriented, self-interested actors which pursue their profit-maximizing strategies, leads to a stable solution, because the actors expect to gain from social cooperation and thus contribute – unintended – to the emer- gence of social order. You can pursue the history of this concept of social integration from its origins with Adam Smith (cf. Vanberg 1975) up to modern theories of self- organization (cf. Krohn/Küppers 1989) and models of computer-based simulation of social systems (cf. Epstein/Axtell 1996, Resnick 1995). How- ever, this concept doesn't take too much care of the question, if the solu- tions achieved by means of self-organization are acceptable for the outside environment, be it other actors not involved, be it the society as a whole. So what happens if self-organized systems get out of control? The argu- ment of the unpredictable emergence of system's properties, mostly con- sentingly quoted in the context e.g. of organizational learning, can also be read from the opposite. Emergence also can imply unforeseeable, undesir- able behaviour. Michael Crichton has written a beautiful novel "Prey" (2002) which deals with this problem, but besides from science fiction we should take the issue seriously. If agent societies acquire emergent behaviour – who guarantees that the effects can still be controlled or at least managed in a way that is harmless to society? Do we have points of departure at all to intervene into self-organizing systems?21 21 Helmut Willke (1984) developed the concept of decentralized contextual control ("dezentrale Kontextsteuerung") as a means of a "soft" governance of systems. However, this concept leaves many questions open, e.g. if the Luhmannian con- cept of the functional differentiation of society enables one actor (or system?) to be in a privileged position to govern other systems (cf. Weyer 1993). 16 Weyer, Hybrid Systems The question becomes more critical, if safety is at stake, e.g. in traffic sys- tems. It is obviously an unresolved question, if self-organized systems can also guarantee a high level of safety, e.g. in aviation. Here we face a fun- damental dilemma – with no solution in sight. In principle smart devices enable the user of the system to create solutions for current problems (such as circumventing congestions in road traffic) which are superior to the user who operates in a conventional manner. The basic logic here is local optimisation, since the well equipped user normally doesn't have global information (there is no need for) and doesn't take care of the external effects of her/his actions (there is no need for, either), but optimises her/his performance regardless of the consequences for other users as well as for the global system. 
For example in current telematic systems for road traffic there is no direct feedback concerning the individ- ual's actions to the system and especially no check if her/his evasive ac- tions are prudent viewed from the standpoint of the global system. In densely coupled traffic systems such as aviation or railway transporta- tion the safety architecture mostly is based on the principle of global opti- misation (cf. TA-Swiss 2003a). The governance structures are shaped by the top objective of overall system's safety, which is – for instance in civil aviation – realized by giving strict orders to the participants they have to obey to.22 In road traffic most systems give only recommendations to the users or operate with incentives such as road-pricing which try to inhibit the users of taking actions which are undesirable regarded from the sys- tem's point of view (such as the displacement of traffic in urban areas). Future telematic systems are already being designed, where smart devices ("agents") function as an interface which observe the users ("spy-chips") and communicate in both directions thus controlling the user's behaviour in order to align her/him to the system.23 The control architecture intends to guarantee the system's performance and efficiency (with a load factor as high as possible) as well as the safety of operations (with an accident rate as low as possible). 22 In many cases – e.g. in aviation – all participants agree, that safety is the top objective, and therefore comply to the rules. However, a strict commitment to the system's rules and norms doesn't provide the opportunity to gain additional indi- vidual profit, which mostly derives from a (rational) deviation of standard behav- iour. 23 Cf. TA-Swiss 2003a. For example the "Ruhrpilot" is an innovative telematics sys- tem for the densely crowded Ruhr district. The designers of this system are cur- rently debating an option (among others), where the users will have to make res- ervations in advance, to allow the system to compute the probable effects and to offer recommendations or even directives (cf. Spehr 2004). Weyer, Hybrid Systems 17 Noticeably we face a "double trap": If we rely on self-organization, systems may get out of control – with irreversible consequences, and we have no measures to recapture them. But if we construct control structures (in or- der to cope with unintended effects of self-organization), we will – accord- ing to Rochlin – run the risk to transform our society in a totalitarian way, that eliminates self-organization and social learning at all. 2.6 Conclusion: The release of hybrid systems as a large-scale societal experiment The release of agent systems to the real world can thus be regarded as a large-scale social experiment which aims at creating a new type of interac- tion of man and machine, where human decision makers and autonomous agents which act almost like humans, communicate, interact and coordi- nate their actions. The results of the confrontation of these two species are difficult to assess, but there is some evidence, that human beings can no longer treat technology as a dump device which – normally – acts accord- ing to their demands and commands, which are part of a strategic action (traditional instrumental notion of technology). 
If modern technology be- comes intelligent, adaptive and even interactive (new notion of "intelligent technology"), the style of interaction may change: Human beings will find themselves more and more in an – increasingly virtual – environment which has been shaped (or is even controlled) by agent systems. The results may be that in the near future the human parts of these com- plex hybrid systems will no longer have the free space to act accordingly to their strategies but are increasingly restricted by the system's current state – which changes frequently due to its internal dynamics – and are finally forced to adapt to these changing boundary conditions of action. The con- frontation of human societies and agent societies thus may eventually lead to a role reversal: If smart agents behave more and more like strategic actors, human actors may be forced to behave like adaptive role players. Given this analysis, the establishment of hybrid systems implies the de- struction of previous social structures and the subsequent construction of new socio-technical structures, i.e. patterns of interaction, social norms, and societal institutions. However, the question arises how this new social order of a hybrid society will emerge and which shape it will take. Looking back to history it took some thousands of years to establish the kind of social order we are now used to live in.24 The stabilization of modern pat- terns of individual behaviour and social interaction was the result of a long and painful process of societal trial-and-error. And it was only the historical 24 Cf. Popitz 1995, ten Horn-van Nispen 1999. 18 Weyer, Hybrid Systems chance of a coincidence of different critical factors in 17th century Britain that gave birth to the modern industrial society which we are now going to transform. The upcoming transformation of the knowledge society into a hybrid soci- ety will presumably imply – and require as well – some fundamental changes of the ways we are used to live and work. The opportunities of this new society can only be discovered by experience, i.e. by making – carefully conducted – experiments which explore the potentials of the new technology. However, society must be aware that this kind of real-world experiments generates results and consequences which may prove to be irreversible.25 Mostly it is almost impossible to abolish a technology once it has been released to the real world. This will apply to hybrid systems at least as much as it did to former innovations. The experimental release of hybrid systems into society will comprise a societal learning process, aimed at acquiring the capabilities to cope with new challenges and new types of risk. The experimental development of a radically new technology will presumably require a learning period of at least 15 years in which a number of incidents and accidents may happen. However this process must also include the construction of new institu- tions, i.e. modes of – trust-based – social interaction and integration as well as new modes of control of hybrid systems, which will probably need a longer period of time. 25 Cf. Krohn/Weyer 1994. Weyer, Hybrid Systems 19 3 Case Study TCAS In the following section a case study on the Traffic Alert and Collision Avoidance System (TCAS) will be presented, which explores the operational logic of a hybrid system, discusses the challenges and risks and raises some – tentative – questions about different control logics and system ar- chitectures. 
The experiences which have been gathered with the interaction of human actors and non-human agents in TCAS-guided aircraft during the last ten years are interpreted in the following as part of the effort to create a new type of socio-technical order in hybrid societies.

3.1 History and operational logic of TCAS

TCAS is an airborne short-range collision avoidance system which has been mandatory equipment on modern passenger aircraft in the U.S. since 1994 and in the European Union since 2000 (Denke 2001, Nordwall 2002). Its purpose is to prevent midair collisions by warning the pilot when another plane is within a predefined range of about six kilometres (which is about 40 seconds of flight time). This is especially important at night or when weather conditions are bad.26

26 For more details see VC 1997, BFU 2004b, Reineke 2004 (annex of this paper).

Chart 1: TCAS (source: Rannoch 1998)

If a TCAS system detects a conflict situation, it warns the pilot ("traffic advisory") and some seconds later issues commands to climb or to descend ("resolution advisory"). If both aircraft which are part of the conflict are equipped with TCAS, "they will communicate to avoid mirror-image manoeuvres" (Nordwall 2002). The recent version of TCAS is even capable of reversing the manoeuvre by making a dynamic adjustment, e.g. when a descending aircraft detects that the approaching aircraft also descends. Previous versions lacked this capability.27

27 The history of TCAS is a nice illustration of the argument that technological progress may generate new, uncommon risks: version 6.04, introduced in 1994, was able to make "a dynamic reversal when a TCAS-equipped aircraft encountered a non-TCAS aircraft that made a mirror manoeuvre" (Nordwall 2002). But it was unable to do so if both aircraft were equipped with TCAS; this "quirk" (ibid.) was only overcome in 1999. Additionally, the first generation of the system, called TCAS I, provided the pilot only with information and let her/him decide what to do, while the second generation, TCAS II, gives precise commands (Venik 2002).

TCAS thus can be regarded as a multi-agent system where two (or even three) agents "communicate intentions" (Nordwall 2002) and coordinate their actions autonomously. Even if the pilot finally takes the action, the decision to act is made by a set of communicating agents. U.S. pilots are told to obey TCAS strictly – for reasons to be discussed further below. TCAS is regarded as a "foolproof" system, and pilots "have a high regard" (Nordwall 2002) for this device (cf. also VC 1997). Nevertheless, TCAS was one cause among others of the midair collision of a Russian Tupolew Tu154M and a DHL cargo Boeing B757-200 on July 1, 2002 over the Bodensee in southern Germany, killing all 71 people aboard. Both aircraft had been equipped with the latest version of TCAS. Before describing the chain of events that led to the disaster, it is necessary to understand the role TCAS plays in the overall system of air transport safety.

3.2 ATC and/or TCAS?

The well-established system whose task is to avoid accidents in aviation is ground-based air traffic control (ATC). ATC centres are equipped with modern – mostly redundant – devices to detect airplanes far away. They have global information about schedules and current traffic on the different flight levels. Moreover, they can communicate with the airplanes by radio telephony, but also by asking the transponders of the planes to communicate all relevant data. So in an ideal situation ATC has complete information. The ATC system is a typical example of a hierarchical, centrally controlled system which in principle is able – referring to Perrow (1984) – to guarantee a high degree of safety.28

28 However, Perrow also points to the fact that the introduction of new safety technology mostly leads to an increase of the load factor, thus absorbing at least some of the safety gains. As Stephan Cramer (2001, 2003) showed in the case of 19th-century navigation, new safety devices led to an unintended acceleration, thus creating new risks.

However, since the 1960s an alternative safety system has been developed which functions according to a very different logic, namely decentralized self-organization. As early as 1956 the American physicist John S. Morell created a mathematical formula for the calculation of the "tau" time, which is the time-to-go to a collision of two moving objects.29 The first aircraft collision avoidance system, implemented in 1961, still generated too many false alarms. The first test flight of TCAS took place in 1982, but the device was still unreliable and therefore was not accepted by the pilots. But the version introduced in 1994 is well accepted by airlines and pilots and has since prevented many midair collisions.

29 Cf. VC 1997, Aerowinx 2002, Venik 2002.

TCAS creates a communication link between two (or more) aircraft and so can avoid collisions independently of ATC ground control. It is obvious that this system can only work reliably if every aircraft is equipped with (at least similar) TCAS devices. And it generates questions about the coordination of ATC and TCAS which can be answered in different ways – either by replacing ground control with an independent system or by finding reliable ways of cooperation between the two systems.30 One cause of the Überlingen accident is the fact that different aviation communities have generated different answers to this question.

30 A Russian source, the reliability of which cannot be proved sufficiently, argues that the original TCAS system was designed to eliminate ATC and to control the aircraft directly by giving inputs to the autopilot; cf. Venik 2002. Due to technical and financial restrictions this design, however, had been reduced to a "simplified version" which left "a lot of room for errors" (ibid.). (The author obviously is well informed, but a little too single-minded about the question of blame.)

The U.S. Standard Operation Procedures (SOPs)

In the U.S. as well as in Europe, TCAS is regarded as an additional short-range "safety net" (VC 1997) that only issues warnings if the ATC system has already failed (Nordwall 2002). Pilots are trained to rely on TCAS and to obey its commands strictly in an emergency – especially since in these situations only a very few seconds remain to react properly. U.S. pilots are used to ignoring the commands of ground controllers, because they are aware of the fact that in such a critical situation ATC has imperfect knowledge, since the (decentralised) TCAS systems are not designed to communicate their data to the ATC computer.31 It is true that pilots are urged to report the TCAS commands to

31 TCAS doesn't provide the pilots with a more wide-range view because of the risk of false alarms and because of interferences with the ground control systems.
The U.S. Standard Operation Procedures (SOPs)

In the U.S. as well as in Europe, TCAS is regarded as an additional short-range "safety net" (VC 1997) that only issues warnings if the ATC system has already failed (Nordwall 2002). Pilots are trained to rely on TCAS and to obey its commands strictly in a case of emergency – particularly given the fact that in these situations only very few seconds remain to react properly. U.S. pilots are accustomed to ignoring the commands of ground controllers, because they are aware of the fact that in such a critical situation the ATC has imperfect knowledge, since the (decentralised) TCAS systems are not designed to communicate their data to the ATC computer.31 It is true that pilots are urged to report the TCAS commands to the ground controller by radio telephony, but this is obviously an unreliable method of overcoming the communication gap, especially in an emergency situation.32

31 TCAS doesn't provide the pilots with a more wide-range view because of the risk of false alarms and because of interferences with the ground control systems. However, it is currently debated whether next-generation TCAS will include a downlink capability (Nordwall 2002).

32 This case study generated some surprises for the author: firstly the fact that the pilot has a very incomplete knowledge of the situation, secondly the fact that radio telephony works very insufficiently (different aircraft at once, bad quality of communication etc.).

U.S. pilots rely on TCAS even though they know that this system has some shortcomings. In the U.S., TCAS is only mandatory for passenger aircraft with more than 30 seats, which means that freight planes and smaller passenger aircraft don't have to be equipped with TCAS. In the European Union there is another regulation, which puts the limit at a weight of 15 metric tons, thus including freight aircraft but likewise excluding smaller aircraft. Additionally, military planes use another mode for transmitting their codes and sometimes switch off their transponders altogether (e.g. fighter jets in close formation). Every pilot flying a TCAS-equipped aircraft must therefore be aware of the fact that there may be other aircraft in the vicinity which are invisible to him (VC 1997).33 Nevertheless, pilots' associations such as the German "Vereinigung Cockpit" recommend that their members rely on TCAS and adopt the U.S. SOPs (Denke 2001).

33 There are still some more limits of TCAS. In case of an RA the autopilot has to be disengaged and the plane must be flown manually (Aerowinx 2002) – another source of distraction and conflict, which shows that TCAS is not perfect. Additionally, in case of an interception of a civilian aircraft by a military plane – heavily discussed after 9/11 – compliance with an RA could be interpreted as an avoidance manoeuvre, eventually leading to a military action (cf. VC 2002).

The Russian Standard Operation Procedures

The Russian SOPs concerning TCAS are very different, because they distrust the system to some extent – maybe partly because it is U.S. technology, but mainly because of its well-known limitations and pitfalls. The Russian logic for coping with situations of conflict between ATC and TCAS is very simple: "In Russia pilots will take ATC's orders over the instructions of any onboard navigational system." (Venik 2002, emphasis added) Besides the invisibility of some sorts of aircraft (see above), the reason is that ATC has "a complete picture of the sky" which is based on redundant systems (TCAS is not redundant!). Since the ground controller has the superior ability to resolve possible conflicts, s/he can "inform the pilots of a possible collision 5-10 or more minutes in advance" (Venik 2002). Additionally, the quoted Russian source argues that in "most countries pilots are trained to take ATC's orders above anything else" (ibid.). It was not part of this case study to verify or falsify this statement, but the confusion in the skies and the conflict between different safety cultures is obvious.34

34 The report of the German Accidents Investigation Board (Bundesstelle für Flugunfalluntersuchung) clearly states that even the rules of the International Civil Aviation Organization (ICAO) were "inconsistent, full of gaps and partly contradictory" (BFU 2004b: 3).

This confusion is partly due to the fact that the two safety systems are not interconnected.

Chart 2: The "missing link" in aviation safety

A feedback link that informs the ground controller of the recommendations given by TCAS to the pilots would obviously help to improve the system's performance and avoid opposing commands. In the current state the conflict must be resolved by the lonely pilot, who has to decide – in a very short period of time – to ignore one of the two safety measures whose original intention had been to increase air transportation safety (cf. Venik 2002). And s/he takes the full responsibility for her/his actions – a typical pitfall of automation, which apparently cannot substitute for human decision making but creates situations of human decision making that are much more difficult to solve than before.
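What such a feedback link would have to transmit can be made concrete with a purely hypothetical sketch. The message format and all field names below are invented for the purpose of illustration; they do not describe any existing avionics data link.

    # Hypothetical sketch of the kind of message a TCAS-to-ATC feedback link
    # would have to carry. Format and field names are invented for illustration
    # and do not describe an actual avionics data link.

    from dataclasses import dataclass
    from enum import Enum

    class Sense(Enum):
        CLIMB = "climb"
        DESCEND = "descend"

    @dataclass
    class ResolutionAdvisoryReport:
        callsign: str          # placeholder identifier, not a real callsign
        flight_level: int      # e.g. 360 (FL 360, the level of both aircraft at Überlingen)
        sense: Sense           # vertical sense commanded by TCAS
        issued_at_utc: str     # time the RA was generated

    def notify_controller(report: ResolutionAdvisoryReport) -> str:
        """What the controller's display could show instead of relying on a radio call."""
        return (f"{report.callsign} at FL{report.flight_level}: TCAS commands "
                f"{report.sense.value} (issued {report.issued_at_utc} UTC)")

    # With such a link the controller at Zurich would have seen that the Tupolew's
    # TCAS was commanding a climb at 23:34:49, while he was instructing a descent.
    print(notify_controller(ResolutionAdvisoryReport("XYZ123", 360, Sense.CLIMB, "23:34:49")))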
However, the confusion is also partly due to the incompatible operational logics of the two systems: either you rely on central control and put the responsibility on the controller, who organizes the system's performance by hierarchical governance, or you rely on the problem-solving capacity of decentralized self-organization and distribute responsibility within the system. But you cannot do both at once. We'll come back to this question later.

Now it is easier to understand the Überlingen crash: the crew of the DHL cargo Boeing followed the U.S. SOPs and obeyed TCAS strictly, while the crew of the Russian Tupolew was divided on the question of which procedure to follow and finally adopted the Russian way, which urged them to rely on the ground controller and to neglect the recommendation of the TCAS – which unhappily was exactly the opposite. This conflict, however, could have been resolved easily if there had not been an unfortunate concatenation of events which contributed to the dramatics of the situation.

3.3 The Mid-Air Collision at Überlingen

The two planes crashed on July 1, 2002 at 23:35:32 at flight level FL 360, which is a height of about 10,000 metres. Both planes had been guided by the ATC at Zurich (Switzerland), where only one controller was on active duty, because it had been a common – informal – practice that in a quiet night shift one of the two controllers was allowed to retreat to the rest room.35 The present controller had taken over the two planes only a few minutes before, but only realized the conflict 43 seconds before the collision. When he issued his warnings, the TCAS systems of the two aircraft had already automatically generated their recommendations (TAs and RAs), and the tragedy was that he urged the Russian crew to descend – which was exactly the contrary of the TCAS recommendation.36

Additionally, an instructor who was not familiar with the TCAS system had taken over the role of the pilot in command (PIC) in the Russian plane37, while the other pilot – in normal situations the commander – acted as the pilot flying (PF). After the contrary recommendations had been issued, the crew was not only confused about the facts but also had to cope with authority and coordination problems, since the PIC demanded compliance with the commands issued by the ATC, while the PF insisted on relying on the TCAS. The rearrangement of the crew (on a night flight!) and the missing comprehension of the TCAS system on the part of the instructor were major causes of the subsequent mistake.38 So the question arises: Was it simply a human error?
35 This practice had been established when the ATC was still operated by three controllers, but it had been kept up when the crew was reduced to two people (cf. BFU 2004b).

36 It cannot be determined why the TCAS of the Boeing didn't perform a dynamic reversal when facing the Russian plane doing a mirror manoeuvre, since this capability has been reported to be one feature of recent TCAS versions (cf. Flottau 2002a).

37 It is unknown why instruction flights take place at night. Obviously there had been no awareness of the specific risks of this situation on the part of the Russian crew.

38 For more details see BFU 2004b, Reineke 2004 (annex of this paper).

Chart 3: The collision at Überlingen (source: FAZ 20.07.2002: 7)

Similar to other cases of highly automated systems, modern aircraft are conducted by people whose main task is to monitor the system's operation in the routine mode, where nothing happens for hours and hours. Steering a container ship over the ocean or a plane on an intercontinental flight is a very boring job, and it is difficult to overcome the monotony. Many accidents happen around midnight, frequently shortly after a change of shift, when only a minimum of personnel is on duty – the Exxon Valdez accident in Alaska in 1989 being one example (cf. Bossow 1999). Additionally, modern technological systems are designed to prevent the emergency case, which mostly works very well – but with the consequence that unforeseen situations occur only very rarely and can hardly be trained for systematically.

As in other cases, the investigation of the Überlingen crash revealed that the operating procedures for TCAS were confusing and inconsistent (BFU 2004b: 4, 7). The Russian crew, at least, was not familiar with the system and had only little experience with it. Getting familiar with a new technology, however, requires long and systematic training, which also includes the un-learning of previously acquired patterns of behaviour.39 Man-machine interfaces must therefore be designed in a way that facilitates decisions and actions which avoid error.

39 One example from my personal experience: in my previous car the automatic speed control could be switched off by tapping a button in the left part of the multi-functional steering wheel, thus driving the car a little bit like Michael Schumacher does. This happened, for example, when another car suddenly changed lanes. My new car has a slightly different layout of the driver's place. In the first couple of weeks I often tapped the place where the button used to be, until I slowly un-learned my old behaviour and subsequently established a new routine.

However, if we take a look at the organization of Skyguide, the operator of the ATC Zurich, and at the safety culture of this organization, we will find some more factors which contributed to the disaster:

That night there had been maintenance works at the ATC which required a partial shutdown of the radar system. As a consequence the Short Term Conflict Alert (STCA) system – a ground-based counterpart to TCAS which warns the controller of an upcoming conflict constellation – was only partly operating, i.e. it could not present its information optically on the screen. However, the controller was not aware of this. Generally, no one at Skyguide knew exactly which effects the maintenance works had on the overall performance of the ATC Zurich. For example, the telephone line was out of service for a few minutes.
Evidently there was no awareness that this constellation, with its coincidence of uncommon events, raised the risk and the probability of errors. Had there been such awareness, one possible consequence might have been to abandon the informal practice of one-man operation that night.

During the critical period of time (from 23:30 to 23:35) the attention of the unfortunate controller, who was later murdered by a relative of one of the Russian victims, was distracted by a third aircraft approaching the nearby airport of Friedrichshafen, which he had to guide too, working at two desks simultaneously on different radio frequencies. A colleague at Karlsruhe airport, who also observed the scene and had been alerted by his STCA, could not reach him by phone because the lines were out of service at exactly that moment. (Remember: the STCA at Zurich could not issue an optical warning at that time.) Therefore the controller at Zurich only realized the conflict between the Boeing and the Tupolew a few seconds before the crash. He had to switch abruptly from the routine to the emergency mode (cf. LaPorte/Consolini 1991), but it was too late to find a proper solution because of the parallel working – described above – of two badly coordinated control systems with opposite operational logics. He suddenly came under heavy pressure and made a number of mistakes: he didn't understand a radio message of the Boeing crew properly (partly because of several simultaneous messages at the two desks), and he didn't register the acoustic STCA warning.

Again: Was it a human error? Or organizational failure (at Skyguide as well as in the cockpit of the Tupolew)? Or was it a system's failure – the failure of a system that must fail because of complexity, of tight coupling, of lax safety cultures with a lacking awareness of risky constellations, and finally of unavoidable inattentiveness in boring night shifts in highly automated facilities and systems? In the following chapter these questions will be discussed at a more general level.

4 Lessons to be learned

4.1 The pitfalls of automation

Given this analysis it is easier to comprehend the causes of the Überlingen crash and to draw some general conclusions concerning the interaction of man and machine in complex hybrid systems. Obviously we can identify some typical causes which are well known from other cases of highly automated systems. On the other hand, however, we can also see that the presence of smart devices and their participation in coordination processes contributes to a previously unknown intensification of problems and risks. It remains an open question whether this development can be regarded as a minor step on a well-known path (and thus can be treated by implementing established means of control of technology) or whether we are now opening the door to a new stage of societal development (with the need to create new institutional mechanisms).

In some regards the Überlingen crash resembles other accidents in complex, tightly coupled, highly automated systems (cf. Perrow 1984, Bossow 1999, Weyer 1997). Routine work with boring monitoring duties leads to a low level of attention and awareness of risks – especially in systems regarded as self-controlling and inherently safe. In case of emergency, which mostly takes the operators by surprise, an unexpected interaction of the system's components produces a situation which is only partly understood and which can hardly be managed.
(This is due to the growing complexity of the system, the rare combinations of events and the mostly uncommon interrelations of parts of the system.)

In an emergency situation the operators, who are frequently trained insufficiently, suddenly find themselves in a situation where they have to take on the responsibility of controlling the facility. However, this implies making very difficult decisions, which emerge because of unfamiliar and previously unknown uncertainties. To some degree we can talk about an automation paradox: the re-entry of the human decision maker, who had been excluded from making first-order decisions (steering the plane on a climbing or descending path) and now has to make second-order decisions, i.e. determine whether the automation device gives the correct advice or not and whether s/he can rely on its recommendation to climb or to descend.

Obviously it is very difficult to cope with situations of this kind, and it requires intensive and systematic training – on the individual as well as on the organizational level – to achieve this capability. However, many organizations are not aware of the fact that they must be subsumed under the category of "high-reliability organizations" (cf. LaPorte/Consolini 1991), which can only guarantee safety through intense training and the development of an organizational safety culture. This especially applies in the case of technological change. Normally it takes a long time to create new routines, especially if well-established routine behaviour has to be unlearned. Organizations must take measures to prevent people from falling back on their trained routine behaviour in emergency situations. It is absolutely necessary to reflect on this process of change intensively and to build up the new behaviour by systematic training, which also implies placing oneself in realistic situations which support this training process. The pitfall of automation, however, is that automated devices aim at avoiding exactly those situations in which the operator can gain experience and learn from it.

One additional set of factors that contribute to accidents in complex systems is the acceleration of processes, the growing rigidity of the coupling of components, the permanent extension of the scope of the system and, finally, the increase of performance expectations (e.g. night flights, long-distance flights, instrument-guided flights, increasingly dense traffic). These critical factors enlarge the risks of failure. It is a typical dilemma of highly automated systems that the gains achieved by new safety devices are often eaten up by the higher performance expectations which are established in the name of efficiency (cf. Perrow 1984).

Besides the pitfalls of automation it was mainly organizational failure, the existence of divergent safety cultures as well as the irresolvable conflict of centralized and decentralized systems which contributed to the tragedy at Überlingen.

4.2 Coping with the risks of complex hybrid systems

Obviously there are two ways out of the automation dilemma: One can try to eliminate human error by extending the scope of automated decision making, i.e. by inventing more and more sophisticated ("smart") devices which substitute for human decision makers. However, this is an endless spiral which frequently intensifies the problems. For example, one could easily imagine a direct link between TCAS and the autopilot so that TCAS could find a proper solution without any human intervention. This strategy of forced automation could eventually lead to unmanned aerial vehicles (UAV), i.e. planes with no pilot on board (cf. Friese/Hein 2004). But extending the area of automated decision making only implies that the borderline between humans and machines is being moved; every technical system currently contains such a borderline. This applies especially to the issue of mode selection: someone has to identify in which kind of situation (e.g. landing approach versus cruising, or routine versus emergency situation) the system is currently operating, someone has to select the appropriate control mode, and someone finally has to make the switch from one mode to another.
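The mode-selection problem just described – some instance, human or software, has to recognize the situation and switch the control mode – can be sketched as a simple rule set. The states and triggering conditions below are the editor's illustrative assumptions, not a description of any actual flight control system; the point is merely that every rule has to be anticipated by a designer.

    # Minimal sketch of the mode-selection problem: some entity, human or
    # software, has to recognize the situation and switch the control mode.
    # States and triggering conditions are illustrative assumptions only.

    from enum import Enum, auto

    class Mode(Enum):
        ROUTINE_CRUISE = auto()
        APPROACH = auto()
        EMERGENCY = auto()

    def select_mode(altitude_ft: float, tcas_ra_active: bool) -> Mode:
        """In a fully automated design these anticipated rules replace the pilot's judgement."""
        if tcas_ra_active:           # an RA always forces the emergency mode
            return Mode.EMERGENCY
        if altitude_ft < 10000:      # illustrative threshold for an approach phase
            return Mode.APPROACH
        return Mode.ROUTINE_CRUISE

    # The designer has anticipated exactly these two triggers; a situation nobody
    # thought of (e.g. an RA contradicted by the controller) has no rule of its own.
    print(select_mode(36000, tcas_ra_active=True))   # -> Mode.EMERGENCY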
In principle there are two options: in partly automated systems the pilot may be the decision maker, who has gained the capability to act appropriately through training and experience, whereas in completely automated systems the decisions have been incorporated into the system's architecture (e.g. software). In this case software engineers have to anticipate the critical states in which the system shifts automatically to another mode. However, they are unable to check whether this solution is appropriate to the given situation unless they learn from experience, namely from incidents or accidents.40 Even in completely automated systems human error is possible and even unavoidable – especially if we take into consideration that the decision maker now is a person with no practical experience in flying planes (cf. Weissbach/Poy 1993, Gras 1994, Moricot 1994, Weyer 1997).

Obviously there is no way of eliminating human decision making, unless machines are able to reproduce themselves. So every kind of automation strategy must take into consideration the necessity of designing the communication between man and machine in a way that facilitates the reliable operation of the system. An alternative to forced automation and the elimination of the human decision maker might be an evolutionary learning process which includes the operators in the planning and designing of the system and which puts emphasis on better communication and coordination (which implies systematic training of operations in the standard as well as in the emergency mode).

40 This is the reason why accidents are – regarded from a methodological point of view – so valuable for application-oriented science; cf. Krohn/Weyer 1994.

5 Final remarks: Creating order in hybrid systems

5.1 A new mode of governance?

In some respects the debate on hybrid systems may lead beyond our current thinking on the management of complex systems, which mostly focuses on only two types: centralized control and decentralized self-organization.

For many years the mainstream of management literature has heavily promoted the advantages of decentralized organizations (cf. Peters 1992, Willke 1995, Resnick 1995). The essential argument of this approach has been that decentralized systems generate solutions which are superior to those which emerge from centralized ones (cf. chapter 2.5). However, advocates of self-organization could never answer convincingly the question of how these solutions emerged. The reference to a number of successful cases (from laser physics to innovation studies) could never cover the gap in theory, i.e.
the inability to give a proper theoretical description of the mechanism of "emergence".41 As in a pendulum movement, the debate currently seems to be shifting slightly back towards centralized control,42 but the debate has not yet been settled. And many open questions remain.

Hybrid systems allow for a new, but still unknown kind of interaction and control (as Rammert/Schulz-Schaeffer suggest) or even for a new type of central control of decentralized systems (referring to Rochlin). These systems seem to provide a combination of the advantages of the two well-known modes of governance: they rely on the problem-solving capabilities of self-organization, and they profit heavily from the advantages of central control. Thus they seem to be able to combine flexibility and efficacy. However, to date no one knows (neither empirically nor theoretically) whether this combination of governance modes really works. In their famous book "The Knowledge-Creating Company" (1995) Nonaka and Takeuchi have outlined a concept of "middle-up-down management" which combines the two modes. And they present a number of impressive case studies which support their opinion that companies which apply this concept are successful above average. But doubts arise as to whether we can easily transfer these experiences, which are based on face-to-face communication in innovation teams, to hybrid systems, which clearly violate many standards of the Nonaka/Takeuchi model.

41 This applies to the theories of Krohn/Kueppers (1989) as well as Esser (1991, 1993), cf. the critique of Greshoff/Schimank 2003.

42 For example, a new centralized approach to protecting the internet and providing safe operations is under discussion as a successor of the older decentralized approach; cf. FAZ 11.09.2004: 11 ("Alte Verbrechen mit neuer Technik") and 18 ("Intel will das Internet sicherer machen").

In view of this open debate about modes of governance, one could imagine the following four paths to the future in the field of air transportation safety:

1. A continuation of the established mode of hierarchical control via ATC.

2. The installation of a system that allows for a completely decentralized self-coordination of the aircraft, as it is now being developed in the U.S. as a successor of TCAS (cf. Hughes/Mecham 2004).

3. A recentralization, i.e. a centralized remote control of all aircraft by one control centre (similar to the case of German railways) – based on a fleet of unmanned aerial vehicles (UAV).43

4. A participatory approach which puts emphasis not only on automation, but also on the creation of an inter-organizational safety culture, in order to maintain and gradually improve the people's as well as the system's capabilities to operate complex systems and to cope with the risks.

5.2 The double trap

Considering the advantages and disadvantages of the different governance modes, one can argue that hybrid systems may lead society into a double trap:

• On the one hand, self-organizing systems are able to create innovative solutions by their own means and to generate emergent effects – a performance which can hardly be produced by centralized systems. However, these effects are – according to the concept of unintended consequences (cf. Vanberg 1975, Esser 1991) – unpredictable, which sometimes might also imply: undesirable and uncontrollable.
Self-organizing systems may get out of control, and we still don't have proper means to cope with this problem – maybe partly because of our lacking theoreti- cal understanding of the processes of self-organization. • On the other hand if we try to establish a network of control measures which integrates the system's elements and coordinates their behaviour in order to avoid or even eliminate undesirable effects and to achieve a global optimum (e.g. safety and reliability), we face the problem that 43 The option of a remote control of an aircraft, that has been heavily debated after 9/11, will not be discussed here. Weyer, Hybrid Systems 33 this kind of control may become totalitarian and even destroy the sources of self-organization. It is still an open question if we can create new institutions which can man- age these problems and integrate the two modes of governance. In any case a theory of central control of decentralized systems is still lacking. It is a challenge for future work in the field of technology studies as well as general sociology to develop theoretical models for the interaction of man and machine in hybrid systems. A major task will be to deal with the ques- tion, how new types of trust-based social (or socio-technical) relations be- tween human beings and smart objects can emerge. Social science can contribute to a deeper understanding of these processes and may help to shape the future of hybrid societies in a way that reduces the risks of this transformation. Reineke, TCAS 35 7 Anhang: Fallstudie Flugzeugunglück Überlingen 1. Juli 2002 7.1 Einleitung Das Flugzeugunglück bei Überlingen am Bodensee vom 1. Juli 2002 ist ge- kennzeichnet von vielen unglücklichen Zufällen, die in ihrer Verkettung zu dieser Katastrophe geführt haben. Ziel dieser Fallstudie ist es, den Unfall- hergang darzustellen (Kapitel 7.2) und dabei die Besonderheiten und Ei- genarten des Zusammenstoßwarngerätes TCAS aufzuzeigen (Kapitel 7.3), welches einen beträchtlichen Anteil an dem Zustandekommen des Unfalls hatte. Dazu ist es zunächst sinnvoll, sich ein genaues Bild der Funktionsweise die- ses technischen Systems zu verschaffen, bevor auf die Unfallursachen im einzelnen eingegangen wird (Kapitel 7.4). Diese lassen sich vor allem in systemische und unmittelbare Ursachen differenzieren und den Piloten der beteiligten Flugzeuge bzw. dem Fluglotsen zurechnen. Bei der Analyse dieses Unfalls wird deutlich, dass verschiedene soziologi- sche Theorien berührt werden, die mögliche Erklärungen dieser Katastro- phe anbieten. So wird zum Beispiel Perrows Theorie der eng gekoppelten Systeme ebenso berührt wie das Problem des Umschaltens vom Routine- in den Ausnahmezustand bei Überwachungs- und Automationsarbeit, wie es La Porte/Consolini (1991) beschreiben. Ein Fazit meinerseits soll diese Fall- studie abschließen (Kapitel 7.5). 7.2 Unfallhergang Am 1. Juli 2002 befand sich eine Tupolev TU154M auf dem Weg von Mos- kau (Russland) nach Barcelona (Spanien). Auf gleicher Flughöhe flog ein Frachtflugzeug (Boeing B757-200) von Bergamo (Italien) nach Brüssel (Belgien). Beide Flugzeuge flogen nach Instrumentenflugregeln und wur- den vom ACC (Area Control Center) Zürich geführt. An Bord der beiden Maschinen befanden sich 71 Passagiere (vgl. BFU 2004b, S. 1). Um 22:48 Uhr startete die Tupolev ihren Flug in Moskau. Bereits um 23:15 Uhr erreichte sie deutschen Luftraum und wurde zunächst von München aus geführt. Skyguide in Zürich übernahm die Kontrolle um 23:30 Uhr. 
Das Flugzeug flog einen Kurs von 274° in einer Höhe von ca. 10.000 Metern (Flight Level 360). Bereits um 23:34:42 Uhr warnte das bordseitige Zu- sammenstoßwarngerät TCAS durch eine Traffic Advisory (TA) die Besat- zung vor möglichem Konfliktverkehr. Sieben Sekunden später erteilte ACC Zürich der Besatzung die Anweisung zum Sinkflug und wies dabei ebenfalls auf Konfliktverkehr hin. Diese Anweisung wurde von der Crew zwar nicht direkt per Funkspruch bestätigt, aber dennoch sofort umgesetzt. Zur glei- 36 Reineke, TCAS chen Zeit generierte das bordseitige TCAS eine Resolution Advisory (RA) für einen Steigflug. Um 23:34:56 Uhr wiederholte der Radarlotse seine Anwei- sung an die Besatzung, zügig auf eine Höhe von FL 350 zu sinken. Die Be- satzung bestätigte diese Anweisung diesmal sofort. Die Befolgung der, im Vergleich zur Anweisung des TCAS-Systems, genau gegenteilige Anweisung des Fluglotsen stellte einen schweren Fehler des Piloten der Tupolev dar, was sich in Kapitel 4 noch genauer zeigen wird. Nachdem die Crew die An- weisung des Radarlotsen bestätigt hatte, informierte dieser die Besatzung, dass sich anderer Flugverkehr in der "2-Uhr-Position" ebenfalls in einer Höhe von FL 360 befinde (vgl. Aarons 2002). Dies war allerdings nicht der Fall. Das andere Flugzeug befand sich in der "10-Uhr-Position". Die Boeing 757-200 startete ihren Flug um 23:06 Uhr in Bergamo. Um 23:30 Uhr flog sie unter der Leitung ACC Zürichs in einer Höhe von FL 360 auf einem Kurs von 004°. Bereits um 23:34:42 Uhr warnte das bordseitige TCAS durch eine Traffic Advisory (TA) die Besatzung vor möglichem Kon- fliktverkehr. 14 Sekunden später erhielt die Besatzung vom TCAS eine Re- solution Advisory (RA) für einen Sinkflug. Sie folgte diesem Kommando sofort und erhielt weitere 14 Sekunden später das Kommando stärker zu sinken ("increase descent"). Um 23:35:19 Uhr berichtete die Crew dem Radarlotsen, dass TCAS das Kommando zum Sinkflug gegeben hat (vgl. Aarons 2002). Um 23:35:32 Uhr kollidierten beide Flugzeuge nördlich der Stadt Überlingen (Bodensee). Alle 71 Menschen an Bord der beiden Flug- zeuge starben. 7.3 Funktionsweise TCAS Das Airborne-Collision-Avoidance-System (ACAS) wurde im Jahr 1993 durch die International Civil Aviation Organization (ICAO) im Annex 2 (Rules of the Air) als Standard festgelegt. Die Weiterentwicklung dieses Systems wurde 1995 durch die ICAO als "Standards and Recommended Practices" verabschiedet. In Europa war Eurocontrol wesentlich an der Entwicklung und Implementierung von ACAS beteiligt. Parallel zu der Entwicklung von ACAS auf Basis der ICAO Standards wurde in den USA das Traffic Alert and Collision Avoidance System (TCAS) entwi- ckelt. ACAS und TCAS waren als bordautonome und von Navigationsanla- gen unabhängige Systeme ausgelegt, sie waren jedoch nicht kompatibel zueinander. Erst mit der Entwicklung von TCAS II, Version 7 wurde die Kompatibilität zu den Anforderungen von ACAS hergestellt (vgl. BFU 2004a, S. 46). In den USA besteht seit dem 30. Dezember 1993 eine gesetzliche Ausrüs- tungspflicht mit TCAS II für zivile Luftfahrzeuge mit mehr als 30 Sitzplät- zen. Eurocontrol schlug in Europa die Einführung von TCAS im Jahr 1995 in Reineke, TCAS 37 zwei Schritten vor: Ab dem 1. Januar 2000 sollten alle Flugzeuge mit TCAS ausgerüstet werden, die über mehr als 30 Sitzplätze verfügen bzw. in der Gewichtsklasse über 15 000 kg liegen. Anschließend galt die Ausrüstungs- pflicht auch für Flugzeuge mit mehr als 19 Sitzplätzen bzw. in der Ge- wichtsklasse über 5 700 kg (ab 1. Januar 2005). 
Flugzeuge mit mehr als 30 Sitzplätzen, die nicht mit TCAS ausgerüstet sind, durften nach dem 30. September 2001 im Luftraum der European Civil Aviation Conference (ECAC) nicht mehr fliegen. TCAS ist ein bodenunabhängig arbeitendes Zusammenstoß-Warngerät, welches unabhängig von der Navigationsausrüstung des Flugzeuges und von den Piloten arbeitet. TCAS nutzt die Transponder anderer Flugzeuge als Informationsquelle, besitzt eigene Sende-, Empfangs-, Peilanlagen und Rechner zur schnellen Ermittlung von Flugbahnen sowie zur Generierung von Kommandos für die Piloten. Der TCAS-Gerätesatz ist in einem Flugzeug nur einmal vorhanden, wodurch eine Redundanz für einen ausfallfreien Betrieb nicht gegeben ist. Ein Flugzeug mit ausgefallenem TCAS darf nach Angaben der Minimum Equipment List (MEL) bis zu 10 Tage weiter betrie- ben werden (vgl. BFU 2004a, S. 46). Der Flugverkehr in dem von TCAS überwachten Flugraum wird auf Displays im Cockpit dargestellt. Die Piloten können die relative Position, die relative Flughöhe und den Trend der relativen Flughöhe der anderen Flugzeuge beobachten. Der TCAS-Gerätesatz besteht aus folgenden Komponenten: Quelle: BFU Untersuchungsbericht 2004, S. 47 38 Reineke, TCAS Neben der eigenen Steig- und Sinkgeschwindigkeit werden Position und Bewegung eines oder mehrerer überwachter Flugzeuge und deren relative Höheninformationen visuell dargestellt. Zusätzlich erfolgt ein akustisches Signal bei einer Traffic oder Resolution Advisory (vgl. BFU 2004a, S. 47- 50). Basis für das Collision-Avoidance-System (CAS) ist die Verfolgung der Flug- bahnen der im Überwachungsbereich befindlichen Luftfahrzeuge. Das TCAS-System überwacht den Luftraum bis 40 NM (ca. 74 km) vor dem Flugzeug, bis 15 NM (ca. 28 km) hinter dem Flugzeug, bis 20 NM (ca. 37 km) seitlich des Flugzeugs und bis ca. 9000 ft (ca. 2,74 km) oberhalb bzw. unterhalb des eigenen Flugzeuges.1 TCAS verfolgt die Flugbahnen aller Luftfahrzeuge mit in Betrieb befindlichen Transpondern im Überwachungs- bereich und errechnet mögliche Konflikte nach folgenden Parametern: Schrägentfernung, Peilwinkel, Annäherungsgeschwindigkeit, Flughöhe und vertikale Geschwindigkeit. Ein Konflikt wird durch den "closest point of ap- proach" bestimmt, an dem die notwendige Sicherheitsentfernung zu einem anderen Flugzeug nicht mehr eingehalten werden kann. In diesem Fall be- steht Kollisionsgefahr. Die verbleibende Zeit bis zu diesem Punkt wird jede Sekunde neu berechnet. Wenn die errechnete Zeit unter vorgegebene Wer- te sinkt, werden TA- oder RA-Kommandos ausgegeben (48 Sekunden vor der Kollision eine TA, 35 Sekunden vorher eine RA). Das TCAS-System ko- ordiniert die RAs in den am Ausweichmanöver beteiligten Flugzeugen, so dass diese praktisch gleichzeitig generiert werden. Die RAs sind immer ge- gensätzlich in ihrer Richtung, so dass in dem einen Flugzeug ein Steigflug und in dem anderen ein Sinkflug empfohlen wird. Ausweichmanöver ge- schehen nur vertikal. Das TCAS kann nach der ersten RA weitere RAs gene- rieren, die den sich ändernden Bedingungen angepasst sind. TCAS II, Ver- sion 7 kann sogar eine Umkehrung der ursprünglichen Ausweichrichtung generieren. 7.4 Unfallursachen Die Ursachen des Flugzeugabsturzes bei Überlingen können grundsätzlich in unmittelbare Ursachen und systemische Ursachen gegliedert werden. Unmittelbare Ursachen stehen dabei in direkter Beziehung zu dem zu un- tersuchenden Unfall, wohingegen systemische Ursachen vom Einzelfall ab- hängige Mängel aufzeigen, die prägenden Einfluss auf das Unfallgeschehen gehabt haben (vgl. 
BFU 2004b, S. 2). 1 Eine nautische Meile (NM) entspricht 1,852 Kilometern (km), ein foot (ft) 30,48 cm. Reineke, TCAS 39 Unmittelbare Ursachen: ƒ ACC Zürich als Flugsicherungskontrollstelle hat die drohende Staffe- lungsunterschreitung nicht rechtzeitig bemerkt und somit ist die Anwei- sung zum Sinkflug an die Tupolev Besatzung zu spät erfolgt (vgl. BFU 2004b, S. 2).2 ƒ Die Crew der Tupolev folgte der Anweisung des Radarlotsen, einen Sinkflug einzuleiten, und befolgte diese auch weiter, als TCAS sie zum Steigflug aufforderte. Damit wurde ein zur TCAS-RA entgegengesetztes Manöver durchgeführt. Bei dieser Entscheidung wurde nicht beachtet, dass eine RA ein vertikales Ausweichmanöver ist, bei dem die beteilig- ten Luftfahrzeuge jeweils entgegengesetzte Kommandos zur Kollisions- vermeidung erhalten. Manöver, die entgegengesetzt zur eigenen RA ge- flogen werden, sind somit zu vermeiden (vgl. BFU 2004b, S. 110-111). Systemische Ursachen: ƒ Die Integration von ACAS/TCAS II in das System Luftfahrt war unzurei- chend und entsprach nicht in allen Punkten der Systemphilosophie. Die Regelungen der nationalen Luftfahrtbehörden waren widersprüchlich (Russland vs. Europa) und die Betriebs- und Verfahrensanweisungen des TCAS-Herstellers und der Luftfahrtunternehmen waren nicht ein- heitlich und lückenhaft (vgl. BFU 2004a, S. 112). ƒ Die Führung und das Qualitätsmanagement des Flugsicherungs- unternehmens ACC Zürich gewährleistete keine permanente Besetzung der geöffneten Arbeitspositionen mit Flugverkehrsleitern im Nacht- dienst, so dass zum Zeitpunkt der Kollision lediglich ein Radarlotse an- wesend war, welcher gleichzeitig an zwei verschiedenen Arbeitsplätzen arbeitete. Es war beim ACC Zürich üblich, dass sich ein Lotse zu ver- kehrsarmen Zeiten in der Nacht zur Ruhe begab (vgl. BFU 2004b, S. 4). Durch diese Ausführungen wird deutlich, dass ein Flugunfall ein sehr kom- plexes Geschehen ist, was sich daraus ergibt, dass verschiedene Systeme innerhalb des Gesamtsystems Luftfahrt zusammenspielen: zwei Flugzeuge als technische Systeme, die Besatzungen der Flugzeuge, ein System der Flugsicherung mit den handelnden Personen, Umgebungsfaktoren sowie Regelwerke und Vorschriften. 2 Zur Unfallzeit war – bedingt durch den "Fallback-Modus" – die normale horizontale Staffe- lung von 5 auf 7 NM erhöht worden. Alle Flugzeuge, die in derselben Höhe flogen, sollten somit einen horizontalen Abstand von mindestens 7 NM voneinander haben. Um 23:34:56 Uhr wurde dieser Abstand unterschritten, d.h. zu diesem Zeitpunkt hätte die Tupolev bereits sinken müssen, um den Abstand aufrecht zu erhalten. Dazu wäre es nötig gewe- sen eine Anweisung zum Sinken zu geben, was der Fluglotse nicht machte, da er den verspäteten Anflug eines A320 auf Friedrichshafen regelte (vgl. BFU 2004 b, S. 109). 40 Reineke, TCAS Dieser Unfall hat sich ereignet, weil viele Ereignisse und Umstände zusam- mentrafen, die für sich allein betrachtet nur eine geringe Bedeutung für die Flugsicherheit gehabt hätten. Im zeitlichen Zusammentreffen aber verknüp- fen sie sich, beeinflussen und verstärken sich gegenseitig und setzen so die Ereigniskette fort, welche nicht unterbrochen wurde und zum Unfall führte (vgl. Perrow 1987). Verkettung unglücklicher Umstände: ƒ Beim ACC Zürich wurden in der Nacht vom 1. zum 2. Juli 2002 Sektori- sierungsarbeiten durchgeführt, bei denen Kontrollsektoren neu geord- net werden sollten. Diese Arbeiten waren mit den benachbarten Flug- verkehrskontrollstellen nicht koordiniert worden. 
ƒ Während dieser Zeit wurde das Radarsystem im "Fallback-Modus" be- trieben. Der Lotse war sich nicht bewusst, dass das optische Short Term Conflict Alert (STCA) in diesem Modus nicht mehr dargestellt wird, und war darüber auch nicht informiert worden. ƒ Die direkten Telefonverbindungen zu den benachbarten Flugsiche- rungsdiensten standen in der Zeit von 23:23 Uhr bis 23:34:37 Uhr nicht zur Verfügung. Eine automatische Umschaltung für eingehende Gesprä- che auf das Reservesystem existierte nicht. ƒ Der Fluglotse musste an zwei Arbeitsplätzen arbeiten, da er auf einen verspäteten Anflug eines Airbus A320 auf Friedrichshafen aufmerksam wurde. Diesen Flug überwachte er bis zu diesem Zeitpunkt noch nicht. Dazu war es notwendig einen zweiten Arbeitsplatz mit einer anderen Funkfrequenz zu öffnen und den Anflug mit Friedrichshafen telefonisch zu koordinieren. Da das Telefon aber nicht funktionierte, musste der Lotse einen anderen Weg zur Koordination finden, was Aufmerksamkeit und Zeit kostete (vgl. BFU 2004b, S. 2). Insgesamt kann also festgehalten werden, dass der Fluglotse die Ausnah- mesituation nicht erkannt hat und deshalb auch noch kurz vor dem Unglück im "Routinemodus" gearbeitet hat. Das Zusammenspiel der unglücklichen Umstände erforderte ein schnelles Umschalten von Routinehandeln in eine Stresssituation (vgl. La Porte/Consolini 1991). Dabei war der Fluglotse of- fensichtlich überfordert. In Verbindung mit den anderen Umständen und der Fehlentscheidung des Piloten der Tupolev kam es schließlich zur Katast- rophe. Die Rolle der Piloten Bevor man das Verhalten und die Kompetenz der Piloten untersucht, er- scheint es mir wichtig, auf die Mensch-Maschine-Schnittstelle zwischen dem TCAS-System und dem Piloten genauer einzugehen. Zunächst ist festzuhal- ten, dass TCAS als letztes Sicherheitsnetz zur Vermeidung von Kollisionen konstruiert wurde. Wird eine RA von diesem System generiert, so kann man davon ausgehen, dass andere Mechanismen zur Kollisionsvermeidung, Reineke, TCAS 41 wie z.B. die Höhenstaffelung durch den Fluglotsen, nicht ausreichend wirk- sam oder fehlerhaft waren (vgl. BFU 2004a, S. 79). Die Crew muss der RA somit ohne Zeitverzug folgen und das durch TCAS initiierte Ausweichmanö- ver dem Fluglotsen mitteilen. Jedes andere Verhalten, so die BFU, würde dem Sinn von TCAS entgegenstehen. Das Gesamtsystem TCAS arbeitet halbautomatisch, d.h. das beabsichtigte Ziel, eine Kollision zu vermeiden, kann nur durch die Mitwirkung des Men- schen erreicht werden. Voraussetzung dafür ist nach Angaben der BFU fol- gende Arbeitsteilung zwischen Mensch und Maschine: Da TCAS in beiden Flugzeugen nachträglich eingebaut wurde, waren die TCAS-Anzeigen innerhalb spezieller Variometer integriert. Dadurch muss ein sehr kleines Display in Kauf genommen werden. Dieses erfordert eine hö- here Aufmerksamkeit durch die Crew. Die Anzeige der "Intruder" mit Sym- bolen, die ihre Form und Farbe ändern, sowie die Darstellung der Höhendif- ferenzen und Trendinformationen, können so leichter übersehen werden (vgl. BFU 2004a, S. 83). 3 Die Crew der Boeing B757-200 Für die Crew der Boeing galten die Regelungen des deutschen Luftfahrt- handbuchs (AIP), in welchen zur Anwendung von TCAS folgende Hinweise gegeben werden: ƒ "Alle Resolution-Advisory-Anzeigen (korrektiv oder präventiv) sollen befolgt werden, es sei denn, der Luftfahrzeugführer kann den sich auf Kollisionskurs befindlichen Verkehr nach Sicht identifizieren und ent- scheidet selbst, dass keine Abweichung vom gegenwärtigen Flugverlauf erforderlich ist. 
Auf Änderungen der Ausweich-empfehlungen soll der Luftfahrzeugführer umgehend unter Beachtung der Anzeige reagieren." (BFU 2004a, S. 55) 3 Ob dies bei der Tupolev Crew der Fall war, kann nicht geklärt werden. Quelle: BFU Untersuchungsbericht 2004, S.79 42 Reineke, TCAS ƒ "Falls entschieden wurde, einer (korrektiven oder präventiven) Resolu- tion-Advisory-Anzeige nicht zu folgen, soll niemals ein Ausweichmanö- ver entgegen der empfohlenen Richtung der Resolution Advisory Anzei- ge durchgeführt werden. Das ist besonders wichtig, da sich das System ohne Wissen des Luftfahrzeugführers mit anderen entsprechend ausge- rüsteten Luftfahrzeugen koordiniert." (BFU 2004a, S. 55) Hieraus lässt sich schließen, dass die Piloten in ihrer Autorität eingegrenzt werden und immer mehr von Techniken "beherrscht" bzw. in ihren Hand- lungsweisen eingeschränkt werden. Bei dieser konkreten Mensch-Maschine- Schnittstelle wird dem Piloten keine Entscheidungsfreiheit eingeräumt, da dies nicht mit der Systemphilosophie von TCAS II, Version 7 vereinbar sei, so die BFU (2004a, S.82). Das Zusammenspiel zwischen Mensch und Technik muss für einen risikolo- sen Betrieb optimal funktionieren, was meiner Ansicht nach nicht gewähr- leistet werden kann, wenn der Pilot das technische System nicht versteht oder im Umgang mit diesem zu wenig Erfahrung hat. Wie aus dem Untersuchungsbericht der BFU weiter hervorgeht, war der Pilot ein erfahrener Linien-Trainingskapitän mit fundiertem Können und Wissen. Auch der Copilot verfügte als ehemaliger Linien-Trainingskapitän über ein fundiertes Können und Wissen. Aus den Auswertungen des Cock- pit-Voice-Recorders (CVR) gehe zudem hervor, dass die Piloten die Situati- on in der sie sich befanden, richtig eingeschätzt hätten, so die BFU (2004, S. 98). Die Piloten reagierten schnell auf das Warnsignal des TCAS und schenkten diesem die nötige Aufmerksamkeit. Zudem gaben sie dem Flug- lotsen Rückmeldung über das von TCAS empfohlene Ausweichmanöver. Zusammenfassend lässt sich sagen, dass die Piloten in der Situation ange- messen reagiert haben. Zudem haben sie ihr Verhalten schnell der Stresssi- tuation angepasst, in der schnell gehandelt werden muss, weitreichende Entscheidungen getroffen werden müssen und die Handlung reflektiert werden muss. Dieses Verhalten steht in krassem Gegensatz zu der Routine- tätigkeit des Überwachens der Geräte und des Fliegens mithilfe des Autopi- loten (Gewährleistungsarbeit). Die Crew der Tupolev TU 154M In der Flugbetriebsanleitung für die Tupolev war die Bedeutung von TCAS im Unterschied zum Deutschen Luftfahrthandbuch folgendermaßen be- schrieben: ƒ "Zur Verhütung von Flugzeugkollisionen in der Luft ist die visuelle Kon- trolle der Situation im Luftraum durch die Crew und die korrekte Aus- führung sämtlicher Anweisungen der Flugverkehrskontrolle als Haupt- mittel anzusehen. Das TCAS-System ist ein zusätzliches Mittel, dass die rechtzeitige Feststellung entgegenkommender Flugzeuge, die Klassifi- zierung des Gefahrenpotentials und, falls erforderlich, die Ausarbeitung Reineke, TCAS 43 eines Kommandos zur Durchführung eines vertikalen Manövers gewähr- leistet." (BFU 2004a, S. 54) Durch diesen Ausschnitt wird deutlich, dass die Flugsicherung bei der Ver- meidung einer Kollisionsgefahr die höchste Bedeutung erhält und nicht das TCAS-System, wie es in dem "Luftfahrthandbuch Deutschland" der Fall ist. Die Besatzung der Tupolev setzte sich aus einem Kommandanten, Copilo- ten, Navigator, Flugingenieur und einem Instruktor (PIC) zusammen. 
Kommandant, Copilot, Navigator und Flugingenieur flogen normalerweise als ständige Cockpitbesatzung zusammen. Durch das Hinzukommen des Instruktors hatte sich die Besatzung umorganisieren müssen (vgl. BFU 2004a, S. 102). Der Instruktor war einer ihm vertrauten Rolle als "Pilot in Command" (PIC) zugeordnet. Auch die Rollen des Navigators und des Flug- ingenieurs unterschieden sich nicht zu vorherigen Flugeinsätzen. Der Kom- mandant jedoch hatte eine ihm neue Rolle inne, indem er auf dem linken Sitz als Pilot Flying (PF) saß, ohne gleichzeitig auch verantwortlicher Luft- fahrzeugführer zu sein. "Als sich die Konfliktsituation anbahnte, war der Pilot non Flying (PNF) für die Durchführung des Sprechfunkverkehrs zuständig und hatte die Sink- fluganweisung von ACC Zürich zu bestätigen. Davon war er zunächst abge- lenkt, da er seine Entscheidung (als PIC), die Sinkfluganweisung zu befol- gen, der Cockpitbesatzung erläuterte." (vgl. BFU 2004a, S. 102) Durch die falsche Auskunft des Fluglotsen, dass sich der Konfliktverkehr in der 2-Uhr- Position befinde, nahm die Verwirrung und das Bedrängnis der Besatzung weiter zu. Bei der Besatzung existierten zudem verschiedene Ansichten, wie das Problem gelöst werden sollte. So machte der Copilot den Piloten mehrmals darauf aufmerksam, dass TCAS eine entgegengesetzte Empfeh- lung zum Steigflug gab (vgl. BFU 2004a, S. 104). Festzuhalten bleibt, dass die Neuzusammenstellung der Flugbesatzung eine neue Rollenverteilung erforderte, welche mit einer Umstellung verbunden war und somit zu zusätzlichem Koordinations- und damit Zeitaufwand führ- te. Die Verwirrung nahm durch die Fehlauskünfte des Fluglotsen zu, was zu einer erhöhten Stresssituation führte. Das TCAS-System wurde nicht hinrei- chend beachtet, und die Systemlogik war dem Pilot in Command offenbar nicht bekannt, so dass er einen Sinkflug einleitete, welcher eine entgegen- gesetzte Handlung zu der Anweisung des TCAS darstellte. Zudem scheint die Flugbetriebsanleitung der Tupolev eine wesentliche Rolle bei der Ent- scheidung des PIC gespielt zu haben, einen Sinkflug einzuleiten, weil aus dieser hervorgeht, dass dem Fluglotsen Priorität vor dem TCAS-System eingeräumt wird, da dieser eine bessere Übersicht habe. Die Neuzusammenstellung des Teams, sowie die Nichtbeachtung des tech- nischen Systems TCAS können als Hauptursachen für die Fehlentscheidung des Piloten angeführt werden. 44 Reineke, TCAS Die Rolle des Fluglotsen Wie oben bereits angeführt, kann das "Versagen" des Fluglotsen auf die Verkettung der unglücklichen Umstände zurückgeführt werden, sowie auf die Tatsache, dass dieser bereits vor dem Unglück die Situation falsch ein- geschätzt hatte. Er handelte wie in einer Routinesituation, obwohl durch die Sektorisierungsarbeiten keine Routinesituation vorlag. Auch bei der Rolle des Fluglotsen ist auffällig, dass dieser Gewährleistungs- arbeit leistet und die Mensch-Maschine-Schnittstelle aus diesem Grunde besonders wichtig wird. Aufgrund der folgenreichen Entscheidungen, die in Situationen großer Unsicherheit und unter Stress zu fällen sind, kommt der Funktion des Fluglotsen, wie auch der Funktion des Piloten eine immanent wichtige Stellung zu, die meiner Ansicht nach nicht automatisiert werden kann. 7.5 Fazit Zum Abschluss dieser Fallstudie seien noch einmal die wichtigsten Punkte zusammengefasst. Das sozio-technische System Luftfahrt besteht aus vie- len miteinander interagierenden Personen und Maschinen. 
Anhand des Fall- beispiels wurde deutlich, dass gerade die Mensch-Maschine-Schnittstelle einheitlich und unmissverständlich gestaltet sein muss, damit das gesamte System reibungslos funktioniert. Am 1. Juli 2002 war dies nicht der Fall. So existierten unter anderem unterschiedliche Auffassungen über die Relevanz des TCAS-Systems an Bord der Flugzeuge. Die russische Besatzung verfolg- te die Anweisung des Fluglotsen entgegen der Anweisung, die TCAS gene- rierte. Die Crew der Boeing hingegen war darauf trainiert worden, die An- weisungen von TCAS denen des Fluglotsen vorzuziehen. Der offenbare Konflikt zwischen zentraler und dezentraler Steuerung des Systems Luft- fahrt führte in Kombination mit weiteren unglücklichen Umständen zur Ka- tastrophe. Die traditionelle zentrale Steuerung durch den Fluglotsen wird durch das neue dezentrale System TCAS unterlaufen. Es ist nicht einheitlich geklärt, wann und wie dies vom Piloten beachtet und befolgt werden soll. Durch die enge Kopplung der einzelnen Systeme (Flugzeug, Flugverkehrs- kontrolle etc.) können kleine Fehler große Folgen nach sich ziehen. Ein wei- teres Problem stellt die Überwachungstätigkeit der Piloten und Fluglotsen dar. Die überwiegende Zeit verbringen diese damit Anzeigen, zu überwa- chen, was eine sehr monotone Arbeit darstellt. In Ausnahmesituationen hingegen herrscht sehr hoher Stress. Die Piloten und Fluglotsen müssen blitzschnell reagieren und handeln. Dieses "Umschalten" vom Routinemo- dus in den Notfallmodus erfolgte bei dem Fluglotsen in diesem Fallbeispiel viel zu spät. Dieser war sich nicht bewusst, dass er sich bereits durch die Sektorisierungsarbeiten, die durchgeführt wurden, in einer Ausnahmesitua- tion befand. Weyer, Hybrid Systems 45 7 References Aarons, R.N., 2002: Cause & Circumstance: German Midair – TCAS Worked, ATC Didn´t, in: Business & Commercial Aviation 27.09.2002 (www.aviationnow.com/avnow/search/BasicSearch.jsp, 21.07.2004) [Aerowinx 2002] TCAS, www.aerowinx.de/html/tcas.html (29.07.04) Andres, M., 2004: Feature über Transpondertechnologie/RFID, www.elog- center.de (20.09.04) Beckenbach, N./Treeck, W.v. (eds.), 1994: Umbrüche gesellschaftlicher Arbeit. Göttingen: Otto Schwartz [BFU 2004a] Bundesstelle für Flugunfalluntersuchung: Untersuchungsbericht Mai 2004, Aktenzeichen AX001-1-2/02, Braunschweig (http://www.bfu- web.de/berichte/02_ax001dfr.pdf, 21.07.2004) [BFU 2004b] Bundesstelle für Flugunfalluntersuchung: Presseinformation zur Veröffentlichung des Untersuchungsberichtes über den Zusammenstoß einer Boeing B757-200 mit einer Tupolew TU154M am 1. Juli 2002 bei Überlingen am Bodensee (19. Mai 2004), www.bfu-web.de/040519_Pressetext.pdf (16.06.04) Bossow, G., 1999: Mayday, Mayday. Schiffshavarien der 80er und 90er Jahre, Stuttgart: Pietsch-Verlag Brooks, R., 2002: Flesh and Machines, New York: Pantheon (dt.: Menschmaschinen. Wie uns die Zukunftstechnologien neu erschaffen. Campus: Frankfurt/M., quotes from the German issue) Collins, H.M./Yearley, S., 1992: Epistemological chicken, in: A. Pickering (ed.), Science as Practise and Culture, Chicago: Chicago UP, 301-326 Crichton, M., 2002: Prey, London: HarperCollins Christaller, T., et al., 2001: Robotik. Perspektiven für menschliches Handeln in der zukünftigen Gesellschaft, Berlin: Springer Christaller, T./Wehner, J., (eds.), 2003: Autonome Maschinen - Perspektiven einer neuen Technikgeneration, Wiesbaden: Westdeutscher Verlag Cramer, S., 2001: Die unbeabsichtigte Entdeckung der Schnelligkeit - Wie Sicherheitssysteme die Schifffahrt im 19. 
Jahrhundert gefährdeten, Bielefeld (Diss.)
Cramer, S., 2003: How safety systems made seafaring risky - Unintended acceleration in the 19th century, Dortmund (Arbeitspapier Nr. 3 des Fachgebiets Techniksoziologie)
Denke, C., 2002: Zum Zusammenstoß von Überlingen (Juli 2002), www.vcockpit.de/politik.php?artikel=34 (30.08.04)
Epstein, J.M./Axtell, R., 1996: Growing Artificial Societies. Social Science from the Bottom Up, Washington D.C.: Brookings Inst. Press
Esser, H., 1991: Alltagshandeln und Verstehen. Zum Verhältnis von erklärender und verstehender Soziologie am Beispiel von Alfred Schütz und 'Rational Choice', Tübingen: Mohr
Esser, H., 1993: Soziologie. Allgemeine Grundlagen, Frankfurt/M.: Campus
Flottau, J., 2002a: TCAS, Human Factors at Center of Midair Probe, in: Aviation Week & Space Technology, Vol. 157, Issue 3 (July 17, 2002): 33
Flottau, J., 2002b: Human Factors Role Cited in German Crash, in: Aviation Week & Space Technology, Vol. 157, Issue 11: 44
Friese, U./Hein, C., 2004: Rüstungshersteller setzen auf den Verkaufserfolg von Drohnen. Milliardenmarkt für unbemannte Fluggeräte. Großbritannien will verstärkt in "intelligente" Waffensysteme investieren, in: FAZ 24.07.2002: 12
Grande, E./Fach, W., 1992: Emergent Rationality in Technological Policy: Nuclear Energy in the Federal Republic of Germany, in: Minerva 30: 14-27
Gras, A./Moricot, C./Poirot-Delpech, S.L./Scardigli, V., 1994: Faced with Automation. The Pilot, the Controller and the Engineer, Paris: Publications de la Sorbonne
Greshoff, R./Schimank, U., 2003: Die integrative Sozialtheorie von Hartmut Esser (unpubl. paper)
Hubig, C., 2003: Selbständige Nutzer oder verselbstständigte Medien - Die neue Qualität der Vernetzung, in: Mattern 2003: 211-229
Hughes, D./Mecham, M., 2004: 'Free-Flight' Experiments, in: Aviation Week & Space Technology, June 7, 2004: 48-50
Knorr-Cetina, K./Mulkay, M., (eds.), 1983: Science Observed: Perspectives on the Social Study of Science, London: Sage
Krohn, W./Küppers, G., 1989: Die Selbstorganisation der Wissenschaft, Frankfurt/M.: Suhrkamp
Krohn, W./Weyer, J., 1994: Society as a laboratory: the social risks of experimental research, in: Science and Public Policy 21: 173-183
Kron, T., (ed.), 2002: Luhmann modelliert. Sozionische Ansätze zur Simulation von Kommunikationssystemen, Opladen: Leske + Budrich
Krücken, G./Weyer, J., 1999: Risikoforschung, in: S. Bröchler et al., 1999: Handbuch Technikfolgenabschätzung (3 Bde.), Berlin: edition sigma, 227-235
Langheinrich, M., 2003: Vom Machbaren zum Wünschenswerten: Einsatzmöglichkeiten des UbiComp, paper presented at the workshop "Mobil in intelligenten 'Welten' - Szenarien - Visionen - Trends", Universität Stuttgart, 11.-12.12.2003
LaPorte, T.R./Consolini, P.M., 1991: Working in Practice But Not in Theory: Theoretical Challenges of "High-Reliability Organizations", in: Journal of Public Administration Research and Theory 1: 19-47
Latour, B., 1983: Give Me a Laboratory and I will raise the World, in: Knorr-Cetina, K./Mulkay, M., (eds.), Science Observed: Perspectives on the Social Study of Science, London: Sage, 141-170
Latour, B., 1988: Mixing Humans and Nonhumans Together: The Sociology of a Door-Closer, in: Social Problems 35: 298-310
Latour, B., 1994: Der Berliner Schlüssel, Berlin (WZB FS II 94-508)
Latour, B., 1996a: On actor-network theory. A few clarifications, in: Soziale Welt 47: 369-381
Latour, B., 1998: Über technische Vermittlung. Philosophie, Soziologie, Genealogie, in: W. Rammert (ed.), 1998: Technik und Sozialtheorie, Frankfurt/M.: Campus, 29-81
Malsch, T., 1997: Die Provokation der "Artificial Societies". Warum die Soziologie sich mit den Sozialmetaphern der Verteilten Künstlichen Intelligenz beschäftigen sollte, in: Zeitschrift für Soziologie 26: 3-21
Malsch, T., (ed.), 1998: Sozionik. Soziologische Ansichten über künstliche Sozialität, Berlin: edition sigma
Mattern, F., (ed.), 2003: Total vernetzt. Szenarien einer informatisierten Welt (7. Berliner Kolloquium der Gottlieb Daimler- und Karl Benz-Stiftung), Heidelberg: Springer
Meister, M., et al., 2002: Die Modellierung praktischer Rollen für Verhandlungssysteme in Organisationen. Wie die Komplexität von Multiagentensystemen durch Rollenkonzeptionen erhöht werden kann, Berlin (Technical University Technology Studies Working Papers TUTS-WP-6-2002)
Moravec, H., 2000: Die Roboter werden uns überholen, in: Spektrum der Wissenschaft - Spezial: Forschung im 21. Jahrhundert (1/2000): 72-79
Moricot, C., 1994: Die Leidenschaft des Ikaros. Risikowahrnehmung und Sicherheit in der Luftfahrt, in: Beckenbach/v. Treeck 1994: 359-368
Nonaka, I./Takeuchi, H., 1995: The Knowledge-Creating Company, Oxford: Oxford UP
Nordwall, B.D., 2002: TCAS more 'foolproof' than generally recognized, in: Aviation Week & Space Technology, July 15, 2002: 36
Perrow, C., 1984: Normal Accidents. Living with High-Risk Technologies, New York: Basic Books
Peters, T., 1992: Liberation Management. Necessary Disorganization for the Nanosecond Nineties, New York: Alfred A. Knopf
Popitz, H., 1995: Der Aufbruch zur Artifiziellen Gesellschaft. Zur Anthropologie der Technik, Tübingen: J.C.B. Mohr
Rammert, W./Schulz-Schaeffer, I., (eds.), 2002: Können Maschinen handeln? Soziologische Beiträge zum Verhältnis von Mensch und Technik, Frankfurt/M.: Campus
Rammert, W./Schulz-Schaeffer, I., 2002a: Technik und Handeln. Wenn soziales Handeln sich auf menschliches Verhalten und technische Abläufe verteilt, in: Rammert/Schulz-Schaeffer 2002: 11-64
[Rannoch 1998] Traffic Alert and Collision Avoidance System, www.rannoch.com/tcasf.html (16.06.04)
Resnick, M., 1995: Turtles, Termites, and Traffic Jams. Explorations in Massively Parallel Microworlds (Complex Adaptive Systems), Cambridge/Mass.: MIT Press
Rochlin, G.I., 1991: Iran Air Flight 655 and the USS Vincennes: Complex, Large-scale Military Systems and the Failure of Control, in: T. LaPorte (ed.), 1991: Social Responses to Large Technical Systems. Control or Anticipation, Dordrecht: Kluwer, 99-125
Rochlin, G., 1998: Trapped in the net. The unanticipated consequences of computerization, Princeton: Princeton UP
Schimank, U., 1992: Erwartungssicherheit und Zielverfolgung. Sozialität zwischen Prisoner's Dilemma und Battle of the Sexes, in: Soziale Welt 43: 182-200
Schulz-Schaeffer, I., 2000: Akteur-Netzwerk-Theorie. Zur Koevolution von Gesellschaft, Natur und Technik, in: J. Weyer (ed.), Soziale Netzwerke. Konzepte und Methoden der sozialwissenschaftlichen Netzwerkforschung, München: Oldenbourg Verlag, 187-209
Sekigawa, E., 2002: Japan Debates TCAS When Controllers Err, in: Aviation Week & Space Technology, Vol. 157, Issue 5: 52
Skiera, B., 2000: Preispolitik und Electronic Commerce - Preisdifferenzierung im Internet, in: C. Wamser (ed.), Electronic Commerce - Grundlagen und Perspektiven, München: Vahlen, 117-130
Skiera, B./Lambrecht, A., 2000: Erlösmodelle im Internet, in: Albers, S./Herrmann, A., (eds.), Handbuch Produktmanagement, Wiesbaden: Gabler, 813-831
Spehr, M., 2004: Die Physik des Staus. Neue Wege gegen den Kollaps auf der Straße, in: FAZ 14.09.2004: T1
TA-Swiss (ed.), 2003a: Auf dem Weg zur intelligenten Mobilität. Kurzfassung des TA-Arbeitsdokumentes "Das vernetzte Fahrzeug" (TA 43A/2003), Bern, www.ta-swiss.ch/www-remain/reports_archive/publications/2003/KF_Verkehrstelematik_d.pdf (06.10.03)
TA-Swiss (ed.), 2003b: Unser Alltag im Netz der schlauen Gegenstände. Kurzfassung der TA-Swiss-Studie "Das Vorsorgeprinzip in der Informationsgesellschaft", Bern (TA 46A/2003), www.ta-swiss.ch/www-remain/reports_archive/publications/2003/TA_46A_2003_deutsch.pdf (06.10.03)
ten Horn-van Nispen, M.-L., 1999: 400 000 Jahre Technikgeschichte. Von der Steinzeit bis zum Informationszeitalter, Darmstadt: Primus
Timpe, K.-P., et al., (eds.), 2002: Mensch-Maschine-Systemtechnik. Konzepte, Modellierung, Gestaltung, Evaluation, Düsseldorf: Symposion
Vanberg, V., 1975: Die zwei Soziologien. Individualismus und Kollektivismus in der Sozialtheorie, Tübingen: J.C.B. Mohr
Vasek, T., 2004: Rechner auf Rädern, in: Technology Review 7/2004: 20-41
[VC 1997] Vereinigung Cockpit: TCAS - Traffic Alert and Collision Avoidance Systeme, www.vcockpit.de/presseaktuell.php?artikel=67 (20.07.04)
Venik 2002: Mid-Air Collision over Germany (July 13, 2002), www.aeronautics.ru/news/news002/news053.htm (16.06.04)
Weiser, M., 1991: The Computer for the 21st Century, in: Scientific American, www.teco.edu/lehre/ubiq/ubiq2000-1/weiser-sci-amer.htm (28.02.01)
Weißbach, H.-J./Poy, A. (eds.), 1993: Risiken informatisierter Produktion. Theoretische und empirische Ansätze. Strategien zur Risikobewältigung, Opladen: Westdeutscher Verlag
Weyer, J., 1993: Akteurstrategien und strukturelle Eigendynamiken. Raumfahrt in Westdeutschland 1945-1965, Göttingen: Otto Schwartz
Weyer, J., 1994: Actor Networks and High Risk Technologies. The Case of the Gulf War, in: Science and Public Policy 21: 321-334
Weyer, J., 1995: The Social Risks of Experimental Research and Technological Innovation (unpubl. paper), www.techniksoziologie-dortmund.de/veroeffentlichung/files/KOLLEK-1995.pdf
Weyer, J., 1997: Die Risiken der Automationsarbeit. Mensch-Maschine-Interaktion und Störfallmanagement in hochautomatisierten Verkehrsflugzeugen, in: Zeitschrift für Soziologie 26: 239-257
Weyer, J., 2004: Von Innovations-Netzwerken zu hybriden sozio-technischen Systemen. Neue Perspektiven der Techniksoziologie, in: L. Bluma et al. (eds.), Technikvermittlung und Technikpopularisierung. Historische und didaktische Perspektiven, Münster: Waxmann (Cottbuser Studien zur Geschichte von Technik, Arbeit und Umwelt, Bd. 23), 9-31
Willke, H., 1984: Gesellschaftssteuerung, in: M. Glagow (ed.), Gesellschaftssteuerung zwischen Korporatismus und Subsidiarität, Bielefeld: AJZ Verlag, 29-53
Willke, H., 1995: Systemtheorie III: Steuerungstheorie.
Grundzüge einer Theorie der Steuerung komplexer Sozialsysteme, Stuttgart: Gustav Fischer

Bereits erschienene Soziologische Arbeitspapiere

1/2003 Hartmut Hirsch-Kreinsen, David Jacobsen, Staffan Laestadius, Keith Smith: Low-Tech Industries and the Knowledge Economy: State of the Art and Research Challenges (August 2003)
2/2004 Hartmut Hirsch-Kreinsen: "Low-Technology": Ein innovationspolitisch vergessener Sektor (Februar 2004)
3/2004 Johannes Weyer: Innovationen fördern – aber wie? Zur Rolle des Staates in der Innovationspolitik (März 2004); erschienen in: Rasch, M./Bleidick, D. (Hg.): Technikgeschichte im Ruhrgebiet – Technikgeschichte für das Ruhrgebiet, Essen: Klartext Verlag 2004, 278-294
4/2004 Konstanze Senge: Der Fall Wal-Mart: Institutionelle Grenzen ökonomischer Globalisierung (Juli 2004)
5/2004 Tabea Bromberg: New Forms of Company Co-operation and Effects on Industrial Relations (Juli 2004)
6/2004 Gerd Bender: Innovation in Low-tech – Considerations based on a few case studies in eleven European countries (September 2004)
7/2004 Johannes Weyer: Creating Order in Hybrid Systems. Reflexions on the Interaction of Man and Smart Machines (Oktober 2004)

Bereits erschienene Arbeitspapiere des Lehrstuhls Wirtschafts- und Industriesoziologie (vormals Technik und Gesellschaft)

1/1998 Hartmut Hirsch-Kreinsen: Industrielle Konsequenzen globaler Unternehmensstrategien (Juni 1998)
2/1998 Gerd Bender: Gesellschaftliche Dynamik und Innovationsprojekte (Juli 1998)
3/1999 Staffan Laestadius: Know-how in a low tech company - chances for being competitive in a globalized economy (März 1999)
4/1999 Hartmut Hirsch-Kreinsen/Beate Seitz: Innovationsprozesse im Maschinenbau (Juni 1999)
5/1999 Howard Davies: The future shape of Hong Kong's economy: Why low technology manufacturing in China will remain a sustainable strategy (November 1999)
6/2000 Hartmut Hirsch-Kreinsen: Industriesoziologie in den 90ern (Februar 2000)
7/2000 Beate Seitz: Internationalisierungsstrategien und Unternehmensreorganisationen (Februar 2000)
8/2000 Gerd Bender/Horst Steg/Michael Jonas/Hartmut Hirsch-Kreinsen: Technologiepolitische Konsequenzen "transdisziplinärer" Innovationsprozesse (Oktober 2000)
9/2001 Marhild von Behr: Internationalisierungsstrategien kleiner und mittlerer Unternehmen (März 2001)
10/2002 Gerd Bender/Tabea Bromberg: Playing Without Conductor: the University-Industry Band in Dortmund – Networks, Spin-offs and Technology Centre (Januar 2002)
11/2002 Michael Jonas/Marion Berner/Tabea Bromberg/A. Kolassa/Sakir Sözen: 'Clusterbildung' im Feld der Mikrosystemtechnik – das Beispiel Dortmund (Januar 2002)
12/2002 Hartmut Hirsch-Kreinsen: Wissensnutzung in dynamischen Produktionsstrukturen. Ergebnisse eines Workshops am 15. Oktober 2002, Universität Dortmund (November 2002)
13/2002 Hartmut Hirsch-Kreinsen: Knowledge in Societal Development: The Case of Low-Tech Industries (November 2002)

Die Arbeitspapiere sind über den Lehrstuhl erhältlich.

Bereits erschienene Arbeitspapiere des Fachgebiets Techniksoziologie

1/2003 Johannes Weyer: Von Innovations-Netzwerken zu hybriden sozio-technischen Systemen. Neue Perspektiven der Techniksoziologie (Juni 2003); erschienen in: L. Bluma et al. (Hg.), Technikvermittlung und Technikpopularisierung. Historische und didaktische Perspektiven, Münster: Waxmann 2004 (Cottbuser Studien zur Geschichte von Technik, Arbeit und Umwelt, Bd. 23), 9-31
2/2003 Johannes Weyer/Stephan Cramer/Tobias Haertel: Partizipative Einführung von Methoden und Techniken in der Projektorganisation eines Softwareherstellers (Projekt-Endbericht – nur zum internen Gebrauch) (Juli 2003)
3/2003 Stephan Cramer: How safety systems made seafaring risky. Unintended acceleration in the 19th century (August 2003)