Journal articles on the topic 'Germany (East) – Politics and government – 1945-1990'

Consult the top 30 journal articles for your research on the topic 'Germany (East) – Politics and government – 1945-1990.'

1

Kloiber, Andrew. "Brewing Relations: Coffee, East Germany, and Laos." Gastronomica 17, no. 4 (2017): 61–74. http://dx.doi.org/10.1525/gfc.2017.17.4.61.

Abstract:
This investigation contributes to studies of post-1945 Europe and the Cold War by examining the culture, economics, and politics surrounding the consumption of a single commodity in East Germany: coffee. Coffee was associated with many cultural values and traditions that became tied to the GDR's official image of socialism. When the regime's ability to supply this good was jeopardized in 1975–77, the government sought out new sources of coffee in the developing, so-called Third World. East Germany entered into long-term trade and development projects with countries such as Angola, Ethiopia, Laos, and Vietnam to secure sufficient beans to supply its own population; this article singles out the GDR's relationship with Laos for discussion. These trade deals connected East Germany to a much broader, globalizing economy, and led to certain lasting effects on the world coffee trade.
2

Storkmann, Klaus. "East German Military Aid to the Sandinista Government of Nicaragua, 1979–1990." Journal of Cold War Studies 16, no. 2 (April 2014): 56–76. http://dx.doi.org/10.1162/jcws_a_00451.

Abstract:
The East German regime provided extensive military assistance to developing countries and armed guerrilla movements in Africa, the Middle East, Southeast Asia, and Latin America. In the 1980s, the pro-Soviet Marxist government in Nicaragua was one of the major recipients of East German military assistance. This article focuses on contacts at the level of the ministries of defense, on Nicaraguan requests to the East German military command, and on political and military decision-making processes in East Germany. The article examines the provision of weaponry and training as well as other forms of cooperation and support. Research for the article was conducted in the formerly closed archives of the East German Ministry for National Defense regarding military supplies to the Third World as well as the voluminous declassified files of the Socialist Unity Party of Germany (the ruling Communist party).
3

Wu, Zhiqing. "Analyzes in effects of 1990 German reunification in economic, political and cultural perspective." Highlights in Business, Economics and Management 2 (November 6, 2022): 242–47. http://dx.doi.org/10.54097/hbem.v2i.2369.

Abstract:
After the fall of the Berlin Wall, East and West Germany faced a series of problems brought about by reunification. For theorists, it was surprising to witness the unification of the eastern socialist regime with western capitalism. Owing to its low living standards and productivity, East Germany relied on government subsidies and on investment from West Germany. Forty years of separation in cultural values, political community, and economic systems created great obstacles to the union of the two Germanies.
4

Caciagli, Mario. "Le sette elezioni federali nella Germania unita (1990-2013)." Quaderni dell'Osservatorio elettorale QOE - IJES 72, no. 4 (December 30, 2014): 55–88. http://dx.doi.org/10.36253/qoe-9571.

Abstract:
Stability and predictability had been the norm in the German political system before unification. The seven federal elections held in unified Germany from 1990 to 2013 had significant consequences for that traditional continuity. After the last two governments headed by Helmut Kohl (1990-1998), the Social Democrat Gerhard Schröder became Chancellor in a Red-Green coalition (1998-2005); the Christian Democrat Angela Merkel then became Chancellor, first in a Grand Coalition with the Social Democrats (2005-2009), then in a coalition with the Liberals (2009-2013), and, after the 2013 elections, in a Grand Coalition again. These frequent changes can be explained by the mobility of the electorate: the cumulative effects of the growth of the middle class and of general social mobility have eroded traditional loyalties, while the disaffection of the young expresses itself in shifting electoral choices or a tendency not to vote. Economic and social issues have also affected voting behavior: because of their critical social situation, voters in the East preferred first Kohl's CDU, then Schröder's SPD, and then the CDU again under Merkel's leadership; in the West, millions of left-wing voters, disappointed by Schröder's contentious labor-market reforms, abandoned the SPD in the 2009 and 2013 elections; and the performance of the German economy in recent years, after periods of crisis, has placed the country at the top of the European Union and bolstered support for Merkel. A new party, the PDS (later the Linke), has stable roots in the East but cannot be a partner in government; the liberal FDP has been excluded from the Bundestag; and the latest reform of the electoral system has moved it toward greater proportionality. All of this injects uncertainty into a "fluid" party and political system.
5

Brzozowski-Zabost, Grzegorz. "Od ruchu protestu do partii władzy. Rozwój Zielonych w Niemczech." Studia Ecologiae et Bioethicae 6, no. 1 (December 31, 2008): 223–39. http://dx.doi.org/10.21697/seb.2008.6.1.16.

Abstract:
The author presents the development of the German Green Party. In the 1970s, new social movements (environmentalist, peace, and feminist groups) founded the political party The Greens (Die Grünen) as an act of opposition to pollution, the use of nuclear power, and certain aspects of life in a highly developed, industrialized society; the party was formally inaugurated in West Germany in 1980. In 1990, three civil rights groups in East Germany combined to form Bündnis 90, which merged with Die Grünen in 1993 after a long unification process. Eighteen years after its foundation, the party formed a government with the Social Democrats (SPD) that lasted two terms of office, from 1998 to 2005. Today there are many green parties all over the world, but the German Greens are the most successful and serve as an example for the others.
6

Spaulding, Robert Mark. "German trade policy in Eastern Europe, 1890–1990: preconditions for applying international trade leverage." International Organization 45, no. 3 (1991): 343–68. http://dx.doi.org/10.1017/s0020818300033130.

Abstract:
Over the past century, Germany has repeatedly attempted to use trade as a tool of foreign policy vis-à-vis Imperial Russia, the Soviet Union, Poland, and Czechoslovakia. Against the background of continual German economic superiority, this article analyzes Germany's ability to apply trade leverage in terms of four other factors: the nature of the prevailing international trade regime, government views of trade leverage as a tool of statecraft, the degree of German state autonomy in setting trade policies, and the availability of an effective bureaucratic mechanism for controlling German imports and exports. The historical record demonstrates that beyond economic superiority, the application of trade leverage requires a permissive international trade regime, state acceptance of trade-based economic statecraft, an autonomous domestic regime, and a rigorous trade control bureaucracy. Surprisingly, this conjunction of factors, as they applied to Eastern Europe, occurred during both the Nazi period and the early years of the Federal Republic. The article closes by pointing out how two important factors, the politicized nature of the East-West trade regime and the Federal Republic's high degree of state autonomy in setting Eastern trade policy, are being eroded by political and economic change in Eastern Europe.
7

Létourneau, Paul. "L'Allemagne unie entre l'Ouest déclinant et l'Est désintégré." Études internationales 23, no. 1 (April 12, 2005): 77–96. http://dx.doi.org/10.7202/702967ar.

Abstract:
German unification is both a cause and an effect of the restructuring of alliances now taking place with the end of the long postwar era. An enlarged Germany finds itself in a new geostrategic position at the centre of a henceforth unified continent, and its vocation is pan-European. The underpinnings of its external policy and its security have been modified. In this context, the German government has opted not only for keeping a renewed NATO but also for deepening and widening Europe's economic and political institutions. It does not want to disappoint the Americans, its European Community partners, or those wishing to join the EC. Nor does it want to disappoint the East Europeans, including those of the former Soviet Union. Nevertheless, the traditional policy of seeking non-isolation, at times not without ambivalence, is destined to change and could become more assertive. Two items testify to this change in direction: the "debate over normalization", which has brought down taboos in Germany, and the leadership role that Bonn openly took, for the first time since 1945, on the issue of the recognition without further delay of Slovenia and Croatia by the European Community as of 15 January 1992.
8

Lammers, Karl Christian. "The Making of the GDR: New Research on its Formative Years and Problems." Contemporary European History 11, no. 2 (May 2002): 333–42. http://dx.doi.org/10.1017/s0960777302002102.

Abstract:
Andreas Malycha, Die SED. Geschichte ihrer Stalinisierung 1946–1953 (Paderborn: Ferdinand Schöningh, 2000), 541 pp., DM 98.00, ISBN 3-506-75331-2. Gareth Pritchard, The making of the GDR 1945–53. From antifascism to Stalinism (Manchester: Manchester University Press, 2000), 244 pp., £45.00, ISBN 0-7190-5654-3. Mark Allinson, Politics and popular opinion in East Germany 1945–68 (Manchester: Manchester University Press, 2000), 178 pp., £45.00, ISBN 0-7190-5554-7. Jonathan Grix, The Role of the Masses in the Collapse of the GDR (Basingstoke/New York: Macmillan/Palgrave, 2000), 213 pp., £45.00, ISBN 0-333-80098-2. Raymond G. Stokes, Constructing Socialism. Technology and Change in East Germany, 1945–1990 (Baltimore: The Johns Hopkins University Press, 2000), 260 pp., $51.00, ISBN 0-8018-6391-0. Mary Fulbrook, German National Identity after the Holocaust (Cambridge: Polity Press, 1999), 248 pp., £14.99, ISBN 0-7456-1045-5. Detlef Nakath and Gerd-Rüdiger Stephan, Die Häber-Protokolle. Schlaglichter der SED–Westpolitik 1973–1985 (Berlin: Karl Dietz Verlag, 1999), 480 pp., DM 48.00, ISBN 3-320-01968-6. Benno-Eide Siebs, Die Aussenpolitik der DDR 1976–1989. Strategien und Grenzen (Paderborn: Ferdinand Schöningh, 1999), 461 pp., DM 128.00, ISBN 3-506-77510-3.
9

Headey, Bruce, Peter Krause, and Roland Habich. "East Germany: Rising Incomes, Unchanged Inequality and the Impact of Redistributive Government 1990-92." British Journal of Sociology 46, no. 2 (June 1995): 225. http://dx.doi.org/10.2307/591787.

10

Hildebrandt, Achim, and Eva-Maria Trüdinger. "Belonging and exclusion: the dark side of regional identity in Germany." Comparative European Politics 19, no. 2 (February 22, 2021): 146–63. http://dx.doi.org/10.1057/s41295-020-00230-5.

Abstract:
A collective regional identity is a favourable condition for the acceptance of majority decisions made at the regional level and for the delegation of competencies from the central to regional governments. Moreover, a regional identity can play an important role in times of global challenges. Regional attachment might generate a we-feeling and help individuals to cope better with a complex world. The same feeling, however, might also serve as a basis for exclusionary attitudes. In this article, we analyse regional identity at the Land level in Germany with data from the German General Social Survey. Our results show that regional identity is strong in both the eastern and western parts of the country, with people in the east, surprisingly, identifying with their respective Land slightly more than people in the west, even though the five eastern Länder were only established in 1990 after decades of centralist rule. Furthermore, the dark side of regional identity manifests itself only in eastern Germany, where a stronger regional identity tends to go hand in hand with a greater dislike of foreigners.
11

Sierzputowski, Bartłomiej. "Public international law in the context of post-German cultural property held within Poland’s borders. A complicated situation or simply a resolution?" Leiden Journal of International Law 33, no. 4 (August 28, 2020): 953–68. http://dx.doi.org/10.1017/s0922156520000461.

Abstract:
The article discusses the complicated situation of post-German cultural property held within Poland's borders after the Second World War. On 2 August 1945, 'the Big Three' decided on a new layout of power within Europe. They reached an agreement that Silesia, Pomerania, the Free City of Danzig (Gdańsk), and part of East Prussia (the Regained Territories), along with all the property which had been left on site, should become part of Poland. One of the post-war priorities of the Polish Government was to regulate the legal status of post-German cultural property left within these newly-delineated borders. Although the Second World War ended in 1945, there was still a threat that the majority of post-German property could be devastated, destroyed, or even looted. There are some documented cases where such cultural property was seized, inter alia, by the Red Army and then transported to Russia. Since 1945, Russian museums have exhibited many of these pieces of art. This article addresses the question of the legal status of post-German cultural property in light of public international law. Furthermore, the article responds to the question of whether Poland is entitled to restitution of post-German cultural property looted from the Regained Territories.
12

Bach, Jonathan, Heather L. Dichter, Kirkland Alexander Fulk, Alexander Wochnik, Wilko Graf von Hardenberg, and Carol Hager. "Book Reviews." German Politics and Society 34, no. 3 (September 1, 2016): 100–116. http://dx.doi.org/10.3167/gps.2016.340305.

Abstract:
Jon Berndt Olsen, Tailoring Truth: Politicizing the Past and Negotiating Memory in East Germany, 1945-1990 (New York: Berghahn Books, 2015). Reviewed by Jonathan Bach.
Michael Krüger, Christian Becker, and Stefan Nielsen, German Sports, Doping, and Politics: A History of Performance Enhancement (Lanham: Rowman & Littlefield, 2015). Reviewed by Heather L. Dichter.
Susanne Rinner, The German Student Movement and the Literary Imagination: Transnational Memories of Protest and Dissent (New York: Berghahn Books, 2013). Reviewed by Kirkland Alexander Fulk.
Kristen Kopp, Germany's Wild East: Constructing Poland as Colonial Space (Ann Arbor: The University of Michigan Press, 2012). Reviewed by Alexander Wochnik.
Sean Ireton and Caroline Schaumann, eds., Heights of Reflection: Mountains in the German Imagination from the Middle Ages to the Twenty-First Century, Studies in German Literature, Linguistics, and Culture (Rochester: Camden House, 2012). Reviewed by Wilko Graf von Hardenberg.
Frank Uekötter, The Greenest Nation? A New History of German Environmentalism (Cambridge: MIT Press, 2014). Reviewed by Carol Hager.
13

Mandel, Maud S. "One Nation Indivisible: Contemporary Western European Immigration Policies and the Politics of Multiculturalism." Diaspora: A Journal of Transnational Studies 4, no. 1 (March 1995): 89–103. http://dx.doi.org/10.3138/diaspora.4.1.89.

Abstract:
Since World War II, policies with regard to immigrant populations have changed dramatically and repeatedly throughout Western Europe. From 1945 to 1955, Western European nations absorbed an enormous number of refugees uprooted during the war. Until the 1970s, governments did not limit migration, nor did they formulate comprehensive social policies toward these new immigrants. Indeed, from the mid-1950s until 1973, most Western European governments, interested in facilitating economic growth, allowed businesses and large corporations to seek cheap immigrant labor abroad. As Georges Tapinos points out, “For the short term, the conditions of the labor market [and] the rhythm of economic growth . . . determined the flux of migrations” (422). France, Britain, Germany, Belgium, and the Netherlands welcomed the generally young, single male migrants as a cheap labor force, treating them as guest workers. As a result, few governments instituted social policies to ease the workers’ transition to their new environments. Policies began to change in the 1960s when political leaders, intent on gaining control over the haphazard approach to immigration that had dominated the previous 20 years, slowly began to formulate educational measures and social policies aimed at integrating newcomers.
14

Mclellan, J. "Behind the Berlin Wall: East Germany and the Frontiers of Power, by Patrick Major * Inventing A Socialist Nation: Heimat and the Politics of Everyday Life in the GDR, 1945-1990." English Historical Review CXXVI, no. 520 (June 1, 2011): 763–66. http://dx.doi.org/10.1093/ehr/cer129.

15

McGlynn, Sean, R. A. W. Rhodes, Geoffrey K. Roberts, Christopher Johnson, Brigitte Boyce, Mark Donovan, Deiniol Jones, Susan Mendus, Krishan Kumar, and Robert McKeever. "Book Reviews: The McFarlane Legacy: Studies in Late Medieval Politics and Society (The Fifteenth Century Series No. 1), Crown, Government and People in the Fifteenth Century (The Fifteenth Century Series No. 2), Courts, Counties and the Capital in the Later Middle Ages (The Fifteenth Century Series No. 4), The Treasury and Whitehall: The Planning and Control of Public Expenditure, 1976–1993, Das Wiedervereinigte Deutschland: Zwischenbilanz und Perspektiven, Unifying Germany 1989–1990, Uniting Germany: Actions and Reactions, Behind the Wall: The Inner Life of Communist Germany, The Russians in Germany: A History of the Soviet Zone of Occupation, 1945–1949, Origins of a Spontaneous Revolution: East Germany, 1989, Intellectuals, Socialism and Dissent: The East German Opposition and Its Legacy, The Rotten Heart of Europe: The Dirty War for Europe's Money, Muslim Politics, Muslim Communities Re-Emerge: Historical Perspectives on Nationality, Politics, and Opposition in the Former Soviet Union and Yugoslavia, The Politics of Pan-Islam: Ideology and Organization, The Crisis of the Italian State: From the Origins of the Cold War to the Fall of Berlusconi, The End of Post-War Politics in Italy: The Landmark 1992 Elections, Beyond Confrontation: Learning Conflict Resolution in the Post-Cold War Era, Care, Gender, and Justice, Nationalisms: The Nation-State and Nationalism in the Twentieth Century, Nationalism and Postcommunism: A Collection of Essays, Notions of Nationalism, On the Limits of the Law: The Ironic Legacy of Title VI of the 1964 Civil Rights Act." Political Studies 45, no. 4 (September 1997): 790–804. http://dx.doi.org/10.1111/1467-9248.00113.

16

Hossain, Arif. "Peace, Conflict and Resolution (Good vs. Evil)." Bangladesh Journal of Bioethics 4, no. 1 (March 26, 2013): 9–19. http://dx.doi.org/10.3329/bioethics.v4i1.14264.

Abstract:
The immense structural inequalities of the global social and political economy can no longer be contained through consensual mechanisms of state control. The ruling classes have lost legitimacy; we are witnessing a breakdown of ruling-class hegemony on a world scale. There is good and evil among mankind, and thus conflict between good and evil on Earth; we are in for a period of major conflicts and great upheavals. It is generally held that Mencius (c. 371-c. 289 BC), a student of Confucianism, developed his entire philosophy from two basic propositions: first, that man's original nature is good; and second, that man's original nature becomes evil when his wishes are not fulfilled. What is good and what is evil? Philosophers of all ages have pondered this question; each reckoned that he had solved it once and for all, yet within a few years the problem would re-emerge with new dimensions. Repeated acts of corruption and evil make a man corrupt and take him away from his original nature. Even now, the majority of the people of the world acquiesce in corruption because of social, economic, cultural, and political pressures. The conflict between good and evil is ancient on earth and prevalent to this day; perhaps the final confrontation between the descendants of Cain and Abel is at our doorstep. During the Second World War, America and its European allies waged a worldwide military campaign to defeat Germany, Italy, and Japan, and when the war ended in 1945 the United States emerged victorious. America was the first country to detonate an atomic bomb in another country. In that period Russia fell into competition with America in politically colonizing country after country; with the fall of Communism, Russia abandoned its desire to be the champion of the oppressed of the world. The situation in Russia, a country that only a few years ago was a superpower, continues to deteriorate. Russians today are deeply disillusioned with their new politicians, who, they say, "promise everything and give nothing," yet they still strongly oppose a world order dominated by the United States. Whoever investigates the situation in other countries will see that at present almost all countries of the world exhibit similar structures of corruption and evil. Worldwide control of humanity's economic, social, and political activities is at the helm of US corporate and military power; the US has established its control over the 191 governments that are members of the United Nations. In December 2012, at a conference on the future of the Middle East and the Black Sea region in the Turkish city of Istanbul, the last head of state of the former Soviet Union, Mikhail Gorbachev, warned the US of an imminent Soviet-like collapse if Washington persists with its hegemonic policies. Mass public protests against US hegemony have occurred mainly in Muslim countries of Southeast Asia, South Asia, Central Asia, West Asia, North Africa, and Africa. The latest mass protests erupted in September 2012, when the divine Prophet Muhammad (pbuh) was insulted by America and Israel; there were strong mass protests by people from Indonesia to Morocco, as well as in European countries (mostly by immigrants) and in Australia, where there are Muslim populations.
This worldwide protest occurred while the rising of the masses against corrupt rulers in West Asia and North Africa was ongoing. The masses of the people are thirsty and desperate for justice, dignity, economic welfare, and human rights. Most major religions have their own accounts of the Last Age of Mankind or the End of Times, which often include fateful battles between the forces of good and evil and cataclysmic natural disasters. Humans are evolving towards a final stage of their evolution, a 'New Age' to come which the corrupt do not understand. In the present time a final battle of good versus evil on Earth will ensue. The world powers and their entourages, detached from the masses, have organized to keep aloft the present world order, which degenerates the masses through corruption, keeps the people in unhappiness, deprives them of economic well-being and education, and keeps promoting wars and conflicts in support of corruption and evil. We are at the 'End of Times'. The Promised Messiah will come to set right what is wrong, no doubt.
17

Hossain, Arif. "Peace, Conflict and Resolution (Good vs. Evil) Part 2." Bangladesh Journal of Bioethics 4, no. 2 (September 9, 2013): 9–21. http://dx.doi.org/10.3329/bioethics.v4i2.16372.

Abstract:
The immense structural inequalities of the global social and political economy can no longer be contained through consensual mechanisms of state control. The ruling classes have lost legitimacy; we are witnessing a breakdown of ruling-class hegemony on a world scale. There is good and evil among mankind, and thus conflict between good and evil on Earth; we are in for a period of major conflicts and great upheavals. It is generally held that Mencius (c. 371-c. 289 BC), a student of Confucianism, developed his entire philosophy from two basic propositions: first, that man's original nature is good; and second, that man's original nature becomes evil when his wishes are not fulfilled. What is good and what is evil? Philosophers of all ages have pondered this question; each reckoned that he had solved it once and for all, yet within a few years the problem would re-emerge with new dimensions. Repeated acts of corruption and evil make a man corrupt and take him away from his original nature. Even now, the majority of the people of the world acquiesce in corruption because of social, economic, cultural, and political pressures. The conflict between good and evil is ancient on earth and prevalent to this day; perhaps the final confrontation between the descendants of Cain and Abel is at our doorstep. During the Second World War, America and its European allies waged a worldwide military campaign to defeat Germany, Italy, and Japan, and when the war ended in 1945 the United States emerged victorious. America was the first country to detonate an atomic bomb in another country. In that period Russia fell into competition with America in politically colonizing country after country; with the fall of Communism, Russia abandoned its desire to be the champion of the oppressed of the world. The situation in Russia, a country that only a few years ago was a superpower, continues to deteriorate. Russians today are deeply disillusioned with their new politicians, who, they say, "promise everything and give nothing," yet they still strongly oppose a world order dominated by the United States. Whoever investigates the situation in other countries will see that at present almost all countries of the world exhibit similar structures of corruption and evil. Worldwide control of humanity's economic, social, and political activities is at the helm of US corporate and military power; the US has established its control over the 191 governments that are members of the United Nations. In December 2012, at a conference on the future of the Middle East and the Black Sea region in the Turkish city of Istanbul, the last head of state of the former Soviet Union, Mikhail Gorbachev, warned the US of an imminent Soviet-like collapse if Washington persists with its hegemonic policies. Mass public protests against US hegemony have occurred mainly in Muslim countries of Southeast Asia, South Asia, Central Asia, West Asia, North Africa, and Africa. The latest mass protests erupted in September 2012, when the divine Prophet Muhammad (pbuh) was insulted by America and Israel; there were strong mass protests by people from Indonesia to Morocco, as well as in European countries (mostly by immigrants) and in Australia, where there are Muslim populations.
This worldwide protest occurred while the rising of the masses against corrupt rulers in West Asia and North Africa was ongoing. The masses of the people are thirsty and desperate for justice, dignity, economic welfare, and human rights. Most major religions have their own accounts of the Last Age of Mankind or the End of Times, which often include fateful battles between the forces of good and evil and cataclysmic natural disasters. Humans are evolving towards a final stage of their evolution, a 'New Age' to come which the corrupt do not understand. In the present time a final battle of good versus evil on Earth will ensue. The world powers and their entourages, detached from the masses, have organized to keep aloft the present world order, which degenerates the masses through corruption, keeps the people in unhappiness, deprives them of economic well-being and education, and keeps promoting wars and conflicts in support of corruption and evil. We are at the 'End of Times'. The Promised Messiah will come to set right what is wrong, no doubt.
18

King, Desmond. "Labor Market Policy in the United States: The Neoliberal Regime - Margaret Weir, Politics and Jobs: The Politics of Employment Policy in the United States (Princeton: Princeton University Press, 1992. Pp. xviii, 238. $24.95). - Gary Mucciaroni, The Political Failure of Employment Policy, 1945–1982 (Pittsburgh: University of Pittsburgh Press, 1990. Pp. xii, 317. $17.95 paper). - Udo Sautter, Three Cheers for the Unemployed: Government and Unemployment Before the New Deal (New York: Cambridge University Press, 1991. Pp. xiii, 402. $54.95). - Thomas Janoski, The Political Economy of Unemployment: Active Labor Market Policy in West Germany and the United States (Berkeley and Los Angeles: University of California Press, 1990. Pp. xxvi, 345. $39.95)." Journal of Policy History 6, no. 3 (July 1994): 259–65. http://dx.doi.org/10.1017/s089803060000395x.

19

Ludewig, Alexandra. "Home Meets Heimat." M/C Journal 10, no. 4 (August 1, 2007). http://dx.doi.org/10.5204/mcj.2698.

Abstract:
Home is the place where one knows oneself best; it is where one belongs, a space one longs to be. Indeed, the longing for home seems to be grounded in an anthropological need for anchorage. Although in English the German loanword 'Heimat' is often used synonymously with 'home', many would have claimed up till now that it has been a word particularly ill-equipped for use outside the German-speaking community, owing to its specific cultural baggage. However, I would like to argue that – not least due to the political dimension of home (such as in homeland security and homeland affairs) – the yearning for a home has experienced a semantic shift, which aligns it more closely with Heimat, a term imbued with the ambivalence of home and homeland intertwined (Morley 32). I will outline the German specificities below and invite an Australian analogy. A resoundingly positive understanding of the German term 'Heimat' likens it to "an intoxicant, a medium of transport; it makes people feel giddy and spirits them to pleasant places. To contemplate Heimat means to imagine an uncontaminated space, a realm of innocence and immediacy." (Rentschler 37) While this description of Heimat may raise expectations of an all-encompassing idyll, for most German speakers "…there is hardly a more ambivalent feeling, hardly a more painful mixture of happiness and bitterness than the experience vested in the word 'Heimat'." (Reitz 139) The emotional charge of the idiom is of quite recent origin. Traditionally, Heimat stimulates connotations of 'origin', 'birth place, of oneself and one's ancestors' and even of 'original area of settlement and homeland'. This corresponds most neatly with such English terms as 'native land', 'land of my birth', 'land of my forefathers' or 'native shores'. Added to the German conception of Heimat are its sensitive associations relating, on the one hand, to Romanticism and its idolisation of the fatherland, and on the other, to the Nazi blood-and-soil propaganda, which brought Heimat into disrepute for many and added to the difficulties of translating the German word. A comparison with similar terms in Romance languages makes this clear. Speakers of those tongues have an understanding of home and homeland which is strongly associated with the father-figure: the Greek "patra", Latin and Italian "patria" and the French "patrie", as well as patriarch, patrimony, patriot, and patricide. The French come closest to sharing the concept to which Heimat's Germanic root of "heima" refers. For the Teutons "heima" denoted the traditional space and place of a clan, society or individual. However, centuries of migration, often following expulsion, have imbued Heimat with ambivalent notions; feelings of belonging and feelings of loss find expression in the term. Despite its semantic opaqueness, Heimat expresses a "longing for a wholeness and unity" (Strzelczyk 109) which for many seems lost, especially following experiences of alienation, exile, diaspora or 'simply' migration. Yet it is in those circumstances, when Heimat becomes a thing of the past, that it seems to manifest itself most clearly. In the German context, the need for Heimat arose particularly after World War Two, when experiences of loss and scenes of devastation, as well as displacement and expulsion, found compensation of sorts in the popular media.
Going to the cinema was the top pastime in Germany in the 1950s, and escapist Heimat films, which showed idyllic country scenery instead of rubble-strewn cityscapes, were the most well-liked of all. The industry pumped out kitsch films in quick succession to service this demand and created sugar-coated, colour-rich Heimat experiences on celluloid that captured the audience's imagination. Most recently, the genre experienced something of a renaissance in the wake of the fall of the Berlin Wall and the subsequent accession of the German Democratic Republic (GDR, also referred to as East Germany) to the Federal Republic of Germany (FRG or West Germany) in 1990. Described as one of the most seminal moments in modern history, the events led to large-scale change in world politics and strategic alliances, but were most closely felt at the personal and societal level, reshaping community and belonging. Feelings of disbelief and euphoria occupied the hearts and minds of people all around the world in the days following the night of 9 November 1989. However, the fall of the Wall created within weeks what the Soviet Union had been unable to manage in the previous 40 years: the sense of a distinctly Eastern identity (cf. Heneghan 148). Most of the initial positive perceptions slowly gave way to a hangover when the consequences of the drastic societal changes became apparent in their effects on the populace. Feelings of disenchantment and disillusionment followed the jubilation and dominated the second phase of socio-cultural unification, when individuals were faced with economic and emotional hardship or were forced to relocate, as companies folded, politically tainted degrees and professions were abolished and entire industry sectors disappeared. This reassessment of almost every aspect of people's lifestyles led many to feel that their familiar world had dissipated and their Heimat had been lost, resulting in a rhetoric of "us" versus "them". This conceptual divide persisted and was cemented by the perceived difficulties in integration that had emerged, manifesting a consciousness of difference that expressed itself metaphorically in references to the 'Wall in the mind'. Partly as a reaction to these feelings and partly also as a concession to the new citizens from the East, Western-backed and -produced unification films utilised the soothing cosmos of the Heimat genre – so well rehearsed in the 1950s – as a framework for tales about unification. Peter Timm's Go, Trabi, Go (1991) and Wolfgang Büld's sequel Go, Trabi, Go 2. Das war der Wilde Osten [That Was the Wild East, 1992] are two such films which revive "Heimat as a central cultural construct through which aspects of life in the new Germany could be sketched and grasped." (Naughton 125) The films' references to Eastern and Western identity served as a powerful guarantor of feelings of belonging, reassuring audiences on both sides of the mental divide of their idiosyncrasies, while also showing a way to overcome separation. These Heimat films thus united in spirit, emotion and consumer behaviour that which had otherwise not yet "grown together" (cf. Brandt). The renaissance of the Heimat genre in the 1990s gained further momentum in the media with new Heimat film releases as well as TV screenings of 1950s classics. Indeed, Heimat films of old and new were generally well received, as they responded to a fragile psychological predisposition at a time of change and general uncertainty.
Similar feelings were shared by many in the post-war society of the 1950s and the post-Wall Europe of the 1990s. After the Second World War and the restructuring that followed Nazism, it was necessary to integrate large expellee groups into the young nation of the FRG. In the 1990s the integration of similarly displaced people was required, though this time they were having to cope less with territorial loss than with ideological implosions. Then and now, Heimat films sought to aid integration and "transcend those differences" (Naughton 125) – whilst not disputing their existence – particularly in view of the fact that Germany had 16 million new citizens, who clearly had a different cultural background, many of whom were struggling with perceptions of otherness as popularly expressed in the stereotypical ethnographies of "Easterners" and "Westerners". The rediscovery of the concept of Heimat in the years following unification therefore not only mirrored the status quo but also allowed "for the delineation of a common heritage, shared priorities, and values with which Germans in the old and new states could identify." (Naughton 125) Closely copying the optimism of the 1950s, which promised audiences prosperity and pride as well as a sense of belonging and homecoming into a larger community, the films produced in the early 1990s anticipated prosperity for a mobile and flexible people. Like their 1950s counterparts, "unification films 'made in West Germany' imagined a German Heimat as a place of social cohesion, opportunity, and prosperity" (Naughton 126). Following the unification comedies of the early 1990s, which were set in the period following the fall of the Wall, another wave of German film production shifted the focus onto the past, sacrificing the future dimension of the unification films. Leander Haußmann's Sonnenallee (1999) is set in the 1970s and subscribes to a re-invention of one's childhood, while Wolfgang Becker's Goodbye Lenin (2003), in which the GDR is preserved on 79 square metres in a private parallel world, advocates a revival of aspects of the socialist past. Referred to as "Ostalgia", a nostalgia for the old East, "a 'GDR revival' or the 'renaissance of a GDR Heimatgefühl'" (Berdahl 197), the films achieved popular success. Ostalgia films utilised the formula of 'walking down memory lane' in varying degrees, thematising pleasing aspects of an imagined collective past and tempting audiences to revel in a sense of unity and homogeneous identity (cf. Walsh 6). Ostalgia was soon transformed from emotional and imaginary reflection into an entire industry, manifesting itself in the "recuperation, (re)production, marketing, and merchandising of GDR products as well as the 'museumification' of GDR everyday life" (Berdahl 192). This trend found further expression in a culture of exhibitions, books, films and cabaret acts, in fashion and theme parties, as well as in Trabi-rallies which celebrated or sent up the German Democratic Republic in response to the perceived public humiliation at the hands of West German media outlets, historians and economists. The dismissal of anything associated with the communist East in mainstream Germany and the realisation that their consumer products – like their national history – were disappearing in the face of the 'Helmut Kohl-onisation' sparked this retro-Heimat cult.
Indeed, the reaction to the disappearance of GDR culture and the ensuing nostalgia bear all the hallmarks of Heimat appreciation, a sense of bereavement that only manifests itself once the Heimat has been lost. Ironically, however, the revival of the past led to the emergence of a "new" GDR (Rutschky 851), an "imaginary country put together from the remnants of a country in ruins and from the hopes and anxieties of a new world" (Hell et al. 86), a fictional construct rather than a historical reality. In contrast to the fundamental social and psychological changes affecting former GDR citizens from the end of 1989, their Western counterparts were initially able to look on without a sense of deep personal involvement. Their perspective has been likened to that of an impartial observer following the events of a historical play (cf. Gaschke 22). Many saw German unification as an enlargement of the West; as soon as they had exported their currency, democracy, capitalism and freedom to the East, "blossoming landscapes" were sure to follow (Kohl). At first political events did not seem to cause a major disruption to the lives of most people in the old FRG, except perhaps the need to pay higher tax. This understanding proved a major underestimation of the transformation process that had gripped all of Germany, not just the Eastern part. Nevertheless, few predicted the impact that far-reaching changes would have on the West; immigration and new minorities alter the status quo of any society, and with Germany's increase in size and population, its citizens in both East and West had to adapt and adjust to a new image and to new expectations placed on them from within and without. As a result a certain unease began to be felt by many an otherwise self-assured individual. Slower and less obvious than the transition phase experienced by most East Germans, the changes in West German society and consciousness were nevertheless similar in their psychological effects, resulting in a subtle feeling of displacement. Indeed, it was soon noted that "the end of German division has given rise to a sense of crisis in the West, particularly within the sphere of West German culture, engendering a Western nostalgia for the old FRG" (Cooke 35), also referred to as Westalgia. Not too dissimilar to the historical rehabilitation of the East played out in Ostalgic fashion, films appeared which revisit moments worthy of celebration in West German history, such as the 1954 Soccer World Championship victory, which is at the centre of the narrative in Sönke Wortmann's Das Wunder von Bern [Miracle of Bern, 2003]. Hommages to the 1968 generation (Hans Weingartner's Die fetten Jahre sind vorbei [The Educators, 2004]) and requiems for West Berlin's subculture (Leander Haußmann's Herr Lehmann [Mr Lehmann, 2003]) were similar manifestations of this development. Ostalgic and Westalgic practices coexisted for several years after the turn of the millennium, and are a tribute to the highly complex interrelationship that exists between personal histories and public memories. Both narratives reveal "the politics, ambiguities, and paradoxes of memory, nostalgia, and resistance" (Berdahl 207). In their nostalgic contemplation of the good old days, Ostalgic and Westalgic films alike express a longing to return to familiar and trusted values. Both post-hoc constructions of a heimatesque cosmos demonstrate a very real reinvention of Heimat.
Their deliberate reconstruction and reinterpretation of history, as well as the references to and glorification of personal memory and identity fulfil the task of imbuing history – in particular personal history – with dignity. As such these Heimat films work in a similar fashion to myths in the way they explain the world. The heimatesque element of Ostalgic and Westalgic films which allows for the potential to overcome crises reveals a great deal about the workings of myths in general. Irrespective of their content, whether they are cosmogonic (about the beginning of time), eschatological (about the end of time) or etiologic myths (about the origins of peoples and societal order), all serve as a means to cope with change. According to Hans Blumenberg, myth making may be seen as an attempt to counter the absolutism of reality (cf. Blumenberg 9), by providing a response to its seemingly overriding arbitrariness. Myths become a means of endowing life with meaning through art and thus aid positive self-assurance and the constructive usage of past experiences in the present and the future. Judging from the popular success of both Ostalgic and Westalgic films in unified Germany, one hopes that communication is taking place across the perceived ethnic divide of Eastern and Western identities. At the very least, people of quite different backgrounds have access to the constructions and fictions relating to one another pasts. By allowing each other insight into the most intimate recesses of their respective psychological make-up, understanding can be fostered. Through the re-activation of one’s own memory and the acknowledgment of differences these diverging narratives may constitute the foundation of a common Heimat. It is thus possible for Westalgic and Ostalgic films to fulfil individual and societal functions which can act as a core of cohesion and an aid for mutual understanding. At the same time these films revive the past, not as a liveable but rather as a readable alternative to the present. As such, the utilisation of myths should not be rejected as ideological misuse, as suggested by Barthes (7), nor should it allow for the cementing of pseudo-ethnic differences dating back to mythological times; instead myths can form the basis for a common narrative and a self-confident affirmation of history in order to prepare for a future in harmony. Just like myths in general, Heimat tales do not attempt to revise history, or to present the real facts. By foregrounding the evidence of their wilful construction and fictitious invention, it is possible to arrive at a spiritual, psychological and symbolic truth. Nevertheless, it is a truth that is essential for a positive experience of Heimat and an optimistic existence. What can the German situation reveal in an Australian or a wider context? Explorations of Heimat aid the socio-historical investigation of any society, as repositories of memory and history, escape and confrontation inscribed in Heimat can be read as signifiers of continuity and disruption, reorientation and return, and as such, ever-changing notions of Heimat mirror values and social change. Currently, a transition in meaning is underway which alters the concept of ‘home’ as an idyllic sphere of belonging and attachment to that of a threatened space; a space under siege from a range of perils in the areas of safety and security, whether due to natural disasters, terrorism or conventional warfare. 
The geographical understanding of home is increasingly taking second place to an emotional imaginary that is fed by an "exclusionary and contested distinction between the 'domestic' and the 'foreign'" (Blunt and Dowling 168). As such, home becomes ever more closely aligned with the semantics of Heimat, i.e. with an emotional experience, which is progressively less grounded in feelings of security and comfort, yet even more so in those of ambivalence and, in particular, insecurity and hysteria. This paranoia informs as much as it is informed by government policies and interventions and emerges from concerns for national security. In this context, home and homeland have become overused entities in discussions relating to the safeguarding of Australia, such as with the establishment of a homeland security unit in 2003 and annual conferences such as "The Homeland Security Summit" deemed necessary since 9/11, even in the Antipodes. However, these global connotations of home and Heimat overshadow the necessity of reclaiming the home/land debate at the national and local levels. In addressing the dispossession of indigenous peoples and the removal and dislocation of Aboriginal children from their homes and families, the political nature of a home-grown Heimat debate cannot be ignored. "Bringing Them Home", an oral history project initiated by the National Library of Australia in Canberra, is one of many attempts at listening to and preserving the memories of Aboriginals and Torres Strait Islanders who, as children, were forcibly taken away from their families and homelands. To ensure healing and rapprochement, any reconciliation process necessitates coming to terms with one's own past as much as respecting the polyphonic nature of historical discourse. By encouraging the inclusion of diverse homeland and dreamtime narratives and juxtaposing these with the perceptions and constructions of home of the subsequent immigrant generations of Australians, a rich text, full of contradictions, may help generate a shared, if ambivalent, sense of a common Heimat in Australia; one that is fed not by homeland insecurity but rests in a heimatesque knowledge of self.
References
Barthes, Roland. Mythen des Alltags. Frankfurt a.M.: Suhrkamp, 1964.
Berdahl, Daphne. "'(N)ostalgie' for the Present: Memory, Longing, and East German Things." Ethnos 64.2 (1999): 192-207.
Blumenberg, Hans. Arbeit am Mythos. Frankfurt a.M.: Suhrkamp Verlag, 1979.
Blunt, Alison, and Robyn Dowling. Home. London: Routledge, 2006.
Brandt, Willy. "Jetzt kann zusammenwachsen, was zusammengehört [Now that which belongs together can grow together]." Speech of 10 Nov. 1989 in front of the Rathaus Schöneberg; transcript available from http://www.bwbs.de/Brandt/9.html.
Cooke, Paul. "Whatever Happened to Veronika Voss? Rehabilitating the '68ers' and the Problem of Westalgie in Oskar Roehler's Die Unberührbare (2000)." German Studies Review 27.1 (2004): 33-44.
Gaschke, Susanne. "Neues Deutschland. Sind wir eine Wirtschaftsgesellschaft?" Aus Politik und Zeitgeschichte B1-2 (2000): 22-27.
Hell, Julia, and Johannes von Moltke. "Unification Effects: Imaginary Landscapes of the Berlin Republic." The Germanic Review 80.1 (Winter 2005): 74-95.
Heneghan, Tom. Unchained Eagle: Germany after the Wall. London: Reuters, 2000.
Kohl, Helmut. "Debatte im Bundestag um den Staatsvertrag." 21 June 1990.
Morley, David. Home Territories: Media, Mobility and Identity. London: Routledge, 2000.
Naughton, Leonie. That Was the Wild East: Film Culture, Unification, and the "New" Germany. Ann Arbor: U of Michigan P, 2002.
Rentschler, Eric. "There's No Place Like Home: Luis Trenker's The Prodigal Son (1934)." New German Critique 60 (Special Issue on German Film History, Autumn 1993): 33-56.
Reitz, Edgar. "The Camera Is Not a Clock (1979)." In Eric Rentschler, ed., West German Filmmakers on Film: Visions and Voices. New York: Holmes and Meier, 1988. 137-141.
Rutschky, Michael. "Wie erst jetzt die DDR entsteht." Merkur 49.9-10 (Sep./Oct. 1995): 851-64.
Strzelczyk, Florentine. "Far Away, So Close: Carl Froelich's Heimat." In Robert C. Reimer, ed., Cultural History through the National Socialist Lens: Essays on the Cinema of the Third Reich. Rochester, NY: Camden House, 2000. 109-132.
Walsh, Michael. "National Cinema, National Imaginary." Film History 8 (1996): 5-17.
20

Kabir, Nahid. "Why I Call Australia ‘Home’?" M/C Journal 10, no. 4 (August 1, 2007). http://dx.doi.org/10.5204/mcj.2700.

Abstract:
Introduction I am a transmigrant who has moved back and forth between the West and the Rest. I was born and raised in a Muslim family in a predominantly Muslim country, Bangladesh, but I spent several years of my childhood in Pakistan. After my marriage, I lived in the United States for a year and a half, the Middle East for 5 years, Australia for three years, back to the Middle East for another 5 years, then, finally, in Australia for the last 12 years. I speak Bengali (my mother tongue), Urdu (which I learnt in Pakistan), a bit of Arabic (learnt in the Middle East); but English has always been my medium of instruction. So where is home? Is it my place of origin, the Muslim umma, or my land of settlement? Or is it my ‘root’ or my ‘route’ (Blunt and Dowling)? Blunt and Dowling (199) observe that the lives of transmigrants are often interpreted in terms of their ‘roots’ and ‘routes’, which are two frameworks for thinking about home, homeland and diaspora. Whereas ‘roots’ might imply an original homeland from which people have scattered, and to which they might seek to return, ‘routes’ focuses on mobile, multiple and transcultural geographies of home. However, both ‘roots’ and ‘routes’ are attached to emotion and identity, and both invoke a sense of place, belonging or alienation that is intrinsically tied to a sense of self (Blunt and Dowling 196-219). In this paper, I equate home with my root (place of birth) and route (transnational homing) within the context of the ‘diaspora and belonging’. First I define the diaspora and possible criteria of belonging. Next I describe my transnational homing within the framework of diaspora and belonging. Finally, I consider how Australia can be a ‘home’ for me and other Muslim Australians. The Diaspora and Belonging Blunt and Dowling (199) define diaspora as “scattering of people over space and transnational connections between people and the places”. Cohen emphasised the ethno-cultural aspects of the diaspora setting; that is, how migrants identify and position themselves in other nations in terms of their (different) ethnic and cultural orientation. Hall argues that the diasporic subjects form a cultural identity through transformation and difference. Speaking of the Hindu diaspora in the UK and Caribbean, Vertovec (21-23) contends that the migrants’ contact with their original ‘home’ or diaspora depends on four factors: migration processes and factors of settlement, cultural composition, structural and political power, and community development. With regard to the first factor, migration processes and factors of settlement, Vertovec explains that if the migrants are political or economic refugees, or on a temporary visa, they are likely to live in a ‘myth of return’. In the cultural composition context, Vertovec argues that religion, language, region of origin, caste, and degree of cultural homogenisation are factors in which migrants are bound to their homeland. Concerning the social structure and political power issue, Vertovec suggests that the extent and nature of racial and ethnic pluralism or social stigma, class composition, degree of institutionalised racism, involvement in party politics (or active citizenship) determine migrants’ connection to their new or old home. 
Finally, community development, including membership in organisations (political, union, religious, cultural, leisure), leadership qualities, and ethnic convergence or conflict (trends towards intra-communal or inter-ethnic/inter-religious co-operation) would also affect the migrants’ sense of belonging. Using these scholarly ideas as triggers, I will examine my home and belonging over the last few decades. My Home In an initial stage of my transmigrant history, my home was my root (place of birth, Dhaka, Bangladesh). Subsequently, my routes (settlement in different countries) reshaped my homes. In all respects, the ethno-cultural factors have played a big part in my definition of ‘home’. But on some occasions my ethnic identification has been overridden by my religious identification and vice versa. By ethnic identity, I mean my language (mother tongue) and my connection to my people (Bangladeshi). By my religious identity, I mean my Muslim religion, and my spiritual connection to the umma, a Muslim nation transcending all boundaries. Umma refers to the Muslim identity and unity within a larger Muslim group across national boundaries. The only thing the members of the umma have in common is their Islamic belief (Spencer and Wollman 169-170). In my childhood my father, a banker, was relocated to Karachi, Pakistan (then West Pakistan). Although I lived in Pakistan for much of my childhood, I have never considered it to be my home, even though it is predominantly a Muslim country. In this case, my home was my root (Bangladesh) where my grandparents and extended family lived. Every year I used to visit my grandparents who resided in a small town in Bangladesh (then East Pakistan). Thus my connection with my home was sustained through my extended family, ethnic traditions, language (Bengali/Bangla), and the occasional visits to the landscape of Bangladesh. Smith (9-11) notes that people build their connection or identity to their homeland through their historic land, common historical memories, myths, symbols and traditions. Though Pakistan and Bangladesh had common histories, their traditions of language, dress and ethnic culture were very different. For example, the celebration of the Bengali New Year (Pohela Baishakh), folk dance, folk music and folk tales, drama, poetry, lyrics of poets Rabindranath Tagore (Rabindra Sangeet) and Nazrul Islam (Nazrul Geeti) are distinct in the cultural heritage of Bangladesh. Special musical instruments such as the banshi (a bamboo flute), dhol (drums), ektara (a single-stringed instrument) and dotara (a four-stringed instrument) are unique to Bangladeshi culture. The Bangladeshi cuisine (rice and freshwater fish) is also different from Pakistan where people mainly eat flat round bread (roti) and meat (gosh). However, my bonding factor to Bangladesh was my relatives, particularly my grandparents as they made me feel one of ‘us’. Their affection for me was irreplaceable. The train journey from Dhaka (capital city) to their town, Noakhali, was captivating. The hustle and bustle at the train station and the lush green paddy fields along the train journey reminded me that this was my ‘home’. Though I spoke the official language (Urdu) in Pakistan and had a few Pakistani friends in Karachi, they could never replace my feelings for my friends, extended relatives and cousins who lived in Bangladesh. I could not relate to the landscape or dry weather of Pakistan. 
More importantly, some Pakistani women (our neighbours) were critical of my mother’s traditional dress (saree), and described it as revealing because it showed a bit of her back. They took pride in their traditional dress (shalwar, kameez, dopatta), which they considered to be more covered and ‘Islamic’. So, because of our traditional dress (saree) and perhaps other differences, we were regarded as the ‘Other’. In 1970 my father was relocated back to Dhaka, Bangladesh, and I was glad to go home. It should be noted that both Pakistan and Bangladesh were separated from India in 1947 – first as one nation; then, in 1971, Bangladesh became independent from Pakistan. The conflict between Bangladesh (then East Pakistan) and Pakistan (then West Pakistan) had economic and political origins. At this time I was a high school student and witnessed acts of genocide committed by the Pakistani regime against the Bangladeshis (March-December 1971). My memories of these acts are vivid and still very painful. After my marriage, I moved from Bangladesh to the United States. In this instance, my new route (Austin, Texas, USA), as it happened, did not become my home. Here the ethno-cultural and Islamic cultural factors took precedence. I spoke the English language, made some American friends, and studied history at the University of Texas. I appreciated the warm friendship extended to me in the US, but experienced a degree of culture shock. I did not appreciate the pub life, alcohol consumption, and what I perceived to be the lack of family bonds (children moving out at the age of 18, families only meeting occasionally on birthdays and Christmas). Furthermore, I could not relate to de facto relationships and acceptance of sex before marriage. To me, by contrast, ‘home’ meant a family orientation and living in close contact with family. Besides the cultural divide, my husband and I were living in the US on student visas and, as Vertovec (21-23) noted, temporary visa status can deter people from developing a sense of belonging to the host country. In retrospect I can see that we lived in the ‘myth of return’. However, our next move for a better life was not to our root (Bangladesh), but via another route, to the Muslim world of Dhahran in Saudi Arabia. My husband moved to Dhahran not because it was a Muslim world but because it gave him better economic opportunities. However, I thought this new destination would become my home – the home that Anderson termed an imagined community, or my Muslim umma. Anderson argues that the imagined communities are “to be distinguished, not by their falsity/genuineness, but by the style in which they are imagined” (6; Wood 61). Hall (122) asserts: “identity is actually formed through unconscious processes over time, rather than being innate in consciousness at birth. There is always something ‘imaginary’ or fantasized about its unity. It always remains incomplete, is always ‘in process’, always ‘being formed’.” As discussed above, when I returned home to Bangladesh from Pakistan – both Muslim countries – my primary connection to my home country was my ethnic identity, language and traditions. My ethnic identity overshadowed my religious identity. But when I moved to Saudi Arabia, where my ethnic identity differed from that of the mainstream Arabs and Bedouin/nomadic Arabs, my connection to this new land was through my Islamic cultural and religious identity.
Admittedly, this connection to the umma was more psychological than physical, but I was now in close proximity to Mecca, and to my home of Dhaka, Bangladesh. Mecca is an important city in Saudi Arabia for Muslims because it is the holy city of Islam, the home to the Ka’aba (the religious centre of Islam), and the birthplace of Prophet Muhammad [Peace Be Upon Him]. It is also the destination of the Hajj, one of the five pillars of Islamic faith. Therefore, Mecca is home to significant events in Islamic history, as well as being an important present-day centre for the Islamic faith. We lived in Dhahran, Saudi Arabia, for five years. Though it was a 2.5-hour flight away, I treasured Mecca’s proximity and regarded Dhahran as my second and spiritual home. Saudi Arabia had a restricted lifestyle for women, but I liked it because it was a Muslim country that gave me the opportunity to perform umrah (the minor pilgrimage). However, Saudi Arabia did not grant citizenship to expatriates. Saudi Arabia’s government was keen to protect the status quo and did not want to compromise its cultural values or standard of living by allowing foreigners to become a permanent part of society. In exceptional circumstances only, the King granted citizenship to a foreigner for outstanding service to the state over a number of years. Children of foreigners born in Saudi Arabia did not have rights of local citizenship; they automatically assumed the nationality of their parents. Had it been available, Saudi citizenship would have assured expatriates a secure and permanent life in Saudi Arabia; as it was, there was a fear among the non-Saudis that they would have to leave the country once their job contract expired. Under the circumstances, though my spiritual connection to Mecca was strong, my husband was convinced that Saudi Arabia did not provide any job security. So, in 1987, when Australia offered migration to highly skilled people, my husband decided to migrate to Australia for a better and more secure economic life. I agreed to his decision, but quite reluctantly, because we were again moving to a non-Muslim part of the world, which would be culturally different and far away from my original homeland (Bangladesh). In Australia, we lived first in Brisbane, then Adelaide, and after three years we took our Australian citizenship. At that stage I loved the Barossa Valley and Victor Harbour in South Australia, and the Gold Coast and Sunshine Coast in Queensland, but did not feel at home in Australia. We bought a house in Adelaide and I was a full-time home-maker, but was always apprehensive that my children (two boys) would lose their culture in this non-Muslim world. In 1990 we once again moved back to the Muslim world, this time to Muscat, in the Sultanate of Oman. My connection to this route was again spiritual. I valued the fact that we would live in a Muslim country and our children would be brought up in a Muslim environment. But my husband’s move was purely financial, as he had received a lucrative job offer in Muscat. We had another son in Oman. We enjoyed the luxurious lifestyle provided by my husband’s workplace and the service provided by the housemaid. I loved the beaches and the freedom to drive my car, and I appreciated the friendly Omani people. I also enjoyed our frequent trips (a four-hour flight) to my root, Dhaka, Bangladesh. So our children were raised within our ethnic and Islamic culture, and remained close to my root (family in Dhaka), though they attended a British school in Muscat.
But by the time I started considering Oman to be my second home, we had to leave once again for a place that could provide us with a more secure future. Oman was like Saudi Arabia; it employed expatriates only on a contract basis, and did not grant them citizenship (not even fellow Muslims). So after five years it was time to move back to Australia. It was with great reluctance that I moved with my husband to Brisbane in 1995, because once again we were to face a different cultural context. As mentioned earlier, we had lived in Brisbane in the late 1980s; I liked the weather and the landscape, but did not consider it home for cultural reasons. Our boys started attending expensive private schools and we bought a house in a prestigious western suburb of Brisbane. Soon after arriving I started my tertiary education at the University of Queensland, and finished an MA in Historical Studies in Indian History in 1998. Still, Australia was not my home. I kept thinking that we would return to my previous routes or the ‘imagined’ homeland somewhere in the Middle East, in close proximity to my root (Bangladesh), where we could remain economically secure in a Muslim country. But gradually I began to feel that Australia was becoming my ‘home’. I had gradually become involved in professional and community activities (with university colleagues, the Bangladeshi community and Muslim women’s organisations), and in retrospect I could see that this was an early stage of my ‘self-actualisation’ (Maslow). Through my involvement with diverse people, I felt emotionally connected with the concerns, hopes and dreams of my Muslim-Australian friends. Subsequently, I also felt connected with my mainstream Australian friends, whose emotions and fears (the 9/11 attacks, the Bali bombings and the 7/7 tragedy) were similar to mine. In late 1998 I started my PhD studies on the immigration history of Australia, with a particular focus on the historical settlement of Muslims in Australia. This entailed retrieving archival files and interviewing people, mostly Muslims and some mainstream Australians, and enquiring into relevant migration issues. I also became more active in community issues, and was not constrained by my circumstances. By circumstances, I mean that even though I belonged to a patriarchally structured Muslim family, where my husband was the main breadwinner and main decision-maker, my independence and research activities (entailing frequent interstate trips for data collection, and public speaking) were not frowned upon or forbidden (Khan 14-15); fortunately, my husband appreciated my passion for research and gave me his trust and support. This, along with the Muslim community’s support (interviews), and the wider community’s recognition (for example, the publication of my letters in Australian newspapers, and interviews on radio and television), enabled me to develop my self-esteem and build up my bicultural identity as a Muslim in a predominantly Christian country and as a Bangladeshi-Australian. In 2005, for the sake of a better job opportunity, my husband moved to the UK, but this time I asserted that I would not move again. I felt that here in Australia (now in Perth) I had a job, an identity and a home. This time my husband was able to secure a good job back in Australia and was only away for a year. I no longer dream of finding a home in the Middle East. Through my bicultural identity here in Australia I feel connected to the wider community and to the Muslim umma. However, my attachment to the umma has become ambivalent.
I feel proud of my Australian-Muslim identity but I am concerned about the jihadi ideology of militant Muslims. By jihadi ideology, I mean the extremist ideology of the al-Qaeda terrorist group (Farrar 2007). The Muslim umma now incorporates both moderate and radical Muslims. The radical Muslims (though only a tiny minority of the 1.4 billion Muslims worldwide) pose a threat to their moderate counterparts as well as to non-Muslims. In the UK, some second- and third-generation Muslims identify themselves with the umma rather than their parents’ homelands or their country of birth (Husain). It should not be a matter of concern if these young Muslims adopt a ‘pure’ Muslim identity, provided that, at the same time, they are loyal to their country of residence. But when they resort to terrorism with their ‘pure’ Muslim identity (e.g., the 7/7 London bombers) they defame my religion, Islam, and undermine my spiritual connection to the umma. As a first-generation immigrant, the defining criteria of my ‘homeliness’ in Australia are my ethno-cultural and religious identity (which includes my family), my active citizenship, and my community development/contribution through my research work – all of which allow me a sense of efficacy in my life. My ethnic and religious identities generally co-exist equally, but when I see some Muslims kill my fellow Australians (as in the Bali bombings of 2002 and 2005) my Australian identity takes precedence. I feel for the victims and condemn the perpetrators. On the other hand, when I see politics override human rights issues (e.g., the Tampa incident), my religious identity compels me to comment (see Kabir, Muslims in Australia 295-305).
Problematising ‘Home’ for Muslim Australians
In the European context, Grillo (863) and Werbner (904), and in the Australian context, Kabir (Muslims in Australia) and Poynting and Mason, have identified the diversity within Islam (national, ethnic, religious, etc.). Werbner (904) notes that in spite of the “wishful talk of the emergence of a ‘British Islam’, even today there are Pakistani, Bangladeshi and Arab mosques, as well as Turkish and Shia’a mosques”; thus British Muslims retain their separate identities. Similarly, in Australia, the existence of separate mosques for the Bangladeshi, Pakistani, Arab and Shia’a peoples indicates that Australian Muslims have also kept their ethnic identities discrete (Saeed 64-77). However, in times of crisis, such as the Salman Rushdie affair in 1989, and the 1990-1991 Gulf crises, both British and Australian Muslims were quick to unite and express their Islamic identity by way of resistance (Kabir, Muslims in Australia 160-162; Poynting and Mason 68-70). In both British and Australian contexts, I argue that a peaceful rally or resistance is indicative of Muslims’ active citizenship, as it reveals their sense of belonging (also Werbner 905). So when a transmigrant Muslim wants to make a peaceful demonstration, the Western world should feel encouraged, not threatened – as long as the transmigrant’s allegiances lie also with the host country. In the European context, Grillo (868) writes: “when I asked Mehmet if he was planning to stay in Germany he answered without hesitation: ‘Yes, of course’. And then, after a little break, he added ‘as long as we can live here as Muslims’.” In this context, I support Mehmet’s desire to live as a Muslim in a non-Muslim world as long as this is peaceful. Paradoxically, living a Muslim life through ijtihad can be either socially progressive or destructive.
The Canadian Muslim feminist Irshad Manji relies on ijtihad, but so does Osama bin Laden! Manji emphasises that ijtihad can be, on the one hand, the adaptation of Islam using independent reasoning, hybridity and the contesting of ‘traditional’ family values (cf. Doogue and Kirkwood 275-276, 314); and, on the other, ijtihad can take the form of conservative, patriarchal and militant Islamic values. The al-Qaeda terrorist Osama bin Laden espouses the jihadi ideology of Sayyid Qutb (1906-1966), an Egyptian who early in his career might have been described as a Muslim modernist who believed that Islam and Western secular ideals could be reconciled. But he discarded that idea after going to the US in 1948-50; there he was treated as ‘different’ and that treatment turned him against the West. He came back to Egypt and embraced a much more rigid and militaristic form of Islam (Esposito 136). Other scholars, such as Cesari, have identified a third orientation – a ‘secularised Islam’, which stresses general beliefs in the values of Islam and an Islamic identity, without too much concern for practices. Grillo (871) observed that Islam in the West is characterised by diversity. He stressed that “some [Muslims were] more quietist, some more secular, some more clamorous, some more negotiatory”, while others were characterised exclusively by markers of Islamic identity, such as wearing the burqa (an elaborate veil) or hijab (headscarf), men keeping beards, and total abstinence from drinking alcohol. So Mehmet, cited above, could be living a Muslim life within the spectrum of these possibilities, ranging from an integrating mode to a strict, militant Muslim manner. In the UK context, Zubaida (96) contends that marginalised, culturally-impoverished youth are the people for whom radical, militant Islamism may have an appeal, though it must be noted that the 7/7 bombers belonged to affluent families (O’Sullivan 14; Husain). Muslim Australians face three challenges. First, the Muslim unemployment rate: it was three times the national average in 1996 and 2001 (Kabir, Muslims in Australia 266-278; Kabir, “What Does It Mean” 63). Second, some spiritual leaders have used extreme rhetoric to appeal to marginalised youth; in January 2007, the Australian-born imam of Lebanese background, Sheikh Feiz Mohammad, was alleged to have employed a DVD format to urge children to kill the enemies of Islam and to have praised martyrs with a violent interpretation of jihad (Chulov 2). Third, the proposed citizenship test has the potential to make new migrants’ – particularly Muslims’ – settlement in Australia stressful (Kabir, “What Does It Mean” 62-79); in May 2007, fuelled by perceptions that some migrants – especially Muslims – were not integrating quickly enough, the Howard government introduced a citizenship test bill that proposed to test applicants on their English language skills and knowledge of Australian history and ‘values’. I contend that being able to demonstrate knowledge of history and having English language skills are no guarantee that a migrant will be a good citizen. Through my transmigrant history, I have learnt that developing a bond with a new place takes time, acceptance and a gradual change of identity, which are less likely to happen when facing assimilationist constraints. I spoke English and studied history in the United States, but I did not consider it my home.
I did not speak the Arabic language, and did not study Middle Eastern history while I was in the Middle East, but I felt connected to it for cultural and religious reasons. Nor did my knowledge of history and English language proficiency make Australia my home when I first migrated there. Australia became my home when I started interacting with other Australians, which was made possible by having the time at my disposal and by fortunate circumstances, which included a fairly high level of efficacy and affluence. If I had been rejected because of my lack of knowledge of ‘Australian values’, or had encountered discrimination in the job market, I would have been much less willing to embrace my host country and call it home. I believe a stringent citizenship test is more likely to alienate would-be citizens than to induce their adoption of values and loyalty to their new home.
Conclusion
Blunt (5) observes that current studies of home often investigate mobile geographies of dwelling and how these shape one’s identity and belonging. Such geographies of home range from the domestic to the global context, thus mobilising the home beyond a fixed, bounded and confining location. Similarly, in this paper I have discussed how my mobile geography, from the domestic (root) to the global (route), has shaped my identity. Though I experienced a degree of culture shock in the United States, loved the Middle East, and was at first quite resistant to the idea of making Australia my second home, the confidence I acquired in residing in these ‘several homes’ was cumulative and eventually enabled me to regard Australia as my ‘home’. Though I loved the Middle East, I did not pursue an active involvement with the Arab community because I was a busy mother; I also lacked the communication skills (fluency in Arabic) to engage with the local residents who lived outside the expatriates’ compound. I am no longer a cultural freak. I am no longer the same Bangladeshi woman who saw her ethnic and Islamic culture as superior to all other cultures. I have learnt to appreciate Australian values, such as tolerance, ‘a fair go’ and multiculturalism (see Kabir, “What Does It Mean” 62-79). My bicultural identity is my strength. With my ethnic and religious identity, I can relate to the concerns of the Muslim community and other Australian ethnic and religious minorities. And with my Australian identity I have developed ‘a voice’ to pursue active citizenship. Thus my biculturalism has enabled me to retain and merge my former home with my present and permanent home of Australia.
References
Anderson, Benedict. Imagined Communities: Reflections on the Origin and Spread of Nationalism. London, New York: Verso, 1983. Australian Bureau of Statistics: Census of Population and Housing, 1996 and 2001. Blunt, Alison. Domicile and Diaspora: Anglo-Indian Women and the Spatial Politics of Home. Oxford: Blackwell, 2005. Blunt, Alison, and Robyn Dowling. Home. London and New York: Routledge, 2006. Cesari, Jocelyne. “Muslim Minorities in Europe: The Silent Revolution.” In John L. Esposito and François Burgat, eds., Modernising Islam: Religion in the Public Sphere in Europe and the Middle East. London: Hurst, 2003. 251-269. Chulov, Martin. “Treatment Has Sheik Wary of Returning Home.” Weekend Australian 6-7 Jan. 2007: 2. Cohen, Robin. Global Diasporas: An Introduction. Seattle: University of Washington, 1997. Doogue, Geraldine, and Peter Kirkwood. Tomorrow’s Islam: Uniting Age-Old Beliefs and a Modern World. Sydney: ABC Books, 2005. Esposito, John.
The Islamic Threat: Myth or Reality? 3rd ed. New York, Oxford: Oxford UP, 1999. Farrar, Max. “When the Bombs Go Off: Rethinking and Managing Diversity Strategies in Leeds, UK.” International Journal of Diversity in Organisations, Communities and Nations 6.5 (2007): 63-68. Grillo, Ralph. “Islam and Transnationalism.” Journal of Ethnic and Migration Studies 30.5 (Sep. 2004): 861-878. Hall, Stuart. Polity Reader in Cultural Theory. Cambridge: Polity Press, 1994. Huntington, Samuel P. The Clash of Civilizations and the Remaking of World Order. London: Touchstone, 1998. Husain, Ed. The Islamist: Why I Joined Radical Islam in Britain, What I Saw inside and Why I Left. London: Penguin, 2007. Kabir, Nahid. Muslims in Australia: Immigration, Race Relations and Cultural History. London: Kegan Paul, 2005. ———. “What Does It Mean to Be Un-Australian: Views of Australian Muslim Students in 2006.” People and Place 15.1 (2007): 62-79. Khan, Shahnaz. Aversion and Desire: Negotiating Muslim Female Identity in the Diaspora. Toronto: Women’s Press, 2002. Manji, Irshad. The Trouble with Islam Today. Canada: Vintage, 2005. Maslow, Abraham. Motivation and Personality. New York: Harper, 1954. O’Sullivan, J. “The Real British Disease.” Quadrant (Jan.-Feb. 2006): 14-20. Poynting, Scott, and Victoria Mason. “The Resistible Rise of Islamophobia: Anti-Muslim Racism in the UK and Australia before 11 September 2001.” Journal of Sociology 43.1 (2007): 61-86. Saeed, Abdullah. Islam in Australia. Sydney: Allen and Unwin, 2003. Smith, Anthony D. National Identity. Harmondsworth: Penguin, 1991. Spencer, Philip, and Howard Wollman. Nationalism: A Critical Introduction. London: Sage, 2002. Vertovec, Steven. The Hindu Diaspora: Comparative Patterns. London: Routledge, 2000. Werbner, Pnina. “Theorising Complex Diasporas: Purity and Hybridity in the South Asian Public Sphere in Britain.” Journal of Ethnic and Migration Studies 30.5 (2004): 895-911. Wood, Dennis. “The Diaspora, Community and the Vagrant Space.” In Cynthia Vanden Driesen and Ralph Crane, eds., Diaspora: The Australasian Experience. New Delhi: Prestige, 2005. 59-64. Zubaida, Sami. “Islam in Europe: Unity or Diversity.” Critical Quarterly 45.1-2 (2003): 88-98. Citation reference for this article MLA Style Kabir, Nahid. “Why I Call Australia ‘Home’?: A Transmigrant’s Perspective.” M/C Journal 10.4 (2007). <http://journal.media-culture.org.au/0708/15-kabir.php>. APA Style Kabir, N. (Aug. 2007) “Why I Call Australia ‘Home’?: A Transmigrant’s Perspective,” M/C Journal, 10(4). Retrieved from <http://journal.media-culture.org.au/0708/15-kabir.php>.
APA, Harvard, Vancouver, ISO, and other styles
21

Musgrove, Brian Michael. "Recovering Public Memory: Politics, Aesthetics and Contempt." M/C Journal 11, no. 6 (November 28, 2008). http://dx.doi.org/10.5204/mcj.108.

Full text
Abstract:
1. Guy Debord in the Land of the Long Weekend
It’s the weekend – leisure time. It’s the interlude when, Guy Debord contends, the proletarian is briefly free of the “total contempt so clearly built into every aspect of the organization and management of production” in commodity capitalism; when workers are temporarily “treated like grown-ups, with a great show of solicitude and politeness, in their new role as consumers.” But this patronising show turns out to be another form of subjection to the diktats of “political economy”: “the totality of human existence falls under the regime of the ‘perfected denial of man’” (30). As Debord suggests, even the creation of leisure time and space is predicated upon a form of contempt: the “perfected denial” of who we, as living people, really are in the eyes of those who presume the power to legislate our working practices and private identities. This Saturday The Weekend Australian runs an opinion piece by Christopher Pearson, defending ABC Radio National’s Stephen Crittenden, whose program The Religion Report has been axed. “Some of Crittenden’s finest half-hours have been devoted to Islam in Australia in the wake of September 11,” Pearson writes. “Again and again he’s confronted a left-of-centre audience that expected multi-cultural pieties with disturbing assertions.” Along the way in this admirable Crusade, Pearson notes that Crittenden has exposed “the Left’s recent tendency to ally itself with Islam.” According to Pearson, Crittenden has also thankfully given oxygen to claims by James Cook University’s Mervyn Bendle, the “fairly conservative academic whose work sometimes appears in [these] pages,” that “the discipline of critical terrorism studies has been captured by neo-Marxists of a postmodern bent” (30). Both of these points are well beyond misunderstanding or untested proposition. If Pearson means them sincerely he should be embarrassed and sacked. But of course he does not and will not be. These are deliberate lies, the confabulations of an eminent right-wing culture warrior whose job is to vilify minorities and intellectuals (Bendle escapes censure as an academic because he occasionally scribbles for the Murdoch press). It should be observed, too, how the patent absurdity of Pearson’s remarks reveals the extent to which he holds the intelligence of his readers in contempt. And he is not original in peddling these toxic wares. In their insightful—often hilarious—study of Australian opinion writers, The War on Democracy, Niall Lucy and Steve Mickler identify the left-academic-Islam nexus as the brain-child of former Treasurer-cum-memoirist Peter Costello. The germinal moment was “a speech to the Australian American Leadership Dialogue forum at the Art Gallery of NSW in 2005” concerning anti-Americanism in Australian schools. Lucy and Mickler argue that “it was only a matter of time” before a conservative politician or journalist took the plunge to link the left and terrorism, and Costello plunged brilliantly. He drew a mental map of the Great Chain of Being: left-wing academics taught teacher trainees to be anti-American; teacher trainees became teachers and taught kids to be anti-American; anti-Americanism morphs into anti-Westernism; anti-Westernism veers into terrorism (38). This is contempt for the reasoning capacity of the Australian people and, further still, contempt for any observable reality.
Not for nothing was Costello generally perceived by the public as a politician whose very physiognomy radiated smugness and contempt. Recycling Costello, Christopher Pearson’s article subtly interpellates the reader as an ordinary, common-sense individual who instinctively feels what’s right and has no need to think too much—thinking too much is the prerogative of “neo-Marxists” and postmodernists. Ultimately, Pearson’s article is about channelling outrage: directing the down-to-earth passions of the Australian people against stock-in-trade culture-war hate figures. And in Pearson’s paranoid world, words like “neo-Marxist” and “postmodern” are devoid of historical or intellectual meaning. They are, as Lucy and Mickler’s War on Democracy repeatedly demonstrates, mere ciphers packed with the baggage of contempt for independent critical thought itself. Contempt is everywhere this weekend. The Weekend Australian’s colour magazine runs a feature story on Malcolm Turnbull: one of those familiar profiles designed to reveal the everyday human touch of the political classes. In this puff-piece, Jennifer Hewett finds Turnbull has “a restless passion for participating in public life” (20); that beneath “the aggressive political rhetoric […] behind the journalist turned lawyer turned banker turned politician turned would-be prime minister is a man who really enjoys that human interaction, however brief, with the many, many ordinary people he encounters” (16). Given all this energetic turning, it’s a wonder that Turnbull has time for human interactions at all. The distinction here of Turnbull and “many, many ordinary people” – the anonymous masses – surely runs counter to Hewett’s brief to personalise and quotidianise him. Likewise, those two key words, “however brief”, have an unfortunate, unintended effect. Presumably meant to conjure a picture of Turnbull’s hectic schedules and serial turnings, the words also convey the image of a patrician who begrudgingly knows one of the costs of a political career is that common flesh must be pressed—but as gingerly as possible. Hewett proceeds to disclose that Turnbull is “no conservative cultural warrior”, “confounds stereotypes” and “hates labels” (like any baby-boomer rebel) and “has always read widely on political philosophy—his favourite is Edmund Burke”. He sees the “role of the state above all as enabling people to do their best” but knows that “the main game is the economy” and is “content to play mainstream gesture politics” (19). I am genuinely puzzled by this and imagine that my intelligence is being held in contempt once again. That the man of substance is given to populist gesturing is problematic enough; but that the Burke fan believes the state is about personal empowerment is just too much. Maybe Turnbull is a fan of Burke’s complex writings on the sublime and the beautiful—but no, Hewett avers, Turnbull is engaged by Burke’s “political philosophy”. So what is it in Burke that Turnbull finds to favour? Turnbull’s invocation of Edmund Burke is empty, gestural and contradictory. The comfortable notion that the state helps people to realise their potential is contravened by Burke’s view that the state functions so “the inclinations of men should frequently be thwarted, their will controlled, and their passions brought into subjection… by a power out of themselves” (151).
Nor does Burke believe that anyone of humble origins could or should rise to the top of the social heap: “The occupation of an hair-dresser, or of a working tallow-chandler, cannot be a matter of honour to any person… the state suffers oppression, if such as they, either individually or collectively, are permitted to rule” (138). If Turnbull’s main game as a would-be statesman is the economy, Burke profoundly disagrees: “the state ought not to be considered as nothing better than a partnership agreement in a trade of pepper and coffee, callico or tobacco, or some other such low concern… It is a partnership in all science; a partnership in all art; a partnership in every virtue, and in all perfection”—a sublime entity, not an economic manager (194). Burke understands, long before Antonio Gramsci or Louis Althusser, that individuals or social fractions must be made admirably “obedient” to the state “by consent or force” (195). Burke has a verdict on mainstream gesture politics too: “When men of rank sacrifice all ideas of dignity to an ambition without a distinct object, and work with low instruments and for low ends, the whole composition [of the state] becomes low and base” (136). Is Malcolm Turnbull so contemptuous of the public that he assumes nobody will notice the gross discrepancies between his own ideals and what Burke stands for? His invocation of Burke is, indeed, “mainstream gesture politics”: on one level, “Burke” signifies nothing more than Turnbull’s performance of himself as a deep thinker. In this process, the real Edmund Burke is historically erased; reduced to the status of stage-prop in the theatrical production of Turnbull’s mass-mediated identity. “Edmund Burke” is re-invented as a term in an aesthetic repertoire. This transmutation of knowledge and history into mere cipher is the staple trick of culture-war discourse. Jennifer Hewett casts Turnbull as “no conservative culture warrior”, but he certainly shows a facility with culture-war rhetoric. And as much as Turnbull “confounds stereotypes” his verbal gesture to Edmund Burke entrenches a stereotype: at another level, the incantation “Edmund Burke” is implicitly meant to connect Turnbull with conservative tradition—in the exact way that John Howard regularly self-nominated as a “Burkean conservative”. This appeal to tradition effectively places “the people” in a power relation. Tradition has a sublimity that is bigger than us; it precedes us and will outlast us. Consequently, for a politician to claim that tradition has fashioned him, that he is welded to it or perhaps even owns it as part of his heritage, is to glibly imply an authority greater than that of “the many, many ordinary people”—Burke’s hair-dressers and tallow-chandlers—whose company he so briefly enjoys. In The Ideology of the Aesthetic, Terry Eagleton assesses one of Burke’s important legacies, placing him beside another eighteenth-century thinker so loved by the right—Adam Smith. Ideology of the Aesthetic is premised on the view that “Aesthetics is born as a discourse of the body”; that the aesthetic gives form to the “primitive materialism” of human passions and organises “the whole of our sensate life together… a society’s somatic, sensational life” (13). Reading Smith’s Theory of Moral Sentiments, Eagleton discerns that society appears as “an immense machine, whose regular and harmonious movements produce a thousand agreeable effects”, like “any production of human art”.
In Smith’s work, the “whole of social life is aestheticized” and people inhabit “a social order so spontaneously cohesive that its members no longer need to think about it.” In Burke, Eagleton discovers that the aesthetics of “manners” can be understood in terms of Gramscian hegemony: “in the aesthetics of social conduct, or ‘culture’ as it would later be called, the law is always with us, as the very unconscious structure of our life”, and as a result conformity to a dominant ideological order is deeply felt as pleasurable and beautiful (37, 42). When this conservative aesthetic enters the realm of politics, Eagleton contends, the “right turn, from Burke” onwards follows a dark trajectory: “forget about theoretical analysis… view society as a self-grounding organism, all of whose parts miraculously interpenetrate without conflict and require no rational justification. Think with the blood and the body. Remember that tradition is always wiser and richer than one’s own poor, pitiable ego. It is this line of descent, in one of its tributaries, which will lead to the Third Reich” (368–9).
2. Jean Baudrillard, the Nazis and Public Memory
In 1937, during the Spanish Civil War, the Third Reich’s Condor Legion of the Luftwaffe was on loan to Franco’s forces. On 26 April that year, the Condor Legion bombed the market-town of Guernica: the first deliberate attempt to obliterate an entire town from the air and the first experiment in what became known as “terror bombing”—the targeting of civilians. A legacy of this violence was Pablo Picasso’s monumental canvas Guernica – the best-known anti-war painting in art history. When US Secretary of State Colin Powell addressed the United Nations on 5 February 2003 to make the case for war on Iraq, he stopped to face the press in the UN building’s lobby. The doorstop was globally televised, packaged as a moment of incredible significance: history in the making. It was also theatre: a moment in which history was staged as “event” and the real traces of history were carefully erased. Millions of viewers world-wide were undoubtedly unaware that the blue backdrop before which Powell stood was specifically designed to cover the full-scale tapestry copy of Picasso’s Guernica. This one-act, agitprop drama was a splendid example of politics as aesthetic action: a “performance” of history in the making which required the loss of actual historical memory enshrined in Guernica. Powell’s performance took its cues from the culture wars, which require the ceaseless erasure of history and public memory—on this occasion enacted on a breathtaking global, rather than national, scale. Inside the UN chamber, Powell’s performance was equally stage-crafted. As he brandished vials of ersatz anthrax, the PowerPoint display behind him (the theatrical set) showed artists’ impressions of imaginary mobile chemical weapons laboratories. Powell was playing the lead role in a kind of populist, hyperreal production. It was Jean Baudrillard’s postmodernism, no less, as the media space in which Powell acted out the drama was not a secondary representation of reality but a reality of its own; the overheads of mobile weapons labs were simulacra, “models of a real without origins or reality”, pictures referring to nothing but themselves (2).
In short, Powell’s performance was anchored in a “semiurgic” aesthetic; and it was a dreadful real-life enactment of Walter Benjamin’s maxim that “All efforts to render politics aesthetic culminate in one thing: war” (241). For Benjamin, “Fascism attempts to organize the newly created proletarian masses without affecting the property structure which the masses strive to eliminate.” Fascism gave “these masses not their right, but instead a chance to express themselves.” In turn, this required “the introduction of aesthetics into politics”, the objective of which was “the production of ritual values” (241). Under Adolf Hitler’s Reich, people were able to express themselves but only via the rehearsal of officially produced ritual values: by their participation in the disquisition on what Germany meant and what it meant to be German, by the aesthetic regulation of their passions. As Frederic Spotts’ fine study Hitler and the Power of Aesthetics reveals, this passionate disquisition permeated public and private life, through the artfully constructed total field of national narratives, myths, symbols and iconographies. And the ritualistic reiteration of national values in Nazi Germany hinged on two things: contempt and memory loss. By April 1945, as Berlin fell, Hitler’s contempt for the German people was at its apogee. Hitler ordered a scorched earth operation: the destruction of everything from factories to farms to food stores. The Russians would get nothing, the German people would perish. Albert Speer refused to implement the plan and remembered that “Until then… Germany and Hitler had been synonymous in my mind. But now I saw two entities opposed… A passionate love of one’s country… a leader who seemed to hate his people” (Sereny 472). But Hitler’s contempt for the German people was betrayed in the blusterous pages of Mein Kampf years earlier: “The receptivity of the great masses is very limited, their intelligence is small, but their power of forgetting is enormous” (165). On the back of this belief, Hitler launched what today would be called a culture war, with its Jewish folk devils, loathsome Marxist intellectuals, incitement of popular passions, invented traditions, historical erasures and constant iteration of values. When Theodor Adorno and Max Horkheimer fled Fascism, landing in the United States, their view of capitalist democracy borrowed from Benjamin and anticipated both Baudrillard and Guy Debord. In their well-known essay on “The Culture Industry”, in Dialectic of Enlightenment, they applied Benjamin’s insight on mass self-expression and the maintenance of property relations and ritual values to American popular culture: “All are free to dance and enjoy themselves”, but the freedom to choose how to do so “proves to be the freedom to choose what is always the same”, manufactured by monopoly capital (161–162). Anticipating Baudrillard, they found a society in which “only the copy appears: in the movie theatre, the photograph; on the radio, the recording” (143). And anticipating Debord’s “perfected denial of man” they found a society where work and leisure were structured by the repetition-compulsion principles of capitalism: where people became consumers who appeared “as statistics on research organization charts” (123).
“Culture” came to do people’s thinking for them: “Pleasure always means not to think about anything, to forget suffering even where it is shown” (144). In this mass-mediated environment, a culture of repetitions, simulacra, billboards and flickering screens, Adorno and Horkheimer concluded that language lost its historical anchorages: “Innumerable people use words and expressions which they have either ceased to understand or employ only because they trigger off conditioned reflexes” in precisely the same way that the illusory “free” expression of passions in Germany operated, where words were “debased by the Fascist pseudo-folk community” (166). I know that the US and Australia, the turf of the culture wars, are not Fascist states; and I know that “the first one to mention the Nazis loses the argument”. I know, too, that there are obvious shortcomings in Adorno and Horkheimer’s reactions to popular culture and these have been widely criticised. However, I would suggest that there is a great deal of value still in Frankfurt School analyses of what we might call the “authoritarian popular” which can be applied to the conservative prosecution of populist culture wars today. Think, for example, of how the concept of a “pseudo folk community” might well describe the earthy, common-sense public constructed and interpellated by right-wing culture warriors: America’s Joe Six-Pack, John Howard’s battlers or Kevin Rudd’s working families. In fact, Adorno and Horkheimer’s observations on language go to the heart of a contemporary culture war strategy. Words lose their history, becoming ciphers and “triggers” in a politicised lexicon. Later, Roland Barthes would write that this is a form of myth-making: “myth is constituted by the loss of the historical quality of things.” Barthes reasoned further that “Bourgeois ideology continuously transforms the products of history into essential types”, generating a “cultural logic” and an ideological re-ordering of the world (142). Types such as “neo-Marxist”, “postmodernist” and “Burkean conservative”. Surely, Benjamin’s assessment that Fascism gives “the people” the occasion to express itself, but only through “values”, describes the right’s pernicious incitement of the mythic “dispossessed mainstream” to reclaim its voice: to shout down the noisy minorities—the gays, greenies, blacks, feminists, multiculturalists and neo-Marxist postmodernists—who’ve apparently been running the show. Even more telling, Benjamin’s insight that the incitement to self-expression is connected to the maintenance of property relations, to economic power, is crucial to understanding the contemptuous conduct of culture wars.
3. Jesus Dunked in Urine from Kansas to Cronulla
American commentator Thomas Frank bases his study What’s the Matter with Kansas? on this very point. Subtitled How Conservatives Won the Heart of America, Frank’s book is a striking analysis of the indexation of Chicago School free-market reform and the mobilisation of “explosive social issues—summoning public outrage over everything from busing to un-Christian art—which it then marries to pro-business policies”; but it is the “economic achievements” of free-market capitalism, “not the forgettable skirmishes of the never-ending culture wars” that are conservatism’s “greatest monuments.” Nevertheless, the culture wars are necessary as Chicago School economic thinking consigns American communities to the rust belt.
The promise of “free-market miracles” fails ordinary Americans, Frank reasons, leaving them in “backlash” mode: angry, bewildered and broke. And in this context, culture wars are a convenient form of anger management: “Because some artist decides to shock the hicks by dunking Jesus in urine, the entire planet must remake itself along the lines preferred” by nationalist, populist moralism and free-market fundamentalism (5). When John Howard received the neo-conservative American Enterprise Institute’s Irving Kristol Award, on 5 March 2008, he gave a speech in Washington titled “Sharing Our Common Values”. The nub of the speech was Howard’s revelation that he understood the index of neo-liberal economics and culture wars precisely as Thomas Frank does. Howard told the AEI audience that under his prime ministership Australia had “pursued reform and further modernisation of our economy” and that this inevitably meant “dislocation for communities”. This “reform-dislocation” package needed the palliative of a culture war, with his government preaching the “consistency and reassurance” of “our nation’s traditional values… pride in her history”; his government “became assertive about the intrinsic worth of our national identity. In the process we ended the seemingly endless seminar about that identity which had been in progress for some years.” Howard’s boast that his government ended the “seminar” on national identity insinuates an important point. “Seminar” is a culture-war cipher for intellection, just as “pride” is code for passion; so Howard’s self-proclaimed achievement, in Terry Eagleton’s terms, was to valorise “the blood and the body” over “theoretical analysis”. This bespeaks stratospheric contempt: ordinary people have their identity fashioned for them; they need not think about it, only feel it deeply and passionately according to “ritual values”. Undoubtedly this paved the way to Cronulla. The rubric of Howard’s speech—“Sharing Our Common Values”—was both a homage to international neo-conservatism and a reminder that culture wars are a trans-national phenomenon. In his address, Howard said that in all his “years in politics” he had not heard a “more evocative political slogan” than Ronald Reagan’s “Morning in America”—the rhetorical catch-cry for moral re-awakening that launched the culture wars. According to Lawrence Grossberg, America’s culture wars were predicated on the perception that the nation was afflicted by “a crisis of our lack of passion, of not caring enough about the values we hold… a crisis of nihilism which, while not restructuring our ideological beliefs, has undermined our ability to organise effective action on their behalf”; and this “New Right” alarmism “operates in the conjuncture of economics and popular culture” and “a popular struggle by which culture can lead politics” in the passionate pursuit of ritual values (31–2). When popular culture leads politics in this way we are in the zone of the image, myth and Adorno and Horkheimer’s “trigger words” that have lost their history. In this context, McKenzie Wark observes that “radical writers influenced by Marx will see the idea of culture as compensation for a fragmented and alienated life as a con. Guy Debord, perhaps the last of the great revolutionary thinkers of Europe, will call it ‘the spectacle’” (20). Adorno and Horkheimer might well have called it “the authoritarian popular”.
As Jonathan Charteris-Black’s work capably demonstrates, all politicians have their own idiolect: their personally coded language, preferred narratives and myths; their own vision of who “the people” might or should be that is conjured in their words. But the language of the culture wars is different. It is not a personal idiolect. It is a shared vocabulary, a networked vernacular, a pervasive trans-national aesthetic that pivots on the fact that words like “neo-Marxist”, “postmodern” and “Edmund Burke” have no historical or intellectual context or content: they exist as the ciphers of “values”. And the fact that culture warriors continually mouth them is a supreme act of contempt: it robs the public of its memory. And that’s why, as Lucy and Mickler’s War on Democracy so wittily argues, if there are any postmodernists left they’ll be on the right. Benjamin, Adorno, Horkheimer and, later, Debord and Grossberg understood how the political activation of the popular constitutes a hegemonic project. The result is nothing short of persuading “the people” to collaborate in its own oppression. The activation of the popular is perfectly geared to an age where the main stage of political life is the mainstream media; an age in which, Charteris-Black notes, political classes assume the general antipathy of publics to social change and act on the principle that the most effective political messages are sold to “the people” by an appeal “to familiar experiences”—market populism (10). In her substantial study The Persuaders, Sally Young cites an Australian Labor Party survey, conducted by pollster Rod Cameron in the late 1970s, in which the party’s message machine was finely tuned to this populist position. The survey also dripped with contempt for ordinary people: their “Interest in political philosophy… is very low… They are essentially the products (and supporters) of mass market commercialism”. Young observes that this view of “the people” was the foundation of a new order of political advertising and the conduct of politics on the mass-media stage. Cameron’s profile of “ordinary people” went on to assert that they are fatally attracted to “a moderate leader who is strong… but can understand and represent their value system” (47): a prescription for populist discourse which begs the question of whether the values a politician or party represents via the media are ever really those of “the people”. More likely, people are hegemonised into a value system which they take to be theirs. Writing of the media side of the equation, David Salter raises the point that when media “moguls thunder about ‘the public interest’ what they really mean is ‘what we think the public is interested in’, which is quite another matter… Why this self-serving deception is still so sheepishly accepted by the same public it is so often used to violate remains a mystery” (40). Sally Young’s Persuaders retails a story that she sees as “symbolic” of the new world of mass-mediated political life. The story concerns Mark Latham and his “revolutionary” journeys to regional Australia to meet the people. “When a political leader who holds a public meeting is dubbed a ‘revolutionary’”, Young rightly observes, “something has gone seriously wrong”. She notes how Latham’s “use of old-fashioned ‘meet-and-greet’ campaigning methods was seen as a breath of fresh air because it was unlike the type of packaged, stage-managed and media-dependent politics that have become the norm in Australia.” Except that it wasn’t.
“A media pack of thirty journalists trailed Latham in a bus”, meaning that he was not meeting the people at all (6–7). He was traducing the people as participants in a media spectacle, as his “meet-and-greet” was designed to fill the image-banks of print and electronic media. Even meeting the people becomes a media pseudo-event in which the people impersonate the people for the camera’s benefit; a spectacle as artfully deceitful as Colin Powell’s UN performance on Iraq. If the success of this kind of “self-serving deception” is a mystery to David Salter, it would not be so to the Frankfurt School. For them, an understanding of the processes of mass-mediated politics sits somewhere near the core of their analysis of the culture industries in the “democratic” world. I think the Frankfurt School should be restored to a more important role in the project of cultural studies. Apart from an aversion to jazz and other supposedly “elitist” heresies, thinkers like Adorno, Benjamin, Horkheimer and their progeny Debord have a functional claim to provide the theory for us to expose the machinations of the politics of contempt and its aesthetic ruses.
References
Adorno, Theodor, and Max Horkheimer. “The Culture Industry: Enlightenment as Mass Deception.” Dialectic of Enlightenment. London: Verso, 1979. 120–167. Barthes, Roland. “Myth Today.” Mythologies. Trans. Annette Lavers. St Albans: Paladin, 1972. 109–58. Baudrillard, Jean. Simulations. New York: Semiotext(e), 1983. Benjamin, Walter. “The Work of Art in the Age of Mechanical Reproduction.” Illuminations. Ed. Hannah Arendt. Trans. Harry Zorn. New York: Schocken Books, 1969. 217–251. Burke, Edmund. Reflections on the Revolution in France. Ed. Conor Cruise O’Brien. Harmondsworth: Penguin, 1969. Charteris-Black, Jonathan. Politicians and Rhetoric: The Persuasive Power of Metaphor. Houndmills: Palgrave Macmillan, 2006. Debord, Guy. The Society of the Spectacle. Trans. Donald Nicholson-Smith. New York: Zone Books, 1994. Eagleton, Terry. The Ideology of the Aesthetic. Oxford: Basil Blackwell, 1990. Frank, Thomas. What’s the Matter with Kansas?: How Conservatives Won the Heart of America. New York: Henry Holt and Company, 2004. Grossberg, Lawrence. “It’s a Sin: Politics, Post-Modernity and the Popular.” It’s a Sin: Essays on Postmodern Politics & Culture. Eds. Tony Fry, Ann Curthoys and Paul Patton. Sydney: Power Publications, 1988. 6–71. Hewett, Jennifer. “The Opportunist.” The Weekend Australian Magazine. 25–26 October 2008. 16–22. Hitler, Adolf. Mein Kampf. Trans. Ralph Manheim. London: Pimlico, 1993. Howard, John. “Sharing Our Common Values.” Washington: Irving Kristol Lecture, American Enterprise Institute. 5 March 2008. ‹http://www.theaustralian.news.com.au/story/0,25197,233328945-5014047,00.html›. Lucy, Niall, and Steve Mickler. The War on Democracy: Conservative Opinion in the Australian Press. Crawley: University of Western Australia Press, 2006. Pearson, Christopher. “Pray for Sense to Prevail.” The Weekend Australian. 25–26 October 2008. 30. Salter, David. The Media We Deserve: Underachievement in the Fourth Estate. Melbourne: Melbourne UP, 2007. Sereny, Gitta. Albert Speer: His Battle with Truth. London: Picador, 1996. Spotts, Frederic. Hitler and the Power of Aesthetics. London: Pimlico, 2003. Wark, McKenzie. The Virtual Republic: Australia’s Culture Wars of the 1990s. St Leonards: Allen & Unwin, 1997. Young, Sally. The Persuaders: Inside the Hidden Machine of Political Advertising. Melbourne: Pluto Press, 2004.
APA, Harvard, Vancouver, ISO, and other styles
22

Burns, Alex. "Doubting the Global War on Terror." M/C Journal 14, no. 1 (January 24, 2011). http://dx.doi.org/10.5204/mcj.338.

Full text
Abstract:
Photograph by Gonzalo Echeverria (2010)
Declaring War
Soon after Al Qaeda’s terrorist attacks on 11 September 2001, the Bush Administration described its new grand strategy: the “Global War on Terror”. This underpinned the subsequent counter-insurgency in Afghanistan and the United States invasion of Iraq in March 2003. Media pundits quickly applied the Global War on Terror label to the Madrid, Bali and London bombings, to convey how Al Qaeda’s terrorism had gone transnational. Meanwhile, international relations scholars debated the extent to which September 11 had changed the international system (Brenner; Mann 303). American intellectuals adopted several variations of the Global War on Terror in what initially felt like a transitional period of US foreign policy (Burns). Walter Laqueur suggested Al Qaeda was engaged in a “cosmological” and perpetual war. Paul Berman likened Al Qaeda and militant Islam to the past ideological battles against communism and fascism (Heilbrunn 248). In a widely cited article, neoconservative thinker Norman Podhoretz suggested the United States faced “World War IV”, which had three interlocking drivers: Al Qaeda and trans-national terrorism; political Islam as the West’s existential enemy; and nuclear proliferation to ‘rogue’ countries and non-state actors (Friedman 3). Podhoretz’s tone reflected a revival of his earlier Cold War politics and critique of the New Left (Friedman 148-149; Halper and Clarke 56; Heilbrunn 210). These stances attracted widespread support. For instance, the United States Marine Corps recalibrated its mission to fight a long war against “World War IV-like” enemies. Yet these stances left the United States unprepared as the combat situations in Afghanistan and Iraq worsened (Ricks; Ferguson; Filkins). Neoconservative ideals for Iraq “regime change” to transform the Middle East failed to deal with other security problems such as Pakistan’s Musharraf regime (Dorrien 110; Halper and Clarke 210-211; Friedman 121, 223; Heilbrunn 252). The Manichean and open-ended framing became a self-fulfilling prophecy for insurgents, jihadists, and militias. The Bush Administration quietly abandoned the Global War on Terror in July 2005. Widespread support had given way to policymaker doubt. Why did so many intellectuals and strategists embrace the Global War on Terror as the best possible “grand strategy” perspective of a post-September 11 world? Why was there so little doubt of this worldview? This is a debate with roots as old as the Sceptics versus the Sophists. Explanations usually focus on the Bush Administration’s “Vulcans” war cabinet: Vice President Dick Cheney, Secretary of Defense Donald Rumsfeld, and National Security Advisor Condoleezza Rice, who later became Secretary of State (Mann xv-xvi). The “Vulcans” were named after the Roman god Vulcan because Rice’s hometown Birmingham, Alabama, had “a mammoth fifty-six foot statue . . . [in] homage to the city’s steel industry” (Mann x) and the name stuck. Alternatively, explanations focus on how neoconservative thinkers shaped the intellectual climate after September 11, in a receptive media climate. Biographers suggest that “neoconservatism had become an echo chamber” (Heilbrunn 242) with its own media outlets, pundits, and think-tanks such as the American Enterprise Institute and the Project for the New American Century. Neoconservatism briefly flourished in Washington DC until Iraq’s sectarian violence discredited the “Vulcans” and neoconservative strategists like Paul Wolfowitz (Friedman; Ferguson).
The neoconservatives' combination of September 11's aftermath with strongly argued historical analogies was initially convincing. They conferred with scholars such as Bernard Lewis, Samuel P. Huntington and Victor Davis Hanson to construct classicist historical narratives and to explain cultural differences. However, the history of the decade after September 11 also contains mis-steps and mistakes which make it a series of contingent decisions (Ferguson; Bergen). One way to analyse these contingent decisions is to pose "what if?" counterfactuals, or feasible alternatives to historical events (Lebow). For instance, what if September 11 had been a chemical and biological weapons attack? (Mann 317). Appendix 1 includes a range of alternative possibilities and "minimal rewrites" or slight variations on the historical events which occurred. Collectively, these counterfactuals suggest the role of agency, chance, luck, and the juxtaposition of better and worse outcomes. They pose challenges to the classicist interpretation adopted soon after September 11 to justify "World War IV" (Podhoretz). A 'Two-Track' Process for 'World War IV' After the September 11 attacks, I think an overlapping two-track process occurred with the "Vulcans" cabinet, neoconservative advisers, and two "echo chambers": neoconservative think-tanks and the post-September 11 media. Crucially, Bush's "Vulcans" war cabinet succeeded in gaining civilian control of the United States war decision process. Although successful in initiating the 2003 Iraq War, this civilian control created a deeper crisis in US civil-military relations (Stevenson; Morgan). The "Vulcans" relied on "politicised" intelligence such as a United Kingdom intelligence report on Iraq's weapons development program. The report enabled "a climate of undifferentiated fear to arise" because its public version did not distinguish between chemical, biological, radiological or nuclear weapons (Halper and Clarke 210). The cautious 2003 National Intelligence Estimate (NIE) on Iraq was only released in a strongly edited form. For instance, the US Department of Energy had expressed doubts about claims that Iraq had approached Niger for uranium and was using aluminium tubes for nuclear weapons development. Meanwhile, the post-September 11 media had become a second "echo chamber" (Halper and Clarke 194-196) which amplified neoconservative arguments. Berman, Laqueur, Podhoretz and others who framed the intellectual climate were "risk entrepreneurs" (Mueller 41-43) who supported the "World War IV" vision. The media also engaged in aggressive "flak" campaigns (Herman and Chomsky 26-28; Mueller 39-42) designed to limit debate and to stress foreign policy stances and themes which supported the Bush Administration. When former Central Intelligence Agency director James Woolsey claimed that Al Qaeda had close connections to Iraqi intelligence, the claim was promoted in several books, including Michael Ledeen's War Against The Terror Masters, Stephen Hayes' The Connection, and Laurie Mylroie's Bush v. The Beltway; and in partisan media such as Fox News, NewsMax, and The Weekly Standard, which each attacked the US State Department and the CIA (Dorrien 183; Hayes; Ledeen; Mylroie; Heilbrunn 237, 243-244; Mann 310). This was the media "echo chamber" at work. The group Accuracy in Media also campaigned successfully to ensure that US cable providers did not give Al Jazeera English access to US audiences (Barker). 
Cosmopolitan ideals seemed incompatible with what the "flak" groups desired. The two-track process converged on two now-infamous speeches: US President Bush's State of the Union Address on 29 January 2002, and US Secretary of State Colin Powell's presentation to the United Nations on 5 February 2003. Bush's speech included a line from neoconservative David Frum about North Korea, Iraq and Iran as an "Axis of Evil" (Dorrien 158; Halper and Clarke 139-140; Mann 242, 317-321). Powell's presentation to the United Nations included now-debunked threat assessments. In fact, Powell had altered the speech's original draft by I. Lewis "Scooter" Libby, who was Cheney's chief of staff (Dorrien 183-184). Powell claimed that Iraq had mobile biological weapons facilities, linked to Abu Musab al-Zarqawi. However, the International Atomic Energy Agency's (IAEA) Mohamed El-Baradei, the Defense Intelligence Agency, the State Department, and the Institute for Science and International Security all strongly doubted this claim, as did international observers (Dorrien 184; Halper and Clarke 212-213; Mann 353-354). Yet this information was suppressed: attacked by "flak" or given little visible media coverage. Powell's agenda included trying to rebuild an international coalition and to head off weather changes that would affect military operations in the Middle East (Mann 351). Both speeches used politicised variants of "weapons of mass destruction", taken from the counterterrorism literature (Stern; Laqueur). Bush's speech created an inflated geopolitical threat whilst Powell relied on flawed intelligence and scientific visuals to communicate a non-existent threat (Vogel). However, they had the intended effect on decision makers. US Deputy Secretary of Defense, the neoconservative Paul Wolfowitz, later revealed to Vanity Fair that "weapons of mass destruction" was selected as an issue that all potential stakeholders could agree on (Wilkie 69). Perhaps the only remaining outlet was satire: Armando Iannucci's 2009 film In The Loop parodied the diplomatic politics surrounding Powell's speech and the civil-military tensions on the Iraq War's eve. In the short term the two-track process worked in heading off doubt. The "Vulcans" blocked important information on pre-war Iraq intelligence from reaching the media and the general public (Prados). Alternatively, they ignored area specialists and other experts, such as when the Coalition Provisional Authority's L. Paul Bremer set aside the US State Department's fifteen-volume 'Future of Iraq' project (Ferguson). Public "flak" and "risk entrepreneurs" mobilised a range of motivations from grief and revenge to historical memory and identity politics. This combination of private and public processes meant that although doubts were expressed, they could be contained through the dual echo chambers of neoconservative policymaking and the post-September 11 media. These factors enabled the "Vulcans" to proceed with their "regime change" plans despite strong public opposition from anti-war protestors. Expressing Doubts Many experts and institutions expressed doubt about specific claims the Bush Administration made to support the 2003 Iraq War. This doubt came from three different and sometimes overlapping groups. Subject matter experts such as the IAEA's Mohamed El-Baradei and weapons development scientists countered the UK intelligence report and Powell's UN speech. However, they did not get the media coverage they warranted due to "flak" and "echo chamber" dynamics. 
Others could challenge misleading historical analogies between insurgent Iraq and Nazi Germany, and yet not change the broader outcomes (Benjamin). Independent journalists were one group who gained new information during the 1990-91 Gulf War: some entered Iraq from Kuwait and documented a more humanitarian side of the war than journalists embedded with US military units (Uyarra). Finally, there were dissenters from bureaucratic and institutional processes. In some cases, all three overlapped. In their separate analyses of the post-September 11 debate on intelligence "failure", Zegart and Jervis point to a range of analytic misperceptions and institutional problems. However, the intelligence community is separated from policymakers such as the "Vulcans". Compartmentalisation due to the "need to know" principle also means that doubting analysts can be blocked from releasing information. Andrew Wilkie discovered this when he resigned from Australia's Office of National Assessments (ONA) as a transnational issues analyst. Wilkie questioned the pre-war assessments in Powell's United Nations speech that were used to justify the 2003 Iraq War. Wilkie was then attacked publicly by Australian Prime Minister John Howard. This overshadowed a more important fact: both Howard and Wilkie knew that due to Australian legislation, Wilkie could not publicly comment on ONA intelligence, despite the invitation to do so. This barrier also prevented other intelligence analysts from responding to the "Vulcans", and to "flak" and "echo chamber" dynamics in the media and neoconservative think-tanks. Many analysts knew that the excerpts released from the 2003 NIE on Iraq were highly edited (Prados). For example, Australian agencies such as the ONA, the Department of Foreign Affairs and Trade, and the Department of Defence knew this (Wilkie 98). However, analysts are trained not to interfere with policymakers, even when there are significant civil-military irregularities. Military officials who spoke out about pre-war planning against the "Vulcans" and their neoconservative supporters were silenced (Ricks; Ferguson). Greenlight Capital's hedge fund manager David Einhorn illustrates in a different context what might happen if analysts did comment. Einhorn gave a speech to the Ira Sohn Conference on 15 May 2002 debunking the management of Allied Capital. Einhorn's "short-selling" led to retaliation from Allied Capital, a Securities and Exchange Commission investigation, and growing evidence of potential fraud. If analysts had adopted Einhorn's tactics of combining rigorous analysis with targeted, widely reported public denunciation, then this might have short-circuited the "flak" and "echo chamber" effects prior to the 2003 Iraq War. The intelligence community usually tries to pre-empt such outcomes via contestation exercises and similar processes. This was the goal of the 2003 NIE on Iraq, despite the fact that the US Department of Energy, which had the relevant expertise, was overruled by other agencies whose opinions were not necessarily based on rigorous scientific and technical analysis (Prados; Vogel). In counterterrorism circles, similar disinformation arose about Aum Shinrikyo's biological weapons research after its sarin gas attack on Tokyo's subway system on 20 March 1995 (Leitenberg). Disinformation also arose regarding nuclear weapons proliferation to non-state actors in the 1990s (Stern). 
Interestingly, several of the "Vulcans" and neoconservatives had been involved in an earlier controversial contestation exercise: Team B in 1976. The Central Intelligence Agency (CIA) assembled three Team B groups in order to evaluate and forecast Soviet military capabilities. One group headed by historian Richard Pipes gave highly "alarmist" forecasts and then attacked a CIA NIE about the Soviets (Dorrien 50-56; Mueller 81). The neoconservatives adopted these same tactics to reframe the 2003 NIE from its position of caution, expressed by several intelligence agencies and experts, to the belief that Iraq possessed a current, covert program to develop weapons of mass destruction (Prados). Alternatively, information may be leaked to the media to express doubt. "Non-attributable" background interviews with establishment journalists like Seymour Hersh and Bob Woodward achieved this. Wikileaks publisher Julian Assange has recently achieved notoriety due to US diplomatic cables from the SIPRNet network released from 28 November 2010 onwards. Supporters have favourably compared Assange to Daniel Ellsberg, the RAND researcher who leaked the Pentagon Papers (Ellsberg; Ehrlich and Goldsmith). Whilst Ellsberg succeeded because a network of US national papers continued to print excerpts from the Pentagon Papers despite lawsuit threats, Assange relied in part on favourable coverage from the UK's Guardian newspaper. However, suspected sources such as US Army soldier Bradley Manning are not protected whilst media outlets are relatively free to publish their scoops (Walt, 'Woodward'). Assange's publication of SIPRNet's diplomatic cables will also likely mean greater restrictions on diplomatic and military intelligence (Walt, 'Don't Write'). Beyond 'Doubt' Iraq's worsening security discredited many of the factors that had given the neoconservatives credibility. The post-September 11 media became increasingly critical of the US military in Iraq (Ferguson) and more cautious about the "echo chamber" of think-tanks and media outlets. Internet sites for Al Jazeera English, Al-Arabiya and other networks have enabled people to bypass "flak" and directly access these different viewpoints. Most damagingly, the non-discovery of Iraq's weapons of mass destruction discredited both the 2003 NIE on Iraq and Colin Powell's United Nations presentation (Wilkie 104). Likewise, "risk entrepreneurs" who foresaw "World War IV" in 2002 and 2003 have now distanced themselves from these apocalyptic forecasts due to a series of mis-steps and mistakes by the Bush Administration and Al Qaeda's over-calculation (Bergen). The emergence of sites such as Wikileaks, and networks like Al Jazeera English and Al-Arabiya, is a response to the politics of the past decade. They attempt to short-circuit past "echo chambers" by providing access to different sources and leaked data. The Global War on Terror framed the Bush Administration's response to September 11 as a war (Kirk; Mueller 59). Whilst this prematurely closed off other possibilities, it has also unleashed a series of dynamics which have undermined the neoconservative agenda. The "classicist" history and historical analogies constructed to justify the "World War IV" scenario are just one of several potential frameworks. "Flak" organisations and media "echo chambers" are now challenged by well-financed and strategic alternatives such as Al Jazeera English and Al-Arabiya. 
Doubt is one defence against "risk entrepreneurs" who seek to promote a particular idea: doubt guards against uncritical adoption. Perhaps the enduring lesson of the post-September 11 debates, though, is that doubt alone is not enough. What is needed are individuals and institutions that understand the strategies which the neoconservatives and others have used, and who also have the soft-power skills during crises to influence critical decision-makers to choose alternatives. Appendix 1: Counterfactuals Richard Ned Lebow uses "what if?" counterfactuals to examine alternative possibilities and "minimal rewrites" or slight variations on the historical events that occurred. The following counterfactuals suggest that the Bush Administration's Global War on Terror could have evolved very differently . . . or not occurred at all. Fact: The 2003 Iraq War and 2001 Afghanistan counterinsurgency shaped the Bush Administration's post-September 11 grand strategy. Counterfactual #1: Al Gore decisively wins the 2000 U.S. election. Bush v. Gore never occurs. After the September 11 attacks, Gore focuses on international alliance-building and gains widespread diplomatic support rather than pursuing a neoconservative agenda. He authorises Special Operations Forces in Afghanistan and works closely with the Musharraf regime in Pakistan to target Al Qaeda's mujahideen. He 'contains' Saddam Hussein's Iraq through measurement and signature intelligence, technical intelligence, and more stringent monitoring by the International Atomic Energy Agency. Minimal Rewrite: United 93 crashes in Washington DC, killing senior members of the Gore Administration. Fact: U.S. Special Operations Forces failed to kill Osama bin Laden in late November and early December 2001 at Tora Bora. Counterfactual #2: U.S. Special Operations Forces kill Osama bin Laden in early December 2001 during skirmishes at Tora Bora. Ayman al-Zawahiri is critically wounded, captured, and imprisoned. The rest of Al Qaeda is scattered. Minimal Rewrite: Osama bin Laden's death turns him into a self-mythologised hero for decades. Fact: The UK Blair Government supplied a 50-page intelligence dossier on Iraq's weapons development program which the Bush Administration used to support its pre-war planning. Counterfactual #3: Rogue intelligence analysts debunk the UK Blair Government's claims through a series of 'targeted' leaks to establishment news sources. Minimal Rewrite: The 50-page intelligence dossier is later discovered to be correct about Iraq's weapons development program. Fact: The Bush Administration used the 2003 National Intelligence Estimate to "build its case" for "regime change" in Saddam Hussein's Iraq. Counterfactual #4: A joint investigation by The New York Times and The Washington Post rebuts U.S. Secretary of State Colin Powell's speech to the United Nations Security Council, delivered on 5 February 2003. Minimal Rewrite: The Central Intelligence Agency's whitepaper "Iraq's Weapons of Mass Destruction Programs" (October 2002) more accurately reflects the 2003 NIE's cautious assessments. Fact: The Bush Administration relied on Ahmed Chalabi for its postwar estimates about Iraq's reconstruction. Counterfactual #5: The Bush Administration ignores Chalabi's advice and relies instead on the U.S. State Department's fifteen-volume report "The Future of Iraq". Minimal Rewrite: The Coalition Provisional Authority appoints Ahmed Chalabi to head an interim Iraqi government. Fact: L. Paul Bremer signed orders to disband Iraq's Army and to De-Ba'athify Iraq's new government. 
Counterfactual #6: Bremer keeps Iraq's Army intact and uses it to impose security in Baghdad to prevent looting and to thwart insurgents. Rather than a De-Ba'athification policy, Bremer uses former Baath Party members to gather situational intelligence. Minimal Rewrite: Iraq's Army refuses to disband and the De-Ba'athification policy uncovers several conspiracies to undermine the Coalition Provisional Authority. Acknowledgments Thanks to Stephen McGrail for advice on science and technology analysis. References Barker, Greg. "War of Ideas". PBS Frontline. Boston, MA: 2007. ‹http://www.pbs.org/frontlineworld/stories/newswar/video1.html› Benjamin, Daniel. "Condi's Phony History." Slate 29 Aug. 2003. ‹http://www.slate.com/id/2087768/pagenum/all/›. Bergen, Peter L. The Longest War: The Enduring Conflict between America and Al Qaeda. New York: The Free Press, 2011. Berman, Paul. Terror and Liberalism. New York: W.W. Norton & Company, 2003. Brenner, William J. "In Search of Monsters: Realism and Progress in International Relations Theory after September 11." Security Studies 15.3 (2006): 496-528. Burns, Alex. "The Worldflash of a Coming Future." M/C Journal 6.2 (April 2003). ‹http://journal.media-culture.org.au/0304/08-worldflash.php›. Dorrien, Gary. Imperial Designs: Neoconservatism and the New Pax Americana. New York: Routledge, 2004. Ehrlich, Judith, and Rick Goldsmith. The Most Dangerous Man in America: Daniel Ellsberg and the Pentagon Papers. Berkeley, CA: Kovno Communications, 2009. Einhorn, David. Fooling Some of the People All of the Time: A Long Short (and Now Complete) Story. Hoboken, NJ: John Wiley & Sons, 2010. Ellison, Sarah. "The Man Who Spilled The Secrets." Vanity Fair (Feb. 2011). ‹http://www.vanityfair.com/politics/features/2011/02/the-guardian-201102›. Ellsberg, Daniel. Secrets: A Memoir of Vietnam and the Pentagon Papers. New York: Viking, 2002. Ferguson, Charles. No End in Sight. New York: Representational Pictures, 2007. Filkins, Dexter. The Forever War. New York: Vintage Books, 2008. Friedman, Murray. The Neoconservative Revolution: Jewish Intellectuals and the Shaping of Public Policy. New York: Cambridge UP, 2005. Halper, Stefan, and Jonathan Clarke. America Alone: The Neo-Conservatives and the Global Order. New York: Cambridge UP, 2004. Hayes, Stephen F. The Connection: How Al Qaeda's Collaboration with Saddam Hussein Has Endangered America. New York: HarperCollins, 2004. Heilbrunn, Jacob. They Knew They Were Right: The Rise of the Neocons. New York: Doubleday, 2008. Herman, Edward S., and Noam Chomsky. Manufacturing Consent: The Political Economy of the Mass Media. Rev. ed. New York: Pantheon Books, 2002. Iannucci, Armando. In The Loop. London: BBC Films, 2009. Jervis, Robert. Why Intelligence Fails: Lessons from the Iranian Revolution and the Iraq War. Ithaca, NY: Cornell UP, 2010. Kirk, Michael. "The War behind Closed Doors." PBS Frontline. Boston, MA: 2003. ‹http://www.pbs.org/wgbh/pages/frontline/shows/iraq/›. Laqueur, Walter. No End to War: Terrorism in the Twenty-First Century. New York: Continuum, 2003. Lebow, Richard Ned. Forbidden Fruit: Counterfactuals and International Relations. Princeton, NJ: Princeton UP, 2010. Ledeen, Michael. The War against The Terror Masters. New York: St. Martin's Griffin, 2003. Leitenberg, Milton. "Aum Shinrikyo's Efforts to Produce Biological Weapons: A Case Study in the Serial Propagation of Misinformation." Terrorism and Political Violence 11.4 (1999): 149-158. Mann, James. Rise of the Vulcans: The History of Bush's War Cabinet. 
New York: Viking Penguin, 2004. Morgan, Matthew J. The American Military after 9/11: Society, State, and Empire. New York: Palgrave Macmillan, 2008. Mueller, John. Overblown: How Politicians and the Terrorism Industry Inflate National Security Threats, and Why We Believe Them. New York: The Free Press, 2009. Mylroie, Laurie. Bush v. The Beltway: The Inside Battle over War in Iraq. New York: Regan Books, 2003. Nutt, Paul C. Why Decisions Fail. San Francisco: Berrett-Koehler, 2002. Podhoretz, Norman. "How to Win World War IV". Commentary 113.2 (2002): 19-29. Prados, John. Hoodwinked: The Documents That Reveal How Bush Sold Us a War. New York: The New Press, 2004. Ricks, Thomas. Fiasco: The American Military Adventure in Iraq. New York: The Penguin Press, 2006. Stern, Jessica. The Ultimate Terrorists. Cambridge, MA: Harvard UP, 2001. Stevenson, Charles A. Warriors and Politicians: US Civil-Military Relations under Stress. New York: Routledge, 2006. Uyarra, Esteban Manzanares. "War Feels like War". London: BBC, 2003. Vogel, Kathleen M. "Iraqi Winnebagos™ of Death: Imagined and Realized Futures of US Bioweapons Threat Assessments." Science and Public Policy 35.8 (2008): 561–573. Walt, Stephen M. "'Don't Write If You Can Talk...': The Latest from WikiLeaks." Foreign Policy 29 Nov. 2010. ‹http://walt.foreignpolicy.com/posts/2010/11/29/dont_write_if_you_can_talk_the_latest_from_wikileaks›. Walt, Stephen M. "Should Bob Woodward Be Arrested?" Foreign Policy 10 Dec. 2010. ‹http://walt.foreignpolicy.com/posts/2010/12/10/more_wikileaks_double_standards›. Wilkie, Andrew. Axis of Deceit. Melbourne: Black Ink Books, 2003. Zegart, Amy. Spying Blind: The CIA, the FBI and the Origins of 9/11. Princeton, NJ: Princeton UP, 2007.
APA, Harvard, Vancouver, ISO, and other styles
23

Smith, Jenny Leigh. "Tushonka: Cultivating Soviet Postwar Taste." M/C Journal 13, no. 5 (October 17, 2010). http://dx.doi.org/10.5204/mcj.299.

Full text
Abstract:
During World War II, the Soviet Union’s food supply was in a state of crisis. Hitler’s army had occupied the agricultural heartlands of Ukraine and Southern Russia in 1941 and, as a result, agricultural production for the entire nation had plummeted. Soldiers in the Red Army, who easily ate the best rations in the country, subsisted on a daily allowance of just under a kilogram of bread, supplemented with meat, tea, sugar and butter when and if these items were available. The hunger of the Red Army and its effect on the morale and strength of Europe’s eastern warfront were causes for concern for the Soviet government and its European and American allies. The one country with a food surplus decided to do something to help, and in 1942 the United States agreed to send thousands of pounds of meat, cheese and butter overseas to help feed the Red Army. After receiving several shipments of the all-American spiced canned meat SPAM, the Red Army’s quartermaster put in a request for a more familiar canned pork product, Russian tushonka. Pound for pound, America sent more pigs overseas than soldiers during World War II, in part because pork was in oversupply in the America of the early 1940s. Shipping meat to hungry soldiers and civilians in war-torn countries was a practical way to build business for the U.S. meat industry, which had been in decline throughout the 1930s. As per a Soviet-supplied recipe, the first cans of Lend-Lease tushonka were made in the heart of the American Midwest, at meatpacking plants in Iowa and Ohio (Stettinius 6-7). Government contracts in the meatpacking industry helped fuel economic recovery, and meatpackers were in a position to take special-request orders like the one for tushonka that came through the lines. Unlike SPAM, which was something of a novelty item during the war, tushonka was a food with a past. The original recipe was based on a recipe for preserved meat that had been a traditional product of the Ural Mountains, preserved in jars with salt and fat rather than by pressure and heat. Thus tushonka was requested—and was mass-produced—not simply as a convenience but also as a traditional and familiar food—a taste of home cooking that soldiers could carry with them into the field. Nikita Khrushchev later claimed that the arrival of tushonka was instrumental in helping the Red Army push back against the Nazi invasion (178). Unlike SPAM and other wartime rations, tushonka did not fade away after the war. Instead, it was distributed to the Soviet civilian population, appearing in charity donations and on the shelves of state shops. Often it was the only meat product available on a regular basis. Salty, fatty, and slightly grey-toned, tushonka was an unlikely hero of the postwar era, but during this period tushonka rose from obscurity to become an emblem of socialist modernity. Because it was shelf stable and could be made from a variety of different cuts of meat, it proved an ideal product for the socialist production lines where supplies and the pace of production were infinitely variable. Unusual in a socialist system of supply, this product shaped production and distribution lines, and even influenced the layout of meatpacking factories and the genetic stocks of the animals that were to be eaten. Tushonka’s initial ubiquity in the postwar Soviet Union had little to do with the USSR’s own hog industry. 
Pig populations as well as their processing facilities had been decimated in the war, and pigs that did survive the Axis invasion had been evacuated East with human populations. Instead, the early presence of tushonka in the pig-scarce postwar Soviet Union had everything to do with Harry Truman’s unexpected September 1945 decision to end all “economically useful” Lend-Lease shipments to the Soviet Union (Martel). By the end of September, canned meat was practically the only product still being shipped as part of Lend-Lease (NARA RG 59). Although the United Nations was supposed to distribute these supplies to needy civilians free of cost, travelers to the Soviet Union in 1946 spotted cans of American tushonka for sale in state shops (Skeoch 231). After American tushonka “donations” disappeared from store shelves, the Soviet Union’s meat syndicates decided to continue producing the product. Between its first appearance during the war in 1943 and the 1957 announcement by Nikita Khrushchev that Soviet policy would restructure all state animal farms to support the mass production of one or several processed meat products, tushonka helped to drive the evolution of the Soviet Union’s meatpacking industry. Its popularity with both planners and the public gave it the power to reach into food commodity chains. It is this backward reach and the longer-term impacts of these policies that make tushonka an unusual byproduct of the Cold War era. State planners loved tushonka: it was cheap to make, the logistics of preparing it were not complicated, it was easy to transport, and most importantly, it served as tangible evidence that the state was accomplishing a long-standing goal of getting more meat to its citizenry and improving the diet of the average Soviet worker. Tushonka became a highly visible product in the Soviet Union’s much-vaunted push to establish a modern food regime intended to rival that of the United States. Because it was shelf-stable, wartime tushonka had served as a practical food for soldiers, but after the war tushonka became an ideal food for workers who had neither the time nor the space to prepare a home-cooked meal with fresh meat. The Soviet state started to produce its own tushonka because it was such an excellent fit for the needs and abilities of the Soviet state—consumer demand was rarely considered by planners in this era. Not only did tushonka fit the look and taste of a modern processed meat product (that is, it was standard in texture and flavor from can to can, and was an obviously industrially processed product), it was also an excellent way to make the most of the predominant kind of meat the Soviet Union had in the 1950s: small scraps of low-grade pork and beef, trimmings left over from butchering practices that focused on harvesting as much animal fat, rather than muscle, from the carcass in question. Just like tushonka, pork sausages and frozen pelmeny, a meat-filled pasta dumpling, also became winning postwar foods thanks to a happy synergy of increased animal production, better butchering and new food processing machines. As postwar pigs recovered their populations, the Soviet processed meat industry followed suit. One official source listed twenty-six different kinds of meat products being issued in 1964, although not all of these were pork (Danilov). An instructional manual distributed by the meat and milk syndicate demonstrated how meat shops should wrap and display sausages, and listed 24 different kinds of sausages that all needed a special style of tying up. 
Because of packaging shortages, the string that bound the sausage was wrapped in a different way for every type of sausage, and shop assistants were expected to be able to identify sausages based on the pattern of their binding. Pelmeny were produced at every meat factory that processed pork. These were “made from start to finish in a special, automated machine, human hands do not touch them. Which makes them a higher quality and better (prevoskhodnogo) product” (Book of Healthy and Delicious Food). These were foods that became possible to produce economically because of a co-occurring increase in pigs, the new standardized practice of equipping meatpacking plants with large-capacity grinders, freezers or coolers, and the enforcement of a system of grading meat. As the state began to rebuild Soviet agriculture from its near-collapse during the war, the Soviet Union looked to the United States for inspiration. Surprisingly, Soviet planners found some of the United States’ more outdated techniques to be quite valuable for new Soviet hog operations. The most striking of these was the adoption of competing phenotypes in the Soviet hog industry. Most major swine varieties had been developed and described in the 19th century in Germany and Great Britain. Breeds had a tendency to split into two phenotypically distinct groups, and on early-20th-century American pig farms there was strong disagreement as to which style of pig was better suited to industrial conditions of production. Some pigs were “hot-blooded” (in other words, fast maturing and prolific reproducers) while others were slower “big type” pigs (a self-explanatory descriptor). Breeds rarely excelled at both traits and it was a matter of opinion whether speed or size was the most desirable trait to augment. The over-emphasis of either set of qualities damaged survival rates. At their largest, big type pigs resembled small hippopotamuses, and sows were so corpulent they unwittingly crushed their tiny piglets. But the sleeker hot-blooded pigs had a similarly lethal relationship with their young. Sows often produced litters of upwards of a dozen piglets and the stress of tending such a large brood led overwhelmed sows to devour their own offspring (Long). American pig breeders had been forced to navigate between these two undesirable extremes, but by the 1930s, big type pigs were fading in popularity mainly because butter and newly developed plant oils were replacing lard as the cooking fat of preference in American kitchens. The remarkable propensity of the big type to pack on pounds of extra fat was more of a liability than a benefit in this period, as the prices of lard and salt pork plummeted in this decade. By the time U.S. meat packers were shipping cans of tushonka to their Soviet allies across the seas, US hog operations had already developed a strong preference for hot-blooded breeds and research had shifted to building and maintaining lean muscle on these swiftly maturing animals. When Soviet industrial planners hoping to learn how to make more tushonka entered the scene, however, their interpretation of American efficiency was hardly predictable: scientifically nourished big type pigs may have been advantageous to the United States at midcentury, but the Soviet Union’s farms and hungry citizens had a very different list of needs and wants. At midcentury, Soviet pigs were still handicapped by old-fashioned variables such as cold weather, long winters, poor farm organisation and impoverished feed regimens. 
The look of the average Soviet hog operation was hardly industrial. In 1955 the typical Soviet pig was petite, shaggy, and slow to reproduce. In the absence of robust dairy or vegetable oil industries, Soviet pigs had always been valued for their fat rather than their meat, and tushonka had been a byproduct of an industry focused mainly on supplying the country with fat and lard. Until the mid-1950s, the most valuable pig on many Soviet state and collective farms was the nondescript but very rotund “lard and bacon” pig, an inefficient eater that could take upwards of two years to reach full maturity. In searching for a way to serve up more tushonka, Soviet planners became aware that their entire industry needed to be revamped. When the Soviet Union looked to the United States, planners were inspired by the earlier competition between hot-blooded and big type pigs, which Soviet planners thought, ambitiously, they could combine into one splendid pig. The Soviet Union imported new pigs from Poland, Lithuania, East Germany and Denmark, trying valiantly to create hybrid pigs that would exhibit both hot blood and big type. Soviet planners were especially interested in inspiring the Poland-China, a particularly rotund specimen, to speed up its life cycle during the mid-1950s. Hybridizing and cross-breeding a Soviet super-pig, no matter how closely laid out on paper, was probably always a socialist pipe dream. However, when the Soviets decided to try to outbreed American hog breeders, they created an infrastructure for pigs and pig breeding that had a dramatic positive impact on hog populations across the country, and the 1950s were marked by a large increase in the number of pigs in the Soviet Union, as well as dramatic increases in the numbers of purebreds and scientific hybrids the country developed, all in the name of tushonka. It was not just the genetic stock that received a makeover in the postwar drive to can more tushonka; a revolution in the barnyard also took place, and in less than ten years pigs were living in new housing stock and eating new feed sources. The most obvious postwar change was in farm layout and the use of building space. In the early 1950s, many collective farms had been consolidated. In 1940 there were a quarter of a million kolkhozii; by 1951 fewer than half that many remained (NARA RG166). Farm consolidation movements most often combined two, three or four collective farms into one economic unit, thus scaling up the average size and productivity of each collective farm and simplifying their administration. While there were originally ambitious plans to re-center farms around new “agro-city” bases with new, modern farm buildings, these projects were ultimately abandoned. Instead, existing buildings were repurposed and the several clusters of farm buildings that had once been the heart of separate villages acquired different uses. For animals this meant new barns and new daily routines. Barns were redesigned and compartmentalized around ideas of gender and age segregation—weaned baby pigs in one area, farrowing sows in another—as well as maximising growth and health. Pigs spent less time outside and more time at the trough. Pigs that were wanted for different purposes (breeding, meat and lard) were kept in different areas, isolated from each other to minimize the spread of disease as well as improve the efficiency of production. 
Much like postwar housing for humans, the new and improved pig barn was a crowded and often chaotic place where the electricity, heat and water functioned only sporadically. New barns were supposed to be mechanised. In some places, mechanisation had helped speed things along, but as one American official viewing a new mechanised pig farm in 1955 noted, “it did not appear to be a highly efficient organisation. The mechanised or automated operations, such as the preparation of hog feed, were eclipsed by the amount of hand labor which both preceded and followed the mechanised portion” (NARA RG166 1961). The American official estimated that by mechanizing, Soviet farms had actually increased the amount of human labor needed for farming operations. The other major environmental change took place away from the barnyard, in new crops the Soviet Union began to grow for fodder. The heart and soul of this project was establishing field corn as a major new fodder crop. Originally intended as a feed for cows that would replace hay, corn quickly became the feed of choice for raising pigs. After a Soviet delegation visited Iowa and other U.S. farms over the summer of 1955, corn became the centerpiece of Khrushchev’s efforts to raise meat and milk productivity. These efforts were what earned Khrushchev his nickname of kukuruznik, or “corn fanatic.” Since so little of the Soviet Union looks or feels much like the plains and hills of Iowa, adopting corn might seem quixotic, but raising corn was a potentially practical move for a cold country. Unlike the other major fodder crops of turnips and potatoes, corn could be harvested early, while still green but already possessing a high level of protein. Corn provided a “gap month” of green feed during July and August, when grazing animals had eaten the first spring green growth but these same plants had not recovered their biomass. What corn remained in the fields in late summer was harvested and made into silage, and corn made the best silage historically available in the Soviet Union. The high protein content of even silage made from green mass and unripe corn ears prevented animals from losing weight in the winter. Thus the desire to put more meat on Soviet tables—a desire first prompted by American food donations of surplus pork from Iowa farmers adapting to agro-industrial reordering in their own country—pushed back into the commodity supply network of the Soviet Union. World War II rations that were well adapted to the uncertainty and poor infrastructure not just of war but also of peacetime were a source of inspiration for Soviet planners striving to improve the diets of citizens. To do this, they purchased and bred more and better animals, inventing breeds and paying attention, for the first time, to the efficiency and speed with which these animals were ready to become meat. Reinventing Soviet pigs pushed back even farther, and inspired agricultural economists and state planners to embrace new farm organizational structures. Pigs meant for the tushonka can spent more time inside eating, and led their lives in a rigid compartmentalization that mimicked emerging trends in human urban society. Beyond the barnyard, a new concern with feed-to-weight conversions led agriculturalists to seek new crops; crops like corn that were costly to grow but were a perfect food for a pig destined for a tushonka tin. Thus in Soviet industrialization, pigs evolved. 
No longer simply recyclers of human waste, socialist pigs were consumers in their own right; their newly crafted genetic compositions demanded ever more technical feed sources in order to maximize their own productivity. Food is transformative, and in this case study the prosaic substance of canned meat proved to be unusually transformative for the history of the Soviet Union. In its early history it kept soldiers alive long enough to win an important war; later, the requirements for its manufacture re-prioritized muscle tissue over fat tissue in the disassembly of carcasses. This transformative influence reached backwards into the supply lines and farms of the Soviet Union, revolutionizing the scale and goals of farming and meat packing for the Soviet food industry, as well as the relationship between the pig and the consumer. References Bentley, Amy. Eating for Victory: Food Rationing and the Politics of Domesticity. Urbana: University of Illinois Press, 1998. The Book of Healthy and Delicious Food, Kniga O Vkusnoi I Zdorovoi Pishche. Moscow: AMN Izd., 1952. 161. Danilov, M. M. Tovaravedenie Prodovol’stvennykh Tovarov: Miaso I Miasnye Tovarye. Moscow: Iz. Ekonomika, 1964. Khrushchev, Nikita. Khrushchev Remembers. New York: Little, Brown & Company, 1970. 178. Long, James. The Book of the Pig. London: Upcott Gill, 1886. 102. Lush, Jay, and A.L. Anderson. “A Genetic History of Poland-China Swine: I—Early Breed History: The ‘Hot Blood’ versus the ‘Big Type’.” Journal of Heredity 30.4 (1939): 149-56. Martel, Leon. Lend-Lease, Loans, and the Coming of the Cold War: A Study of the Implementation of Foreign Policy. Boulder: Westview Press, 1979. 35. National Archive and Records Administration (NARA). RG 59, General Records of the Department of State. Office of Soviet Union Affairs, Box 6. “Records relating to Lend Lease with the USSR 1941-1952”. National Archive and Records Administration (NARA). RG 166, Records of the Foreign Agricultural Service. Narrative Reports 1940-1954. USSR Cotton-USSR Foreign Trade. Box 64, Folder “farm management”. Report written by David V. Kelly, 6 Apr. 1951. National Archive and Records Administration (NARA). RG 166, Records of the Foreign Agricultural Service. Narrative Reports 1955-1961. Folder: “Agriculture”, “Visits to Soviet agricultural installations”, 15 Nov. 1961. Skeoch, L.A. “Food Prices and Ration Scale in the Ukraine, 1946.” The Review of Economics and Statistics 35.3 (Aug. 1953): 229-35. State Archive of the Russian Federation (GARF). Fond R-7021. Report of the Extraordinary Special State Commission on Wartime Losses Resulting from the German-Fascist Occupation. 1948. Stettinius, Edward R., Jr. Lend-Lease: Weapon for Victory. Penguin Books, 1944.
APA, Harvard, Vancouver, ISO, and other styles
24

Mesch, Claudia. "Racing Berlin." M/C Journal 3, no. 3 (June 1, 2000). http://dx.doi.org/10.5204/mcj.1845.

Full text
Abstract:
Bracketed by a quotation from famed 1950s West German soccer coach S. Herberger and the word "Ende", the running length of the 1998 film Run Lola Run, directed by Tom Tykwer, is 9 minutes short of the official duration of a soccer match. Berlin has often been represented, in visual art and in cinematic imagery, as the modern metropolis: the Expressionist and Dadaist painters, Walter Ruttmann, Fritz Lang and Rainer Werner Fassbinder all depicted it as the modernising city. Since the '60s artists have staged artworks and performances in the public space of the city which critiqued the cold war order of that space, its institutions, and the hysterical attempt by the German government to erase a divided past after 1990. Run Lola Run depicts its setting, Berlin, as a cyberspace obstacle course or environment usually associated with interactive video and computer games. The eerie emptiness of the Berlin of Run Lola Run -- a fantasy projected onto a city which has been called the single biggest construction site in Europe -- is necessary to keep the protagonist Lola moving at high speed from the West to the East part of town and back again -- another fantasy which is only possible when the city is recast as a virtual environment. In Run Lola Run Berlin is represented as an idealised space of bodily and psychic mobility where the instantaneous technology of cyberspace is physically realised as a utopia of speed. The setting of Run Lola Run is not a playing field but a playing level, to use the parlance of video game technology. Underscored by other filmic devices and technologies, Run Lola Run emulates the kinetics and structures of a virtual, quasi-interactive environment: the Berlin setting of the film is paradoxically rendered as an indeterminate, but also site-specific, entertainment complex which hinges upon the high-speed functioning of multiple networks of auto-mobility. Urban mobility as circuitry is performed by the film's super-athletic Lola. Lola is a cyber character; she recalls the 'cyberbabe' Lara Croft, heroine of the Sega Tomb Raider video game series. In Tomb Raider the Croft figure is controlled and manipulated by the interactive player to go through as many levels of play, or virtual environments, as possible. In order for the cyber figure to get to the next level of play she must successfully negotiate as many trap and puzzle mechanisms as possible. Speed in this interactive virtual game results from the skill of an experienced player who has practiced coordinating keyboard commands with figure movements and who is familiar with the obstacles the various environments can present. As is the case with Lara Croft, the figure of Lola in Run Lola Run reverses the traditional gender relations of the action/adventure game and of 'damsel in distress' narratives. Run Lola Run focusses on Lola's race to save her boyfriend from a certain death by obtaining DM 100,000 and delivering it across town in twenty minutes. The film adds the element of the race to the game, a variable not included in Tomb Raider. Tykwer repeats Lola's trajectory from home to the location of her boyfriend Manni thrice in the film, each time ending her quest with a different outcome. As in a video game, Lola can therefore be killed as the game unwinds during one turn of play, and on the next attempt she, and also we as viewers or would-be interactive players, would have learned from her previous 'mistakes' and adjusted her actions accordingly. 
The soundtrack of Run Lola Run underscores the speed and mobility of Berlin by means of the fast/slow/fast rhythm of the film, which proceeds primarily at the pace of techno music. This quick rhythm is syncopated with pauses in the forward-moving action brought on by Lola's superhuman screams or by the death of a protagonist. These events mark the end of one turn of 'play' and the restart of Lola's route. Tykwer visually contrasts Lola's linear mobility and her physical and mental capacity for speed with her boyfriend Manni's centripetal fixity, a marker of his helplessness, throughout the film. Manni, a bagman-in-training for a local mafioso, has to make his desperate phone calls from a single phone booth in the borough of Charlottenburg after he bungles a hand-off of payment money by forgetting it on the U-Bahn (the subway). In a black and white flashback sequence, viewers learn about Manni's ill-fated trip to the Polish border with a shipment of stolen cars. In contrast to his earlier mobility, Manni becomes entrapped in the phone booth as a result of his ineptitude. A spiral store sign close to the phone booth symbolizes Manni's entrapment. Tykwer contrasts this circular form with the lines and grids Lola traverses throughout the film. Where at first Lola is also immobilised after her moped is stolen by an 'unbelievably fast' thief, her quasi-cybernetic thought process soon restores her movement. Tykwer visualizes Lola's frantic thinking in a series of photographic portraits which indicate her consideration of whom she can contact to supply a large sum of money. Lola not only moves but thinks with the fast, even pace of a computer working through a database. Tykwer then repeats overhead shots of gridded pavement which Lola follows as she runs through the filmic frame. The grid, emblem of modernity and structure of the metropolis, the semiconductor, and the puzzles of a virtual environment, is necessary for mobility and speed, and is performed by the figure of Lola. The grid is also apparent in the trajectories of traffic of speeding bikes, subway trains, and airplanes passing overhead, which all parallel Lola's movements in the film. The city/virtual environment is thus an idealised nexus of local, national and global lines of mobility and communication. -- OR -- Tykwer emphasised the arbitrariness of the setting of Run Lola Run, insisting it could easily have been set in any other urban centre such as New York City or Beijing. At no point does the film make explicit that the space of action is Berlin; in fact the setting of the film is far less significant than the filmic self-reflexivity Tykwer explores in Run Lola Run. Berlin becomes a postmodernist filmic text in which earlier films by Lang, Schlöndorff, von Sternberg and Wenders are cited in intertextual fashion. It is not by chance that the protagonist of Run Lola Run shares the name of Marlene Dietrich's legendary character in von Sternberg's The Blue Angel. The running, late-20th-century Lola reconnects with and gains power from the originary Lola Lola as ur-Star of German cinema. The high overhead shots of Run Lola Run technologically exceed those used by Lang in M in 1931 but still quote his filmic text; the spiral form, placed in a shop window in M, becomes a central image of Run Lola Run in marking the immobile spot that Manni occupies. Repeated several times in the film, Lola's scream bends events, characters and chance to her will and slows the relentless pace of the narrative. 
This vocal punctuation recalls the equally willful vocalisations of Oskar Matzerath in Schlöndorff's Tin Drum (1979). Tykwer's radical expansions and compressions of time in Run Lola Run rely on the temporal exploitation of the filmic medium. The film stretches 20 minutes of 'real time' in the lives of its two protagonists into 84 minutes of screen time. Tykwer also distills the lives of the film's incidental or secondary characters into a few still images and a few seconds of filmic time in the 'und dann...' [and then...] sequences of all three episodes. For example, Lola's momentary encounter with an employee of her father's bank spins off into two completely different life stories for this woman, both of which are told through four or five staged 'snapshots' which are edited together into a rapid sequence. The higher-speed photography of the snapshot keeps up the frenetic pace of Run Lola Run and causes the narrative to move forward even faster, if only for a few seconds. Tykwer also celebrates the technology of 35 mm film in juxtaposing it to the fuzzy imprecision of video in Run Lola Run. The viewer not only notes how scenes shot on video are less visually beautiful than the 35 mm scenes which feature Lola or Manni, but also that they seem to move at a snail's pace. For example, the video-shot scene in Lola's banker-father's office also represents the boredom of his well-paid but stagnant life; another video sequence visually parallels the slow, shuffling movement of the homeless man Norbert as he discovers Manni's forgotten moneybag on the U-Bahn. Comically, he breaks into a run when he realises what he's found. Where Wim Wenders's Wings of Desire made beautiful cinematographic use of Berlin landmarks like the Siegessäule in black and white 35 mm, Tykwer relegates black and white to flashback sequences within the narrative and rejects the relatively meandering contemplation of Wenders's film in favour of the linear dynamism of urban space in Run Lola Run. -- OR -- Tykwer emphasised the arbitrariness of the setting of Run Lola Run, insisting it could easily have been set in any other urban centre such as New York City or Beijing. Nevertheless he establishes the united Berlin as the specific setting of the film. While Run Lola Run does not explicitly indicate that the space of action is Berlin, viewers are left in no doubt about the setting: a repeated establishing shot of the Friedrichstrasse U-Bahn stop, on a central commuting street near the Brandenburg Gate in the former East Berlin which has undergone extensive reconstruction since 1990, begins each episode of the film. The play between the locality of Berlin and its role as the universal modernist metropolis is a trope of German cinema famously deployed by Fritz Lang in M, where the setting is also never explicitly revealed but implied by means of the use of the Berlin dialect in the dialogue of the film [1]. The soundtrack of Run Lola Run underscores the speed and mobility of Berlin by means of the fast/slow/fast rhythm of the film which proceeds primarily at the pace of techno music. Techno is also closely identified with the city of Berlin through its annual Techno Festival, which seems to grow larger with each passing year. Quick techno rhythm is syncopated with pauses in the forward-moving action brought on by Lola's superhuman screams or by the death of a protagonist. Berlin is also made explicit as Tykwer often stages scenes at clearly-marked street intersections which identify particular locations or boroughs throughout east and west Berlin. 
The viewer notes that Lola escapes her father's bank during one episode and faces Unter den Linden; several scenes unfold on the banks of the river Spree; Lola sprints between the Altes Museum and the Berlin Cathedral. Manni's participation in a car-theft ring points to the Berlin-focussed activity of actual Eastern European and Russian crime syndicates; the film features an interlude at the Polish border where Manni delivers a shipment of stolen Mercedes to underworld buyers, which has to do with the actual geographic proximity of Berlin to Eastern European countries. Yet the speed of purposeful mobility is demanded in the contemporary united and globalised Berlin; lines of action or direction must be chosen and followed, and chance encounters become traps or interruptions. Chance must therefore be minimised in the pursuit of urban speed, mobility, and communications access. In the globalised Berlin, Tykwer compresses chance encounters into individual snapshots of visual data which are viewed in quick succession by the viewer. Where artists such as Christo and Sophie Calle had investigated the initial chaos of German reunification in Berlin, Run Lola Run rejects the hyper-contemplative and past-obsessed mood demanded by Christo's wrapping of the Reichstag, or Calle's documentation of the artistic destructions of unification [3]. 
Run Lola Run recasts Berlin as a network of fast connections, lines of uninterrupted movement, and productive output. It is therefore perhaps not surprising that Tykwer's idealised and embodied representation of Berlin as Lola has been politically appropriated by the city's status quo as a convenient icon of the successful reconstruction and rewiring of a united Berlin into a fast, global, broadband digital telecommunications network4.

Footnotes

See Edward Dimendberg's excellent discussion of filmic representations of the metropolis in "From Berlin to Bunker Hill: Urban Space, Late Modernity, and Film Noir in Fritz Lang's and Joseph Losey's M." Wide Angle 19.4 (1997): 62-93.

This is despite the fact that the temporal parameters of the plot of Run Lola Run forbid the aimlessness central to spazieren (strolling). See Walter Benjamin, "A Berlin Chronicle", in Reflections. Ed. Peter Demetz. Trans. Edmund Jephcott. New York: Schocken, 1986. 3-60.

See Sophie Calle, The Detachment. London: G+B Arts International and Arndt & Partner Gallery, n.d.

The huge success of Tykwer's film in Germany spawned many red-hair-coiffed Lola imitators in the Berlin populace. The mayor of Berlin sported Lola-esque red hair in a poster which imitated the one for the film, but legal intercession put an end to this trendy political statement. See Brian Pendreigh, "The Lolaness of the Long-Distance Runner," The Guardian 15 Oct. 1999.

I've relied on William J. Mitchell's cultural history of the late twentieth-century 'rebuilding' of major cities into connection points in the global telecommunications network, City of Bits: Space, Place, and the Infobahn. Cambridge: MIT P, 1995.
APA, Harvard, Vancouver, ISO, and other styles
25

McNair, Brian. "Vote!" M/C Journal 10, no. 6 (April 1, 2008). http://dx.doi.org/10.5204/mcj.2714.

Full text
Abstract:
The twentieth was, from one perspective, the democratic century — a span of one hundred years which began with no fully functioning democracies in existence anywhere on the planet (if one defines democracy as a political system in which there is both universal suffrage and competitive elections), and ended with 120 countries out of 192 classified by the Freedom House think tank as ‘democratic’. There are of course still many societies where democracy is denied or effectively neutered — the remaining outposts of state socialism, such as China, Cuba, and North Korea; most if not all of the Islamic countries; exceptional states such as Singapore, unapologetically capitalist in its economic system but resolutely authoritarian in its political culture. Many self-proclaimed democracies, including those of the UK, Australia and the US, are procedurally or conceptually flawed. Countries emerging out of authoritarian systems and now in a state of democratic transition, such as Russia and the former Soviet republics, are immersed in constant, sometimes violent struggle between reformers and reactionaries. Russia’s recent parliamentary elections were accompanied by the intimidation of parties and politicians who opposed Vladimir Putin’s increasingly populist and authoritarian approach to leadership. The same Freedom House report which describes the rise of democracy in the twentieth century acknowledges that many self-styled democracies are, at best, only ‘partly free’ in their political cultures (for detailed figures on the rise of global democracy, see the Freedom House website Democracy’s Century). Let’s not for a moment downplay these important qualifications to what can nonetheless be fairly characterised as a century-long expansion and globalisation of democracy, and the acceptance of popular sovereignty, expressed through voting for the party or candidate of one’s choice, as a universally recognised human right. That such a process has occurred, and continues in these early years of the twenty-first century, is irrefutable. In the Gaza strip, Hamas appeals to the legitimacy of a democratic election victory in its campaign to be recognised as the voice of the Palestinian people. However one judges the messianic tendencies and Islamist ideology of Mahmoud Ahmadinejad, it must be acknowledged that the Iranian people elected him, and that they have the power to throw him out of government next time they vote. That was never true of the Shah. The democratic resurgence in Latin America, taking in Venezuela, Peru and Bolivia among others has been a much-noted feature of international politics in recent times (Alves), presenting a welcome contrast to the dictatorships and death squads of the 1980s, even as it creates some uncomfortable dilemmas for the Bush administration (which must champion democratic government at the same time as it resents some of the choices people may make when they have the opportunity to vote). Since 9/11 a kind of democracy has expanded even to Afghanistan and Iraq, albeit at the point of a gun, and with no guarantees of survival beyond the end of military occupation by the US and its coalition allies. As this essay was being written, Pakistan’s state of emergency was ending and democratic elections scheduled, albeit in the shadow cast by the assassination of Benazir Bhutto in December 2007. 
Democracy, then — imperfect and limited as it can be; grudgingly delivered though it is by political elites in many countries, and subject to attack and rollback at any time — has become a global universal to which all claim allegiance, or at least pay lip service. The scale of this transformation, which has occurred in little more than one quarter of the time elapsed since the Putney debates of 1647 and the English revolution first established the principle of the sovereignty of parliament, is truly remarkable. (Tristram Hunt quotes lawyer Geoffrey Robertson in the Guardian to the effect that the Putney debates, staged in St Mary’s church in south-west London towards the end of the English civil war, launched “the idea that government requires the consent of freely and fairly elected representatives of all adult citizens irrespective of class or caste or status or wealth” – “A Jewel of Democracy”, Guardian, 26 Oct. 2007.) Can it be true that less than one hundred years ago, in even the most advanced capitalist societies, 50 per cent of the people — women — did not have the right to vote? Or that black populations, indigenous or migrant, in countries such as the United States and Australia were deprived of basic citizenship rights until the 1960s and even later? Will future generations wonder how on earth it could have been that the vast majority of the people of South Africa were unable to vote until 1994, and that they were routinely imprisoned, tortured and killed when they demanded basic democratic rights? Or will they shrug and take it for granted, as so many of us who live in settled democracies already do? (In so far as ‘we’ includes the community of media and cultural studies scholars, I would argue that where there is reluctance to concede the scale and significance of democratic change, this arises out of continuing ambivalence about what ‘democracy’ means, a continuing suspicion of globalisation (in particular the globalisation of democratic political culture, still associated in some quarters with ‘the west’), and of the notion of ‘progress’ with which democracy is routinely associated. The intellectual roots of that ambivalence were various. Marxist-Leninist-inspired authoritarianism gripped much of the world until the fall of the Berlin Wall and the end of the cold war. Until that moment, it was still possible for many Marxians in the scholarly community to view the idea of democracy with disdain — if not quite a dirty word, then a deeply flawed, highly loaded concept which masked and preserved underlying social inequalities more than it helped resolve them. Until 1989 or thereabouts, it was possible for ‘bourgeois democracy’ to be regarded as just one kind of democratic polity by the liberal and anti-capitalist left, which often regarded the ‘proletarian’ or ‘people’s’ democracy prevailing in the Soviet Union, China, Cuba or Vietnam as a legitimate alternative to the emerging capitalist norm of one person, one vote, for constituent assemblies which had real power and accountability. In terms not very different from those used by Marx and Engels in The German Ideology, belief in the value of democracy was conceived by this materialist school as a kind of false consciousness. It still is, by Noam Chomsky and others who continue to view democracy as a ‘necessary illusion’ (1989) without which capitalism could not be reproduced. From these perspectives voting gave, and gives, us merely the illusion of agency and power in societies where capital rules as it always did.
For democracy read ‘the manufacture of consent’; its expansion read not as progressive social evolution, but as the universalisation of the myth of popular sovereignty, mobilised and utilised by the media-industrial-military complex to maintain its grip.) There are those who dispute this reading of events. In the 1960s, Habermas’s hugely influential Structural Transformation of the Public Sphere critiqued the manner in which democracy, and the public sphere underpinning it, had been degraded by public relations, advertising, and the power of private interests. In the period since, critical scholarly research and writing on political culture has been dominated by the Habermasian discourse of democratic decline, and the pervasive pessimism of those who see democracy, and the media culture which supports it, as fatally flawed, corrupted by commercialisation and under constant threat. Those, myself included, who challenged that view with a more positive reading of the trends (McNair, Journalism and Democracy; Cultural Chaos) have been denounced as naïve optimists, Panglossian, utopian and even, in my own case, a ‘neo-liberal apologist’. (See an unpublished paper by David Miller, “System Failure: It’s Not Just the Media, It’s the Whole Bloody System”, delivered at Goldsmiths College in 2003.) Engaging as they have been, I venture to suggest that these are the discourses and debates of an era now passing into history. Not only is it increasingly obvious that democracy is expanding globally into places where it never previously reached; it is also extending inwards, within nation states, driven by demands for greater local autonomy. In the United Kingdom, for example, the citizen is now able to vote not just in Westminster parliamentary elections (which determine the political direction of the UK government), but in European elections, local elections, and elections for devolved assemblies in Scotland, Wales and Northern Ireland. The people of London can vote for their mayor. There would by now have been devolved assemblies in the regions of England, too, had the people of the North East not voted against one in a November 2004 referendum. Notwithstanding that result, which surprised many in the New Labour government who held it as axiomatic that the more democracy there was, the better for all of us, the importance of enhancing and expanding democratic institutions, of allowing people to vote more often (and also in more efficient ways — many of these expansions of democracy have been tied to the introduction of systems of proportional representation) has become consensual, from the Midwest of America to the Middle East.

The Democratic Paradox

And yet, as the wave of democratic transformation has rolled on through the late twentieth and into the early twenty-first century, it is notable that, in many of the oldest liberal democracies at least, fewer people have been voting. In the UK, for example, in the period between 1945 and 2001, turnout at general elections never fell below 70 per cent. In 1992, the last general election won by the Conservatives before the rise of Tony Blair and New Labour, turnout was 78 per cent, roughly where it had been in the 1950s. In 2001, however, as Blair’s government sought re-election, turnout fell to an historic low for the UK of 59.4 per cent, and rose only marginally to 61.4 per cent in the most recent general election of 2005.
In the US presidential elections of 1996 and 2000, turnouts were at historic lows of 47.2 and 49.3 per cent respectively, rising just above 50 per cent again in 2004 (figures from the International Institute for Democracy and Electoral Assistance). At local level things are even worse. In only the second election for a devolved parliament in Scotland (2003), turnout was a mere 48.5 per cent, rising to 50.5 in 2007. These trends are not universal. In countries with compulsory voting they mean very little — in Australia, where voting in parliamentary elections is compulsory, turnout averages above 90 per cent. In France, while turnouts for parliamentary elections show a downward trend similar to those in the UK and the US, presidential contests achieve turnouts of 80-plus per cent. In the UK and US, as noted, the most recent elections show modest growth in turnout from those historic lows of the late 1990s and early Noughties. There has grown, nonetheless, the perception, commonplace amongst academic commentators as well as journalists and politicians themselves, that we are living through a ‘crisis’ of democratic participation, a dangerous decline in the tendency to vote in elections which undermines the legitimacy of democracy itself. In communication scholarship a significant body of research and publication has developed around this theme, from Blumler and Gurevitch’s Crisis of Public Communication (1996), through Barnett and Gaber’s Westminster Tales (2001), to more recent studies such as Lewis et al.’s Citizens or Consumers (2005). All presume a problem of some kind with the practice of democracy and the “old fashioned ritual” of voting, as Lewis et al. describe it (2). Most link alleged inadequacies in the performance of the political media to what is interpreted as popular apathy (or antipathy) towards democracy. The media are blamed for the lack of public engagement with democratic politics which declining turnouts are argued to signal. Political journalists are said to be too aggressive and hyper-adversarial (Lloyd), behaving like the “feral beast” spoken of by Tony Blair in his 2007 farewell speech to the British people as prime minister. They are corrosively cynical and a “disaster for democracy”, as Steven Barnett and others argued in the first years of the twenty-first century. Or they are not aggressive or adversarial enough, as the propaganda modellists allege, citing what they interpret as supine media coverage of Coalition policy in Iraq. The media put people off, rather than turn them on to democracy, by being, variously, too nice or too nasty to politicians. What, then, is the solution to the apparent paradox represented by the fact that there is more democracy, but less voting in elections than ever before; and that after centuries of popular struggle democratic assemblies proliferate, but in some countries barely half of the eligible voters can be bothered to participate? And what role have the media played in this unexpected phenomenon? If the scholarly community has been largely critical on this question, and pessimistic in its analyses of the role of the media, it has become increasingly clear that the one arena where people do vote more than ever before is that presented by the media, and entertainment media in particular.
There has been, since the appearance of Big Brother and the subsequent explosion of competitive reality TV formats across the world, evidence of a huge popular appetite for voting on such matters as which amateur contestant on Pop Idol, or X Factor, or Fame Academy, or Operatunity goes on to have a chance of a professional career, a shot at the big time. Millions of viewers of the most popular reality TV strands queue up to register their votes on premium phone lines, the revenue from which makes up a substantial and growing proportion of the income of commercial TV companies. This explosion of voting behaviour has been made possible by the technology-driven emergence of new forms of participatory, interactive, digitised media channels which allow millions to believe that they can have an impact on the outcome of what are, in essence, game and talent shows. At the height of anxiety around the ‘crisis of democratic participation’ in the UK, observers noted that nearly 6.5 million people had voted in the Big Brother UK final in 2004. More than eight million voted during the 2004 run of the BBC’s Fame Academy series. While these numbers do not, contrary to popular belief, exceed the numbers of British citizens who vote in a general election (27.2 million in 2005), they do indicate an enthusiasm for voting which seems to contradict declining rates of democratic participation. People who will never get out and vote for their local councillor often appear more than willing to pick up the telephone or the laptop and cast a vote for their favoured reality TV contestant, even if it costs them money. It would be absurd to suggest that voting for a contestant on Big Brother is directly comparable to the act of choosing a government or a president. The latter is recognised as an expression of citizenship, with potentially significant consequences for the lives of individuals within their society. Voting on Big Brother, on the other hand, is unmistakeably entertainment, game-playing, a relatively risk-free exercise of choice — a bit of harmless fun, fuelled by office chat and relentless tabloid coverage of the contestants’ strengths and weaknesses. There is no evidence that readiness to participate in a telephone or online vote for entertainment TV translates into active citizenship, where ‘active’ means casting a vote in an election. The lesson delivered by the success of participatory media in recent years, however — first reality TV, and latterly a proliferation of online formats which encourage user participation and voting for one thing or another — is that people will vote, when they are able and motivated to do so. Voting is popular, in short, and never more so, irrespective of the level of popular participation recorded in recent elections. And if people will vote in their millions for a contestant on X Factor, or participate in competitions to determine the best movies or books on Facebook, they can presumably be persuaded to do so when an election for parliament comes around. This fact has been recognised by both media producers and politicians, and is reflected in attempts to adapt the ever more sophisticated and efficient tools of participatory media to the democratic process, to engage media audiences as citizens by offering the kinds of voting opportunities in political debates, including election processes, which entertainment media have now made routinely available.
ITV’s Vote for Me strand, broadcast in the run-up to the UK general election of 2005, used reality TV techniques to select a candidate who would actually take part in the forthcoming poll. The programme was broadcast in a late-night, low-audience slot and failed to generate much interest, but it signalled a desire by media producers to harness the appeal of participatory media in a way which could directly impact on levels of democratic engagement. The honourable failure of Vote for Me (produced by the same team which made the much more successful live debate shows featuring prime minister Tony Blair — Ask Tony Blair, Ask the Prime Minister) might be viewed as evidence that readiness to vote in the context of a TV game show does not translate directly into voting for parties and politicians, and that the problem in this respect — the crisis of democratic participation, such as it exists — is located elsewhere. People can vote in democratic elections, but choose not to, perhaps because they feel that the act is meaningless (because parties are ideologically too similar), or ineffectual (because they see no impact of voting in their daily lives or in the state of the country), or irrelevant to their personal priorities and lifestyles. Voting rates have increased in the US and the UK since September 11, 2001, suggesting perhaps that when the political stakes are raised, and the question of who is in government seems to matter more than it did, people act accordingly. Meantime, media producers continue to make money by developing formats and channels on the assumption that audiences wish to participate, to interact, and to vote. Whether this form of participatory media consumption for the purposes of play can be translated into enhanced levels of active citizenship, and whether the media can play a significant contributory role in that process, remains to be seen.

References

Alves, R.C. “From Lapdog to Watchdog: The Role of the Press in Latin America’s Democratisation.” In H. de Burgh, ed., Making Journalists. London: Routledge, 2005. 181-202.

Anderson, P.J., and G. Ward (eds.). The Future of Journalism in the Advanced Democracies. Aldershot: Ashgate Publishing, 2007.

Barnett, S. “The Age of Contempt.” Guardian 28 October 2002. <http://politics.guardian.co.uk/media/comment/0,12123,820577,00.html>.

Barnett, S., and I. Gaber. Westminster Tales. London: Continuum, 2001.

Blumler, J., and M. Gurevitch. The Crisis of Public Communication. London: Routledge, 1996.

Habermas, J. The Structural Transformation of the Public Sphere. Cambridge: Polity Press, 1989.

Lewis, J., S. Inthorn, and K. Wahl-Jorgensen. Citizens or Consumers? What the Media Tell Us about Political Participation. Milton Keynes: Open University Press, 2005.

Lloyd, John. What the Media Are Doing to Our Politics. London: Constable, 2004.

McNair, B. Journalism and Democracy: A Qualitative Evaluation of the Political Public Sphere. London: Routledge, 2000.

———. Cultural Chaos: News, Journalism and Power in a Globalised World. London: Routledge, 2006.
APA, Harvard, Vancouver, ISO, and other styles
26

McNair, Brian. "Vote!" M/C Journal 11, no. 1 (April 1, 2008). http://dx.doi.org/10.5204/mcj.21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Burns, Alex. "The Worldflash of a Coming Future." M/C Journal 6, no. 2 (April 1, 2003). http://dx.doi.org/10.5204/mcj.2168.

Full text
Abstract:
History is not over and that includes media history. — Jay Rosen (Zelizer & Allan 33)

The media in their reporting on terrorism tend to be judgmental, inflammatory, and sensationalistic. — Susan D. Moeller (169)

In short, we are directed in time, and our relation to the future is different than our relation to the past. All our questions are conditioned by this asymmetry, and all our answers to these questions are equally conditioned by it. — Norbert Wiener (44)

The Clash of Geopolitical Pundits

America’s geo-strategic engagement with the world underwent a dramatic shift in the decade after the Cold War ended. United States military forces undertook a series of humanitarian interventions from northern Iraq (1991) and Somalia (1992) to NATO’s bombing campaign over Kosovo (1999). Wall Street financial speculators embraced market-oriented globalization and technology-based industries (Friedman 1999). Meanwhile the geo-strategic pundits debated several different scenarios at deeper layers of epistemology and macrohistory, including the breakdown of nation-states (Kaplan), the ‘clash of civilizations’ along religiopolitical fault-lines (Huntington) and the fashionable ‘end of history’ thesis (Fukuyama). Media theorists expressed this geo-strategic shift in reference to the ‘CNN Effect’: the power of real-time media ‘to provoke major responses from domestic audiences and political elites to both global and national events’ (Robinson 2). This media ecology is often contrasted with the ‘Gatekeeper’ and ‘Manufacturing Consent’ models. The ‘CNN Effect’ privileges humanitarian and non-government organisations, whereas the latter models focus upon the conformist mind-sets and shared worldviews of government and policy decision-makers. The September 11 attacks generated an uncertain interdependency between the terrorists, government officials, and favourable media coverage. They provided a test case, as the humanitarian interventions (Robinson 37) had before them, for the proponents’ claim that the ‘CNN Effect’ had policy leverage during critical stress points. The attacks also revived a long-running debate in media circles about the risk factors of global media. McLuhan (1964) and Ballard (1990) had prophesied that the global media would pose a real-time challenge to decision-making processes and that its visual imagery would have unforeseen psychological effects on viewers. Wark (1994) noted that journalists who covered real-time events including the Wall Street crash (1987) and the collapse of the Berlin Wall (1989) were traumatised by their ‘virtual’ geographies.

The ‘War on Terror’ as 21st Century Myth

Three recent books explore how the 1990s humanitarian interventions and the September 11 attacks have remapped this ‘virtual’ territory with all too real consequences. Piers Robinson’s The CNN Effect (2002) critiques the theory and proposes the policy-media interaction model. Barbie Zelizer and Stuart Allan’s anthology Journalism After September 11 (2002) examines how September 11 affected the journalists who covered it and the implications for news values. Sandra Silberstein’s War of Words (2002) uncovers how strategic language framed the U.S. response to September 11. Robinson provides the contextual background; Silberstein contributes the specifics; and Zelizer and Allan surface broader perspectives. These books offer insights into the social construction of the nebulous War on Terror and why certain images and trajectories were chosen at the expense of other possibilities.
Silberstein locates this world-historical moment in the three-week transition between September 11’s aftermath and the U.S. bombings of Afghanistan’s Taliban regime. Descriptions like the ‘War on Terror’ and ‘Axis of Evil’ framed the U.S. military response, provided a conceptual justification for the bombings, and also brought into being the geo-strategic context for other nations. The crucial element in this process came when U.S. President George W. Bush adopted a pedagogical style for his public speeches, underpinned by the illusions of communal symbols and shared meanings (Silberstein 6-8). Bush’s initial address to the nation on September 11 invoked the ambiguous pronoun ‘we’ to recreate ‘a unified nation, under God’ (Silberstein 4). The 1990s humanitarian interventions had frequently been debated in Daniel Hallin’s sphere of ‘legitimate controversy’; however, the grammar used by Bush and his political advisers located the debate in the sphere of ‘consensus’. This brief period of enforced consensus was reinforced by the structural limitations of North American media outlets. September 11 combined ‘tragedy, public danger and a grave threat to national security’, Michael Schudson observed, and in the aftermath North American journalism shifted ‘toward a prose of solidarity rather than a prose of information’ (Zelizer & Allan 41). Debate about why America was hated did not go much beyond Bush’s explanation that ‘they hated our freedoms’ (Silberstein 14). Robert W. McChesney noted that alternatives to the ‘war’ paradigm were rarely mentioned in the mainstream media (Zelizer & Allan 93). A new myth for the 21st century had been unleashed.

The Cycle of Integration Propaganda

Journalistic prose masked the propaganda of social integration that atomised the individual within a larger collective (Ellul). The War on Terror was constructed by geopolitical pundits as a Manichean battle between ‘an “evil” them and a national us’ (Silberstein 47). But the national crisis made ‘us’ suddenly problematic. Resurgent patriotism focused on the American flag instead of Constitutional rights. Debates about military tribunals and the USA Patriot Act resurrected the dystopian fears of a surveillance society. New York City mayor Rudy Giuliani suddenly became a leadership icon, and Time magazine named him Person of the Year (Silberstein 92). Giuliani suggested at the Concert for New York on 20 October 2001 that ‘New Yorkers and Americans have been united as never before’ (Silberstein 104). Even the series of Public Service Announcements created by the Ad Council and U.S. advertising agencies succeeded in blurring the lines between cultural tolerance, social inclusion, and social integration (Silberstein 108-16). In this climate the in-depth discussion of alternative options and informed dissent became thought-crimes. The American Council of Trustees and Alumni’s report Defending Civilization: How Our Universities Are Failing America (2002), which singled out “blame America first” academics, ignited a firestorm of debate about educational curriculums, the interpretation of history, and the limits of academic freedom. Silberstein’s perceptive analysis surfaces how ACTA assumed moral authority and exploited collective misunderstandings as justification for its interrogation of internal enemies. The errors she notes include presumed conclusions, hasty generalisations, bifurcated worldviews, and false analogies (Silberstein 133, 135, 139, 141). Op-ed columnists soon exposed ACTA’s gambit as a pre-packaged witch-hunt.
But newscasters then channel-skipped into military metaphors as the Afghanistan campaign began. In the weeks after the attacks, New York City sidewalk traders moved incense and tourist photos to make way for World Trade Center memorabilia and anti-Osama shirts. Chevy and Ford morphed September 11 catchphrases (notably Todd Beamer’s last words “Let’s Roll” on Flight 93) and imagery into car advertising campaigns (Silberstein 124-5). American self-identity was finally reasserted in the face of a domestic recession through this wave of vulgar commercialism.

The ‘Simulated’ Fall of Elite Journalism

For Columbia University professor James Carey the ‘failure of journalism on September 11’ signaled the ‘collapse of the elites of American journalism’ (Zelizer & Allan 77). Carey traces the rise and fall of adversarial and investigative journalism from the Pentagon Papers and Watergate through the intermediation of the press to the myopic self-interest of the 1988 and 1992 Presidential campaigns. Carey’s framing echoes the earlier criticisms of Carl Bernstein and Hunter S. Thompson. However, this critique overlooks several complexities. Piers Robinson cites Alison Preston’s insight that diplomacy, geopolitics and elite reportage define themselves through a sense of distance from their subjects. Robinson distinguishes between two reportage types: distance framing ‘creates emotional distance’ between the viewers and victims, whilst support framing accepts the ‘official policy’ (28). The upsurge in patriotism, the vulgar commercialism, and the mini-cycle of memorabilia and publishing all combined to enhance the support framing of the U.S. federal government. Empathy generated for September 11’s victims was tied to support of military intervention. However, this closeness rapidly became the distance framing of the Afghanistan campaign. News coverage recycled the familiar visuals of in-progress bombings and Taliban barbarians. The alternative press, peace movements, and social activists then retaliated against this coverage by reinstating the support framing that revealed structural violence and gave voice to silenced minorities and victims. What really unfolded after September 11 was not the demise of journalism’s elite but rather the renegotiation of reportage boundaries and shared meanings. Journalists scoured the Internet for eyewitness accounts and to interview survivors (Zelizer & Allan 129). The same medium was used by others to spread conspiracy theories and viral rumors that numerology predicted the date September 11 or that the “face of Satan” could be seen in photographs of the World Trade Center (Zelizer & Allan 133). Karim H. Karim notes that the Jihad frame of an “Islamic Peril” was socially constructed by media outlets but then challenged by individual journalists who had learnt ‘to question the essentialist bases of her own socialization and placing herself in the Other’s shoes’ (Zelizer & Allan 112). Other journalists forgot that Jihad and McWorld were not separate but two intertwined worldviews that fed upon each other. The September 11 attacks on the Pentagon and the World Trade Center also had deep symbolic resonances for American sociopolitical ideals that some journalists explored through analysis of myths and metaphors.

The Rise of Strategic Geography

However, these renegotiated boundaries (of new media, multiperspectival frames, and ‘layered’ depth approaches to issues analysis) were essentially minority reports.
The rationalist mode of journalism was soon reasserted through normative appeals to strategic geography. The U.S. networks framed their documentaries on Islam and the Middle East in bluntly realpolitik terms. The documentary “Minefield: The United States and the Muslim World” (ABC, 11 October 2001) made explicit strategic assumptions of ‘the U.S. as “managing” the region’ and carried ‘a definite tinge of superiority’ (Silberstein 153). ABC and CNN stressed the similarities between the world’s major monotheistic religions and their scriptural doctrines. Both networks limited their coverage of critiques and dissent to internecine schisms within these traditions (Silberstein 158). CNN also created different coverage for its North American and international audiences. The BBC was more cautious in its September 11 coverage and more global in outlook. Three United Kingdom specials – Panorama (Clash of Cultures, BBC1, 21 October 2001), Question Time (Question Time Special, BBC1, 13 September 2001), and “War Without End” (War on Trial, Channel 4, 27 October 2001) – drew upon the British traditions of parliamentary assembly, expert panels, and legal trials as ways to explore the multiple dimensions of the ‘War on Terror’ (Zelizer & Allan 180). These latter debates were not value-free: the programs sanctioned ‘a tightly controlled and hierarchical agora’ through different containment strategies (Zelizer & Allan 183). Program formats, selected experts and presenters, and editorial/on-screen graphics were factors that pre-empted the viewer’s experience and conclusions. The traditional emphasis of news values on the expert was renewed. These subtle forms of thought-control enabled policy-makers to inform the public whilst inoculating them against terrorist propaganda. However, the ‘CNN Effect’ also had counter-offensive capabilities. Osama bin Laden’s videotaped sermons and the al-Jazeera network’s broadcasts undermined the psychological operations maxim that enemies must not gain access to the mindshare of domestic audiences. Ingrid Volkmer recounts how the Los Angeles-based National Iranian Television Network used satellite broadcasts to criticize the Iranian leadership and spark public riots (Zelizer & Allan 242). These incidents hint at why the ‘War on Terror’ myth, now unleashed upon the world, may become far more destabilizing to the world system than previous conflicts.

Risk Reportage and Mediated Trauma

While media analysts were considering the ‘CNN Effect’, a group of social theorists including Anthony Giddens, Zygmunt Bauman, and Ulrich Beck were simultaneously debating the status of modernity and the ‘unbounded contours’ of globalization. Beck termed this new environment of escalating uncertainties and uninsurable dangers the ‘world risk society’ (Beck). Although they drew upon constructivist and realist traditions, Beck and Giddens ‘did not place risk perception at the center of their analysis’ (Zelizer & Allan 203). Instead this was the role of the journalist as ‘witness’ to Ballard-style ‘institutionalized disaster areas’. The terrorist attacks on September 11 materialized this risk and obliterated the journalistic norms of detachment and objectivity. The trauma ‘destabilizes a sense of self’ within individuals (Zelizer & Allan 205) and disrupts the image-generating capacity of collective societies.
Barbie Zelizer found that the press selection of September 11 photos and witnesses re-enacted the ‘Holocaust aesthetic’ created when Allied forces liberated the Nazi concentration camps in 1945 (Zelizer & Allan 55-7). The visceral nature of September 11 imagery inverted the trend, from the Gulf War to NATO’s Kosovo bombings, for news outlets to depict war in detached video-game imagery (Zelizer & Allan 253). Coverage of the September 11 attacks and the subsequent Bali bombings (on 12 October 2002) followed a four-part news cycle pattern typical of assassinations and terrorism (Moeller 164-7). Moeller found that coverage moved from the initial event to a hunt for the perpetrators, public mourning, and finally, a sense of closure ‘when the media reassert the supremacy of the established political and social order’ (167). In both events the shock of the initial devastation was rapidly followed by the arrest of al Qaeda and Jemaah Islamiyah members, the creation and copying of the New York Times ‘Portraits of Grief’ template, and the mediation of trauma by a re-established moral order. News pundits had clearly studied the literature on bereavement and grief cycles (Kubler-Ross). However the neo-noir work culture of some outlets also fueled bitter disputes about how post-traumatic stress affected journalists themselves (Zelizer & Allan 253). Reconfiguring the Future After September 11 geopolitical punditry, a reactive cycle of integration propaganda, pecking-order shifts within journalism elites, strategic language, and mediated trauma all combined to bring a specific future into being. This outcome reflected the ‘media-state relationship’ in which coverage ‘still reflected policy preferences of parts of the U.S. elite foreign-policy-making community’ (Robinson 129). Although Internet media and non-elite analysts embraced Hallin’s ‘sphere of deviance’, there is no clear evidence yet that they have altered the opinions of policy-makers. The geopolitical segue from September 11 into the U.S.-led campaign against Iraq also has disturbing implications for the ‘CNN Effect’. Robinson found that its mythic reputation was overstated and tied to issues of policy certainty that the theory’s proponents often failed to examine. Media coverage molded a ‘domestic constituency ... for policy-makers to take action in Somalia’ (Robinson 62). He found greater support in ‘anecdotal evidence’ that the United Nations Security Council’s ‘safe area’ for Iraqi Kurds was driven by Turkey’s geo-strategic fears of ‘unwanted Kurdish refugees’ (Robinson 71). Media coverage did impact upon policy-makers to create Bosnian ‘safe areas’; however, ‘the Kosovo, Rwanda, and Iraq case studies’ showed that the ‘CNN Effect’ was unlikely to be a key factor ‘when policy certainty exists’ (Robinson 118). The clear implication of Robinson’s studies is that empathy framing, humanitarian values, and searing visual imagery won’t be enough to challenge policy-makers. What remains to be done? Fortunately there are some possibilities that straddle the pragmatic, realpolitik and emancipatory approaches. Today’s activists and analysts are also aware of the dangers of ‘unfreedom’ and un-reflective dissent (Fromm). Peter Gabriel’s organisation Witness, which documents human rights abuses, is one benchmark of how to use real-time media and the video camera in an effective way. The domains of anthropology, negotiation studies, neuro-linguistics, and social psychology offer valuable lessons on techniques of non-coercive influence.
The emancipatory tradition of futures studies offers a rich repertoire of self-awareness exercises, institution rebuilding, and social imaging that offsets the pragmatic lure of normative scenarios. The final lesson from these books is that activists and analysts must co-adapt as the ‘War on Terror’ mutates into new and terrifying forms. Works Cited Amis, Martin. “Fear and Loathing.” The Guardian (18 Sep. 2001). 1 March 2001 <http://www.guardian.co.uk/Archive/Article/0,4273,4259170,00.php>. Ballard, J.G. The Atrocity Exhibition (rev. ed.). Los Angeles: V/Search Publications, 1990. Beck, Ulrich. World Risk Society. Malden, MA: Polity Press, 1999. Ellul, Jacques. Propaganda: The Formation of Men’s Attitudes. New York: Vintage Books, 1973. Friedman, Thomas. The Lexus and the Olive Tree. New York: Farrar, Straus & Giroux, 1999. Fromm, Erich. Escape from Freedom. New York: Farrar & Rinehart, 1941. Fukuyama, Francis. The End of History and the Last Man. New York: Free Press, 1992. Huntington, Samuel P. The Clash of Civilizations and the Remaking of World Order. New York: Simon & Schuster, 1996. Kaplan, Robert. The Coming Anarchy: Shattering the Dreams of the Post Cold War. New York: Random House, 2000. Kubler-Ross, Elisabeth. On Death and Dying. London: Tavistock, 1969. McLuhan, Marshall. Understanding Media: The Extensions of Man. London: Routledge & Kegan Paul, 1964. Moeller, Susan D. Compassion Fatigue: How the Media Sell Disease, Famine, War, and Death. New York: Routledge, 1999. Robinson, Piers. The CNN Effect: The Myth of News, Foreign Policy and Intervention. New York: Routledge, 2002. Silberstein, Sandra. War of Words: Language, Politics and 9/11. New York: Routledge, 2002. Wark, McKenzie. Virtual Geography: Living with Global Media Events. Bloomington, IN: Indiana UP, 1994. Wiener, Norbert. Cybernetics: Or Control and Communication in the Animal and the Machine. New York: John Wiley & Sons, 1948. Zelizer, Barbie, and Stuart Allan (eds.). Journalism after September 11. New York: Routledge, 2002.
APA, Harvard, Vancouver, ISO, and other styles
28

King, Emerald L., and Denise N. Rall. "Re-imagining the Empire of Japan through Japanese Schoolboy Uniforms." M/C Journal 18, no. 6 (March 7, 2016). http://dx.doi.org/10.5204/mcj.1041.

Full text
Abstract:
Introduction “From every kind of man obedience I expect; I’m the Emperor of Japan.” (“Miyasama,” from Gilbert and Sullivan’s musical The Mikado, 1885) This commentary is facilitated by—surprisingly resilient—oriental stereotypes of an imagined Japan (think of Oscar Wilde’s assertion, in 1889, that Japan was a European invention). During the Victorian era in Britain there was a craze for all things oriental, particularly ceramics: “there was a craze for all things Japanese and no middle class drawing room was without its Japanese fan or teapot” (V&A Victorian). These pastoral depictions of the ‘oriental life’ included the figures of men and women in oriental garb, with fans, stilt shoes, kimono-like robes, and appropriate headdresses, engaging in garden-based activities, especially tea ceremony variations (Landow). In fact, tea itself, and the idea of a ceremony of serving it, had taken up a central role, even an obsession, in middle- and upper-class Victorian life. Similarly, landscapes with wild seas, rugged rocks and stunted pines, wizened monks, pagodas and temples, and particular fauna and flora (cranes and other birds flying through clouds of peonies, cherry blossoms and chrysanthemums) were very popular motifs (see Martin and Koda). Rather than authenticity, these designs heightened the Western-based romantic stereotypes associated with a stylised form of Japanese life, conducted sedately under the rule of the Japanese Imperial Court. In reality, prior to the Meiji period (1868–1912), the Emperor was largely removed from everyday concerns, residing as an isolated, holy figure in Kyoto, the traditional capital of Japan. Japan was instead ruled from Edo (modern-day Tokyo) by the Shogun and his generals, according to a strict Confucian-influenced code (see Keene). In Japan, as elsewhere, feudal-style governance included policies that determined much of everyday life, including restrictions on clothing (Rall 169). The Samurai code was no different, and included a series of protocols that restricted rank, movement, behaviour, and clothing. As Vincent has noted in the case of the ‘lace tax’ in Great Britain, these restrictions were designed to punish those who sought to penetrate the upper classes through their costume (28-30). In Japan, pre-Meiji sumptuary laws, for example, restricted the use of gold, and prohibited the use of a certain shade of red by merchant classes (V&A Kimono). Therefore, in the governance of pre-globalised societies, the importance of clothing and textiles is evident; as Jones and Stallybrass comment: We need to understand the animatedness of clothes, their ability to “pick up” subjects, to mould and shape them both physically and socially—to constitute subjects through their power as material memories […] Clothing is a worn world: a world of social relations put upon the wearer’s body. (2-3, emphasis added) The significant re-imagining of Japanese cultural and national identities is explored here through the cataclysmic impact of Western ideologies on Japanese cultural traditions. There are many ways to examine how indigenous cultures respond to European, British, or American (hereafter Western) influences, particularly in times of conflict (Wilk). Western ideology arrived in Japan after a long period of isolation (during which time Japan’s only contact was with Dutch traders) through the threat of military hostility and war. It was after this outside threat was realised that Japan’s adoption of military and industrial practices began.
The re-imagining of their national identity took many forms, and the inclusion of Western-style military costuming as a schoolboy uniform became a highly visible indicator of Japan’s mission to protect its sovereign integrity. A brief history of Japan’s rise from a collection of isolated feudal states to a unified military power, not only in the Asia-Pacific region but globally, demonstrates the speed at which it adopted the Western mode of warfare. Gunboats on Japan’s Shorelines Japan was forcefully opened to the West in the 1850s by America under the threat of Commodore Matthew Perry’s ‘gunboat diplomacy’ (Hillsborough 7-8). Following this, Japan underwent a rapid period of modernisation, and an upsurge in nationalism and military expansion that was driven by a desire to catch up to the European powers present in the Pacific. As Niall Ferguson notes in Civilization: The West and the Rest: Unsure, the Japanese decided […] to copy everything […] Japanese institutions were refashioned on Western models. The army drilled like Germans; the navy sailed like Britons. An American-style system of state elementary and middle schools was also introduced. (221, emphasis added) This was nothing short of a wide-scale reorganisation of Japan’s entire social structure and governance. Under the Emperor Meiji, who wrested power from the Shogunate and reclaimed it for the Imperial head, Japan steamed into an industrial revolution, achieving in a matter of years what had taken Europe over a century. Japan quickly became a major player-elect on the world stage. However, as an island nation, Japan lacked the essentials of both coal and iron with which to fashion not only industrial machinery but also military equipment, the machinery of war. In 1876 Japan forced Korea to open itself to foreign (read: Japanese) trade. In the same treaty, Korea was recognised as a sovereign nation, separate from Qing China (Tucker 1461). The necessity for raw materials then led to the Sino-Japanese War (1894–95), a conflict between Japan and China that marked the emergence of Japan as a major world power. The Korean Peninsula had long been China’s most important client state, but its strategic location adjacent to the Japanese archipelago, and its natural resources of coal and iron, attracted Japan’s interest. Later, the Russo-Japanese War (1904–05) allowed a victorious Japan to force Russia to abandon its expansionist policy in the Far East, becoming the first Asian power in modern times to defeat a European power. The Russo-Japanese War developed out of the rivalry between Russia and Japan for dominance in Korea and Manchuria, again in the struggle for natural resources (Tucker 1534-46). Japan’s victories, together with the country’s drive for resources, meant that Japan could now determine its role within the Asia-Pacific sphere of influence. As Japan’s military, and its adoption of Westernised combat, proved effective in maintaining national integrity, other social institutions also looked to the West (Ferguson 221). In an ironic twist—while Victorian and Continental fashion was busy adopting the exotic, oriental look (Martin and Koda)—the kimono, along with other essentials of Japanese fashion, was rapidly altered (both literally and figuratively) to suit the new, warlike ideology. It should be noted that kimono literally means ‘things that you wear’ and, prior to exposure to Western fashions, signified all worn clothing (Dalby 65-119).
“Wearing Things” in Westernised Japan As Japan modernised during the late 1800s the kimono was positioned as symbolising barbaric, pre-modern, ‘oriental’ Japan. Indeed, on 17 January 1887 the Meiji Empress issued a memorandum on the subject of women’s clothing in Japan: “She [the Empress] believed that western clothes were in fact closer to the dress of women in ancient Japan than the kimonos currently worn and urged that they be adopted as the standard clothes of the reign” (Keene 404). The resemblance between Western skirts and blouses and the simple skirt and separate top that had been worn in ancient times by a people descended from the sun goddess, Amaterasu Ōmikami, was used to give authority and cultural authenticity to Japan’s modernisation projects. The Imperial Court, with its newly ennobled European-style aristocrats, exchanged kimono silks for Victorian finery, and samurai armour for military pomp and splendour (Figure 1). Figure 1: The Meiji Emperor, Empress and Crown Prince resplendent in European fashions on an outing to Asukayama Park. Illustration: Toyohara Chikanobu, circa 1890. It is argued here that the function of a uniform is to prepare the body for service. Maids and butlers, nurses and courtesans, doctors, policemen, and soldiers are all distinguished by their garb. Prudence Black states: “as a technology, uniforms shape and code the body so they become a unit that belongs to a collective whole” (93). The requirement to discipline bodies through clothing, particularly through uniforms, is well documented (see Craik, Peoples, and Foucault). The need to distinguish enemies from allies on the battlefield requires adherence to a set of defined protocols, as referenced in military fashion compendiums (see Molloy). While the postcolonial adoption of Western-based clothing reflects a new form of subservience (Rall; Kuechler and Miller), in Japan the indigenous garments were clearly designed in the interests of ideological allegiance. To understand Japanese sartorial traditions, the kimono itself must be read as providing a strong disciplinary element. The traditional garment is designed to represent an upright and unbending column: two metres of under-bindings that discipline the body into shape are topped with a further four metres of stiffened silk obi wrapped around the waist and lower chest. To dress formally in such a garment requires helpers (see Dalby). The kimono both constructs and confines the women who wear it, and presses them into their roles as dutiful, upper-class daughters (see Craik). From the 1890s through to the 1930s, when Japan again entered a period of militarism, the myth of the kimono changed again as it was integrated into the build-up towards World War II. Decades later, when Japan re-established itself as a global economic power in the 1970s and 1980s, the kimono was re-authenticated as Japan’s ‘traditional’ garment. This time it was not the myth of a people descended from solar deities that was on display, but that of samurai strength and propriety for men, alongside an exaggerated femininity for women, invoking a powerful vision of Japanese sartorial tradition. This reworking of the kimono was only possible because the garment was already contained within the framework of Confucian family duty. However, in the lead-up to World War II, Japanese military advancement demanded of its people soldiers who could win European-style wars.
The quickest solution was to copy the military acumen and strategies of global warfare, and the costumes of the soldiery and seamen of Europe, including Great Britain (Ferguson). It was also acknowledged that soldiers were ‘made, not born’, so the Japanese educational system was revamped to emulate those of its military rivals (McVeigh). It was in the uptake of schoolboy uniforms that this re-imagining of Japanese imperial strength took place. The Japanese Schoolboy Uniform Central to its rapid modernisation, Japan adopted a constitutional system of education that borrowed from American and French models (Tipton 68-69). The government viewed education as a “primary means of developing a sense of nation,” and at its core was the imperial authorities’ obsession with defining “Japan and Japaneseness” (Tipton 68-69). Numerous reforms eventually saw, after an abolition of fees, nearly 100% attendance by both boys and girls, despite a lingering mind-set that educating women was “a waste of time” (Tipton 68-69). A boys’ uniform, based on the French and Prussian military uniforms of the 1860s and 1870s respectively (Kinsella 217), was adopted in 1879 (McVeigh 47). This jacket, initially with Prussian cape and cap, consists of a square body, a standing mandarin-style collar, and a buttoned front. It was through these education reforms, as visually symbolised by the adoption of military-style school uniforms, that citizen making, education, and military training became interrelated aspects of Meiji modernisation (Kinsella 217). Known as the gakuran (gaku: to study; ran: meaning both orchid, and a pun on Horanda, meaning Holland, the only Western country with trading relations in pre-Meiji Japan), these jackets were a symbol of education, indicating European knowledge, power and influence, and came to reflect all things European in Meiji Japan. By adopting these jackets two objectives were realised: through the magical power of imitation, Japan would, by adopting the clothing of the West, naturally rise in military power; and boys were uniformed to become not only educated quasi-Europeans but fighting soldiers and sons (suns) of the nation. The gakuran jacket was first popularised by state-run schools; however, in the century and a half that the garment has been in use it has come to symbolise young Japanese masculinity as showcased in campus films, anime, manga, and computer games, and, as fashion, it is the preeminent garment for boybands and Japanese hipsters. While the gakuran is central to the rise of militarism in Japan (McVeigh 51-53), the jacket would go on to form the basis of the Sun Yat Sen and Mao suits as symbols of revolutionary China (see McVeigh). Supposedly, Sun Yat Sen saw the schoolboy jacket in Japan as a utilitarian garment and adopted it with a turn-down collar (Cumming et al.). For Sun Yat Sen, the gakuran was the perfect mix of civilian (school boy) and military (the garment’s Prussian heritage), allowing him to walk a middle path between the demands of both. Furthermore, the garment allowed Sun to navigate between Western-style suits and old-fashioned Qing dynasty styles (Gerth 116); one was associated with the imperialism targeted by the National Products Movement, while the other represented the corruption of the old dynasty. In this way, the gakuran was further politicised from a national (Japanese) symbol to a global one.
While military uniforms have always been political garments, in the late 1800s and early 1900s, as the world was rocked by revolutions and war, civilian clothing also became a means of expressing political ideals (McVeigh 48-49). Note that Mahatma Gandhi’s clothing choices also evolved from wholly Western styles to traditional garments that emphasised domestic products (Gerth 116). Mao adopted this style circa 1927, further defining it when he came to power by adding elements from the trousers, tunics, and black cotton shoes worn by peasants. The suit was further codified during the 1960s, reaching its height in the Cultural Revolution. While the gakuran has always been a scholarly black (see Figure 2), subtle differences in the colour palette differentiated the Chinese population—peasants and workers donned indigo blue Mao jackets, while the People’s Liberation Army soldiers donned khaki green. This limited colour scheme somewhat paradoxically ensured that subtle hierarchical differences were maintained even whilst advocating egalitarian ideals (Davis 522). Both the Sun Yat Sen suit and the Mao jacket represented the rejection of bourgeois (Western) norms that objectified the female form in favour of a uniform society. Neo-Maoism and the Mao fever of the early 1990s saw the Mao suit emerge again as a desirable piece of iconic/ironic youth fashion. Figure 2: An example of a gakuran uniform next to the girls’ equivalent on display at Ichikawa Gakuen School (Japan). Photo: Emerald King, 2015. There is a clear and vital link between the Prussian-style Japanese schoolboy uniform and the later creation of the Mao jacket—that of the uniform as an integral piece of worn propaganda (Atkins). For Japan, the rapid deployment of new military and industrial technologies, as well as a sartorial need to present her leaders as modern (read: Western), demanded the adoption of European-style uniforms. The Imperial family had always been removed from Samurai battlefields, so the adoption of Western military costume allowed Japan’s rulers to present a uniform face to other global powers. When Japan found itself in conflict in the Asia-Pacific region, without an organised military, the first requirement was to completely reorganise its system of warfare from a feudal base and to train up national servicemen. Within an American-style compulsory education system, the European-based curriculum included training in mathematics, engineering and military history, just as young Britons had for generations begun their education with Greek and Latin and the study of Ancient Greek and Roman wars (Bantock). It is only in the classroom that ideological change on a mass scale can take place (Reference Please), a lesson not missed by later leaders such as Mao Zedong. Conclusion In the 1880s, the Japanese leaders established their position in global politics by adopting clothing and practices from the West (Europeans, Britons, and Americans) in order to quickly re-shape their country’s educational system and military establishment. The prevailing military costume of foreign cultures not only disciplined these newly adopted European-style bodies, it enforced a new regime through dress (Rall 157-174). For boys, the gakuran symbolised the unity of education and militarism as central to Japanese masculinity. Wearing a uniform, as many authors suggest, furthers compliance (Craik; Nagasawa, Kaiser and Hutton; McVeigh).
As conscription became a part of Japanese reality in World War II, schoolboys simply swapped their military-inspired school uniforms for genuine military garments. Re-imagining a Japanese schoolboy uniform from a European military costume might suit ideological purposes (Atkins), but there is more. The gakuran, as a uniform based on a close but not fitted jacket, was the product of a process of advanced industrialisation in the garment-making industry also taking place in the 1800s: Between 1810 and 1830, technical calibrations invented by tailors working at the very highest level of the craft [in Britain] eventually made it possible for hundreds of suits to be cut up and made in advance [...] and the ready-to-wear idea was put into practice for men’s clothes […] originally for uniforms for the War of 1812. (Hollander 31) In this way, industrialisation became a means to mass production, which furthered militarisation: “the uniform is thus the clothing of the modern disciplinary society” (Black 102). There is a perfect resonance between Japan’s appetite for a modern military and its rise to an industrialised society, and its conquests in the Asia Pacific supplied the necessary material resources that made such a rapid deployment possible. The Japanese schoolboy uniform was an integral part of the process of both industrialisation and militarisation, which instilled in the wearer a social role required by modern Japanese society in its rise to global power. Garments are never just clothing, but offer a “world of social relations put upon the wearer’s body” (Jones and Stallybrass 3-4). Today, both the Japanese kimono and the Japanese schoolboy uniform continue to interact with, and interrogate, global fashions as contemporary designers continue to call on the tropes of ‘military chic’ (Tonchi) and Japanese-inspired clothing (Kawamura). References Atkins, Jaqueline. Wearing Propaganda: Textiles on the Home Front in Japan, Britain, and the United States. New Haven: Yale UP, 2005. Bantock, Geoffrey Herman. Culture, Industrialisation and Education. London: Routledge & K. Paul, 1968. Black, Prudence. “The Discipline of Appearance: Military Style and Australian Flight Hostess Uniforms 1930–1964.” Fashion & War in Popular Culture. Ed. Denise N. Rall. Bristol: Intellect/U Chicago P, 2014. 91-106. Craik, Jennifer. Uniforms Exposed: From Conformity to Transgression. Oxford: Berg, 2005. Cumming, Valerie, Cecil Willett Cunnington, and Phillis Emily Cunnington. “Mao Style.” The Dictionary of Fashion History. Eds. Valerie Cumming, Cecil Willett Cunnington, and Phillis Emily Cunnington. Oxford: Berg, 2010. Dalby, Liza, ed. Kimono: Fashioning Culture. London: Vintage, 2001. Davis, Edward L., ed. Encyclopaedia of Contemporary Chinese Culture. London: Routledge, 2005. Dees, Jan. Taisho Kimono: Speaking of Past and Present. Milan: Skira, 2009. Ferguson, Niall. Civilization: The West and the Rest. London: Penguin, 2011. Foucault, Michel. Discipline and Punish: The Birth of the Prison. Trans. Alan Sheridan. London: Penguin, 1997. Gerth, Karl. China Made: Consumer Culture and the Creation of the Nation. Cambridge: East Asian Harvard Monograph 224, 2003. Gilbert, W.S., and Arthur Sullivan. The Mikado or, The Town of Titipu. 1885. 16 Nov. 2015 ‹http://math.boisestate.edu/gas/mikado/mk_lib.pdf›. Hillsborough, Romulus. Samurai Revolution: The Dawn of Modern Japan Seen through the Eyes of the Shogun's Last Samurai. Vermont: Tuttle, 2014. Jones, Anne R., and Peter Stallybrass. Renaissance Clothing and the Materials of Memory.
Cambridge: Cambridge UP, 2000. Keene, Donald. Emperor of Japan: Meiji and His World, 1852-1912. New York: Columbia UP, 2002. King, Emerald L. “Schoolboys and Kimono Ladies.” Presentation to the Un-Thinking Asian Migrations Conference, University of Otago, Dunedin, New Zealand, 24-26 Aug. 2014. Kinsella, Sharon. “What’s Behind the Fetishism of Japanese School Uniforms?” Fashion Theory 6.2 (2002): 215-37. Kuechler, Susanne, and Daniel Miller, eds. Clothing as Material Culture. Oxford: Berg, 2005. Landow, George P. “Liberty and the Evolution of the Liberty Style.” 22 Aug. 2010 ‹http://www.victorianweb.org/art/design/liberty/lstyle.html›. Martin, Richard, and Harold Koda. Orientalism: Visions of the East in Western Dress. New York: Metropolitan Museum of Art, 1994. McVeigh, Brian J. Wearing Ideology: State, Schooling, and Self-Presentation in Japan. Oxford: Berg, 2000. Molloy, John. Military Fashion: A Comparative History of the Uniforms of the Great Armies from the 17th Century to the First World War. New York: Putnam, 1972. Peoples, Sharon. “Embodying the Military: Uniforms.” Critical Studies in Men’s Fashion 1.1 (2014): 7-21. Rall, Denise N. “Costume & Conquest: A Proximity Framework for Post-War Impacts on Clothing and Textile Art.” Fashion & War in Popular Culture. Ed. Denise N. Rall. Bristol: Intellect/U Chicago P, 2014. 157-74. Tipton, Elise K. Modern Japan: A Social and Political History. 3rd ed. London: Routledge, 2016. Tucker, Spencer C., ed. A Global Chronology of Conflict: From the Ancient World to the Modern Middle East. Santa Barbara, CA: ABC-CLIO, 2013. V&A Kimono. Victoria and Albert Museum. “A History of the Kimono.” 2004. 2 Oct. 2015 ‹http://www.vam.ac.uk/content/articles/h/a-history-of-the-kimono/›. V&A Victorian. Victoria and Albert Museum. “The Victorian Vision of China and Japan.” 10 Nov. 2015 ‹http://www.vam.ac.uk/content/articles/t/the-victorian-vision-of-china-and-japan/›. Vincent, Susan J. The Anatomy of Fashion: Dressing the Body from the Renaissance to Today. Oxford: Berg, 2009. Wilde, Oscar. “The Decay of Lying.” 1889. In Intentions. New York: Brentano’s, 1905. 16 Nov. 2015 ‹http://virgil.org/dswo/courses/novel/wilde-lying.pdf›. Wilk, Richard. “Consumer Goods as a Dialogue about Development.” Cultural History 7 (1990): 79-100.
APA, Harvard, Vancouver, ISO, and other styles
29

Keogh, Luke. "The First Four Wells: Unconventional Gas in Australia." M/C Journal 16, no. 2 (March 8, 2013). http://dx.doi.org/10.5204/mcj.617.

Full text
Abstract:
Unconventional energy sources have become increasingly important to the global energy mix. These include coal seam gas, shale gas and shale oil. The unconventional gas industry was pioneered in the United States and embraced following the first oil shock in 1973 (Rogers). As has been the case with many global resources (Hiscock), many of the same companies that worked in the USA carried their experience in this industry to early Australian explorations. Recently the USA has gained significant energy security with the development of unconventional energy deposits such as the Marcellus shale gas and the Bakken shale oil (Dobb; McGraw). But this has not come without environmental impact, including contamination of underground water supplies (Osborn, Vengosh, Warner, Jackson) and potential greenhouse gas contributions (Howarth, Santoro, Ingraffea; McKenna). The environmental impact of unconventional gas extraction has raised serious public concern about the introduction and growth of the industry in Australia. In coal-rich Australia, coal seam gas is currently the major source of unconventional gas. Large gas deposits have been found in prime agricultural land along eastern Australia, such as the Liverpool Plains in New South Wales and the Darling Downs in Queensland. Competing land uses and a series of environmental incidents from the coal seam gas industry have prompted major protests from a coalition of environmentalists and farmers (Berry; McLeish). Conflict between energy companies wanting development and environmentalists urging precaution is an easy script to cast for frontline media coverage. But historical perspectives are often missing in these contemporary debates. While coal mining and natural gas have often received “boosting” historical coverage (Diamond; Wilkinson), and although historical themes of “development” and “rushes” remain predominant when observing the span of the industry (AGA; Blainey), the history of unconventional gas, particularly the history of its environmental impact, has been little studied. Few people are aware, for example, that the first shale gas exploratory well was completed in late 2010 in the Cooper Basin in Central Australia (Molan) and is considered a “new” frontier in Australian unconventional gas. Moreover, many people are unaware that the first coal seam gas wells were completed in 1976 in Queensland. The first four wells offer an important moment for reflection in light of the industry’s recent move into Central Australia. By locating and analysing the first four coal seam gas wells, this essay identifies the roots of the unconventional gas industry in Australia and explores the early environmental impact of these wells. By analysing exploration reports that have been placed online by the Queensland Department of Natural Resources and Mines through the lens of environmental history, the dominant developmental narrative of this industry can also be scrutinised. These narratives often place more significance on economic and national benefits while displacing the environmental and social impacts of the industry (Connor, Higginbotham, Freeman, Albrecht; Duus; McEachern; Trigger). This essay therefore seeks to bring an environmental insight to early unconventional gas mining in Australia. As the author, I am concerned that, nearly four decades on, it seems no one has heeded the warnings gleaned from these early wells and early exploration reports, as gas exploration in Australia continues under little scrutiny.
Arrival The first four unconventional gas wells in Australia appear at the beginning of the industry worldwide (Schraufnagel, McBane, and Kuuskraa; McClanahan). The wells were explored by Houston Oil and Minerals—a company that entered the Australian mining scene by sharing a mining prospect with the International Australian Energy Company (Wiltshire). The International Australian Energy Company was owned by Black Giant Oil Company in the US, which in turn was owned by International Royalty and Oil Company, also based in the US. The Texan oilman Robert Kanton held a sixteen percent share in the latter. Kanton had an idea that the Mimosa Syncline in the south-eastern Bowen Basin was a gas trap waiting to be exploited. To test the theory he needed capital. Kanton presented the idea to Houston Oil and Minerals, which had the financial backing to take the risk. Shotover No. 1 was drilled by Houston Oil and Minerals thirty miles south-east of the coal mining town of Blackwater. By late August 1975 it had been drilled to 2,717 metres, discovered to have little gas, plugged, and, after a spend of $610,000, abandoned. The data from the Shotover well showed that the porosity of the rocks in the area was not a trap, and the Mimosa Syncline was therefore downgraded as a possible hydrocarbon location. There was, however, a small amount of gas found in the coal seams (Benbow 16). The well had passed through the huge coal seams of both the Bowen and Surat basins—important basins for the future of both the coal and gas industries. Mining Concepts In 1975, while Houston Oil and Minerals was drilling the Shotover well, US Steel and the US Bureau of Mines used hydraulic fracture, a technique already used in the petroleum industry, to drill vertical surface wells to drain gas from a coal seam (Methane Drainage Taskforce 102). They were able to remove gas from the coal seam before it was mined and sold enough to make a profit. With the well data from the Shotover well in Australia compiled, Houston returned to the US to research the possibility of harvesting methane in Australia. As the company saw it, methane drainage was “a novel exploitation concept” and the methane in the Bowen Basin was an “enormous hydrocarbon resource” (Wiltshire 7). The Shotover well had passed through a section of the German Creek Coal Measures and this became their next target. In September 1976 the Shotover well was re-opened and plugged at 1,499 metres to become Australia’s first exploratory unconventional gas well. By the end of the month the rig was released and gas production tested. At one point an employee on the drilling operation observed a gas flame “the size of a 44 gal drum” (HOMA, “Shotover # 1” 9). But apart from the brief show, no gas flowed. And yet, Houston Oil and Minerals was not deterred, as it had already taken out other leases for further prospecting (Wiltshire 4). Only a week after the Shotover well had failed, Houston moved the methane search south-east to an area five miles north of the Moura township. Houston Oil and Minerals had researched the coal exploration seismic surveys of the area conducted in 1969, 1972, and 1973 to choose the location. Over the next two months in late 1976, two new wells—Kinma No. 1 and Carra No. 1—were drilled within a mile of each other and completed as gas wells. Houston Oil and Minerals also purchased the old oil exploration well Moura No. 1 from the Queensland Government and completed it as a suspended gas well.
The company must have mined the Department of Mines archive to find Moura No. 1, as the previous exploration report, from 1969, noted methane given off from the coal seams (Sell). By December 1976 Houston Oil and Minerals had three gas wells in the vicinity of each other and by early 1977 testing had occurred. The results were disappointing, with minimal gas flow at Kinma and Carra, but Moura showed a little more promise. Here, the drillers were able to convert their Fairbanks-Morse engine driving the pump from an engine run on LPG to one run on methane produced from the well (Porter, “Moura # 1”). Drink This? Although there was not much gas to be found in the test production phase, there was a lot of water. The exploration reports produced by the company are incomplete (indeed no report was available for the Shotover well), but the information available shows that a large amount of water was extracted before gas started to flow (Porter, “Carra # 1”; Porter, “Moura # 1”; Porter, “Kinma # 1”). As Porter’s reports outline, prior to gas flowing, the water produced at Carra, Kinma and Moura totalled 37,600, 11,900 and 2,900 litres respectively. It should be noted that the method used to measure the water was not continuous, so these figures understate the full amount produced; moreover, some of the wells continued to produce water after gas came to the surface. In short, before any gas flowed at the first unconventional gas wells in Australia, at least 50,000 litres of water were taken from underground. Results show that the water was not fit to drink (Mathers, “Moura # 1”; Mathers, “Appendix 1”; HOMA, “Miscellaneous Pages” 21-24). The water had total dissolved solids (minerals) well over the standards set by the authorities (WHO; Apps Laboratories; NHMRC; QDAFF). The well at Kinma recorded the highest levels, almost two and a half times the acceptable standard. On average the water from the Moura well was of reasonable standard, possibly because some water had been extracted from the well when it was originally sunk in 1969; but the water from Kinma and Carra was very poor quality, not good enough for crops or stock, nor to be let run into creeks. The biggest issue was the sodium concentration; all wells had very high salt levels. Kinma and Carra were four and two times the maximum standard respectively. In short, there was a substantial amount of poor quality water produced from drilling and testing the three wells. Fracking Australia Hydraulic fracturing is an artificial process that can encourage more gas to flow to the surface (McGraw; Fischetti; Senate). Prior to the testing phase at the Moura field, well data was sent to the Chemical Research and Development Department at Halliburton in Oklahoma, to examine the ability to fracture the coal and shale in the Australian wells. Halliburton was the founding father of hydraulic fracture. In Oklahoma on 17 March 1949, operating under an exclusive license from Standard Oil, this company conducted the first ever hydraulic fracture of an oil well (Montgomery and Smith). To come up with a program of hydraulic fracturing for the Australian field, Halliburton went back to the laboratory. They bonded together small slabs of coal and shale similar to the Australian samples, drilled one-inch holes into each sample, then pressurised the holes and completed a “hydro-frac” in miniature. “These samples were difficult to prepare,” they wrote in their report to Houston Oil and Minerals (HOMA, “Miscellaneous Pages” 10).
Their program for fracturing was informed by a field of science that had been evolving since the first hydraulic fracture but had rapidly progressed since the first oil shock. Halliburton’s laboratory test confirmed that the model Perkins and Kern had developed for the widths of hydraulic fractures—in an article that defined the field—should also apply to Australian coals (Perkins and Kern). By late January 1977 Halliburton had issued Houston Oil and Minerals with a program of hydraulic fracture to use on the central Queensland wells. On the final page of their report they warned: “There are many unknowns in a vertical fracture design procedure” (HOMA, “Miscellaneous Pages” 17). In July 1977, Moura No. 1 became the first coal seam gas well hydraulically fractured in Australia. The exploration report states: “During July 1977 the well was killed with 1% KCL solution and the tubing and packer were pulled from the well … and pumping commenced” (Porter 2-3). The use of the word “kill” is interesting—potassium chloride (KCl) is the third and final drug administered in the lethal injection of humans on death row in the USA. Potassium chloride was used to minimise the effect on parts of the coal seam that were water-sensitive and was the recommended solution prior to adding other chemicals (Montgomery and Smith 28); but a word such as “kill” also implies that the well and the larger environment were alive before fracking commenced (Giblett; Trigger). Pumping recommenced after the fracturing fluid was unloaded. Initially the gas supply was very good. It increased from an average estimate of 7,000 cubic feet per day to 30,000, but this only lasted two days before coal and sand started flowing back up to the surface. In effect, the cleats were propped open, but the coal did not close around the proppant and hold it in place, which meant coal particles and sand flowed back up the pipe with diminishing amounts of gas (Walters 12). Although there were some interesting results, the program was considered a failure. In April 1978, Houston Oil and Minerals finally abandoned the methane concept. Following the failure, they reflected on the possibilities for a coal seam gas industry given the gas prices in Queensland: “Methane drainage wells appear to offer no economic potential” (Wooldridge 2). At the wells they let the tubing drop into the hole, put a fifteen-foot cement plug at the top of the hole, covered it with a steel plate, and by their own description restored the area to its “original state” (Wiltshire 8). Houston Oil and Minerals now turned to “conventional targets”, which included coal exploration (Wiltshire 7). A Thousand Memories The first four wells show some of the critical environmental issues that were present from the outset of the industry in Australia. The process of hydraulic fracture was not just a failure; it was conducted on a science that had never been tested in Australia, was ponderous at best, and by Halliburton’s own admission had “many unknowns”. There was also the role of large multinationals providing “experience” (Briody; Hiscock) while conducting these tests with limited knowledge of the Australian landscape. Before any gas came to the surface, a large amount of water was produced that was loaded with a mixture of salt and other heavy minerals. The water used for both the mud drilling of Carra and Kinma and the hydraulic fracture job on Moura was extracted from Kianga Creek, three miles from the site (HOMA, “Carra # 1” 5; HOMA, “Kinma # 1” 5; Porter, “Moura # 1”).
No location was listed for the disposal of the water from the wells, including the hydraulic fracture liquid. Considering the poor quality of the water, if it was disposed of on site or let drain into a creek, this would have had a significant environmental impact. Nobody has yet answered the question of where all this water went. The environmental issues of water extraction, saline water and hydraulic fracture were present at the first four wells. At the first four wells environmental concern was not a priority. The complexity of inter-company relations, as witnessed at the Shotover well, shows there was little time for such concerns. The re-use of old wells, such as the Moura well, also shows that economic priorities were more important. Even if environmental information was considered important at the time, no one would have had access to it because, as handwritten notes on some of the reports show, many of the reports were “confidential” (Sell). Even though coal mines commenced filing Environmental Impact Statements in the early 1970s, there is no such documentation for the gas exploration conducted by Houston Oil and Minerals. A lack of broader awareness of the surrounding environment, from floral and faunal health to the impact on habitat quality, can be gleaned when reading across all the exploration reports. Nearly four decades on, we now have thousands of wells throughout the world. Yet the challenges of unconventional gas still persist. The implications of the environmental history of the first four wells in Australia for contemporary unconventional gas exploration and development, in this country and beyond, are significant. Many environmental issues were present from the beginning of the coal seam gas industry in Australia. Owning up to this history would place policy makers and regulators in a position to strengthen current regulation. The industry continues to face the same challenges today as it did at the start of development—including water extraction, hydraulic fracturing and problems associated with drilling through underground aquifers. Looking more broadly at the unconventional gas industry, shale gas has appeared as the next target for energy resources in Australia. Reflecting on the first exploratory shale gas wells drilled in Central Australia, the chief executive of the company responsible noted the deliberate decision to locate its activities in semi-desert country away from “an area of prime agricultural land” and conflict with environmentalists (quoted in Molan). Moreover, the journalist Paul Cleary recently complained about the coal seam gas industry polluting Australia’s food-bowl but concluded that the “next frontier” should be in “remote” Central Australia with shale gas (Cleary 195). It appears that the preference is to move the industry to the arid centre of Australia, to the ecologically and culturally unique Lake Eyre Basin region (Robin and Smith). Claims to move the industry away from areas subject to close public scrutiny disregard the many groups in the Lake Eyre Basin, and their interests, such as Aboriginal rights to land, and appear similar to other industrial projects that disregard local inhabitants, such as mega-dams and nuclear testing (Nixon). References AGA (Australian Gas Association). “Coal Seam Methane in Australia: An Overview.” AGA Research Paper 2 (1996). Apps Laboratories. “What Do Your Water Test Results Mean?” Apps Laboratories 7 Sept. 2012. 1 May 2013 ‹http://appslabs.com.au/downloads.htm›. Benbow, Dennis B. “Shotover No.
1: Lithology Report for Houston Oil and Minerals Corporation.” November 1975. Queensland Digital Exploration Reports. Company Report 5457_2. Brisbane: Queensland Department of Resources and Mines. 4 June 2012. 1 May 2013 ‹https://qdexguest.deedi.qld.gov.au/portal/site/qdex/search?REPORT_ID=5457&COLLECTION_ID=999›. Berry, Petrina. “Qld Minister Refuses to Drink CSG Water.” news.com.au, 22 Apr. 2013. 1 May 2013 ‹http://www.news.com.au/breaking-news/national/qld-minister-refuses-to-drink-csg-water/story-e6frfku9-1226626115742›. Blainey, Geoffrey. The Rush That Never Ended: A History of Australian Mining. Carlton: Melbourne University Publishing, 2003. Briody, Dan. The Halliburton Agenda: The Politics of Oil and Money. Singapore: Wiley, 2004. Cleary, Paul. Mine-Field: The Dark Side of Australia’s Resource Rush. Collingwood: Black Inc., 2012. Connor, Linda, Nick Higginbotham, Sonia Freeman, and Glenn Albrecht. “Watercourses and Discourses: Coalmining in the Upper Hunter Valley, New South Wales.” Oceania 78.1 (2008): 76-90. Diamond, Marion. “Coal in Australian History.” Coal and the Commonwealth: The Greatness of an Australian Resource. Eds. Peter Knights and Michael Hood. St Lucia: University of Queensland, 2009. 23-45. 20 Apr. 2013 ‹http://www.peabodyenergy.com/mm/files/News/Publications/Special%20Reports/coal_and_commonwealth%5B1%5D.pdf›. Dobb, Edwin. “The New Oil Landscape.” National Geographic (Mar. 2013): 29-59. Duus, Sonia. “Coal Contestations: Learning from a Long, Broad View.” Rural Society Journal 22.2 (2013): 96-110. Fischetti, Mark. “The Drillers Are Coming.” Scientific American (July 2010): 82-85. Giblett, Rod. “Terrifying Prospects and Resources of Hope: Minescapes, Timescapes and the Aesthetics of the Future.” Continuum: Journal of Media and Cultural Studies 23.6 (2009): 781-789. Hiscock, Geoff. Earth Wars: The Battle for Global Resources. Singapore: Wiley, 2012. HOMA (Houston Oil and Minerals of Australia). “Carra # 1: Well Completion Report.” July 1977. Queensland Digital Exploration Reports. Company Report 6054_1. Brisbane: Queensland Department of Resources and Mines. 21 Feb. 2012 ‹https://qdexguest.deedi.qld.gov.au/portal/site/qdex/search?REPORT_ID=6054&COLLECTION_ID=999›. ———. “Kinma # 1: Well Completion Report.” August 1977. Queensland Digital Exploration Reports. Company Report 6190_2. Brisbane: Queensland Department of Resources and Mines. 21 Feb. 2012 ‹https://qdexguest.deedi.qld.gov.au/portal/site/qdex/search?REPORT_ID=6190&COLLECTION_ID=999›. ———. “Miscellaneous Pages. Including Hydro-Frac Report.” August 1977. Queensland Digital Exploration Reports. Company Report 6190_17. Brisbane: Queensland Department of Resources and Mines. 31 May 2012 ‹https://qdexguest.deedi.qld.gov.au/portal/site/qdex/search?REPORT_ID=6190&COLLECTION_ID=999›. ———. “Shotover # 1: Well Completion Report.” March 1977. Queensland Digital Exploration Reports. Company Report 5457_1. Brisbane: Queensland Department of Resources and Mines. 22 Feb. 2012 ‹https://qdexguest.deedi.qld.gov.au/portal/site/qdex/search?REPORT_ID=5457&COLLECTION_ID=999›. Howarth, Robert W., Renee Santoro, and Anthony Ingraffea. “Methane and the Greenhouse-Gas Footprint of Natural Gas from Shale Formations: A Letter.” Climatic Change 106.4 (2011): 679-690. Mathers, D. “Appendix 1: Water Analysis.” 1-2 August 1977. Brisbane: Government Chemical Laboratory. Queensland Digital Exploration Reports. Company Report 6054_4. Brisbane: Queensland Department of Resources and Mines. 21 Feb.
2012 ‹https://qdexguest.deedi.qld.gov.au/portal/site/qdex/search?REPORT_ID=6054&COLLECTION_ID=999›. ———. “Moura # 1: Testing Report Appendix D Fluid Analyses.” 2 Aug. 1977. Brisbane: Government Chemical Laboratory. Queensland Digital Exploration Reports. Company Report 5991_5. Brisbane: Queensland Department of Resources and Mines. 22 Feb. 2012 ‹https://qdexguest.deedi.qld.gov.au/portal/site/qdex/search?REPORT_ID=5991&COLLECTION_ID=999›. McClanahan, Elizabeth A. “Coalbed Methane: Myths, Facts, and Legends of Its History and the Legislative and Regulatory Climate into the 21st Century.” Oklahoma Law Review 48.3 (1995): 471-562. McEachern, Doug. “Mining Meaning from the Rhetoric of Nature—Australian Mining Companies and Their Attitudes to the Environment at Home and Abroad.” Policy Organisation and Society (1995): 48-69. McGraw, Seamus. The End of Country. New York: Random House, 2011. McKenna, Phil. “Uprising.” Matter 21 Feb. 2013. 1 Mar. 2013 ‹https://www.readmatter.com/a/uprising/›. McLeish, Kathy. “Farmers to March against Coal Seam Gas.” ABC News 27 Apr. 2012. 22 Apr. 2013 ‹http://www.abc.net.au/news/2012-04-27/farmers-to-march-against-coal-seam-gas/3977394›. Methane Drainage Taskforce. Coal Seam Methane. Sydney: N.S.W. Department of Mineral Resources and Office of Energy, 1992. Molan, Lauren. “A New Shift in the Global Energy Scene: Australian Shale.” Gas Today Online. 4 Nov. 2011. 3 May 2012 ‹http://gastoday.com.au/news/a_new_shift_in_the_global_energy_scene_australian_shale/064568/›. Montgomery, Carl T., and Michael B. Smith. “Hydraulic Fracturing: History of an Enduring Technology.” Journal of Petroleum Technology (2010): 26-32. 30 May 2012 ‹http://www.spe.org/jpt/print/archives/2010/12/10Hydraulic.pdf›. NHMRC (National Health and Medical Research Council). National Water Quality Management Strategy: Australian Drinking Water Guidelines 6. Canberra: Australian Government, 2004. 7 Sept. 2012 ‹http://www.nhmrc.gov.au/guidelines/publications/eh52›. Nixon, Rob. “Unimagined Communities: Developmental Refugees, Megadams and Monumental Modernity.” New Formations 69 (2010): 62-80. Osborn, Stephen G., Avner Vengosh, Nathaniel R. Warner, and Robert B. Jackson. “Methane Contamination of Drinking Water Accompanying Gas-Well Drilling and Hydraulic Fracturing.” Proceedings of the National Academy of Sciences 108.20 (2011): 8172-8176. Perkins, T.K., and L.R. Kern. “Widths of Hydraulic Fractures.” Journal of Petroleum Technology 13.9 (1961): 937-949. Porter, Seton M. “Carra # 1: Testing Report, Methane Drainage of the Baralaba Coal Measures, A.T.P. 226P, Central Queensland, Australia.” Oct. 1977. Queensland Digital Exploration Reports. Company Report 6054_7. Brisbane: Queensland Department of Resources and Mines. 21 Feb. 2012 ‹https://qdexguest.deedi.qld.gov.au/portal/site/qdex/search?REPORT_ID=6054&COLLECTION_ID=999›. ———. “Kinma # 1: Testing Report, Methane Drainage of the Baralaba Coal Measures, A.T.P. 226P, Central Queensland, Australia.” Oct. 1977. Queensland Digital Exploration Reports. Company Report 6190_16. Brisbane: Queensland Department of Resources and Mines. 21 Feb. 2012 ‹https://qdexguest.deedi.qld.gov.au/portal/site/qdex/search?REPORT_ID=6190&COLLECTION_ID=999›. ———. “Moura # 1: Testing Report: Methane Drainage of the Baralaba Coal Measures: A.T.P. 226P, Central Queensland, Australia.” Oct. 1977. Queensland Digital Exploration Reports. Company Report 6190_15. Brisbane: Queensland Department of Resources and Mines. 21 Feb.
2012 ‹https://qdexguest.deedi.qld.gov.au/portal/site/qdex/search?REPORT_ID=6190&COLLECTION_ID=999›. QDAFF (Queensland Department of Agriculture, Fisheries and Forestry). “Interpreting Water Analysis for Crop and Pasture.” 1 Aug. 2012. 1 May 2013 ‹http://www.daff.qld.gov.au/26_4347.htm›. Robin, Libby, and Mike Smith. “Prologue.” Desert Channels: The Impulse To Conserve. Eds. Libby Robin, Chris Dickman and Mandy Martin. Collingwood: CSIRO Publishing, 2010. XIII-XVII. Rogers, Rudy E. Coalbed Methane: Principles and Practice. Englewood Cliffs: Prentice Hall, 1994. Sell, B.H. “T.E.P.L. Moura No.1 Well Completion Report.” October 1969. Queensland Digital Exploration Reports. Company Report 2899_1. Brisbane: Queensland Department of Resources and Mines. 26 Feb. 2013 ‹https://qdexguest.deedi.qld.gov.au/portal/site/qdex/search?REPORT_ID=2899&COLLECTION_ID=999›. Senate. Management of the Murray Darling Basin: Interim Report: The Impact of Coal Seam Gas on the Management of the Murray Darling Basin. Canberra: Rural Affairs and Transport References Committee, 2011. Schraufnagel, Richard, Richard McBane, and Vello Kuuskraa. “Coalbed Methane Development Faces Technology Gaps.” Oil & Gas Journal 88.6 (1990): 48-54. Trigger, David. “Mining, Landscape and the Culture of Development Ideology in Australia.” Ecumene 4 (1997): 161-180. Walters, Ronald L. Letter to Dennis Benbow. 29 August 1977. In Seton M. Porter, “Moura # 1: Testing Report: Methane Drainage of the Baralaba Coal Measures: A.T.P. 226P, Central Queensland, Australia.” October 1977, 11-14. Queensland Digital Exploration Reports. Company Report 6190_15. Brisbane: Queensland Department of Resources and Mines. 21 Feb. 2012 ‹https://qdexguest.deedi.qld.gov.au/portal/site/qdex/search?REPORT_ID=6190&COLLECTION_ID=999›. WHO (World Health Organization). International Standards for Drinking-Water. 3rd Ed. Geneva, 1971. Wilkinson, Rick. A Thirst for Burning: The Story of Australia’s Oil Industry. Sydney: David Ell Press, 1983. Wiltshire, M.J. “A Review to ATP 233P, 231P (210P) – Bowen/Surat Basins, Queensland for Houston Oil Minerals Australia, Inc.” 19 Jan. 1979. Queensland Digital Exploration Reports Database. Company Report 6816. Brisbane: Queensland Department of Resources and Mines. 21 Feb. 2012 ‹https://qdexguest.deedi.qld.gov.au/portal/site/qdex/search?REPORT_ID=6816&COLLECTION_ID=999›. Wooldridge, L.C.P. “Methane Drainage in the Bowen Basin – Queensland.” 25 Aug. 1978. Queensland Digital Exploration Reports Database. Company Report 6626_1. Brisbane: Queensland Department of Resources and Mines. 31 May 2012 ‹https://qdexguest.deedi.qld.gov.au/portal/site/qdex/search?REPORT_ID=6626&COLLECTION_ID=999›.
APA, Harvard, Vancouver, ISO, and other styles
30

Wang, Jing. "The Coffee/Café-Scape in Chinese Urban Cities." M/C Journal 15, no. 2 (May 2, 2012). http://dx.doi.org/10.5204/mcj.468.

Full text
Abstract:
Introduction

In this article, I set out to accomplish two tasks. The first is to map coffee and cafés in Mainland China in different historical periods. The second is to focus on coffee and cafés in the socio-cultural milieu of contemporary China in order to understand the symbolic value of the emerging coffee/café-scape. Cafés, rather than coffee, are at the centre of this current trend in contemporary Chinese cities. With instant coffee dominating as a drink, the Chinese have developed a cultural and social demand for cafés, but have not yet developed coffee palates.

Historical Coffee Map

In 1901, coffee was served in a restaurant in the city of Tianjin. This restaurant, named Kiessling, was run by a German chef, a former soldier who came to China with the Eight-Nation Alliance. At that time, coffee was reserved mostly for foreign politicians and military officials as well as wealthy businessmen—very few ordinary Chinese drank it. (For more history of Kiessling, including pictures and videos, see Kiessling.) Another group of coffee consumers came from the cultural elite—the young revolutionary intellectuals and writers with overseas experience. It was almost a fashion among the literary elite to spend time in cafés. However, this was negatively judged as “Western” and “bourgeois.” For example, in 1932, Lu Xun, one of the most important twentieth-century Chinese writers, commented on the café fashion of the 1920s (133-36), and listed the reasons why he would not visit one. He did not drink coffee because it was “foreigners’ food”, and he was too busy writing for the kind of leisure enjoyed in cafés. Moreover, he did not, he wrote, have the nerve to go to a café, and particularly not the Revolutionary Café that was popular among cultural celebrities at that time. He claimed that the “paradise” of the café was for geniuses and for handsome revolutionary writers (whom he described as having red lips and white teeth, whereas his teeth were yellow). His final complaint was that even if he did go to the Revolutionary Café, he would hesitate to walk in (Lu Xun 133-36). From Lu Xun’s list, we can recognise his nationalism and his resistance to what were identified as Western foods and lifestyles. It is also easy to feel his dissatisfaction with those dilettante revolutionary intellectuals who spent time in cafés, talking and enjoying Western food, rather than working. In contrast to Lu Xun’s resistance to coffee and café culture, another well-known writer, Zhang Ailing, frequented cafés when she lived in Shanghai from the 1920s to the 1950s. She wrote about the smell of the cakes and bread sold in Kiessling’s branch store located right next to her parents’ house (Yuyue). Born into a wealthy family and exposed to Western culture and food at a very young age, Zhang Ailing liked to spend her social and writing time in cafés, ordering her favourite cakes, hot chocolate, and coffee. When she left Shanghai and immigrated to the USA, coffee remained an important part of her writing life: its smell and taste reminded her of old friends and of Shanghai (Chunzi). In Zhang’s time, however, patronising a café was still a privileged, elite practice: cafés were located in foreign settlements, run by foreign chefs, and served mainly foreigners, wealthy businessmen, and cultural celebrities. From 1949, when the Chinese Communist Party established the People’s Republic of China, until the late 1970s, there were no coffee shops in Mainland China.
It was only when Deng Xiaoping introduced the neo-liberal, so-called “reform-and-open-up” economic policy that foreign commerce and products were again seen in China. In 1988, ten years after the implementation of Deng Xiaoping’s policy, the Nestlé coffee company made the first inroads into the mainland market, featuring homegrown coffee beans from Yunnan province (China Beverage News; Dong; ITC). Nestlé’s bottled instant coffee found its way into the Chinese market while avoiding a direct challenge to the tea culture. Nestlé packaged its coffee to resemble health food products and marketed it as a holiday gift suitable for friends and relatives. As a symbol of modernity and “the West”, coffee-as-gift meshed with the traditional Chinese cultural custom that values gift giving. It also satisfied a collective desire for foreign products (and contact with foreign cultures) during the economic reform era. Even today, with its competitively low price, instant coffee dominates coffee consumption at home, in the workplace, and on Chinese airlines.

While Nestlé aimed its product at native Chinese consumers, the multinational companies that later entered China’s coffee market, such as Sara Lee, mainly targeted international hotels such as IHG, Marriott, and Hyatt. The multinationals also favoured coffee shops like Kommune in Shanghai that offered more sophisticated kinds of coffee to foreign consumers and China’s upper class (Byers).

If Nestlé introduced coffee to ordinary Chinese families, it was Starbucks that introduced the coffee-based “third space” to urban life in contemporary China on a significant scale. Unlike the cafés of the pre-1949 era, Starbucks stores are accessible to ordinary Chinese citizens. The first in Mainland China opened in Beijing’s China World Trade Center in January 1999, targeting mainly white-collar workers and foreigners. Starbucks coffee shops provide a space for informal business meetings, chatting with friends, and relaxing and, having opened their 500th store in 2011, dominate the field in China. Starbucks outlets are located mainly in central business districts and airports, and the company plans to have 1,500 sites by 2015 (Starbucks).

Despite this massive presence, Starbucks constitutes only part of the café-scape in contemporary Chinese cities. There are two other kinds of cafés. One type is usually located in universities or residential areas and is frequented mainly by students or locals working in cultural professions. A representative of this kind is Sculpting in Time Café. In November 1997, two years before the opening of the first Starbucks in Beijing, two newlywed college graduates opened the first small Sculpting in Time Café near Beijing University’s East Gate. This has since been expanded into a chain that boasts 18 branches on the Mainland. (For more about its history, see Sculpting in Time Café.) Interestingly, both Starbucks and Sculpting in Time Café acquired their names from literature: Starbucks from Moby Dick, and Sculpting in Time from the Russian filmmaker Andrei Tarkovsky’s film diary of the same name. For Chinese students of literature and the arts, drinking coffee is less about acquiring more energy to accomplish their work, and more about entering a sensual world in which the aroma of coffee mixes with the sounds of the coffee machine and the music, as well as the lighting of the space. More importantly, cafés with this ambience become, in themselves, cultural sites associated with literature, films, and music.
Owners of this kind of café are often lovers of foreign literatures, films, and cultures, and their cafés host various cultural events, including forums, book clubs, movie screenings, and music clubs. Generally speaking, the coffee served in this kind of café is simpler than in the kind discussed below.

The third type of café includes those located in tourist and entertainment sites such as art districts, bar areas, and historical sites, and is frequented by foreign and native tourists, artists, and other cultural workers. If Starbucks cultivates a fast-paced business/professional atmosphere, and Sculpting in Time Cafés an artsy and literary one, this third kind of café is more like an upscale “bar” with trained baristas serving complicated coffees and emphasising their flavour. These coffee shops are more expensive than the other kinds, with an average price three times that of Starbucks. Currently, cafés of this type are found only in “first-tier” cities, usually in art districts and tourist areas such as Beijing’s 798 Art District and Nanluo Guxiang, Shanghai’s Tai Kang Road (a.k.a. “the art street”), and Hangzhou’s West Lake area. While Nestlé and Starbucks use coffee beans grown in Yunnan province, these “art cafés” are more inclined to use imported beans from suppliers like Sara Lee.

Coffee and Cafés in Contemporary China

After just ten years, there are hundreds of cafés in Chinese cities. Why has there been such a demand for coffee or, more accurately, for cafés, in such a short period of time? The first reason is the lack of “third space” environments in Mainland China. Before cafés appeared in the late 1990s, chains like KFC (which opened its first store in 1987) and McDonald’s (whose first store opened in 1990) filled this role for urban residents, providing locations where customers could experience Western food, meet friends, work, or read. In fact, KFC and McDonald’s were once very popular with college students looking for a place to study. Both chains had relatively clean dining environments and good lighting, as well as air conditioning in the summer and heating in the winter, which most Chinese university dormitories do not provide. However, since neither chain was set up to be a café, and customers occupying seats for long periods while ordering minimal amounts of food or drink affected profits, staff members began to indirectly ask customers to leave after dining. At the same time, as more people could afford to eat at KFC and McDonald’s, their fast food became more and more popular, especially among young people. As a consequence, both types of chain restaurant became noisy and crowded and, thus, no longer ideal for reading, studying, or meeting with friends.

Although tea has been a traditional drink in Chinese culture, traditional teahouses were expensive places more suited to business meetings or to the cultural and intellectual elite. Since almost every family owns a tea set and can readily purchase tea, friends and family would usually make and consume tea at home. In recent years, however, new kinds of teahouses have emerged, similar in style to cafés and targeting the younger generation with more affordable prices and a wider range of choices, so the lack of a “third space” does not fully explain the café boom.

Another factor in the popularity of cafés has been the development and uptake of Internet technology, including the increasing use of laptops and wireless Internet in recent years.
The Internet has been available in China since the late 1990s, while computers and then laptops entered ordinary Chinese homes in the early twenty-first century. The IT industry has created not only a new field of research and production, but has also fostered new professions and demands. In particular, a new socially acceptable profession has emerged in Mainland China in recent years: freelancing in areas such as graphic design, photography, writing, film, music, and fashion. Most freelancers’ work is computer- and Internet-based, and cafés provide suitable working space, with wireless service and the bonus of coffee, which is, first of all, somatically stimulating. In addition, the emergence of the creative and cultural industries (which are supported by the Chinese government) has created work for these freelancers and, arguably, an increasing demand for café-based third spaces where such people can meet, talk, and work.

Furthermore, the flourishing of cafés in first-tier cities is part of the “aesthetic economy” (Lloyd 24) that caters to the making and selling of lifestyle experience. Alongside foreign restaurants, bars, galleries, and design firms, cafés contribute to city branding and link a city to the global urban network. Cafés, like restaurants, galleries, and bars, provide a space for the flow of global commodities, as well as for the human flow of tourists, travelling artists, freelancers, and cultural specialists.

Finally, cafés provide a type of service that fosters friendly owner/waiter-customer relations. During the planned-economy era, most stores and hotels in China were State-owned, staff salaries were not related to individual performance, and indifferent (and even unfriendly) service was common. During the economic reform era, privately owned stores and shops began to replace State-owned ones. At the same time, a large number of people moved from the countryside into the cities seeking opportunities; most had little if any professional training and so could only find work in factories or in the service industry. Most café employees, by contrast, are urban, with better educational backgrounds, and many are already familiar with coffee culture. In addition, café owners, particularly those of places like Sculpting in Time Café, often invest in creating a positive, community atmosphere, learning about their customers and sharing personal experiences with their regular clients. This leads to my next point: the 1980s generation’s need for a social community.

Cafés’ Symbolic Value—Community

A demand for a sense of community among the generation born in the 1980s is a unique socio-cultural phenomenon in China, one which paradoxically co-exists with their desire for individualism. The Chinese government introduced the “One Child Policy” in 1979 to slow the country’s rapid population growth, and the generations born under this policy are often called “the lonely generations,” growing up with both parents working full-time. At the same time, they are “the generation of me,” labelled as spoiled, self-centred, and obsessed with consumption (de Kloet; Liu; Rofel; Wang). The individuals of this generation, now aged in their 20s and 30s, constitute the primary consumers of coffee in China. Whereas individualism is an important value for them, a sense of community is also desirable as compensation for their lack of siblings. Furthermore, the 1980s generation has also benefitted from the university expansion policy implemented in 1999.
Since then, China has witnessed a surge of university students and graduates who not only received scientific and other course-based knowledge, but also had a better chance of being exposed to foreign cultures through books, music, and movies. With this interesting tension between individualism and collectivism, the atmosphere provided by cafés has fostered a series of curious temporary communities built on cultural and culinary taste. Interestingly, it has become an aspiration of many young college students and graduates to open a community-space style café in a city. One of the best examples is the new Henduoren’s (Many People’s) Café. This was a project initiated by Wen Erniu, a recent college graduate who wanted to open a café in Beijing but did not have sufficient funds to do so. She posted a message on the Internet asking people to invest a minimum of US$316 to open a café with her and, with 78 investors, the café opened in September 2011 in Beijing (see pictures of Henduoren’s Café). In an interview with the China Daily, Wen Erniu stated that, “To open a cafe was a dream of mine, but I could not afford it […] We thought opening a cafe might be many people’s dream […] and we could get together via the Internet to make it come true” (qtd. in Liu).

Conclusion: Café Culture and (Instant) Coffee in China

There is a Chinese saying that if you hate someone, just persuade him or her to open a coffee shop. Since cafés provide spaces where one can spend a relatively long time for little financial outlay, owners have to raise prices to cover their expenses, which can result in fewer customers. In response, cafés—particularly those with a cultural and literary ambience—host cultural events to attract people, and/or they offer food and wine along with coffee. The high prices, however, remain. In fact, the average price of coffee in China is often higher than in Europe and North America. For example, a medium Starbucks caffè latte in China averaged around US$4.40 in 2010, according to the price list of a Starbucks outlet in Shanghai, and the price has recently increased again (Xinhua). This partially explains why instant coffee is still so popular in China: a bag of instant Nestlé coffee cost only about US$0.25 in a Beijing supermarket in 2010, and requires only hot water, which is freely available almost everywhere in China, in any restaurant, office building, or household.

For all its habit-forming appeal, however, coffee has not yet become a customary, let alone necessary, drink for most Chinese. Moreover, while many, especially those of the older generations, can discern the quality and varieties of tea, very few can judge the quality of the coffee served in cafés. As a result, few Mainland Chinese coffee consumers have a purely somatic demand for coffee, craving its smell or taste, and the highly sweetened and creamed instant coffee offered by companies like Nestlé or Maxwell has largely shaped the current Chinese palate for coffee.

Ben Highmore has proposed that “food spaces (shops, restaurants and so on) can be seen, for some social agents, as a potential space where new ‘not-me’ worlds are encountered” (396). He goes on to argue that “how these potential spaces are negotiated—the various affective registers of experience (joy, aggression, fear)—reflect the multicultural shapes of a culture (its racism, its openness, its acceptance of difference)” (396).
Cafés in contemporary China provide spaces where one encounters and constructs new “not-me” worlds and, more importantly, new “with-me” worlds. While café-going communicates an appreciation of, and desire for, new lifestyles and new selves, it is to be hoped that in the near future coffee will also be appreciated for its smell, taste, and other benefits. It is also necessary, of course, that future Chinese coffee consumers recognise the rich and complex cultural, political, and social issues behind the coffee economy in the era of globalisation.

References

Byers, Paul [former Managing Director, Sara Lee’s Asia Pacific]. Pers. comm. Apr. 2012.
China Beverage News. “Nestlé Acquires 70% Stake in Chinese Mineral Water Producer.” 2010. 31 Mar. 2012 ‹http://chinabevnews.wordpress.com/2010/02/21/nestle-acquires-70-stake-in-chinese-mineral-water-producer›.
Chunzi. 张爱玲地图 [The Map of Eileen Chang]. 汉语大词典出版 [Hanyu Dacidian Chubanshe], 2003.
de Kloet, Jeroen. China with a Cut: Globalization, Urban Youth and Popular Music. Amsterdam: Amsterdam UP, 2010.
Dong, Jonathan. “A Caffeinated Timeline: Developing Yunnan’s Coffee Cultivation.” China Brief (2011): 24-26.
Highmore, Ben. “Alimentary Agents: Food, Cultural Theory and Multiculturalism.” Journal of Intercultural Studies 29.4 (2008): 381-98.
ITC (International Trade Center). The Coffee Sector in China: An Overview of Production, Trade and Consumption. 2010.
Liu, Kang. Globalization and Cultural Trends in China. Honolulu: U of Hawai’i P, 2004.
Liu, Zhihu. “From Virtual to Reality.” China Daily Dec. 2011. 31 Mar. 2012 ‹http://www.chinadaily.com.cn/life/2011-12/26/content_14326490.htm›.
Lloyd, Richard. Neobohemia: Art and Commerce in the Postindustrial City. London: Routledge, 2006.
Lu, Xun. “Geming Kafei Guan [Revolutionary Café].” San Xian Ji. Taibei Shi: Feng Yun Shi Dai Chu Ban Gong Si: Fa Xing Suo Xue Wen Hua Gong Si, Minguo 78 (1989): 133-36.
Rofel, Lisa. Desiring China: Experiments in Neoliberalism, Sexuality, and Public Culture. Durham and London: Duke UP, 2007. 1-30.
“Starbucks Celebrates Its 500th Store Opening in Mainland China.” Starbucks Newsroom Oct. 2011. 31 Mar. 2012 ‹http://news.starbucks.com/article_display.cfm?article_id=580›.
Wang, Jing. High Culture Fever: Politics, Aesthetics, and Ideology in Deng’s China. Berkeley, Los Angeles, London: U of California P, 1996.
Xinhua. “Starbucks Raises Coffee Prices in China Stores.” Xinhua News Jan. 2012. 31 Mar. 2012 ‹http://news.xinhuanet.com/english/china/2012-01/31/c_131384671.htm›.
Yuyue, ed. “On the History of the Western-Style Restaurants: Aileen Chang A Frequent Customer of Kiessling.” China.com.cn 2010. 31 Mar. 2012 ‹http://www.china.com.cn/culture/txt/2010-01/30/content_19334964.htm›.
APA, Harvard, Vancouver, ISO, and other styles
