Journal articles on the topic 'Manufactures Australia Costs'


Consult the top 45 journal articles for your research on the topic 'Manufactures Australia Costs.'


1

Tyzack, Daniel, and Luke Sullivan. "BP's evolution in the face of Australia's transforming fuel supply chain: ensuring business sustainability as well as security of supply for the nation's demands." APPEA Journal 58, no. 2 (2018): 637. http://dx.doi.org/10.1071/aj17203.

Abstract:
BP has embraced the changing landscape of the fuel supply chain in Australia. Historically, fuel for the Australian market has been manufactured and supplied domestically. However, Australia’s future fuel supply will rely on a globalised supply chain to meet local needs. In response, BP has (among other initiatives) transformed its operations and maintenance approach to terminals and fuel distribution within Australia to ensure a sustainable business model, offsetting the trend of high asset-ownership costs and low efficiency. In terminal management, BP’s creation of Australian Terminals Operation Management (ATOM), a joint venture between BP and UGL, differentiates BP from its peers. This model has proven fundamental to maintaining a competitive position in the Australian fuel industry. BP’s unique approach to meeting customer demands through the ATOM business is a success story that comes from pre-empting and embracing change.
2

Marks, Robert. "A Freer Market for Heroin in Australia: Alternatives to Subsidizing Organized Crime." Journal of Drug Issues 20, no. 1 (January 1990): 131–76. http://dx.doi.org/10.1177/002204269002000109.

Abstract:
The problems associated with illicit drug use in general, and the illicit use of heroin in particular, have led to stringent attempts by Australian governments to enforce the laws against drug abuse. The strongest reaction of the criminal justice system has been toward heroin, with a total prohibition on heroin importation, manufacture, distribution, possession, and use. Before attempting to evaluate the extent and costs of heroin use today, this paper reviews the evolution of laws and social attitudes toward heroin in Australia. Using an economic framework for analyzing the black market in heroin, the paper examines proposals for enforcing the prohibition by tightening the supply side, and by reducing the demand for heroin. It argues that attempts to restrict the supply have had the effect of increasing the costs borne not only by the users but by society at large, through increases in acquisitive crime and police corruption. On utilitarian grounds it concludes that the costs to society of the prohibition far outweigh the costs of a policy of freer availability, and suggests that a policy of government supply of price-controlled heroin and methadone would be far preferable to today's failed policy of prohibition.
3

Behdarvand, Behrad, Emily A. Karanges, and Lisa Bero. "Pharmaceutical industry funding of events for healthcare professionals on non-vitamin K oral anticoagulants in Australia: an observational study." BMJ Open 9, no. 8 (August 2019): e030253. http://dx.doi.org/10.1136/bmjopen-2019-030253.

Abstract:
Objectives: To describe the nature, frequency and content of non-vitamin K oral anticoagulant (NOAC)-related events for healthcare professionals sponsored by the manufacturers of the NOACs in Australia. A secondary objective was to compare these data with the rate of dispensing of the NOACs in Australia.
Design and setting: This cross-sectional study examined consolidated data from publicly available Australian pharmaceutical industry transparency reports from October 2011 to September 2015 on NOAC-related educational events. Data from April 2011 to June 2016 on NOAC dispensing, subsidised under Australia’s Pharmaceutical Benefits Scheme (PBS), were obtained from the Department of Health and the Department of Human Services.
Main outcome measures: Characteristics of NOAC-related educational events, including costs (in Australian dollars, $A), numbers of events, information on healthcare professional attendees and content of events; and NOAC dispensing rates.
Results: During the study period, there were 2797 NOAC-related events, costing manufacturers a total of $A10 578 745. Total expenditure on meals and beverages at all events was $A4 238 962. Events were predominantly attended by general practitioners (42%, 1174/2797), cardiologists (35%, 977/2797) and haematologists (23%, 635/2797). About 48% (1347/2797) of events were held in non-clinical settings, mainly restaurants, bars and cafes. Around 55% (1551/2797) of events consisted of conferences, meetings or seminars. Analysis of the content presented at two events detected promotion of NOACs for unapproved indications, an emphasis on a favourable benefit/harm profile, and close ties between all speakers and the manufacturers of the NOACs. Following PBS listings relevant to each NOAC, both the number of events related to that NOAC and the prescribing of that NOAC increased.
Conclusions: Our findings suggest that the substantial investment in NOAC-related events made by four pharmaceutical companies had a promotional purpose. Healthcare professionals should seek independent information on newly subsidised medicines from, for example, government agencies or drug bulletins.
4

Marshall, Dale E. "386 MECHANICAL ASPARAGUS HARVESTING STATUS--WORLDWIDE." HortScience 29, no. 5 (May 1994): 486d—486. http://dx.doi.org/10.21273/hortsci.29.5.486d.

Abstract:
For over 86 years, producers, processors, engineers, and equipment manufacturers have attempted to mechanize the harvest of asparagus. Over 60 U.S. patents have been issued. Probably the most sophisticated harvester tested was started in 1987 by Edgells Birdseye, Cowra, Australia. After successful field tests of the 3-row selective (fiber-optic) harvester for flat-bed green asparagus used in canning, 3 more were built at a cost of $US 4.5 million, and they harvested 500 acres until 1991, when the company ceased canning. Recovery was 30 to 80%, with 50% being typical. Wollongong University in Australia is now researching a selective (fiber-optic) harvester for flat-bed green asparagus. It utilizes multiple side-by-side 3-in.-wide by 24-in.-dia. rubber gripper discs which rotate at ground speed. No harvester prototype has been commercially acceptable to the asparagus industry due to poor selectivity, low overall recovery (low yield relative to hand harvest), mechanical damage to spears, low field capacity per harvester, or overall harvesting costs that exceed those for hand harvesting. The reality may be that asparagus production will cease in the traditional geographical areas where growing costs and labor costs are high, although niche fresh markets may help some growers survive.
5

Kim, Dong-Woon. "A British Business in Australia before World War II: J. & P. Coats, Cotton Thread Manufacturers." Journal of Eurasian Studies 8, no. 3 (September 2011): 167–90. http://dx.doi.org/10.31203/aepa.2011.8.3.008.

6

THAMPAPILLAI, DODO J. "EZRA MISHAN’S COST OF ECONOMIC GROWTH: EVIDENCE FROM THE ENTROPY OF ENVIRONMENTAL CAPITAL." Singapore Economic Review 61, no. 03 (June 2016): 1640018. http://dx.doi.org/10.1142/s021759081640018x.

Abstract:
Ezra Mishan’s (1967) famous articulation of the costs of economic growth included, among others, the rearrangement and loss of nature. This paper builds on this theme by recourse to two important concepts in science, namely the assimilative capacity of nature and the entropy law of thermodynamics. These concepts enable the formulation of an alternative conceptual framework for the explanation of national income (Y) in terms of factor utilization. In this framework, environmental capital (KN) is an explicit factor besides manufactured capital (KM) and labor (L). A simple methodology that permits the estimation of the volume of KN utilized is used to demonstrate that economic growth is an entropic process. Empirical illustration of KN utilization as point estimates is made for Australia and South Korea.
7

Matthews, Alison, Laura Ruykys, Bill Ellis, Sean FitzGibbon, Daniel Lunney, Mathew S. Crowther, Alistair S. Glen, et al. "The success of GPS collar deployments on mammals in Australia." Australian Mammalogy 35, no. 1 (2013): 65. http://dx.doi.org/10.1071/am12021.

Abstract:
Global Positioning System (GPS) wildlife telemetry collars are being used increasingly to understand the movement patterns of wild mammals. However, there are few published studies on which to gauge their general utility and success. This paper highlights issues faced by some of the first researchers to use GPS technology for terrestrial mammal tracking in Australia. Our collated data cover 24 studies where GPS collars were used in 280 deployments on 13 species, including dingoes or other wild dogs (Canis lupus dingo and hybrids), cats (Felis catus), foxes (Vulpes vulpes), kangaroos (Macropus giganteus), koalas (Phascolarctos cinereus), livestock guardian dogs (C. l. familiaris), pademelons (Thylogale billardierii), possums (Trichosurus cunninghami), quolls (Dasyurus geoffroii and D. maculatus), wallabies (Macropus rufogriseus and Petrogale lateralis), and wombats (Vombatus ursinus). Common problems encountered were associated with collar design, the GPS, VHF and timed-release components, and unforeseen costs in retrieving and refurbishing collars. We discuss the implications of collar failures for research programs and animal welfare, and suggest how these could be avoided or improved. Our intention is to provide constructive advice so that researchers and manufacturers can make informed decisions about using this technology, and maximise the many benefits of GPS while reducing the risks.
8

Vitelli, Joseph, and Barbara Madigan. "Evaluating the efficacy of the EZ-Ject herbicide system in Queensland, Australia." Rangeland Journal 33, no. 3 (2011): 299. http://dx.doi.org/10.1071/rj11038.

Abstract:
The EZ-Ject herbicide system was evaluated as a stem-injection method for controlling woody weeds in a range of situations where traditional chemical application methods have limited scope. The equipment was trialled on three Queensland weed species: pond apple (Annona glabra), velvety tree pear (Opuntia tomentosa) and yellow oleander (Cascabela thevetia); at five cartridge densities (0, 1, 2, 3 and 4); and with two herbicides (glyphosate and imazapyr). Cartridges filled with imazapyr were significantly more effective at controlling the three woody weed species than those filled with glyphosate. Injecting plants with three imazapyr cartridges resulted in plant kills ranging from 93 to 100%, compared with glyphosate kills of 17 to 100%. Pond apple was the most susceptible species, requiring one imazapyr cartridge or two glyphosate cartridges to kill 97 and 92% of the treated plants, respectively. Plant mortality increased as the number of cartridges injected increased. Mortality did not differ significantly between treatments receiving three and four imazapyr cartridges, as these cartridge densities met the criterion of injecting one cartridge per 10-cm basal circumference, the rate recommended by the manufacturers for treating large plants (>6.35 cm in diameter at breast height). The cost of treating a weed infestation of 1500 plants ha–1 with three cartridges per tree is $1070 ha–1, with labour costs accounting for 16% of the total. The high chemical costs would preclude this technique from broad-scale use, but the method could have application for treating woody weeds in sensitive, high-conservation areas.
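The per-hectare cost quoted in the abstract above can be decomposed into labour and chemical components. A minimal sketch in Python, using only the reported figures (1500 plants ha–1, three cartridges per tree, $1070 ha–1 total, labour at 16% of total); the derived per-cartridge cost is an illustration, not a figure reported by the study:

```python
# Cost decomposition for the EZ-Ject figures reported in the abstract.
# The per-cartridge cost at the end is derived, not reported by the study.
plants_per_ha = 1500
cartridges_per_plant = 3
total_cost_per_ha = 1070.0        # AUD per hectare, as reported
labour_share = 0.16               # labour costs = 16% of the total

labour_cost = total_cost_per_ha * labour_share            # AUD/ha spent on labour
chemical_cost = total_cost_per_ha - labour_cost           # remainder: herbicide cartridges
cartridges_per_ha = plants_per_ha * cartridges_per_plant  # cartridges used per hectare
cost_per_cartridge = chemical_cost / cartridges_per_ha    # implied cost of one cartridge

print(f"labour ${labour_cost:.0f}/ha, chemical ${chemical_cost:.0f}/ha, "
      f"~${cost_per_cartridge:.2f} per cartridge")
```

At the reported rates this implies roughly $0.20 per cartridge, consistent with the abstract's conclusion that chemical costs, not labour, dominate the total.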
9

Karanges, Emily Aspasia, Conrad Nangla, Lisa Parker, Alice Fabbri, Cynthia Farquhar, and Lisa Bero. "Pharmaceutical industry payments and assisted reproduction in Australia: a retrospective observational study." BMJ Open 11, no. 9 (August 31, 2021): e049710. http://dx.doi.org/10.1136/bmjopen-2021-049710.

Abstract:
Objectives: To investigate the extent and nature of pharmaceutical industry payments related to fertility and assisted reproduction in Australia.
Design and setting: This retrospective observational study employed four databases compiled from publicly available pharmaceutical industry transparency reports on educational event sponsorship (October 2011–April 2018), payments to healthcare professionals (October 2015–April 2018) and patient group support (January 2013–December 2017). Analyses were restricted to fertility-related payments by two major manufacturers of fertility medicines in Australia: Merck Serono and Merck, Sharp and Dohme (MSD).
Primary and secondary outcome measures: Descriptive statistics on fertility-related payments and other transfers of value (counts, total and median costs in Australian dollars) for educational events and to healthcare professionals and patient groups.
Results: Between October 2011 and April 2018, Merck Serono and MSD spent $A4 522 263 on 970 fertility-related events for healthcare professionals, including doctors, nurses and fertility scientists. 56.8% (551/970) of events were held by fertility clinics and 29.3% (284/970) by professional medical associations. Between October 2015 and April 2018, Merck Serono spent $A403 800 across 177 payments to 118 fertility healthcare professionals, predominantly for educational event attendance. Recipients included obstetricians and gynaecologists (76.3% of payments, 135/177), nurses (11.3%, 20/177) and embryologists/fertility scientists (9.6%, 17/177). The highest-paid healthcare professionals held leadership positions in major fertility clinics. Merck Serono provided $A662 850 to fertility-related patient groups for advocacy and education (January 2013–December 2017).
Conclusions: The pharmaceutical industry sponsored a broad range of fertility clinicians and organisations, including doctors, nurses, embryologists, professional medical organisations, fertility clinics and patient groups. This sponsorship may contribute to the overuse of fertility services.
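The outcome measures in the study above are plain descriptive statistics (counts, totals and medians in Australian dollars). A minimal sketch of that computation; the payment amounts below are invented for illustration and are not data from the study:

```python
from statistics import median

# Hypothetical payment amounts in AUD (illustrative only, not study data).
payments = [1200.0, 450.0, 3300.0, 780.0, 2100.0]

count = len(payments)   # number of payments
total = sum(payments)   # total transfer of value
mid = median(payments)  # median payment

print(f"n={count}, total=$A{total:,.0f}, median=$A{mid:,.0f}")
```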
10

Ross, Graeme. "Importance of ambient cure for high-temperature coatings." APPEA Journal 60, no. 2 (2020): 654. http://dx.doi.org/10.1071/aj19087.

Abstract:
Due to increasing demand for energy around the world, the prevalence of global megaprojects within the oil and gas industry is increasing. Process pipes, valves and vessels may be manufactured and coated in China or Korea, where labour costs are comparatively low, before being transported to the final project location, such as Western Australia. During the transport and fabrication phase, coated steelwork may spend months or even years exposed to harsh offshore or coastal environments before going into service. This means coatings must be able to provide protection throughout an extensive construction phase, in addition to the in-service lifetime of the steel. This paper examines the demands on high temperature performance coatings both before and once in service. Test methodology and exposure data are reviewed with a focus on how modern aluminium pigmented silicone coatings provide a solution to the corrosion challenges faced in global megaprojects.
11

Wales, W. J., J. W. Heard, C. K. M. Ho, C. M. Leddin, C. R. Stockdale, G. P. Walker, and P. T. Doyle. "Profitable feeding of dairy cows on irrigated dairy farms in northern Victoria." Australian Journal of Experimental Agriculture 46, no. 7 (2006): 743. http://dx.doi.org/10.1071/ea05357.

Abstract:
Milk production per cow and per farm in the irrigated region of northern Victoria has increased dramatically over the past two decades. However, these increases have involved large increases in inputs, and average productivity gains on farms have been modest. Before the early 1980s, cows were fed predominantly pasture and conserved fodder. There is now large diversity in feeding systems, and feed costs comprise 40–65% of total costs on irrigated dairy farms. This diversity in feeding systems has increased the need to understand the nutrient requirements of dairy cows and the unique aspects of nutrient intake and digestion in cows at grazing. Principles of nutrient intake and supply to the grazing dairy cow from the past 15 years’ research in northern Victoria are summarised and gaps in knowledge for making future productivity gains are identified. Moreover, since the majority of the milk produced in south-eastern Australia is used in the manufacture of products for export, dairy companies have increased their interest in value-added dairy products that better meet nutritional requirements or provide health benefits for humans. Finally, some examples of the impacts of farm system changes on operating profit for some case study farms in northern Victoria are presented to illustrate the need for thorough analysis of such management decisions.
12

Heorhiiants, V. A. "Trends of ex tempore drug preparation in Ukraine. Ways of their introduction into the practice." Infusion & Chemotherapy, no. 3.2 (December 15, 2020): 51–52. http://dx.doi.org/10.32902/2663-0338-2020-3.2-51-52.

Abstract:
Background. Advantages of the ex tempore formulation include the ability to provide the drug in the form and dosage, not available on the pharmaceutical market, but necessary for a particular patient; the ability to include the required ingredient in any desired form; the option of combining drugs; the manufacture of drugs without flavorings, preservatives and stabilizers; the possibility of adjusting of the drug taste; the possibility of manufacturing drugs that are in short supply on the market; no possibility of counterfeiting; production of specific drugs. Objective. Assess the current situation and trends in the preparation of oncological drugs ex tempore. Materials and methods. Analysis of the literature on this topic. Results and discussion. In oncology the possibilities of ex tempore preparation include the manufacture of chemotherapeutic agents with individual dosage, of any adjuvant agents without excipients, of radiopharmaceuticals, combined and orphan drugs. In Ukraine, ex tempore formulation is not widespread: as of 2017, the percentage of drugs prepared in such way was 1.7 %, in 2018 – 1.43 %, in 2019 – 1.41 %. The situation is different in the European Union. For instance, in Poland the pharmacy receives a license only after creating the conditions for the manufacture of drugs, in Estonia any pharmacy must be able to produce non-sterile drugs, in Latvia 50 % of pharmacies have a license to manufacture drugs. The popularity of ex tempore preparation is also growing in the other countries (Australia, USA, Brazil, Jordan). The main areas of application of ex tempore drugs include hormone replacement therapy, analgesia, dermatology, chemotherapy, ophthalmology, treatment of orphan diseases, parenteral nutrition. Civilized countries often choose to develop the hospital pharmacy. 
However, there are a number of problems, including a lack of well-trained staff, the high cost of equipment and of maintaining sterile facilities, constant changes in regulatory requirements and the need to gain consumers’ trust. Conclusions. 1. Ex tempore drug preparation has a number of advantages, in particular the ability to provide a drug in an individual form and dosage, the ability to combine drugs and the manufacture of specific drugs. 2. Extemporaneous preparation of drugs is especially important for oncology. 3. In contrast to European Union countries, ex tempore drug preparation is not widespread in Ukraine. 4. Problems of ex tempore drug preparation include a lack of trained staff, high equipment costs, and constant changes in regulatory requirements.
13

Rand, Leah Z., and Aaron S. Kesselheim. "An International Review of Health Technology Assessment Approaches to Prescription Drugs and Their Ethical Principles." Journal of Law, Medicine & Ethics 48, no. 3 (2020): 583–94. http://dx.doi.org/10.1177/1073110520958885.

Abstract:
In many countries, health technology assessment (HTA) organizations determine the economic value of new drugs and make recommendations regarding appropriate pricing and coverage in national health systems. In the US, recent policy proposals aimed at reducing drug costs would link drug prices to six countries: Australia, Canada, France, Germany, Japan, and the UK. We reviewed these countries’ methods of HTA and guidance on price and coverage recommendations, analyzing methods and guidance documents for differences in (1) the methodologies HTA organizations use to conduct their evaluations and (2) considerations they use when making recommendations. We found important differences in the methods, interpretations of HTA findings, and condition-specific carve-outs that HTA organizations use to conduct evaluations and make recommendations. These variations have ethical implications because they influence the recommendations of HTA organizations, which affect access to the drug through national insurance and price negotiations with manufacturers. The differences in HTA approaches result from the distinct political, social, and cultural contexts of each organization and its value judgments. New cost-containment policies in the US should consider the ethical implications of the HTA reviews that they are considering relying on to negotiate drug prices and what values should be included in US pricing policy.
14

Antille, Diogenes L., John McL Bennett, and Troy A. Jensen. "Soil compaction and controlled traffic considerations in Australian cotton-farming systems." Crop and Pasture Science 67, no. 1 (2016): 1. http://dx.doi.org/10.1071/cp15097.

Abstract:
A literature review was conducted to collate best practice techniques for soil compaction management within cotton-farming systems in Australia. Universally negative effects of traffic-induced soil compaction on the whole-farm system and the wider environment include: (i) increased gap between attainable and potential yields, (ii) increased costs of energy and labour, (iii) reduced fertiliser-use efficiency, (iv) reduced water use efficiency (irrigation and rainfall), (v) increased tillage intensity. Knowledge gaps that merit research priority, and research strategies, are suggested. These include: (i) identifying wider impacts on farm economics to guide decision-making and development of decision support systems that capture the effects of compaction on fertiliser, water, and energy use efficiency; (ii) predicting risks at the field or subfield scale and implementing precision management of traffic compaction; (iii) canopy management at terminal stages of the crop cycle to manipulate soil-moisture deficits before crop harvest, thereby optimising trafficability for harvesting equipment; (iv) the role of controlled traffic farming (CTF) in mitigating greenhouse gas emissions and loss of soil organic carbon, and in enhancing fertiliser and water-use efficiencies; (v) recent developments in tyre technology, such as low ground-pressure tyres, require investigation to assess their cost-effectiveness compared with other available options; and (vi) catchment-scale modelling incorporating changes in arable land-use, such as increased area under CTF coupled with no- or minimum-tillage, and variable rate technology is suggested. Such modelling should assess the potential of CTF and allied technologies to reduce sediment and nutrient losses, and improve water quality in intensively managed arable catchments. Resources must be efficiently managed within increasingly sophisticated farming systems to enable long-term economic viability of cotton production. 
Agronomic and environmental performance of cotton farming systems could be improved with a few changes, and possibly, at a reasonable cost. Key to managing soil compaction appears to be encouraging increased adoption of CTF. This process may benefit from financial support to growers, such as agri-environmental stewardships, and it would be assisted by product customisation from machinery manufacturers.
15

Kovacs, Aaron C., and Tran-Lee Kaing. "Point-of-care computer-assisted design and manufacturing technology and its utility in post-traumatic mandibular reconstruction: An Australian public hospital experience." SAGE Open Medical Case Reports 10 (January 2022): 2050313X2211037. http://dx.doi.org/10.1177/2050313x221103733.

Abstract:
Application of load-bearing osteosynthesis plates is the current gold-standard management for complex mandibular fractures. Traditionally, this has required a transcutaneous submandibular approach, carrying with it the risk of damage to the facial nerve and obvious extraoral scarring. The existing literature describes the use of computer-assisted design and manufacturing technology through external vendors to aid transoral mandibular reconstruction. However, reliance on third-party manufacturers comes with significant drawbacks, notably increased financial costs and manufacturing delays. We describe our experience in using point-of-care three-dimensional-printed surgical models to aid the application of mandibular reconstruction plates. Utilising a virtual three-dimensional reconstruction of the patient’s preoperative computed tomography of the facial bones, we fabricate a custom model of the patient’s mandible with the department’s in-house three-dimensional printer. Stock plates are subsequently pre-bent and adapted to the three-dimensional model, with plate and screw positions marked and screw lengths measured with callipers. By using a custom three-dimensional-printed surgical model to pre-contour the plates, we are able to position stock reconstruction plates via a transoral approach. Moreover, our unit’s utilisation of in-house computer-assisted design and manufacturing software and hardware allows us to deliver a same-day turnaround for both surgical planning and performing the operation. Patient-specific surgical planning guides can facilitate the safe and efficient transoral application of mandibular reconstruction plates. Moreover, the use of point-of-care computer-assisted design and manufacturing technology ensures timely and cost-effective manufacturing of the necessary biomodel.
16

Harris, Gregory Edward. "Wellsite serviced power model for the CSG industry." APPEA Journal 58, no. 2 (2018): 715. http://dx.doi.org/10.1071/aj17186.

Abstract:
CD Power (CDP) was challenged by a coal seam gas (CSG) operator to develop a wellsite power supply solution that would lower the operator’s running costs ($/kWh) while improving power availability from the low 90s to above 97%. Furthermore, the operator sought a supply, install and maintain approach, resulting in a cheaper, more reliable outsourced energy solution. Typically, energy supply to wellsites in Australia consists of expensive, inflexible and slow-to-install high-voltage (HV) cable reticulated to each well. As wellsites deplete, these HV cables cannot be rerouted, resulting in an expensive loss of capital. These limitations prompted operators to seek a highly mobile and cost-focused solution. CDP was contracted to deliver an innovative program of power solutions for a major CSG operator at 50 wellsites in 2017. The contract is to design, manufacture, install and maintain mobile 3-phase power supply units for wellsites. These units are powered by gas reciprocating engine generators and solar battery modules with synchronising switchboards. The full wellsite serviced power contracting model is a first of its kind for the CSG industry and is based on delivering high power availability and collaborative field service performance. CDP anticipates that, by providing a high level of technical innovation and superior performance, it will see the model extended across the industry and challenge the traditional economics of HV wellsite power. This paper addresses the challenges faced in moving from a traditional segregated power model to a performance-based model at a time when the power industry is surrounded by innovation.
17

Anisimova, Tatiana. "The effects of corporate brand symbolism on consumer satisfaction and loyalty." Asia Pacific Journal of Marketing and Logistics 28, no. 3 (June 13, 2016): 481–98. http://dx.doi.org/10.1108/apjml-05-2015-0086.

Abstract:
Purpose – The purpose of this paper is to test the effects of corporate brand symbolism on consumer satisfaction and loyalty on a sample of Australian automobile consumers. Design/methodology/approach – Survey research was employed to test the study hypotheses. Regression analysis was used to evaluate the relationships between an independent variable (corporate brand symbolism) and dependent variables (consumer satisfaction and loyalty). Findings – Support was found for all hypotheses formulated in this study. Regression results reveal consistent favourable and significant effects of corporate brand symbolism on both consumer satisfaction and loyalty. Research limitations/implications – Although this paper makes contributions in international marketing, the cross-sectional nature of the data collection method limits the information gained to a single point in time. This research studied the impact of corporate brand symbolism on consumers of one original equipment manufacturer (OEM). Having a larger number of participating car manufacturers/OEMs would have provided wider insight; however, time and resource limitations did not allow us to study a larger sample. In the future, practitioners are recommended to further understand the relationship between self and social aspects of brand symbolism in order to formulate more targeted communication strategies. Practical implications – The findings of this study point to the strategic role of the brand in generating both satisfaction and loyalty. In the light of increasing advertising costs and decreasing consumer loyalty, strengthening corporate brand symbolism makes a lot of economic sense. The findings suggest that managers need to take into account consumer need for identity expression and consider this in their branding strategies. Social implications – Humans are social beings by nature.
However, international brand research has paid relatively little attention to how products are used by consumers in everyday life, including their social life. Consumer behaviours increasingly depend on social meanings they imbue brands with beyond products’ functional utility. It is argued the focus of symbolic consumption needs to be broadened and integrated more with social science concepts. Originality/value – This study captures a construct of corporate brand symbolism by including self and social aspects of symbolism. The current study also comprehensively measures consumer loyalty, including cognitive, affective and behavioural types of loyalty.
APA, Harvard, Vancouver, ISO, and other styles
18

Twigg, Laurie E., Tim J. Lowe, and Gary R. Martin. "Evaluation of bait stations for broadacre control of rabbits." Wildlife Research 29, no. 5 (2002): 513. http://dx.doi.org/10.1071/wr01093.

Full text
Abstract:
The efficacy of bait stations (a 200-L drum cut in half longitudinally) for the broadacre control of rabbits was compared with that obtained with standard trail-baiting procedures in the southern agricultural region of Western Australia. Bait stations were tested with and without the provision of pre-feed. The bait used was 1.0% 1080 One-shot oats, and corresponding experimental control sites were treated with unpoisoned oats. On the basis of spotlight counts before and after baiting, the reduction in rabbit numbers obtained with bait stations in the absence of pre-feed was poor, with a mean reduction of only 27% within 14 days. These reductions did not improve appreciably where sites were monitored for a further 28 days (i.e. 42 days in total). In contrast, the provision of pre-feed for 21 days prior to adding the 1080 bait resulted in a mean reduction in rabbit numbers of 57% within 14 days after the poisoned bait was added. However, the greatest reductions in rabbit numbers were achieved with trail baiting, where, relative to pre-treatment counts, rabbit numbers were reduced by 72% at Day 7 and by 84% at Day 14. The oats used to manufacture the 1080 One-shot product are subjected to gamma-sterilisation to prevent the germination of the oats and of any associated seeds of weed species. When offered a choice (matched sets), there was no difference in the amount of non-toxic gamma-sterilised oats and unsterilised oats consumed by free-ranging wild rabbits. On the basis of the costs incurred during the trials, trail baiting was by far the cheapest option for the broadacre control of rabbits. However, the cost of using bait stations would be discounted to some degree once these stations can be reused. The cost of trail baiting to protect a 15-ha 'border' of crop was $157 and $113 for three and two parallel trails (6 kg km⁻¹ trail⁻¹), respectively.
19

Blaikie, S. J., J. Leonardi, E. K. Chacko, W. J. Muller, and N. Steele Scott. "Effect of cincturing and chemical treatments on growth, flowering and yield of mango (Mangifera indica L.) cv. Kensington Pride." Australian Journal of Experimental Agriculture 39, no. 6 (1999): 761. http://dx.doi.org/10.1071/ea98178.

Full text
Abstract:
Summary. Mango (Mangifera indica cv. Kensington Pride) is the main horticultural tree crop grown in the tropical regions of northern Australia. A major problem for growers is that flowering and fruiting of this cultivar are highly variable from year to year. A series of field experiments was conducted to evaluate cincturing and chemical treatments as a means of improving mango productivity. A standard cincture (Cincture) was compared with a modified technique in which twine was tied tightly into a cincture groove (Twine). The chemical treatment was based on a morphactin formulation (MF) and was introduced to the trees by either painting directly onto the bark of the tree trunk (MF-paint) or soaking twine in MF before tying it into a trunk cincture (MF-twine). The amount of morphactin applied varied with tree age and was in the range 0–0.06 g active ingredient (a.i.) per tree. Tree responses, measured in terms of vegetative growth, flowering and fruiting, were compared with those of trees that had been treated with a physical cincture only, or with paclobutrazol (up to 5.0 g a.i.), applied as a collar drench. In young (3–8-year-old) trees, Twine, MF-twine and MF-paint had a positive effect on flowering and fruiting. These trees had earlier, more intense flowering, produced early (September) maturing fruit (up to about a 4-fold increase) and had high fruit production (up to about a 2-fold increase in fruit number) compared with controls. In some cases vegetative growth was reduced by 50–60% compared with untreated trees. Twine and MF-twine are favoured over MF-paint because (i) the paint must be applied annually, incurring high labour costs, and (ii) paint treatments carry the risk of overdosing the trees with morphactin. The positive effects of the Twine and MF-twine treatments were sustained, with large responses in flowering and/or fruiting 2–4 years after application. 
The responses in fruit production from paclobutrazol, applied at rates based on the manufacturer’s recommendations, were smaller than those obtained with Twine, MF-twine or MF-paint.
20

Khatibi, Ali, V. Thyagarajan, and A. Seetharaman. "E-commerce in Malaysia: Perceived Benefits and Barriers." Vikalpa: The Journal for Decision Makers 28, no. 3 (July 2003): 77–82. http://dx.doi.org/10.1177/0256090920030307.

Full text
Abstract:
Rapid developments in information technology and telecommunications have set the pace for an electronic revolution leading to the emergence of E-commerce. The advent of the internet offers many business firms new opportunities and challenges. However, there are various psychological and behavioural issues such as trust, security of internet transactions, reluctance to change, and preference for human interfaces which appear to impede the growth of E-commerce. This paper analyses the current situation of E-commerce in Malaysia, the merits of E-commerce, and factors affecting the adoption of E-commerce. The internet has transformed the traditional marketing model and system. Besides functioning as a communication medium, it has been used as a market space where buyers and sellers exchange information, goods, and services without the hindrance of time and geographical constraints. Marketing functions are performed in a hypermedia computer-mediated environment where interactivity and connectivity are replacing the traditional mode of ‘face to face’ negotiation and communication. The internet allows interactivity between buyers and sellers to create a shared real-time common marketspace. Connectivity links buyers and sellers worldwide, creating a shared global marketspace. No other industry in world history has achieved such rapid growth in as short a time as E-commerce. Though only a few years old, E-commerce has taken off at an unprecedented speed despite much skepticism and some initial hesitation. It is universally accepted that the world is in the grip of an E-commerce revolution. But the hyper-growth of internet sales is still an American phenomenon, and E-commerce has not taken off in other parts of the globe, although markets such as Europe, Japan, and Australia are rapidly joining the bandwagon. Although E-commerce is a relatively new method of business, it has radically altered the marketing and distribution paradigms. 
The scale of business generated through E-commerce is multiplying exponentially. However, the Malaysian E-commerce industry has not taken off as expected. Based on primary data collected by MATRADE using a survey of 222 Malaysian manufacturers, traders, and service providers, this paper examines the perceived benefits as well as barriers to E-commerce adoption. Though the sample firms felt that E-commerce was beneficial to business in general, they were uncertain as to how it would benefit their actual business operations. The perceived benefits included competitiveness, a better image, more efficient processes, and better information systems. However, despite the perceived benefits, E-commerce adoption was hindered by a number of constraints. Major barriers were thought to be the problems of keeping up with and understanding the technology itself, a lack of trained manpower, uncertainties with regard to its operations and regulations, and high switching costs. These findings are helpful in providing the firms' perspective of E-commerce in terms of its benefits to their companies as well as barriers to its full-scale adoption. Hence, any policy that aims at promoting E-commerce should take these factors into consideration. The results support the development of E-business portals to cater to firms' needs and rectify their problems. E-commerce portals would enable companies to share the high investment cost of constantly changing technology, reduce the manpower requirement, and keep abreast of advances in technology.
21

Lurie, L. "Real clinical practice." Infusion & Chemotherapy, no. 3.2 (December 15, 2020): 188–90. http://dx.doi.org/10.32902/2663-0338-2020-3.2-188-190.

Full text
Abstract:
Background. Real clinical practice (RCP) exists in an evidence-based and regulatory framework, taking into account the social, political and economic situation in the country. The coronavirus (COVID-19) pandemic is the main challenge of modern RCP. Objective. To describe the modern features of RCP. Materials and methods. Analysis of literature sources on this issue. Results and discussion. On December 31, 2019, the WHO was informed of 27 cases of pneumonia of unknown origin. On January 1, 2020, the first WHO guidelines were issued. The COVID-19 outbreak was declared a health emergency on January 30 and a pandemic on March 11. Experience with COVID-19 varies from country to country. In Germany, for example, pharmacies were allowed to produce disinfectants on their own; in Australia the telemedicine system was expanded; and in Poland a law was issued that provided for the regulation of remote work, simplification of public procurement, and emergency pharmacy prescriptions. In Ukraine, the first information from the Ministry of Health on coronavirus was published on January 21. On February 19, a decision was made to procure medicines to combat COVID-19. On March 11, the export of personal protective equipment was banned, and on March 12, quarantine was imposed throughout Ukraine. On March 17, the first laws of Ukraine on combating the coronavirus were adopted. One in four patients who fell ill at the beginning of the outbreak was a health worker, which reduced the availability of medical care. The imposition of the pandemic on phase 2 of health care reform has limited health care and patients’ access to clinics and hospitals, and suspended planned hospitalizations and surgeries. Medicines without supporting evidence were included in the COVID-19 National Treatment Protocol. 
An analysis of drug sales in pharmacies showed that quarantine had decreased the sales of cough and cold remedies, nasal irrigation solutions (due to a reduction in the number of socially transmitted diseases), and antidiarrheal drugs. Instead, sales of laxatives increased (presumably due to changes in diet and limited physical activity). Sales of drugs for the treatment of sexually transmitted diseases also decreased. Quarantine, in combination with the restricted availability of infusion therapy in family doctors' practices, halved the prescribing of parenteral drugs. In the absence of planned hospitalizations and surgeries, the volume of prescriptions for infusion drugs decreased by 13%. There was a redistribution of drug consumption in favor of domestic drugs. “Yuria-Pharm” was in the top 3 among Ukrainian drug manufacturers, and 6 of the 10 overall leaders are domestic companies. “Yuria-Pharm” is a leader in blood substitutes and perfusion solutions prescribed by doctors of 16 specialties. The solutions were most often prescribed for pneumonia, mental and behavioral disorders caused by alcohol abuse, acute pancreatitis, cerebrovascular diseases, delivery, acute appendicitis, malignant tumors, insulin-dependent diabetes mellitus, and chronic ischemic heart disease. For example, Tivortin (“Yuria-Pharm”) is most often prescribed by gynecologists, less often by physicians/family doctors, neurologists, surgeons, cardiologists, and anesthesiologists. In turn, Reosorbilact (“Yuria-Pharm”) is among the top 3 drugs administered by hospital doctors for the period 2014-2020. Repeat prescriptions for reimbursement were issued remotely; however, despite government programs, treatment in Ukraine still depends on the patient’s own money. The National Health Service of Ukraine for 2021 proposed to increase the salaries of health care workers and reduce the catastrophic costs of medicines paid by patients out of pocket. 
At present, there is a need to transfer the results of clinical trials into RCP: trials are conducted under specialized, strictly controlled conditions, whereas RCP allows more realistic results to be obtained. There are several types of RCP studies: non-interventional, post-registration, marketing, pharmacoeconomic, and patient database and registry studies. Conclusions. 1. The COVID-19 pandemic is the main challenge of modern RCP. 2. The imposition of the pandemic onto phase 2 of health care reform has limited health care and patients’ access to clinics and hospitals, and suspended planned hospitalizations and surgeries. 3. Under pandemic and quarantine conditions there was a redistribution of drug consumption in favor of domestic drugs. 4. Reosorbilact (“Yuria-Pharm”) is among the top 3 drugs administered by hospital doctors for the period 2014-2020.
22

Enzing, Joost J., Saskia Knies, Jop Engel, Maarten J. IJzerman, Beate Sander, Rick Vreman, Bert Boer, and Werner B. F. Brouwer. "Do Health Technology Assessment organisations consider manufacturers’ costs in relation to drug price? A study of reimbursement reports." Cost Effectiveness and Resource Allocation 20, no. 1 (August 31, 2022). http://dx.doi.org/10.1186/s12962-022-00383-y.

Full text
Abstract:
Abstract Introduction Drug reimbursement decisions are often made based on a price set by the manufacturer. In some cases, this price leads to public and scientific debates about whether its level can be justified in relation to its costs, including those related to research and development (R&D) and manufacturing. Such considerations could enter the decision process in collectively financed health care systems. This paper investigates whether manufacturers’ costs in relation to drug prices, or profit margins, are explicitly mentioned and considered by health technology assessment (HTA) organisations. Method An analysis of reimbursement reports for cancer drugs was performed. All relevant Dutch HTA-reports, published between 2017 and 2019, were selected and matched with HTA-reports from three other jurisdictions (England, Canada, Australia). Information was extracted. Additionally, reimbursement reports for three cases of expensive non-oncolytic orphan drugs prominent in pricing debates in the Netherlands were investigated in depth to examine consideration of profit margins. Results A total of 66 HTA-reports concerning 15 cancer drugs were included. None of these reports contained information on manufacturer’s costs or profit margins. Some reports contained general considerations of the HTA organisation which related prices to manufacturers’ costs: six contained a statement on the lack of price setting transparency, one mentioned recouping R&D costs as a potential argument to justify a high price. For the case studies, 21 HTA-reports were selected. One contained a cost-based price justification provided by the manufacturer. None of the other reports contained information on manufacturer’s costs or profit margins. Six reports contained a discussion about lack of transparency. Reports from two jurisdictions contained invitations to justify high prices by demonstrating high costs. 
Conclusion Despite the attention given to manufacturers’ costs in relation to price in public debates and in the literature, this issue does not seem to get explicit systematic consideration in the reimbursement reports of expensive drugs.
23

"BioBoard." Asia-Pacific Biotech News 11, no. 09 (May 15, 2007): 534–43. http://dx.doi.org/10.1142/s0219030307000602.

Full text
Abstract:
Australia — New Federal Research Money for Plant Genomics. Australia — New Clinical Trial Facility Under Construction in Western Australia. Australia — New Blood Test to Check for Parkinson's Disease. Australia — Australian Prime Minister Discusses Possibility of Stricter Immigration Rules for HIV-Positive People. Australia — Murdoch Study Shows Positive Results for Rockeby. Australia — Biotech Companies Benefit from Australian Government's Marketing Grants. China — Chinese Project to Help Prepare for Flu Pandemics. China — China Experts Identify Cancer-Preventing Gene Type. China — China Plans On-the-Spot Checks of Drug Manufacturers. China — HIV/AIDS Victims in Henan Get Free TCM Treatment. China — WHO Calls for Human Bird Flu Samples from China. China — Sino-Swiss Center for Cassava Technology Opens in Shanghai. Hong Kong — HK and Australian Experts to Launch Trials on New Therapy for NPC. India — NGOs in India Protest Abbott's Decision to Withhold Essential Medicines from Thailand. India — Global AIDS Research Body CAVD Keen on India. India — Torrent Pharma Launched the World's First Polypill. India — India and Germany to Set up Joint Group on Agriculture. India — India Among Six WHO Developing Nations to Receive Grant for Influenza Vaccine Technology. India — NHS of Britain to Make Huge Investments in Indian Healthcare Sector. India — ICMR's HIV Vaccine Showing Positive Response in Clinical Trials. India — Dr Reddy's Laboratories Cuts Costs of Cancer Therapy. India — Indian Herbal Remedy Cancer Hope. Indonesia — Indonesia to get Influenza Vaccine Technology from WHO. Indonesia — US Sets Up Jakarta Office to Boost Bird-Flu Fight. Singapore — TNT Appoints New Managing Director. Singapore — Cancer Research Gets Boost with US$20 Million Donation. Taiwan — Research Brings Hope for Alzheimer's Cure. Taiwan — Researchers Demonstrate Cost-Effective Platform for Producing Blood Clotting Proteins. 
Others — Bristol-Myers Squibb and Pfizer Announce Worldwide Collaboration to Develop and Commercialize Anticoagulant and Metabolic Compounds. Others — WHO's 9-Point Plan will Protect Patients from Medical Errors.
24

Milford, Bernard, and Warren Males. "Futures based pricing for Australian sugarcane growers." Sugar Industry, 2010, 502–5. http://dx.doi.org/10.36961/si10153.

Full text
Abstract:
Australian cane growers can now access futures-based pricing for their sugarcane. Direct access to futures market pricing has long been available to growers of commodities such as grains, cattle, cotton and others. Some sugar manufacturers have had access to these mechanisms but, in situations where cane growing and processing are separate, cane growers have traditionally been unable to hedge the price they receive for their product, as unprocessed sugarcane is not deliverable against a futures contract. In Australia, many growers have recently commenced using futures-based pricing mechanisms through new cane supply contracts with their sugar mills. Tonnages of sugar are apportioned to growers based on the cane price formula. Pricing contracts are ultimately closed through the pricing platform offered by the raw sugar marketing company. In this paper the range of price mechanisms available to cane growers in Australia is examined and the implications of these for industry sustainability are reviewed. They include a new, more commercial culture; longer-term cane supply contracts; and growers’ ability to manage pricing decisions in light of their knowledge of growing costs and to determine, to some degree, how much of the price risk they wish to manage and how much they wish to place in the hands of others.
25

Dunn, Rebecca, Keith Lovegrove, Greg Burgess, and John Pye. "An Experimental Study of Ammonia Receiver Geometries for Dish Concentrators." Journal of Solar Energy Engineering 134, no. 4 (August 6, 2012). http://dx.doi.org/10.1115/1.4006891.

Full text
Abstract:
This paper presents an experimental evaluation of ammonia receiver geometries with a 9 m² dish concentrator. The experiments involved varying the geometric arrangement of reactor tubes in a thermochemical reactor built from a series of tubes arranged in a conical shape inside a cavity receiver. Differences in the conical arrangement were found to affect the efficiency of energy conversion. The solar-to-chemical efficiency gain obtained by varying the receiver geometry was up to 7% absolute. From this, it is apparent that geometric optimizations are worth pursuing, since the resulting efficiency gains are achieved with no increase in the cost of manufacturing receivers. The experimental results and methodology can be used when developing receivers for larger dish concentrators, such as the second-generation 500 m² dish concentrator developed at the Australian National University.
26

Kim, Hansoo, Danny Liew, and Stephen Goodall. "Current Issues in Health Technology Assessment of Cancer Therapies: A Survey of Stakeholders and Opinion Leaders in Australia." International Journal of Technology Assessment in Health Care 38, no. 1 (2022). http://dx.doi.org/10.1017/s0266462322000368.

Full text
Abstract:
Abstract Objective The aim of this study was to find ways of bridging the gap in opinions concerning health technology assessment (HTA) in reimbursement submissions between manufacturers and payers, to avoid delays in patient access to vital medicines such as oncology drugs. This was done by investigating differences and similarities of opinion among key stakeholders in Australia. Methods The survey comprised nine sections: background demographics, general statements on HTA, clinical claim, extrapolations, quality of life, costs and health resource utilization, agreements, decision making, and capability/capacity. Responses to each question were summarized using descriptive statistics and comparisons were made using chi-square statistics. Results There were ninety-seven respondents in total, thirty-seven from the public sector (academia/government) and sixty from the private sector (industry/consultancies). Private and public sector respondents had similar views on clinical claims. They were divided when it came to extrapolation of survival data and costs and health resource utilization. However, they generally agreed that rebates are useful, outcomes-based agreements are difficult to implement, managed entry schemes are required when data are limited, and willingness to pay is higher in cancer compared to other therapeutic areas. They also agreed that training mostly takes place on the job and that guideline updates were the least favored opportunity for continued training. Conclusions Private sector respondents favor methods that reduce the incremental cost-effectiveness ratio more than public sector respondents do. A number of challenges remain for HTA in oncology, and this study reveals many research opportunities.
27

Walker, Russell. "CEMEX: Information Technology, an Enabler for Building the Future." Kellogg School of Management Cases, January 25, 2017, 1–9. http://dx.doi.org/10.1108/case.kellogg.2021.000051.

Full text
Abstract:
The case examines the role of IT in CEMEX, a giant Mexican building materials manufacturer in an industry characterized by low margins and high costs. In the early 1990s, CEMEX made significant investments in its IT systems, resulting in a data-based management operation that put it at the forefront of the industry. As the company grew through acquisitions, it integrated IT through “The CEMEX Way,” a set of standardized processes, organizations, and systems implemented on a common IT platform. In 2007, when CEMEX acquired Rinker, a major Australian concrete company, aligning Rinker with CEMEX IT systems was critical to quickly streamlining operations and realizing efficiencies. The CIO of CEMEX had developed a new integration process called Processes & IT (P&IT) that he was considering using for the Rinker integration. However, P&IT required additional resources, including significant upfront fixed costs and investment in new personnel teams, at a time when the company was already struggling with the integration of another acquisition. CEMEX could either align Rinker to The CEMEX Way or use the opportunity to invest significantly more in evolving to the new P&IT approach, which focused on business process management.
28

Pant, Sirjana, Rupinder Bagha, and Sarah McGill. "International Plasma Collection Practices: Project Report." Canadian Journal of Health Technologies 1, no. 12 (December 2, 2021). http://dx.doi.org/10.51731/cjht.2021.213.

Full text
Abstract:
Plasma is used by pharmaceutical companies to make plasma-derived medicinal products (PDMPs). PDMPs are used to treat conditions such as immune deficiencies and bleeding disorders. Several PDMPs are included in the WHO Model Lists of Essential Medicines. According to the WHO, self-sufficiency driven by voluntary (non-remunerated) plasma donations is an important national goal to ensure an adequate supply is secured to meet the needs of the population. Australia, New Zealand, the UK, the Netherlands, and France only allow public or not-for-profit sectors to collect plasma for fractionation. Each of the 5 countries has toll or contract agreements with 1 private commercial plasma fractionator to manufacture PDMPs from the plasma collected within their respective countries. None of these countries pay plasma donors. Donors are only permitted to donate every 2 weeks (24 to 26 times per year) in these 5 countries. Austria, the Czech Republic, Germany, and the US allow both public and not-for-profit sectors, as well as commercial private plasma collection centres, to operate in the country. Private, not-for-profit, or public plasma collection centres in these 4 countries offer monetary compensation and other in-kind incentives to plasma donors. While the Czech Republic limits plasma donation to every 2 weeks, a much higher frequency of donation is allowed in the other countries: up to 50 times per year in Austria, 60 times per year in Germany, and more than 100 times per year in the US. Austria, the Czech Republic, Germany, and the US (which allow commercial private plasma collectors to operate and pay donors) are 100% self-sufficient in immunoglobulins. These 4 countries collect the most plasma, which is from paid donors. In 2017, Austria, the Czech Republic, Germany, and the US collected 75 litres, 45 litres, 36 litres, and 113 litres of plasma for fractionation per 1,000 people, respectively. 
Countries that do not pay donors, including Australia, New Zealand, the UK, the Netherlands, and France, are dependent to some extent on paid US and European Union donors for their supply of plasma or imported PDMPs. The limited literature search conducted for the Environmental Scan did not identify publications on events of disease transmission through PDMPs manufactured from either paid or non-remunerated donors’ plasma, on the impact of plasma collection centres (including those that do or do not pay donors) on the collection of whole blood or other blood components, or on the long-term costs associated with plasma self-sufficiency for the health care system.
29

Stockwell, Stephen. "The Manufacture of World Order." M/C Journal 7, no. 6 (January 1, 2005). http://dx.doi.org/10.5204/mcj.2481.

Full text
Abstract:
Since the fall of the Berlin Wall, and most particularly since 9/11, the government of the United States has used its security services to enforce the order it desires for the world. The US government and its security services appreciate the importance of creating the ideological environment that allows them full scope in their activities. To these ends they have turned to the movie industry, which has not been slow in accommodating the purposes of the state. In establishing the parameters of the War Against Terror after 9/11, one of the Bush Administration’s first stops was Hollywood. White House strategist Karl Rove called what is now described as the Beverley Hills Summit on 19 November 2001, where top movie industry players, including chairman of the Motion Picture Association of America Jack Valenti, met to discuss ways in which the movie industry could assist in the War Against Terror. After a ritual assertion of Hollywood’s independence, the movie industry’s powerbrokers signed up to the White House’s agenda: “that Americans must be called to national service; that Americans should support the troops; that this is a global war that needs a global response; that this is a war against evil” (Cooper 13). Good versus evil is, of course, a staple commodity for the movie industry, but storylines never require the good guys to fight fair, so with this statement the White House got what it really wanted: Hollywood’s promise to stay on the big picture in black and white while studiously avoiding the troubling detail in the exercise of extra-judicial force and state-sanctioned murder. This is not to suggest that the movie industry is a monolithic ideological enterprise. Alternative voices like Mike Moore and Susan Sarandon still find space to speak. 
But the established economics of the scenario trade are too strong for the movie industry to resist: producers gain access to expensive weaponry to assist production if their story-lines are approved by Pentagon officials (‘Pentagon provides for Hollywood’); the Pentagon finances movie and gaming studios to provide original story formulas to keep their war-gaming relevant to emerging conditions (Lippman); and the Central Intelligence Agency’s “entertainment liaison officer” assists producers in story development and production (Gamson). In this context, the moulding of story-lines to the satisfaction of the Pentagon and CIA is not even an issue, and protestations of Hollywood’s independence are meaningless, as the movie industry pursues patriotic audiences at home and seeks to garner hearts and minds abroad. This is old history made new again. The Cold War in the 1950s saw movies addressing the disruption of world order not so much by Communists as by “others”: sci-fi aliens, schlock horror zombies, vampires and werewolves, and mad scientists galore. By the 1960s the James Bond movie franchise, developed by MI5 operative Ian Fleming, saw Western secret agents ‘licensed to kill’ with the justification that such powers were required to deal with threats to world order, albeit by fanciful “others” such as the fanatical scientist Dr. No (1962). The Bond villains provide a catalogue of methods for the disruption of world order: commandeering atomic weapons and space flights, manipulating finance markets, mind control systems and so on. As the Soviet Union disintegrated, Hollywood produced a wealth of material that excused the paranoid nationalism of the security services through the hegemonic masculinity of stars such as Sylvester Stallone, Arnold Schwarzenegger, Steven Seagal and Bruce Willis (Beasley). Willis’s Die Hard franchise (1988/1990/1995) characterised US insouciance in the face of newly created terrorist threats. 
Willis personified the strategy of the Reagan, first Bush and Clinton administrations: a willingness to up the ante, second-guess the terrorists and cow them with a display of firepower advantage. But the 1997 instalment of the James Bond franchise saw an important shift in expectations about the source of threats to world order. Tomorrow Never Dies features a media tycoon bent on world domination, manipulating the satellite feed and orchestrating conflicts and disasters in the name of ratings, market share and control. Having dealt with all kinds of Cold War plots, Bond is now confronted with the power of the media itself. As if to mark this shift, Austin Powers: International Man of Mystery (1997) made a mockery of the creatively bankrupt conventions of the spy genre. But it was the politically corrupt use to which the security services could be put that was troubling a string of big-budget filmmakers in the late 90s. In Enemy of the State (1998), an innocent lawyer finds himself targeted by the National Security Agency after receiving evidence of a political murder motivated by the push to extend the NSA’s powers. In Mercury Rising (1998), a renegade FBI agent protects an autistic boy who cracks a top-secret government code and becomes the target of assassins from an NSA-like organisation. Arlington Road (1999) features a college professor who learns too much about terrorist organisations and has his paranoia justified when he becomes the target of a complex operation to implicate him as a terrorist. The attacks of September 11 and the subsequent Beverley Hills Summit had a major impact on movie product. Many film studios edited films (Spiderman) or postponed their release (Schwarzenegger’s Collateral Damage) where they were seen as too close to actual events but insufficiently patriotic (Townsend). The Bond franchise returned to its staple of fantastical villains. In Die Another Day (2002), the bad guy is a billionaire with a laser cannon. 
The critical perspective on the security services disappeared overnight. But the most interesting development has been how fantasy has become the key theme in a number of franchises dealing with world order that have had great box-office success since 9/11, particularly Lord of the Rings (2001/2/3) and Harry Potter (2001/2/4). While deeply entrenched in the fantasy genre, each of these franchises also addresses security issues: geo-political control in the Rings franchise; the subterfuges of the Ministry for Muggles in the Potter franchise. Looking at world order through the supernatural lens has particular appeal to audiences confronted with everyday threats. These fantasies follow George Bush’s rhetoric of the “axis of evil” in normalising the struggle for world order in terms of black and white, with the expectation that childish innocence and naïve ingenuity will prevail. Only now, with three years’ hindsight since September 11, can we begin to see a certain amount of self-reflection by disenchanted security staff return to the cinema. In Man on Fire (2004) the burned-out ex-CIA assassin has given up on life but regains some hope while guarding a child, only to have everything disintegrate when the child is killed and he sets out on remorseless revenge. Spartan (2004) features a special forces officer who fails to find a girl and resorts to remorseless revenge as he becomes lost in a maze of security bureaucracies and chance events. Security service personnel once again have their doubts but only find redemption in violence and revenge without remorse. From consideration of films before and after September 11, it becomes apparent that the movie industry has delivered on its promises to the Bush administration. The first response has been the shift to fantasy that, in historical terms, will be seen as akin to the shift to musicals in the Depression. 
The flight to fantasy makes the point that complex situations can be reduced to simple moral decisions under the rubric of good versus evil, which is precisely what the US administration requested. The second, more recent response has been to accept disenchantment with the personal costs of the War on Terror but still validate remorseless revenge. Quentin Tarantino’s Kill Bill franchise (2003/4) seeks to do both. Thus the will to world order being fought out in the streets of Iraq is sublimated into fantasy or excused as a natural response to a world of violence. It is interesting to note that television has provided more opportunities for the productive consideration of world order and the security services than the movies since September 11. While programs that have had input from the CIA’s “entertainment liaison officer”, such as the teen-oriented, Buffy-inspired Alias and the quasi-authentic The Agency, provide a no-nonsense justification for the War on Terror (Gamson), others such as 24, West Wing and Threat Matrix have confronted the moral problems of torture and murder in the War on Terrorism. 24 uses reality TV conventions of real-time plot, split screen exposition, unexpected interventions and a close focus on personal emotions to explore the interactions between a US President and an officer in the Counter Terrorism Unit. The CTU officer does not hesitate to summarily behead a criminal or kill a colleague for operational purposes, and the president takes only a little longer to begin torturing recalcitrant members of his own staff. Similarly, the president in West Wing orders the extra-judicial death of a troublesome player, and the team in Threat Matrix are ready to exceed their powers. But in these programs the characters struggle with the moral consequences of their violent acts, particularly as family members are drawn into the plot. 
A running theme of Threat Matrix is the debate within the group over the choice between gung-ho militarism and peaceful diplomacy: the consequences of a simplistic, hawkish approach are explored when an Arab-American college professor is wrongfully accused of supporting terrorists and driven towards the terrorists because of his very ordeal of wrongful accusation. The world is not black and white. Almost half the US electorate voted for John Kerry. Television still must cater for liberal, and wealthy, demographics who welcome the extended format of weekly television that allows a continuing engagement with questions of good or evil and whether there is any difference between them any more. Against the simple world view of the Bush administration we have the complexities of the real world. References Beasley, Chris. “Reel Politics.” Australian Political Studies Association Conference, University of Adelaide, 2004. Cooper, Marc. “Lights! Cameras! Attack!: Hollywood Enlists.” The Nation 10 December 2001: 13-16. Gamson, J. “Double Agents.” The American Prospect 12.21 (3 December 2001): 38-9. Lippman, John. “Hollywood Casts About for a War Role.” Wall Street Journal 9 November 2001: A1. “Pentagon Provides for Hollywood.” USA Today 29 March 2001. <http://www.usatoday.com/life/movies/2001-05-17-pentagon-helps-hollywood.htm>. Townsend, Gary. “Hollywood Uses Selective Censorship after 9/11.” e.press 12 December 2002. <http://www.scc.losrios.edu/~express/021212hollywood.html>. Citation reference for this article MLA Style Stockwell, Stephen. "The Manufacture of World Order: The Security Services and the Movie Industry." M/C Journal 7.6 (2005). <http://journal.media-culture.org.au/0501/10-stockwell.php>. APA Style Stockwell, S. (Jan. 2005) "The Manufacture of World Order: The Security Services and the Movie Industry," M/C Journal, 7(6). Retrieved from <http://journal.media-culture.org.au/0501/10-stockwell.php>.
APA, Harvard, Vancouver, ISO, and other styles
30

Brien, Donna Lee. "Unplanned Educational Obsolescence: Is the ‘Traditional’ PhD Becoming Obsolete?" M/C Journal 12, no. 3 (July 15, 2009). http://dx.doi.org/10.5204/mcj.160.

Full text
Abstract:
Discussions of the economic theory of planned obsolescence—the purposeful embedding of redundancy into the functionality or other aspect of a product—in the 1980s and 1990s often focused on the impact of such a design strategy on manufacturers, consumers, the market, and, ultimately, profits (see, for example, Bulow; Lee and Lee; Waldman). More recently, assessments of such shortened product life cycles have included calculations of the environmental and other costs of such waste (Claudio; Kondoh; Unruh). Commonly utilised examples are consumer products such as cars, whitegoods and small appliances, fashion clothing and accessories, and, more recently, new technologies and their constituent components. This discourse has been adopted by those who configure workers as human resources, and who speak both of skills (Janßen and Backes-Gellner) and human capital itself (Chauhan and Chauhan) being made obsolete by market forces in both predictable and unplanned ways. This includes debate over whether formal education can assist in developing the skills that make their possessors less liable to become obsolete in the workforce (Dubin; Holtmann; Borghans and de Grip; Gould, Moav and Weinberg). However, aside from periodic expressions of disciplinary angst (as in questions over whether the Liberal Arts and other disciplines are becoming obsolete), such concerns are rarely found in discussions regarding higher education. Yet, higher education has been subsumed into a culture of commercial service provision as driven by markets and profit as the industries that design and deliver consumer goods. McKelvey and Holmén characterise this as a shift “from social institution to knowledge business” in the subtitle of their 2009 volume on European universities, and the recent decade has seen many higher educational institutions openly striving to be entrepreneurial. 
Despite some debate over the functioning of market or market-like mechanisms in higher education (see, for instance, Texeira et al), the corporatisation of higher education has led inevitably to market segmentation in the products the sector delivers. Such market segmentation results in what are called over-differentiated products, seemingly endless variations in the same product to attempt to increase consumption and attendant sales. Milk is a commonly cited example, with supermarkets today stocking full cream, semi-skimmed, skimmed, lactose-free, soy, rice, goat, GM-free and ‘smart’ (enriched with various vitamins, minerals and proteins) varieties; and many of these available in fresh, UHT, dehydrated and/or organic versions. In the education market, this practice has resulted in a large number of often minutely differentiated, but differently named, degrees and other programs. Where there were once a small number of undergraduate degrees with discipline variety within them (including the Bachelor of Arts and Bachelor of Science awards), students can now graduate with a named qualification in a myriad of discipline and professional areas. The attempt to secure a larger percentage of the potential client pool (who are themselves often seeking to update their own skills and knowledges to avoid workforce obsolescence) has also resulted in a significant increase in the number of postgraduate coursework certificates, diplomas and other qualifications across the sector. The Masters degree has fractured from a research program into a range of coursework, coursework plus research, and research only programs. Such proliferation has also affected one of the foundations of the quality and integrity of the higher education system, and one of the last bastions of conventional practice, the doctoral degree. The PhD as ‘Gold-Standard’ Market Leader? 
The Doctor of Philosophy (PhD) is usually understood as a largely independent discipline-based research project that results in a substantial piece of reporting, the thesis, that makes a “substantial original contribution to knowledge in the form of new knowledge or significant and original adaptation, application and interpretation of existing knowledge” (AQF). As the highest level of degree conferred by most universities, the PhD is commonly understood as indicating the height of formal educational attainment, and has, until relatively recently, been above reproach and alteration. Yet, whereas universities internationally once offered a single doctorate named the PhD, many now offer a number of doctoral level degrees. In Australia, for example, candidates can also complete PhDs by Publication and by Project, as well as practice-led doctorates in, and named Doctorates of/in, Creative Arts, Creative Industries, Laws, Performance and other ‘new’ discipline areas. The Professional Doctorate, introduced into Australia in the early 1990s, has achieved such longevity that it now has its own “first generation” incarnations in (and about) disciplines such as Education, Business, Psychology and Journalism, as well as a contemporary “second generation” version which features professionally-practice-led Mode 2 knowledge production (Maxwell; also discussed in Lee, Brennan and Green 281). The uniquely Australian PhD by Project in the disciplines of architecture, design, business, engineering and education also includes coursework, and is practice and particularly workplace (or community) focused, but unlike the above, does not have to include a research element—although this is not precluded (Usher). A significant number of Australian universities also currently offer a PhD by Publication, known also as the PhD by Published Papers and PhD by Published Works. 
Introduced in the 1960s in the UK, the PhD by Publication there is today almost exclusively undertaken by academic staff at their own institutions, and usually consists of published work(s), a critical appraisal of that work within the research context, and an oral examination. The named degree is rare in the USA, although the practice of granting PhDs on the basis of prior publications is not unknown. In Australia, an examination of a number of universities that offer the degree reveals no consistency in terms of the framing policies except for the generic Australian Qualifications Framework accreditation statement (AQF), entry requirements and conditions of candidature, or resulting form and examination guidelines. Some Australian universities, for instance, require all externally peer-refereed publications, while others will count works that are self-published. Some require actual publications or works in press, but others count works that are still at submission stage. The UK PhD by Publication shows similar variation, with no consensus on purpose, length or format of this degree (Draper). Across Australia and the UK, some institutions accept previously published work and require little or no campus participation, while others have a significant minimum enrolment period and count only work generated during candidature (see Brien for more detail). Despite the plethora of named degrees at doctoral level, many academics continue to support the PhD’s claim to rigor and intellectual attainment. Most often, however, these arguments cite tradition rather than any real assessment of quality. The archaic trappings of conferral—the caps, gowns and various other instruments of distinction—emphasise a narrative in which it is often noted that doctorates were first conferred by the University of Paris in the 12th century and then elsewhere in medieval Europe. 
However, challenges to this account note that today’s largely independently researched thesis is a relatively recent arrival to educational history, being only introduced into Germany in the early nineteenth century (Bourner, Bowden and Laing; Park 4), the USA in a modified form in the mid-nineteenth century and the UK in 1917 (Jolley 227). The Australian PhD is even more recent, with the first only awarded in 1948 and still relatively rare until the 1970s (Nelson 3; Valadkhani and Ville). Additionally, PhDs in the USA, Canada and Denmark today almost always incorporate a significant taught coursework element (Noble). This is unlike the ‘traditional’ PhD in the UK and Australia, although the UK also currently offers a number of what are known there as ‘taught doctorates’. Somewhat confusingly, while these do incorporate coursework, they still include a significant research component (UKCGE). However, the UK is also adopting what has been identified as an American-inflected model which consists mostly, or largely, of coursework, and which is becoming known as the ‘New Route British PhD’ (Jolley 228). It could be posited that, within such a competitive market environment, which appears to be driven both by a desire for novelty and by the need to meet consumer demand, obsolescence therefore, and necessarily, threatens the very existence of the ‘traditional’ PhD. This obsolescence could be seen as especially likely as, alongside the existence of the above mentioned ‘new’ degrees, the ‘traditional’ research-based PhD at some universities in Australia and the UK in particular is, itself, also in the process of becoming ‘professionalised’, with some (still traditionally-framed) programs nevertheless incorporating workplace-oriented frameworks and/or experiences (Jolley 229; Kroll and Brien) to meet professionally-focused objectives that it is acknowledged cannot be met by producing a research thesis alone. 
While this emphasis can be seen as operating at the expense of specific disciplinary knowledge (Pole 107; Ball; Laing and Brabazon 265), and criticised for that, this workplace focus has arisen, internationally, as an institutional response to requests from both governments and industry for training in generic skills in university programs at all levels (Manathunga and Wissler). At the same time, the acknowledged unpredictability of the future workplace is driving a cognate move from discipline specific knowledge to what have been described as “problem solving and knowledge management approaches” across all disciplines (Gilbert; Valadkhani and Ville 2). While few query a link between university-level learning and the needs of the workplace, or the motivating belief that the overarching role of higher education is the provision of professional training for its client-students (see Laing and Brabazon for an exception), it also should be noted that a lack of relevance is one of the contributors to dysfunction, and thence to obsolescence. The PhD as Dysfunctional Degree? Perhaps, however, it is not competition that threatens the traditional PhD but, rather, its own design flaws. A report in The New York Times in 2007 alerted readers to what many supervisors, candidates, and researchers internationally have recognised for some time: that the PhD may be dysfunctional (Berger). In Australia and elsewhere, attention has focused on the uneven quality of doctoral-level degrees across institutions, especially in relation to their content, rigor, entry and assessment standards, and this has not precluded questions regarding the PhD (AVCC; Carey, Webb, Brien; Neumann; Jolley; McWilliam et al., "Silly"). It should be noted that this important examination of standards has, however, been accompanied by an increase in the awarding of Honorary Doctorates. 
This practice ranges from the most reputable universities’ recognising individuals’ significant contributions to knowledge, culture and/or society, to wholly disreputable institutions offering such qualifications in return for payment (Starrs). While generally contested in terms of their status, Honorary Doctorates granted to sports, show business and political figures are the most controversial and include an award conferred on puppet Kermit the Frog in 1996 (Jeffries), and some leading institutions including MIT, Cornell University and the London School of Economics and Political Science are distinctive in not awarding Honorary Doctorates. However, while distracting, the Honorary Doctorate itself does not answer all the questions regarding the quality of doctoral programs in general, or the Doctor of Philosophy in particular. The PhD also has high attrition rates: 50 per cent or more across Australia, the USA and Canada (Halse 322; Lovitts and Nelson). For those who remain in the programs, lengthy completion times (known internationally as ‘time-to-degree’) are common in many countries, with averages of 10.5 years to completion in Canada, and from 8.2 to more than 13 years (depending on discipline) in the USA (Berger). The current government performance-based funding model for Australian research higher degrees focuses attention on timely completion, and there is no doubt that, under this system—where universities only receive funding for a minimum period of candidature when those candidates have completed their degrees—more candidates are completing within the required time periods (Cuthbert). Yet, such a focus has distracted from assessment of the quality and outcomes of such programs of study. A detailed survey, based on the theses lodged in Australian libraries, has estimated that at least 51,000 PhD theses were completed in Australia to 2003 (Evans et al. 7). 
However, little attention has been paid to the consequences of this work, that is, the effects that the generation of these theses has had on either candidates or the nation. There has been no assessment, for instance, of the impact on candidates of undertaking and completing a doctorate on such facets of their lives as their employment opportunities, professional choices and salary levels, nor any effect on their personal happiness or levels of creativity. Nor has there been any real evaluation of the effect of these degrees on GDP, rates of the commercialisation of research, the generation of intellectual property, meeting national agendas in areas such as innovation, productivity or creativity, and/or the quality of the Australian creative and performing arts. Government-funded and other Australian studies have, however, noted for at least a decade both that the high numbers of graduates are mismatched to a lack of market demand for doctoral qualifications outside of academia (Kemp), and that an oversupply of doctorally qualified job seekers is driving wages down in some sectors (Jones 26). Even academia is demanding more than a PhD. Within the USA, doctoral graduates of some disciplines (English is an often-cited example) are undertaking second PhDs in their quest to secure an academic position. In Australia, entry-level academic positions increasingly require a scholarly publishing history alongside a doctoral-level qualification and, in common with other quantitative exercises in the UK and in New Zealand, the current Excellence in Research for Australia research evaluation exercise values scholarly publications more than higher degree qualifications. Concluding Remarks: The PhD as Obsolete or Retro-Chic? Disciplines and fields are reacting to this situation in various ways, but the trend appears to be towards increased market segmentation. 
Despite these charges of PhD dysfunction, there are also dangers in the over-differentiation of higher degrees as a practice. If universities do not adequately resource the professional development and other support for supervisors and all those involved in the delivery of all these degrees, those institutions may find that they have spread the existing skills, knowledge and other institutional assets too thinly to sustain some or even any of these degrees. This could lead to the diminishing quality (and an attendant diminishing perception of the value) of all the higher degrees available in those institutions as well as the reputation of the hosting country’s entire higher education system. As works in progress, the various ‘new’ doctoral degrees can also promote a sense of working on unstable ground for both candidates and supervisors (McWilliam et al., Research Training), and higher degree examiners will necessarily be unfamiliar with expected standards. Candidates are attempting to discern the advantages and disadvantages of each form in order to choose the degree that they believe is right for them (see, for example, Robins and Kanowski), but such assessment is difficult without the benefit of hindsight. Furthermore, not every form may fit the unpredictable future aspirations of candidates or the volatile future needs of the workplace. The rate with which everything once new descends from stylish popularity through stages of unfashionableness to become outdated and, eventually, discarded is increasing. This escalation may result in the discipline-based research PhD becoming seen as archaic and, eventually, obsolete. Perhaps, alternatively, it will lead to newer and more fashionable forms of doctoral study being discarded instead. Laing and Brabazon go further to find that all doctoral level study’s inability to “contribute in a measurable and quantifiable way to social, economic or political change” problematises the very existence of all these degrees (265). 
Yet, we all know that some objects, styles, practices and technologies that become obsolete are later recovered and reassessed as once again interesting. They rise once again to be judged as fashionable and valuable. Perhaps even if made obsolete, this will be the fate of the PhD or other doctoral degrees? References Australian Qualifications Framework (AQF). “Doctoral Degree”. AQF Qualifications. 4 May 2009 ‹http://www.aqf.edu.au/doctor.htm›. Australian Vice-Chancellors’ Committee (AVCC). Universities and Their Students: Principles for the Provision of Education by Australian Universities. Canberra: AVCC, 2002. 4 May 2009 ‹http://www.universitiesaustralia.edu.au/documents/publications/Principles_final_Dec02.pdf›. Ball, L. “Preparing Graduates in Art and Design to Meet the Challenges of Working in the Creative Industries: A New Model For Work.” Art, Design and Communication in Higher Education 1.1 (2002): 10–24. Berger, Joseph. “Exploring Ways to Shorten the Ascent to a Ph.D.” Education. The New York Times, 3 Oct. 2007. 4 May 2009 ‹http://nytimes.com/2007/10/03/education/03education.html›. Borghans, Lex, and Andries de Grip. Eds. The Overeducated Worker?: The Economics of Skill Utilization. Cheltenham, UK: Edward Elgar, 2000. Bourner, T., R. Bowden and S. Laing. “Professional Doctorates in England”. Studies in Higher Education 26 (2001): 65–83. Brien, Donna Lee. “Publish or Perish?: Investigating the Doctorate by Publication in Writing”. The Creativity and Uncertainty Papers: the Refereed Proceedings of the 13th Conference of the Australian Association of Writing Programs. AAWP, 2008. 4 May 2009 ‹http://www.aawp.org.au/creativity-and-uncertainty-papers›. Bulow, Jeremy. “An Economic Theory of Planned Obsolescence.” The Quarterly Journal of Economics 101.4 (Nov. 1986): 729–50. Carey, Janene, Jen Webb, and Donna Lee Brien. “Examining Uncertainty: Australian Creative Research Higher Degrees”. 
The Creativity and Uncertainty Papers: the Refereed Proceedings of the 13th Conference of the Australian Association of Writing Programs. AAWP, 2008. 4 May 2009 ‹http://www.aawp.org.au/creativity-and-uncertainty-papers›. Chauhan, S. P., and Daisy Chauhan. “Human Obsolescence: A Wake–up Call to Avert a Crisis.” Global Business Review 9.1 (2008): 85–100. Claudio, Luz. “Environmental Impact of the Clothing Industry.” Environmental Health Perspectives 115.9 (Sep. 2007): A449–54. Cuthbert, Denise. “HASS PhD Completion Rates: Beyond the Doom and Gloom”. Council for the Humanities, Arts and Social Sciences, 3 March 2008. 4 May 2009 ‹http://www.chass.org.au/articles/ART20080303DC.php›. Draper, S. W. PhDs by Publication. University of Glasgow, 11 Aug. 2008. 4 May 2009 ‹http://www.psy.gla.ac.uk/~steve/resources/phd.html›. Dubin, Samuel S. “Obsolescence or Lifelong Education: A Choice for the Professional.” American Psychologist 27.5 (1972): 486–98. Evans, Terry, Peter Macauley, Margot Pearson, and Karen Tregenza. “A Brief Review of PhDs in Creative and Performing Arts in Australia”. Proceeding of the Association for Active Researchers Newcastle Mini-Conference, 2–4 October 2003. Melbourne: Australian Association for Research in Education, 2003. 4 May 2009 ‹http://www.aare.edu.au/conf03nc›. Gilbert, R. “A Framework for Evaluating the Doctoral Curriculum”. Assessment and Evaluation in Higher Education 29.3 (2004): 299–309. Gould, Eric D., Omer Moav, and Bruce A. Weinberg. “Skill Obsolescence and Wage Inequality within Education Groups.” The Economics of Skills Obsolescence. Eds. Andries de Grip, Jasper van Loo, and Ken Mayhew. Amsterdam: JAI Press, 2002. 215–34. Halse, Christine. “Is the Doctorate in Crisis?” Nagoya Journal of Higher Education 34 Apr. (2007): 321–37. Holtmann, A.G. “On-the-Job Training, Obsolescence, Options, and Retraining.” Southern Economic Journal 38.3 (1972): 414–17. Janßen, Simon, and Uschi Backes-Gellner. 
“Skill Obsolescence, Vintage Effects and Changing Tasks.” Applied Economics Quarterly 55.1 (2009): 83–103. Jeffries, Stuart. “I’m a Celebrity, Get Me an Honorary Degree”. The Guardian 6 July 2006. 4 May 2009 ‹http://www.guardian.co.uk/music/2006/jul/06/highereducation.popandrock›. Jolley, Jeremy. “Choose your Doctorate.” Journal of Clinical Nursing 16.2 (2007): 225–33. Jones, Elka. “Beyond Supply and Demand: Assessing the Ph.D. Job Market.” Occupational Outlook Quarterly Winter (2002-2003): 22–33. Kemp, D. New Knowledge, New Opportunities: A Discussion Paper on Higher Education Research and Research Training. Canberra: Australian Government Printing Service, 1999. Kondoh, Shinsuke, Keijiro Masui, Mitsuro Hattori, Nozomu Mishima, and Mitsutaka Matsumoto. “Total Performance Analysis of Product Life Cycle Considering the Deterioration and Obsolescence of Product Value.” International Journal of Product Development 6.3–4 (2008): 334–52. Kroll, Jeri, and Donna Lee Brien. “Studying for the Future: Training Creative Writing Postgraduates For Life After Degrees.” Australian Online Journal of Arts Education 2.1 July (2006): 1–13. Laing, Stuart, and Tara Brabazon. “Creative Doctorates, Creative Education? Aligning Universities with the Creative Economy.” Nebula 4.2 (June 2007): 253–67. Lee, Alison, Marie Brennan, and Bill Green. “Re-imagining Doctoral Education: Professional Doctorates and Beyond.” Higher Education Research & Development 28.3 (2009): 275–87. Lee, Ho, and Jonghwa Lee. “A Theory of Economic Obsolescence.” The Journal of Industrial Economics 46.3 (Sep. 1998): 383–401. Lovitts, B. E., and C. Nelson. “The Hidden Crisis in Graduate Education: Attrition from Ph.D. Programs.” Academe 86.6 (2000): 44–50. Manathunga, Catherine, and Rod Wissler. “Generic Skill Development for Research Higher Degree Students: An Australian Example”. International Journal of Instructional Media 30.3 (2003): 233–46. Maxwell, T. W. 
“From First to Second Generation Professional Doctorate.” Studies in Higher Education 28.3 (2003): 279–91. McKelvey, Maureen, and Magnus Holmén. Eds. Learning to Compete in European Universities: From Social Institution to Knowledge Business. Cheltenham, UK: Edward Elgar Publishing, 2009. McWilliam, Erica, Alan Lawson, Terry Evans, and Peter G. Taylor. “‘Silly, Soft and Otherwise Suspect’: Doctoral Education as Risky Business”. Australian Journal of Education 49.2 (2005): 214–27. 4 May 2009 ‹http://eprints.qut.edu.au/archive/00004171›. McWilliam, Erica, Peter G. Taylor, P. Thomson, B. Green, T. W. Maxwell, H. Wildy, and D. Simmons. Research Training in Doctoral Programs: What Can Be Learned for Professional Doctorates? Evaluations and Investigations Programme 02/8. Canberra: Commonwealth of Australia, 2002. Nelson, Hank. “A Doctor in Every House: The PhD Then Now and Soon”. Occasional Paper GS93/3. Canberra: The Graduate School, Australian National University, 1993. 4 May 2009 ‹http://dspace.anu.edu.au/bitstream/1885/41552/1/GS93_3.pdf›. Neumann, Ruth. The Doctoral Education Experience: Diversity and Complexity. 03/12 Evaluations and Investigations Programme. Canberra: Department of Education, Science and Training, 2003. Noble, K. A. Changing Doctoral Degrees: An International Perspective. Buckingham: Society for Research into Higher Education, 1994. Park, Chris. Redefining the Doctorate: Discussion Paper. York: The Higher Education Academy, 2007. Pole, Christopher. “Technicians and Scholars in Pursuit of the PhD: Some Reflections on Doctoral Study.” Research Papers in Education 15 (2000): 95–111. Robins, Lisa M., and Peter J. Kanowski. “PhD by Publication: A Student’s Perspective”. Journal of Research Practice 4.2 (2008). 4 May 2009 ‹http://jrp.icaap.org›. Sheely, Stephen. “The First Among Equals: The PhD—Academic Standard or Historical Accident?”. 
Advancing International Perspectives: Proceedings of the Higher Education Research and Development Society of Australasia Conference, 1997. 654-57. 4 May 2009 ‹http://www.herdsa.org.au/wp-content/uploads/conference/1997/sheely01.pdf›. Texeira, Pedro, Ben Jongbloed, David Dill, and Alberto Amaral. Eds. Markets in Higher Education: Rhetoric or Reality? Dordrecht, the Netherlands: Kluwer, 2004. UK Council for Graduate Education (UKCGE). Professional Doctorates. Dudley: UKCGE, 2002. Unruh, Gregory C. “The Biosphere Rules.” Harvard Business Review Feb. 2008: 111–17. Usher, R. “A Diversity of Doctorates: Fitness for the Knowledge Economy?”. Higher Education Research & Development 21 (2002): 143–53. Valadkhani, Abbas, and Simon Ville. “A Disciplinary Analysis of the Contribution of Academic Staff to PhD Completions in Australian Universities”. International Journal of Business & Management Education 15.1 (2007): 1–22. Waldman, Michael. “A New Perspective on Planned Obsolescence.” The Quarterly Journal of Economics 108.1 (Feb. 1993): 273–83.
APA, Harvard, Vancouver, ISO, and other styles
31

Keogh, Samantha, Caroline Shelverton, Julie Flynn, Gabor Mihala, Saira Mathew, Karen M. Davies, Nicole Marsh, and Claire M. Rickard. "Implementation and evaluation of short peripheral intravenous catheter flushing guidelines: a stepped wedge cluster randomised trial." BMC Medicine 18, no. 1 (September 30, 2020). http://dx.doi.org/10.1186/s12916-020-01728-1.

Full text
Abstract:
Background

Peripheral intravenous catheters (PIVCs) are ubiquitous medical devices, crucial to providing essential fluids and drugs. However, post-insertion PIVC failure occurs frequently, likely due to inconsistent maintenance practice such as flushing. The aim of this implementation study was to evaluate the impact that a multifaceted intervention centred on short PIVC maintenance had on patient outcomes.

Methods

This single-centre, incomplete, stepped wedge, cluster randomised trial with an implementation period was undertaken at a quaternary hospital in Queensland, Australia. Eligible patients were from general medical and surgical wards, aged ≥ 18 years, and requiring a PIVC for > 24 h. Wards were the unit of randomisation and allocation was concealed until the time of crossover to the implementation phase. Patients, clinicians, and researchers were not masked but infections were adjudicated by a physician masked to allocation. Practice during the control period was standard care (variable practice with manually prepared flushes of 0.9% sodium chloride). The intervention group received education reinforcing practice guidelines (including administration with manufacturer-prepared pre-filled flush syringes). The primary outcome was all-cause PIVC failure (as a composite of occlusion, infiltration, dislodgement, phlebitis, and primary bloodstream or local infection). Analysis was by intention-to-treat.

Results

Between July 2016 and February 2017, 619 patients from 9 clusters (wards) were enrolled (control n = 306, intervention n = 313), with 617 patients comprising the intention-to-treat population. PIVC failure was 91 (30%) in the control and 69 (22%) in the intervention group (risk difference − 8%, 95% CI − 14 to − 1, p = 0.032). Total costs were lower in the intervention group. No serious adverse events related to study intervention occurred.

Conclusions

This study demonstrated the effectiveness of post-insertion PIVC flushing according to recommended guidelines. Evidence-based education, surveillance and products for post-insertion PIVC management are vital to improve patient outcomes.

Trial registration

Trial submitted for registration on 25 January 2016. Approved and retrospectively registered on 4 August 2016. Ref: ACTRN12616001035415.
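As a sanity check on the headline numbers above, the risk difference can be recomputed from the raw counts reported in the abstract. The sketch below is illustrative only: it uses a naive, unadjusted Wald interval for the difference of two proportions, whereas the trial's published 95% CI (−14% to −1%) was adjusted for clustering by ward, so the naive interval only approximates it.

```python
from math import sqrt

# Counts reported in the trial abstract (Keogh et al. 2020).
control_fail, control_n = 91, 306
interv_fail, interv_n = 69, 313

p1 = control_fail / control_n   # ~0.30 (30% failure, control)
p2 = interv_fail / interv_n     # ~0.22 (22% failure, intervention)
risk_difference = p2 - p1       # ~ -0.077, i.e. about -8 percentage points

# Naive (unadjusted) Wald 95% CI for a difference of two proportions.
# The published analysis accounted for ward-level clustering, so its
# reported CI will differ slightly from this simple calculation.
se = sqrt(p1 * (1 - p1) / control_n + p2 * (1 - p2) / interv_n)
ci_low = risk_difference - 1.96 * se
ci_high = risk_difference + 1.96 * se

print(f"risk difference: {risk_difference:.1%}")
print(f"naive 95% CI: ({ci_low:.1%}, {ci_high:.1%})")
```

The recomputed point estimate rounds to the reported −8%, and the naive interval lands close to the cluster-adjusted one, which is reassuring given the trial's modest cluster count.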
32

Brien, Donna Lee. "From Waste to Superbrand: The Uneasy Relationship between Vegemite and Its Origins." M/C Journal 13, no. 4 (August 18, 2010). http://dx.doi.org/10.5204/mcj.245.

Full text
Abstract:
This article investigates the possibilities for understanding waste as a resource, with a particular focus on understanding food waste as a food resource. It considers the popular yeast spread Vegemite within this frame. The spread’s origins in waste product, and how it has achieved and sustained its status as a popular symbol of Australia despite half a century of Australian gastro-multiculturalism and a marked public resistance to other recycling and reuse of food products, have not yet been a focus of study. The process of producing Vegemite from waste would seem to align with contemporary moves towards recycling food waste, and ensuring environmental sustainability and food security, yet even during times of austerity and environmental concern this has not provided the company with a viable marketing strategy. Instead, advertising copywriting and a recurrent cycle of product memorialisation have created a superbrand through focusing on Vegemite’s nutrient and nostalgic value.

John Scanlan notes that producing waste is a core feature of modern life, and what we dispose of as surplus to our requirements—whether this comprises material objects or more abstract products such as knowledge—reveals much about our society. In observing this, Scanlan asks us to consider the quite radical idea that waste is central to everything of significance to us: the “possibility that the surprising core of all we value results from (and creates even more) garbage (both the material and the metaphorical)” (9). Others have noted the ambivalent relationship we have with the waste we produce. C. T. Anderson notes that we are both creator and agent of its disposal.
It is our ambivalence towards waste, coupled with its ubiquity, that allows waste materials to be described so variously: negatively as garbage, trash and rubbish, or more positively as by-products, leftovers, offcuts, trimmings, and recycled.

This ambivalence is also crucial to understanding the affectionate relationship the Australian public have with Vegemite, a relationship that appears to exist in spite of the product’s unpalatable origins in waste. A study of Vegemite reveals that consumers can be comfortable with waste, even to the point of eating recycled waste, as long as that fact remains hidden and unmentioned. In Vegemite’s case not only has the product’s connection to waste been rendered invisible, it has been largely kept out of sight despite considerable media and other attention focusing on the product.

Recycling Food Waste into Food Product

Recent work such as Elizabeth Royte’s Garbage Land and Tristram Stuart’s Waste make waste uncomfortably visible, outlining how much waste, and food waste in particular, the Western world generates and how profligately this is disposed of. Their aim is clear: a call to less extravagant and more sustainable practices. The relatively recent interest in reducing our food waste has, of course, introduced more complexity into a simple linear movement from the creation of a food product, to its acquisition or purchase, and then to its consumption and/or its disposal. Moreover, the recycling, reuse and repurposing of what has previously been discarded as waste is reconfiguring the whole idea of what waste is, as well as what value it has. The initiatives that seem to offer the most promise are those that reconfigure the way waste is understood. However, it is not only the process of transforming waste from an abject nuisance into a valued product that is central here. It is also necessary to reconfigure people’s acculturated perceptions of, and reactions to, waste.
Food waste is generated during all stages of the food cycle: while the raw materials are being grown; while these are being processed; when the resulting food products are being sold; when they are prepared in the home or other kitchen; and when they are only partly consumed. Until recently, the food industry in the West almost universally produced large volumes of solid and liquid waste that not only posed problems of disposal and pollution for the companies involved, but also represented a reckless squandering of total food resources in terms of both nutrient content and valuable biomass for society at large. While this is currently changing, albeit slowly, the by-products of food processing were, and often are, dumped (Stuart). In best-case scenarios, various gardening, farming and industrial processes gather household and commercial food waste for use as animal feed or as components in fertilisers (Delgado et al; Wang et al). This might, on the surface, appear a responsible application of waste, yet the reality is that such food waste often includes perfectly good fruit and vegetables that are not quite the required size, shape or colour, meat trimmings and products (such as offal) that are completely edible but extraneous to processing need, and other high grade product that does not meet certain specifications—such as the mountains of bread crusts sandwich producers discard (Hickman), or food that is still edible but past its ‘sell by date.’ In the last few years, however, mounting public awareness over the issues of world hunger, resource conservation, and the environmental and economic costs associated with food waste has accelerated efforts to make sustainable use of available food supplies and to more efficiently recycle, recover and utilise such needlessly wasted food product. This has fed into and led to multiple new policies, instances of research into, and resultant methods for waste handling and treatment (Laufenberg et al). 
Most straightforwardly, this involves the use or sale of offcuts, trimmings and unwanted ingredients that are “often of prime quality and are only rejected from the production line as a result of standardisation requirements or retailer specification” from one process for use in another, in such processed foods as soups, baby food or fast food products (Henningsson et al. 505). At a higher level, such recycling seeks to reclaim any reusable substances of significant food value from what could otherwise be thought of as a non-usable waste product. Enacting this is largely dependent on two elements: an available technology and being able to obtain a price or other value for the resultant product that makes the process worthwhile for the recycler to engage in it (Laufenberg et al). An example of the latter is the use of dehydrated restaurant food waste as a feedstuff for finishing pigs, a reuse process with added value for all involved as this process produces both a nutritious food substance as well as a viable way of disposing of restaurant waste (Myer et al). In Japan, laws regarding food waste recycling, which are separate from those governing other organic waste, are ensuring that at least some of food waste is being converted into animal feed, especially for the pigs who are destined for human tables (Stuart). Other recycling/reuse is more complex and involves more lateral thinking, with the by-products from some food processing able to be utilised, for instance, in the production of dyes, toiletries and cosmetics (Henningsson et al), although many argue for the privileging of food production in the recycling of foodstuffs.

Brewing is one such process that has been in the reuse spotlight recently as large companies seek to minimise their waste product so as to be able to market their processes as sustainable.
In 2009, for example, the giant Foster’s Group (with over 150 brands of beer, wine, spirits and ciders) proudly claimed that it recycled or reused some 91.23% of 171,000 tonnes of operational waste, with only 8.77% of this going to landfill (Foster’s Group). The treatment and recycling of the massive amounts of water used for brewing, rinsing and cooling purposes (Braeken et al.; Fillaudeau et al.) is of significant interest, and is leading to research into areas ranging from the development of microbial fuel cells—where added bacteria consume the water-soluble brewing wastes, thereby cleaning the water as well as releasing chemical energy that is then converted into electricity (Lagan)—to the use of nutrient-rich wastewater as the carbon source for creating bioplastics (Yu et al.).

In order for the waste-recycling-reuse loop to be closed in the best way for securing food supplies, any new product salvaged and created from food waste has to be both usable, and used, as food (Stuart)—and preferably as a food source for people to consume. There is, however, considerable consumer resistance to such reuse. Resistance to reusing recycled water in Australia has been documented by the CSIRO, which identified negative consumer perception as one of the two primary impediments to water reuse, the other being the fundamental economics of the process (MacDonald & Dyack). This consumer aversion operates even in times of severe water shortages, and despite proof of the cleanliness and safety of the resulting treated water. There were higher consumer acceptance levels for using stormwater rather than recycled water, despite the treated stormwater being shown to have higher concentrations of contaminants (MacDonald & Dyack). This reveals the extent of public resistance to the potential consumption of recycled waste product when it is labelled as such, even when this consumption appears to benefit that public.
Vegemite: From Waste Product to Australian Icon

In this context, the savoury yeast spread Vegemite provides an example of how food processing waste can be repurposed into a new food product that can gain a high level of consumer acceptability. It has been able to retain this status despite half a century of Australian gastronomic multiculturalism and the wide embrace of a much broader range of foodstuffs. Indeed, Vegemite is so ubiquitous in Australian foodways that it is recognised as an international superbrand, a standing it has been able to maintain despite most consumers from outside Australasia finding it unpalatable (Rozin & Siegal). However, Vegemite’s long product history is one in which its origin as recycled waste has been omitted, or at the very least, consistently marginalised.

Vegemite’s history as a consumer product is narrated in a number of accounts, including one on the Kraft website, where the apocryphal and the actual blend. What all these narratives agree on is that in the early 1920s Fred Walker—of Fred Walker and Company, Melbourne, canners of meat for export and Australian manufacturers of Bonox branded beef stock beverage—asked his company chemist to emulate Marmite yeast extract (Farrer). The imitation product was based, as was Marmite, on the residue from spent brewer’s yeast. This waste was initially sourced from Melbourne-based Carlton & United Breweries, and flavoured with vegetables, spices and salt (Creswell & Trenoweth). Today, the yeast left after Foster Group’s Australian commercial beer making processes is collected, put through a sieve to remove hop resins, washed to remove any bitterness, then mixed with warm water. The yeast dies from the lack of nutrients in this environment, and enzymes then break down the yeast proteins with the effect that vitamins and minerals are released into the resulting solution.
Using centrifugal force, the yeast cell walls are removed, leaving behind a nutrient-rich brown liquid, which is then concentrated into a dark, thick paste using a vacuum process. This is seasoned with significant amounts of salt—although less today than before—and flavoured with vegetable extracts (Richardson).

Given its popularity—Vegemite was found in 2009 to be the third most popular brand in Australia (Brand Asset Consulting)—it is unsurprising to find that the product has a significant history as an object of study in popular culture (Fiske et al; White), as a marker of national identity (Ivory; Renne; Rozin & Siegal; Richardson; Harper & White) and as an iconic Australian food, brand and product (Cozzolino; Luck; Khamis; Symons). Jars, packaging and product advertising are collected by Australian institutions such as Sydney’s Powerhouse Museum and the National Museum of Australia in Canberra, and are regularly included in permanent and travelling exhibitions profiling Australian brands and investigating how a sense of national identity is expressed through identification with these brands. All of this significant study largely focuses on how, when and by whom the product has been taken up, and how it has been consumed, rather than its links to waste, and what this circumstance could add to current thinking about recycling of food waste into other food products.

It is worth noting that Vegemite was not an initial success in the Australian marketplace, but this does not seem due to an adverse public perception of waste. Indeed, when it was first produced it was in imitation of an already popular product well-known to be made from brewery by-products, hence this origin was not an issue. It was also introduced during a time when consumer relationships to waste were quite unlike today, and thrifty re-use was a common feature of household behaviour.
Despite a national competition mounted to name the product (Richardson), Marmite continued to attract more purchasers after Vegemite’s launch in 1923, so much so that in 1928, in an attempt to differentiate itself from Marmite, Vegemite was renamed “Parwill—the all Australian product” (punning on the idea that “Ma-might” but “Pa-will”) (White 16). When this campaign was unsuccessful, the original, consumer-suggested name was reinstated, but sales still lagged behind its UK-owned prototype. It was only after remaining in production for more than a decade, and after two successful marketing campaigns in the second half of the 1930s, that the Vegemite brand gained some market traction. The first of these was in 1935 and 1936, when a free jar of Vegemite was offered with every sale of an item from the relatively extensive Kraft-Walker product list (after Walker’s company merged with Kraft) (White). The second was an attention-grabbing contest held in 1937, which invited consumers to compose Vegemite-inspired limericks. However, it was not the nature of the product itself or even the task set by the competition which captured mass attention, but the prize of a desirable, exotic and valuable imported Pontiac car (Richardson 61; Superbrands).

Since that time, multinational media company J Walter Thompson (now rebranded as JWT) has continued to manage Vegemite’s marketing. JWT’s marketing has never looked to Vegemite’s status as a thrifty recycler of waste as a viable marketing strategy, even in periods of austerity (such as the Depression years and the Second World War) or in more recent times of environmental concern. Instead, advertising copywriting and a recurrent cycle of cultural/media memorialisation have created a superbrand by focusing on two factors: its nutrient value and, as the brand became more established, its status as national icon.
Throughout the regular noting and celebration of anniversaries of its initial invention and launch, with various commemorative events and products marking each of these product ‘birthdays,’ Vegemite’s status as recycled waste product has never been more than mentioned. Even when its 60th anniversary was marked in 1983 with the laying of a permanent plaque in Kerferd Road, South Melbourne, opposite Walker’s original factory, there was only the most passing reference to how, and from what, the product manufactured at the site was made. This remained the case when the site itself was prioritised for heritage listing almost twenty years later in 2001 (City of Port Phillip).

Shying away from the reality of this successful example of recycling food waste into food was still the case in 1990, when Kraft Foods held a nationwide public campaign to recover past styles of Vegemite containers and packaging, and then donated their collection to Powerhouse Museum. The Powerhouse then held an exhibition of the receptacles and the historical promotional material in 1991, tracing the development of the product’s presentation (Powerhouse Museum), an occasion that dovetailed with other nostalgic commemorative activities around the product’s 70th birthday. Although the production process was noted in the exhibition, it is noteworthy that the possibilities for recycling a number of the styles of jars, as either containers with reusable lids or as drinking glasses, were given considerably more notice than the product’s origins as a recycled product. By this time, it seems, Vegemite had become so incorporated into Australian popular memory as a product in its own right, and with such a rich nostalgic history, that its origins were no longer of any significant interest or relevance.

This disregard continued in the commemorative volume, The Vegemite Cookbook.
With some ninety recipes and recipe ideas, the collection contains an almost unimaginably wide range of ways to use Vegemite as an ingredient. There are recipes on how to make the definitive Vegemite toast soldiers and Vegemite crumpets, as well as adaptations of foreign cuisines including pastas and risottos, stroganoffs, tacos, chilli con carne, frijole dip, marinated beef “souvlaki style,” “Indian-style” chicken wings, curries, Asian stir-fries, Indonesian gado-gado and a number of Chinese inspired dishes. Although the cookbook includes a timeline of product history illustrated with images from the major advertising campaigns that runs across 30 pages of the book, this timeline history emphasises the technological achievement of Vegemite’s creation, as opposed to the matter from which it originated: “In a Spartan room in Albert Park Melbourne, 20 year-old food technologist Cyril P. Callister employed by Fred Walker, conducted initial experiments with yeast. His workplace was neither kitchen nor laboratory. … It was not long before this rather ordinary room yielded an extra-ordinary substance” (2). The Big Vegemite Party Book, described on its cover as “a great book for the Vegemite fan … with lots of old advertisements from magazines and newspapers,” is even more openly nostalgic, but similarly includes very little regarding Vegemite’s obviously potentially unpalatable genesis in waste.

Such commemorations have continued into the new century, each one becoming more self-referential and more obviously a marketing strategy. In 2003, Vegemite celebrated its 80th birthday with the launch of the “Spread the Smile” campaign, seeking to record the childhood reminiscences of adults who loved Vegemite. After this, the commemorative anniversaries broke free from even the date of its original invention and launch, and began to celebrate other major dates in the product’s life.
In this way, Kraft made major news headlines when it announced that it was trying to locate the children who featured in the 1954 “Happy little Vegemites” campaign as part of the company’s celebrations of the 50th anniversary of the television advertisement. In October 2006, these once child actors joined a number of past and current Kraft employees to celebrate the supposed production of the one-billionth jar of Vegemite (Rood, "Vegemite Spreads" & "Vegemite Toasts") but, once again, little about the actual production process was discussed. In 2007, the then iconic marching band image was resituated into a contemporary setting—presumably to mobilise both the original messages (nutritious wholesomeness in an Australian domestic context) as well as its heritage appeal. Despite the real interest at this time in recycling and waste reduction, the silence over Vegemite’s status as recycled, repurposed food waste product continued.

Concluding Remarks: Towards Considering Waste as a Resource

In most parts of the Western world, including Australia, food waste is formally (in policy) and informally (by consumers) classified, disposed of, or otherwise treated alongside garden waste and other organic materials. Disposal by individuals, industry or local governments includes a range of options, from dumping to composting or breaking down in anaerobic digestion systems into materials for fertiliser, with food waste given no special status or priority. Despite current concerns regarding the security of food supplies in the West and decades of recognising that there are sections of all societies where people do not have enough to eat, it seems that recycling food waste into food that people can consume remains one of the last and least palatable solutions to these problems.
This brief study of Vegemite has attempted to show how, despite the growing interest in recycling and sustainability, the focus in both the marketing of, and public interest in, this iconic and popular product appears to remain rooted in Vegemite’s nutrient and nostalgic value and its status as a brand, and firmly away from any suggestion of innovative and prudent reuse of waste product. That this is so for an already popular product suggests that any initiatives that wish to move in this direction must first reconfigure not only the way waste itself is seen—as a valuable product to be used, rather than as a troublesome nuisance to be disposed of—but also our own understandings of, and reactions to, waste itself.

Acknowledgements

Many thanks to the reviewers for their perceptive, useful, and generous comments on this article. All errors are, of course, my own. The research for this work was carried out with funding from the Faculty of Arts, Business, Informatics and Education, CQUniversity, Australia.

References

Anderson, C. T. “Sacred Waste: Ecology, Spirit, and the American Garbage Poem.” Interdisciplinary Studies in Literature and Environment 17 (2010): 35-60.
Blake, J. The Vegemite Cookbook: Delicious Recipe Ideas. Melbourne: Ark Publishing, 1992.
Braeken, L., B. Van der Bruggen, and C. Vandecasteele. “Regeneration of Brewery Waste Water Using Nanofiltration.” Water Research 38.13 (July 2004): 3075-82.
City of Port Phillip. “Heritage Recognition Strategy”. Community and Services Development Committee Agenda, 20 Aug. 2001.
Cozzolino, M. Symbols of Australia. Ringwood: Penguin, 1980.
Creswell, T., and S. Trenoweth. “Cyril Callister: The Happiest Little Vegemite”. 1001 Australians You Should Know. North Melbourne: Pluto Press, 2006. 353-4.
Delgado, C. L., M. Rosegrant, H. Steinfeld, S. Ehui, and C. Courbois. Livestock to 2020: The Next Food Revolution. Food, Agriculture, and the Environment Discussion Paper 28. Washington, D.C.: International Food Policy Research Institute, 2009.
Farrer, K. T. H. “Callister, Cyril Percy (1893-1949)”. Australian Dictionary of Biography 7. Melbourne: Melbourne University Press, 1979. 527-8.
Fillaudeau, L., P. Blanpain-Avet, and G. Daufin. “Water, Wastewater and Waste Management in Brewing Industries”. Journal of Cleaner Production 14.5 (2006): 463-71.
Fiske, J., B. Hodge, and G. Turner. Myths of Oz: Reading Australian Popular Culture. Sydney: Allen & Unwin, 1987.
Foster’s Group Limited. Transforming Fosters: Sustainability Report 2009. 16 June 2010 ‹http://fosters.ice4.interactiveinvestor.com.au/Fosters0902/2009SustainabilityReport/EN/body.aspx?z=1&p=-1&v=2&uid›.
George Patterson Young and Rubicam (GPYR). Brand Asset Valuator, 2009. 6 Aug. 2010 ‹http://www.brandassetconsulting.com/›.
Harper, M., and R. White. Symbols of Australia. Sydney: UNSW Press, 2010.
Henningsson, S., K. Hyde, A. Smith, and M. Campbell. “The Value of Resource Efficiency in the Food Industry: A Waste Minimisation Project in East Anglia, UK”. Journal of Cleaner Production 12.5 (June 2004): 505-12.
Hickman, M. “Exposed: The Big Waste Scandal”. The Independent 9 July 2009. 18 June 2010 ‹http://www.independent.co.uk/life-style/food-and-drink/features/exposed-the-big-waste-scandal-1737712.html›.
Ivory, K. “Australia’s Vegemite”. Hemispheres (Jan. 1998): 83-5.
Khamis, S. “Buy Australiana: Diggers, Drovers and Vegemite”. Write/Up. Eds. E. Hartrick, R. Hogg and S. Supski. St Lucia: API Network and UQP, 2004. 121-30.
Lagan, B. “Australia Finds a New Power Source—Beer”. The Times 5 May 2007. 18 June 2010 ‹http://www.timesonline.co.uk/tol/news/science/article1749835.ece›.
Laufenberg, G., B. Kunz, and M. Nystroem. “Transformation of Vegetable Waste into Value Added Products: (A) The Upgrading Concept; (B) Practical Implementations [review paper].” Bioresource Technology 87 (2003): 167-98.
Luck, P. Australian Icons: Things That Make Us What We Are. Melbourne: William Heinemann Australia, 1992.
MacDonald, D. H., and B. Dyack. Exploring the Institutional Impediments to Conservation and Water Reuse—National Issues: Report for the Australian Water Conservation and Reuse Research Program. CSIRO Land and Water, March 2004.
Myer, R. O., J. H. Brendemuhl, and D. D. Johnson. “Evaluation of Dehydrated Restaurant Food Waste Products as Feedstuffs for Finishing Pigs”. Journal of Animal Science 77.3 (1999): 685-92.
Pittaway, M. The Big Vegemite Party Book. Melbourne: Hill of Content, 1992.
Powerhouse Museum. Collection & Research. 16 June 2010.
Renne, E. P. “All Right, Vegemite!: The Everyday Constitution of an Australian National Identity”. Visual Anthropology 6.2 (1993): 139-55.
Richardson, K. “Vegemite, Soldiers, and Rosy Cheeks”. Gastronomica 3.4 (Fall 2003): 60-2.
Rood, D. “Vegemite Spreads the News of a Happy Little Milestone”. Sydney Morning Herald 6 Oct. 2008. 16 March 2010 ‹http://www.smh.com.au/news/national/vegemite-spreads-the-news-of-a-happy-little-milestone/2008/10/05/1223145175371.html›.
———. “Vegemite Toasts a Billion Jars”. The Age 6 Oct. 2008. 16 March 2010 ‹http://www.theage.com.au/national/vegemite-toasts-a-billion-jars-20081005-4uc1.html›.
Royte, E. Garbage Land: On the Secret Trail of Trash. New York: Back Bay Books, 2006.
Rozin, P., and M. Siegal. “Vegemite as a Marker of National Identity”. Gastronomica 3.4 (Fall 2003): 63-7.
Scanlan, J. On Garbage. London: Reaktion Books, 2005.
Stuart, T. Waste: Uncovering the Global Food Scandal. New York: W. W. Norton & Company, 2009.
Superbrands. Superbrands: An Insight into Many of Australia’s Most Trusted Brands. Vol. IV. Ingleside, NSW: Superbrands, 2004.
Symons, M. One Continuous Picnic: A History of Eating in Australia. Ringwood: Penguin Books, 1982.
Wang, J., O. Stabnikova, V. Ivanov, S. T. Tay, and J. Tay. “Intensive Aerobic Bioconversion of Sewage Sludge and Food Waste into Fertiliser”. Waste Management & Research 21 (2003): 405-15.
White, R. S. “Popular Culture as the Everyday: A Brief Cultural History of Vegemite”. Australian Popular Culture. Ed. I. Craven. Cambridge: Cambridge UP, 1994. 15-21.
Yu, P. H., H. Chua, A. L. Huang, W. Lo, and G. Q. Chen. “Conversion of Food Industrial Wastes into Bioplastics”. Applied Biochemistry and Biotechnology 70-72.1 (March 1998): 603-14.
33

Green, Lelia. "No Taste for Health: How Tastes are Being Manipulated to Favour Foods that are not Conducive to Health and Wellbeing." M/C Journal 17, no. 1 (March 17, 2014). http://dx.doi.org/10.5204/mcj.785.

Full text
Abstract:
Background

“The sense of taste,” write Nelson and colleagues in a 2002 issue of Nature, “provides animals with valuable information about the nature and quality of food. Mammals can recognize and respond to a diverse repertoire of chemical entities, including sugars, salts, acids and a wide range of toxic substances” (199). The authors go on to argue that several amino acids—the building blocks of proteins—taste delicious to humans and that “having a taste pathway dedicated to their detection probably had significant evolutionary implications”. They imply, but do not specify, that the evolutionary implications are positive. This may be the case with some amino acids, but contemporary tastes, and changes in them, are far from universally beneficial. Indeed, this article argues that modern food production shapes and distorts human taste with significant implications for health and wellbeing. Take the western taste for fried chipped potatoes, for example. According to Schlosser in Fast Food Nation, “In 1960, the typical American ate eighty-one pounds of fresh potatoes and about four pounds of frozen french fries. Today [2002] the typical American eats about forty-nine pounds of fresh potatoes every year—and more than thirty pounds of frozen french fries” (115). Nine-tenths of these chips are consumed in fast food restaurants which use mass-manufactured potato-based frozen products to provide this major “foodservice item” more quickly and cheaply than the equivalent dish prepared from raw ingredients. These choices, informed by human taste buds, have negative evolutionary implications, as does the apparently long-lasting consumer preference for fried goods cooked in trans-fats. “Numerous foods acquire their elastic properties (i.e., snap, mouth-feel, and hardness) from the colloidal fat crystal network comprised primarily of trans- and saturated fats.
These hardstock fats contribute, along with numerous other factors, to the global epidemics related to metabolic syndrome and cardiovascular disease,” argues Michael A. Rogers (747). Policy makers and public health organisations continue to compare notes internationally about the best ways in which to persuade manufacturers and fast food purveyors to reduce the use of these trans-fats in their products (L’Abbé et al.); however, most manufacturers resist. Hank Cardello, a former fast food executive, argues that “many products are designed for ‘high hedonic value’, with carefully balanced combinations of salt, sugar and fat that, experience has shown, induce people to eat more” (quoted, Trivedi 41). Fortunately for the manufactured food industry, salt and sugar also help to preserve food, effectively prolonging the shelf life of pre-prepared and packaged goods.

Physiological Factors

As Glanz et al. discovered when surveying 2,967 adult Americans, “taste is the most important influence on their food choices, followed by cost” (1118). A person’s taste is to some extent an individual response to food stimuli, but the tongue’s taste buds respond to five basic categories of food: salty, sweet, sour, bitter, and umami. ‘Umami’ is a Japanese word indicating “delicious savoury taste” (Coughlan 11) and it is triggered by the amino acid glutamate. Japanese professor Kikunae Ikeda identified glutamate while investigating the taste of a particular seaweed which he believed was neither sweet, sour, bitter, nor salty. When Ikeda combined the glutamate taste essence with sodium he formed the food additive sodium glutamate, which was patented in 1908 and subsequently went into commercial production (Japan Patent Office). Although individual, a person’s taste preferences are by no means fixed. There is ample evidence that people’s tastes are being distorted by modern food marketing practices that process foods to make them increasingly appealing to the average palate.
In particular, this industrialisation of food promotes the growth of a snack market driven by salty and sugary foods, popularly constructed as posing a threat to health and wellbeing. “[E]xpanding waistlines [are] fuelled by a boom in fast food and a decline in physical activity” writes Stark, who reports upon the 2008 launch of a study into Australia’s future ‘fat bomb’. As Deborah Lupton notes, such reports were a particular feature of the mid 2000s when: intense concern about the ‘obesity epidemic’ intensified and peaked. Time magazine named 2004 ‘The Year of Obesity’. That year the World Health Organization’s Global Strategy on Diet, Physical Activity and Health was released and the [US] Centers for Disease Control predicted that a poor diet and lack of exercise would soon claim more lives than tobacco-related disease in the United States. (4) The American Heart Association recommends eating no more than 1500mg of salt per day (Hamzelou 11) but salt consumption in the USA averages more than twice this quantity, at 3500mg per day (Bernstein and Willett 1178). In the UK, a sustained campaign and public health-driven engagement with food manufacturers by CASH—Consensus Action on Salt and Health—resulted in a reduction of between 30 and 40 percent of added salt in processed foods between 2001 and 2011, with a knock-on 15 percent decline in the UK population’s salt intake overall. This is the largest reduction achieved by any developed nation (Brinsden et al.). “According to the [UK’s] National Institute for Health and Care Excellence (NICE), this will have reduced [UK] stroke and heart attack deaths by a minimum of 9,000 per year, with a saving in health care costs of at least £1.5bn a year” (MacGregor and Pombo). Whereas there has been some success over the past decade in reducing the amount of salt consumed, in the Western world the consumption of sugar continues to rise, as a graph cited in the New Scientist indicates (O’Callaghan). 
Regular warnings that sugar is associated with a range of health threats and delivers empty calories devoid of nutrition have failed to halt the increase in sugar consumption. Further, although some sugar is a natural product, processed foods tend to use a form invented in 1957: high-fructose corn syrup (HFCS). “HFCS is a gloopy solution of glucose and fructose” writes O’Callaghan, adding that it is “as sweet as table sugar but has typically been about 30% cheaper”. She cites Serge Ahmed, a French neuroscientist, as arguing that in a world of food sufficiency people do not need to consume more, so they need to be enticed to overeat by making food more pleasurable. Ahmed was part of a team that ran an experiment with cocaine-addicted rats, offering them a mutually exclusive choice between highly-sweetened water and cocaine: Our findings clearly indicate that intense sweetness can surpass cocaine reward, even in drug-sensitized and -addicted individuals. We speculate that the addictive potential of intense sweetness results from an inborn hypersensitivity to sweet tastants. In most mammals, including rats and humans, sweet receptors evolved in ancestral environments poor in sugars and are thus not adapted to high concentrations of sweet tastants. The supranormal stimulation of these receptors by sugar-rich diets, such as those now widely available in modern societies, would generate a supranormal reward signal in the brain, with the potential to override self-control mechanisms and thus lead to addiction. (Lenoir et al.)

The Tongue and the Brain

One of the implications of this research about the mammalian desire for sugar is that our taste for food is about more than how these foods actually taste in the mouth on our tongues. It is also about the neural response to the food we eat. The taste of French fries thus also includes that “snap, mouth-feel, and hardness” and the “colloidal fat crystal network” (Rogers, “Novel Structuring” 747).
While there is no taste receptor for fats, these nutrients have important effects upon the brain. Wang et al. offered rats a highly fatty, but palatable, diet and allowed them to eat freely. 33 percent of the calories in the food were delivered via fat, compared with 21 percent in a normal diet. The animals almost doubled their usual calorific intake, both because the food had a 37 percent increased calorific content and also because the rats ate 47 percent more than was standard (2786). The research team discovered that in as little as three days the rats “had already lost almost all of their ability to respond to leptin” (Martindale 27). Leptin is a hormone that acts on the brain to communicate feelings of fullness, and is thus important in assisting animals to maintain a healthy body weight. The rats had also become insulin resistant. “Severe resistance to the metabolic effects of both leptin and insulin ensued after just 3 days of overfeeding” (Wang et al. 2786). Fast food restaurants typically offer highly palatable, high fat, high sugar, high salt, calorific foods which can deliver 130 percent of a day’s recommended fat intake, and almost a day’s worth of an adult man’s calories, in one meal. The impacts of maintaining such a diet over a comparatively short time-frame have been recorded in documentaries such as Super Size Me (Spurlock). The after effects of what we widely call “junk food” are also evident in rat studies. Neuroscientist Paul Kenny, who like Ahmed was investigating possible similarities between food- and cocaine-addicted rats, allowed his animals unlimited access to both rat ‘junk food’ and healthy food for rats. He then changed their diets. “The rats with unlimited access to junk food essentially went on a hunger strike. ‘It was as if they had become averse to healthy food’, says Kenny. It took two weeks before the animals began eating as much [healthy food] as those in the control group” (quoted, Trivedi 40). 
Developing a taste for certain foods is consequently about much more than how they taste in the mouth; it constitutes an individual’s response to a mixture of taste, hormonal reactions and physiological changes.

Choosing Health

Glanz et al. conclude their study by commenting that “campaigns attempting to change people’s perception of the importance of nutrition will be interpreted in terms of existing values and beliefs. A more promising strategy might be to stress the good taste of healthful foods” (1126). Interestingly, this is the strategy already adopted by some health-focused cookbooks. I have 66 cookery books in my kitchen. None of ten books sampled from the five spaces in which these books are kept had ‘taste’ as an index entry, but three books had ‘taste’ in their titles: The Higher Taste, Taste of Life, and The Taste of Health. All three books seek to promote healthy eating, and they all date from the mid-1980s. It might be that taste is not mentioned in cookbook indexes because it is a sine qua non: a focus upon taste is so necessary and fundamental to a cookbook that it goes without saying. Yet, as the physiological evidence makes clear, what we find palatable is highly mutable, varying between people, and capable of changing significantly in comparatively short periods of time. The good news from the research studies is that the changes wrought by high salt, high sugar, high fat diets need not be permanent. Luciano Rossetti, one of the authors on Wang et al.’s paper, told Martindale that the physiological changes are reversible, but added a note of caution: “the fatter a person becomes the more resistant they will be to the effects of leptin and the harder it is to reverse those effects” (27). Morgan Spurlock’s experience also indicates this. In his case it took the actor/director 14 months to lose the 11.1 kg (13 percent of his body mass) that he gained in the 30 days of his fast-food-only experiment.
Trivedi was more fortunate, stating that, “After two weeks of going cold turkey, I can report I have successfully kicked my ice cream habit” (41). A reader’s letter in response to Trivedi’s article echoes this observation. She writes that “the best way to stop the craving was to switch to a diet of vegetables, seeds, nuts and fruits with a small amount of fish”, adding that “cravings stopped in just a week or two, and the diet was so effective that I no longer crave junk food even when it is in front of me” (Mackeown). Popular culture indicates a range of alternative ways to resist food manufacturers. In the West, there is a growing emphasis on organic farming methods and produce (Guthman), on so-called Urban Agriculture in the inner cities (Mason and Knowd), on farmers’ markets, where consumers can meet the producers of the food they eat (Guthrie et al.), and on the work of advocates of ‘real’ food, such as Jamie Oliver (Warin). Food and wine festivals promote gourmet tourism along with an emphasis upon the quality of the food consumed, and consumption as a peak experience (Hall and Sharples), while environmental perspectives prompt awareness of ‘food miles’ (Weber and Matthews), fair trade (Getz and Shreck) and of land degradation, animal suffering, and the inequitable use of resources in the creation of the everyday Western diet (Dare, Costello and Green). The burgeoning of these different approaches has helped to stimulate a commensurate growth in relevant disciplinary fields such as Food Studies (Wessell and Brien). One thing that all these new ways of looking at food and taste have in common is that they are options for people who feel they have the right to choose what and when to eat; and to consume the tastes they prefer. This is not true of all groups of people in all countries.
Hiding behind the public health campaigns that encourage people to exercise and eat fresh fruit and vegetables are the hidden “social determinants of health: The conditions in which people are born, grow, live, work and age, including the health system” (WHO 45). As the definitions explain, it is the “social determinants of health [that] are mostly responsible for health inequities” with evidence from all countries around the world demonstrating that “in general, the lower an individual’s socioeconomic position, the worse his or her health” (WHO 45). For the comparatively disadvantaged, it may not be the taste of fast food that attracts them but the combination of price and convenience. If there is no ready access to cooking facilities, or safe food storage, or if a caregiver is simply too time-poor to plan and prepare meals for a family, junk food becomes a sensible choice and its palatability an added bonus. For those with the education, desire, and opportunity to break free of the taste for salty and sugary fats, however, there are a range of strategies to achieve this. There is a persuasive array of evidence that embracing a plant-based diet confers a multitude of health benefits for the individual, for the planet and for the animals whose lives and welfare would otherwise be sacrificed to feed us (Green, Costello and Dare). Such a choice does involve losing the taste for foods which make up the lion’s share of the Western diet, but any sense of deprivation only lasts for a short time. The fact is that our sense of taste responds to the stimuli offered. It may be that, notwithstanding the desires of Jamie Oliver and the like, a particular child will never get to like broccoli, but it is also the case that broccoli tastes differently to me, seven years after becoming a vegan, than it ever did in the years in which I was omnivorous.
When people tell me that they would love to adopt a plant-based diet but could not possibly give up cheese, it is difficult to reassure them that the pleasure they get now from that specific cocktail of salty fats will be more than compensated for by the sheer exhilaration of eating crisp, fresh fruits and vegetables in the future.

Conclusion

For decades, the mass market food industry has tweaked their products to make them hyper-palatable and difficult to resist. They do this through marketing experiments and consumer behaviour research, schooling taste buds and brains to anticipate and relish specific cocktails of sweet fats (cakes, biscuits, chocolate, ice cream) and salty fats (chips, hamburgers, cheese, salted nuts). They add ingredients to make these products stimulate taste buds more effectively, while also producing cheaper items with longer life on the shelves, reducing spoilage and the complexity of storage for retailers. Consumers are trained to like the tastes of these foods. Bitter, sour, and umami receptors are comparatively under-stimulated, with sweet, salty, and fat-based tastes favoured in their place. Western societies pay the price for this learned preference in high blood pressure, high cholesterol, diabetes, and obesity. Public health advocate Bruce Neal and colleagues, working to reduce added salt in processed foods, note that the food and manufacturing industries can now provide most of the calories that the world needs to survive. “The challenge now”, they argue, “is to have these same industries provide foods that support long and healthy adult lives. And in this regard there remains a very considerable way to go”.
If the public were to believe that their sense of taste is mutable and has been distorted for corporate and industrial gain, and if they were to demand greater access to natural foods in their unprocessed state, then that journey towards a healthier future might be far less protracted than these and many other researchers seem to believe.

References

Bernstein, Adam, and Walter Willett. “Trends in 24-Hr Sodium Excretion in the United States, 1957–2003: A Systematic Review.” American Journal of Clinical Nutrition 92 (2010): 1172–1180.
Bhaktivedanta Book Trust. The Higher Taste: A Guide to Gourmet Vegetarian Cooking and a Karma-Free Diet, over 60 Famous Hare Krishna Recipes. Botany, NSW: Bhaktivedanta Book Trust, 1987.
Brinsden, Hannah C., Feng J. He, Katharine H. Jenner, & Graham A. MacGregor. “Surveys of the Salt Content in UK Bread: Progress Made and Further Reductions Possible.” British Medical Journal Open 3.6 (2013). 2 Feb. 2014 ‹http://bmjopen.bmj.com/content/3/6/e002936.full›.
Coughlan, Andy. “In Good Taste.” New Scientist 2223 (2000): 11.
Dare, Julie, Leesa Costello, and Lelia Green. “Nutritional Narratives: Examining Perspectives on Plant Based Diets in the Context of Dominant Western Discourse”. Proceedings of the 2013 Australian and New Zealand Communication Association Conference. Ed. Terence Lee, Kathryn Trees, and Renae Desai. Fremantle, Western Australia, 3-5 Jul. 2013. 2 Feb. 2014 ‹http://www.anzca.net/conferences/past-conferences/159.html›.
Getz, Christy, and Aimee Shreck. “What Organic and Fair Trade Labels Do Not Tell Us: Towards a Place‐Based Understanding of Certification.” International Journal of Consumer Studies 30.5 (2006): 490–501.
Glanz, Karen, Michael Basil, Edward Maibach, Jeanne Goldberg, & Dan Snyder. “Why Americans Eat What They Do: Taste, Nutrition, Cost, Convenience, and Weight Control Concerns as Influences on Food Consumption.” Journal of the American Dietetic Association 98.10 (1998): 1118–1126.
Green, Lelia, Leesa Costello, and Julie Dare. “Veganism, Health Expectancy, and the Communication of Sustainability.” Australian Journal of Communication 37.3 (2010): 87–102.
Guthman, Julie. Agrarian Dreams: The Paradox of Organic Farming in California. Berkeley and Los Angeles, CA: U of California P, 2004.
Guthrie, John, Anna Guthrie, Rob Lawson, & Alan Cameron. “Farmers’ Markets: The Small Business Counter-Revolution in Food Production and Retailing.” British Food Journal 108.7 (2006): 560–573.
Hall, Colin Michael, and Liz Sharples. Eds. Food and Wine Festivals and Events Around the World: Development, Management and Markets. Oxford, UK: Routledge, 2008.
Hamzelou, Jessica. “Taste Bud Trickery Needed to Cut Salt Intake.” New Scientist 2799 (2011): 11.
Japan Patent Office. History of Industrial Property Rights, Ten Japanese Great Inventors: Kikunae Ikeda: Sodium Glutamate. Tokyo: Japan Patent Office, 2002.
L’Abbé, Mary R., S. Stender, C. M. Skeaff, Ghafoorunissa, & M. Tavella. “Approaches to Removing Trans Fats from the Food Supply in Industrialized and Developing Countries.” European Journal of Clinical Nutrition 63 (2009): S50–S67.
Lenoir, Magalie, Fuschia Serre, Lauriane Cantin, & Serge H. Ahmed. “Intense Sweetness Surpasses Cocaine Reward.” PLOS One (2007). 2 Feb. 2014 ‹http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0000698›.
Lupton, Deborah. Fat. Oxford, UK: Routledge, 2013.
MacGregor, Graham, and Sonia Pombo. “The Amount of Hidden Sugar in Your Diet Might Shock You.” The Conversation 9 January (2014). 2 Feb. 2014 ‹http://theconversation.com/the-amount-of-hidden-sugar-in-your-diet-might-shock-you-21867›.
Mackeown, Elizabeth. “Cold Turkey?” [Letter]. New Scientist 2787 (2010): 31.
Martindale, Diane. “Burgers on the Brain.” New Scientist 2380 (2003): 26–29.
Mason, David, and Ian Knowd. “The Emergence of Urban Agriculture: Sydney, Australia.” The International Journal of Agricultural Sustainability 8.1–2 (2010): 62–71.
Neal, Bruce, Jacqui Webster, and Sebastien Czernichow. “Sanguine About Salt Reduction.” European Journal of Preventative Cardiology 19.6 (2011): 1324–1325.
Nelson, Greg, Jayaram Chandrashekar, Mark A. Hoon, Luxin Feng, Grace Zhao, Nicholas J. P. Ryba, & Charles S. Zuker. “An Amino-Acid Taste Receptor.” Nature 416 (2002): 199–202.
O’Callaghan, Tiffany. “Sugar on Trial: What You Really Need to Know.” New Scientist 2954 (2014): 34–39.
Rogers, Jenny. Ed. The Taste of Health: The BBC Guide to Healthy Cooking. London, UK: British Broadcasting Corporation, 1985.
Rogers, Michael A. “Novel Structuring Strategies for Unsaturated Fats—Meeting the Zero-Trans, Zero-Saturated Fat Challenge: A Review.” Food Research International 42.7 (2009): 747–753.
Schlosser, Eric. Fast Food Nation. London, UK: Penguin, 2002.
Super Size Me. Dir. Morgan Spurlock. Samuel Goldwyn Films, 2004.
Stafford, Julie. Taste of Life. Richmond, Vic: Greenhouse Publications Ltd, 1983.
Stark, Jill. “Australia Now World’s Fattest Nation.” The Age 20 June (2008). 2 Feb. 2014 ‹http://www.theage.com.au/news/health/australia-worlds-fattest-nation/2008/06/19/1213770886872.html›.
Trivedi, Bijal. “Junkie Food: Tastes That Your Brain Cannot Resist.” New Scientist 2776 (2010): 38–41.
Wang, Jiali, Silvana Obici, Kimyata Morgan, Nir Barzilai, Zhaohui Feng, & Luciano Rossetti. “Overfeeding Rapidly Induces Leptin and Insulin Resistance.” Diabetes 50.12 (2001): 2786–2791.
Warin, Megan. “Foucault’s Progeny: Jamie Oliver and the Art of Governing Obesity.” Social Theory & Health 9.1 (2011): 24–40.
Weber, Christopher L., and H. Scott Matthews. “Food-Miles and the Relative Climate Impacts of Food Choices in the United States.” Environmental Science & Technology 42.10 (2008): 3508–3513.
Wessell, Adele, and Donna Lee Brien. Eds. Rewriting the Menu: The Cultural Dynamics of Contemporary Food Choices. Special Issue 9, TEXT: Journal of Writing and Writing Programs October 2010.
World Health Organisation.
Closing the Gap: Policy into Practice on Social Determinants of Health [Discussion Paper]. Rio de Janeiro, Brazil: World Conference on Social Determinants of Health, World Health Organisation, 19–21 October 2011.

Musgrove, Brian Michael. "Recovering Public Memory: Politics, Aesthetics and Contempt." M/C Journal 11, no. 6 (November 28, 2008). http://dx.doi.org/10.5204/mcj.108.

Full text
Abstract:
1. Guy Debord in the Land of the Long Weekend

It’s the weekend – leisure time. It’s the interlude when, Guy Debord contends, the proletarian is briefly free of the “total contempt so clearly built into every aspect of the organization and management of production” in commodity capitalism; when workers are temporarily “treated like grown-ups, with a great show of solicitude and politeness, in their new role as consumers.” But this patronising show turns out to be another form of subjection to the diktats of “political economy”: “the totality of human existence falls under the regime of the ‘perfected denial of man’” (30). As Debord suggests, even the creation of leisure time and space is predicated upon a form of contempt: the “perfected denial” of who we, as living people, really are in the eyes of those who presume the power to legislate our working practices and private identities.

This Saturday The Weekend Australian runs an opinion piece by Christopher Pearson, defending ABC Radio National’s Stephen Crittenden, whose program The Religion Report has been axed. “Some of Crittenden’s finest half-hours have been devoted to Islam in Australia in the wake of September 11,” Pearson writes. “Again and again he’s confronted a left-of-centre audience that expected multi-cultural pieties with disturbing assertions.” Along the way in this admirable Crusade, Pearson notes that Crittenden has exposed “the Left’s recent tendency to ally itself with Islam.” According to Pearson, Crittenden has also thankfully given oxygen to claims by James Cook University’s Mervyn Bendle, the “fairly conservative academic whose work sometimes appears in [these] pages,” that “the discipline of critical terrorism studies has been captured by neo-Marxists of a postmodern bent” (30). Both of these points are well beyond misunderstanding or untested proposition. If Pearson means them sincerely he should be embarrassed and sacked. But of course he does not and will not be.
These are deliberate lies, the confabulations of an eminent right-wing culture warrior whose job is to vilify minorities and intellectuals (Bendle escapes censure as an academic because he occasionally scribbles for the Murdoch press). It should be observed, too, how the patent absurdity of Pearson’s remarks reveals the extent to which he holds the intelligence of his readers in contempt. And he is not original in peddling these toxic wares.

In their insightful—often hilarious—study of Australian opinion writers, The War on Democracy, Niall Lucy and Steve Mickler identify the left-academic-Islam nexus as the brain-child of former Treasurer-cum-memoirist Peter Costello. The germinal moment was “a speech to the Australian American Leadership Dialogue forum at the Art Gallery of NSW in 2005” concerning anti-Americanism in Australian schools. Lucy and Mickler argue that “it was only a matter of time” before a conservative politician or journalist took the plunge to link the left and terrorism, and Costello plunged brilliantly. He drew a mental map of the Great Chain of Being: left-wing academics taught teacher trainees to be anti-American; teacher trainees became teachers and taught kids to be anti-American; anti-Americanism morphs into anti-Westernism; anti-Westernism veers into terrorism (38). This is contempt for the reasoning capacity of the Australian people and, further still, contempt for any observable reality. Not for nothing was Costello generally perceived by the public as a politician whose very physiognomy radiated smugness and contempt.

Recycling Costello, Christopher Pearson’s article subtly interpellates the reader as an ordinary, common-sense individual who instinctively feels what’s right and has no need to think too much—thinking too much is the prerogative of “neo-Marxists” and postmodernists.
Ultimately, Pearson’s article is about channelling outrage: directing the down-to-earth passions of the Australian people against stock-in-trade culture-war hate figures. And in Pearson’s paranoid world, words like “neo-Marxist” and “postmodern” are devoid of historical or intellectual meaning. They are, as Lucy and Mickler’s War on Democracy repeatedly demonstrates, mere ciphers packed with the baggage of contempt for independent critical thought itself.

Contempt is everywhere this weekend. The Weekend Australian’s colour magazine runs a feature story on Malcolm Turnbull: one of those familiar profiles designed to reveal the everyday human touch of the political classes. In this puff-piece, Jennifer Hewett finds Turnbull has “a restless passion for participating in public life” (20); that beneath “the aggressive political rhetoric […] behind the journalist turned lawyer turned banker turned politician turned would-be prime minister is a man who really enjoys that human interaction, however brief, with the many, many ordinary people he encounters” (16). Given all this energetic turning, it’s a wonder that Turnbull has time for human interactions at all. The distinction here of Turnbull and “many, many ordinary people” – the anonymous masses – surely runs counter to Hewett’s brief to personalise and quotidianise him. Likewise, those two key words, “however brief”, have an unfortunate, unintended effect. Presumably meant to conjure a picture of Turnbull’s hectic schedules and serial turnings, the words also convey the image of a patrician who begrudgingly knows one of the costs of a political career is that common flesh must be pressed—but as gingerly as possible.

Hewett proceeds to disclose that Turnbull is “no conservative cultural warrior”, “confounds stereotypes” and “hates labels” (like any baby-boomer rebel) and “has always read widely on political philosophy—his favourite is Edmund Burke”.
He sees the “role of the state above all as enabling people to do their best” but knows that “the main game is the economy” and is “content to play mainstream gesture politics” (19). I am genuinely puzzled by this and imagine that my intelligence is being held in contempt once again. That the man of substance is given to populist gesturing is problematic enough; but that the Burke fan believes the state is about personal empowerment is just too much. Maybe Turnbull is a fan of Burke’s complex writings on the sublime and the beautiful—but no, Hewett avers, Turnbull is engaged by Burke’s “political philosophy”. So what is it in Burke that Turnbull finds to favour?

Turnbull’s invocation of Edmund Burke is empty, gestural and contradictory. The comfortable notion that the state helps people to realise their potential is contravened by Burke’s view that the state functions so “the inclinations of men should frequently be thwarted, their will controlled, and their passions brought into subjection… by a power out of themselves” (151). Nor does Burke believe that anyone of humble origins could or should rise to the top of the social heap: “The occupation of an hair-dresser, or of a working tallow-chandler, cannot be a matter of honour to any person… the state suffers oppression, if such as they, either individually or collectively, are permitted to rule” (138). If Turnbull’s main game as a would-be statesman is the economy, Burke profoundly disagrees: “the state ought not to be considered as nothing better than a partnership agreement in a trade of pepper and coffee, callico or tobacco, or some other such low concern… It is a partnership in all science; a partnership in all art; a partnership in every virtue, and in all perfection”—a sublime entity, not an economic manager (194). Burke understands, long before Antonio Gramsci or Louis Althusser, that individuals or social fractions must be made admirably “obedient” to the state “by consent or force” (195).
Burke has a verdict on mainstream gesture politics too: “When men of rank sacrifice all ideas of dignity to an ambition without a distinct object, and work with low instruments and for low ends, the whole composition [of the state] becomes low and base” (136).

Is Malcolm Turnbull so contemptuous of the public that he assumes nobody will notice the gross discrepancies between his own ideals and what Burke stands for? His invocation of Burke is, indeed, “mainstream gesture politics”: on one level, “Burke” signifies nothing more than Turnbull’s performance of himself as a deep thinker. In this process, the real Edmund Burke is historically erased; reduced to the status of stage-prop in the theatrical production of Turnbull’s mass-mediated identity. “Edmund Burke” is re-invented as a term in an aesthetic repertoire.

This transmutation of knowledge and history into mere cipher is the staple trick of culture-war discourse. Jennifer Hewett casts Turnbull as “no conservative culture warrior”, but he certainly shows a facility with culture-war rhetoric. And as much as Turnbull “confounds stereotypes” his verbal gesture to Edmund Burke entrenches a stereotype: at another level, the incantation “Edmund Burke” is implicitly meant to connect Turnbull with conservative tradition—in the exact way that John Howard regularly self-nominated as a “Burkean conservative”.

This appeal to tradition effectively places “the people” in a power relation. Tradition has a sublimity that is bigger than us; it precedes us and will outlast us.
Consequently, for a politician to claim that tradition has fashioned him, that he is welded to it or perhaps even owns it as part of his heritage, is to glibly imply an authority greater than that of “the many, many ordinary people”—Burke’s hair-dressers and tallow-chandlers—whose company he so briefly enjoys.

In The Ideology of the Aesthetic, Terry Eagleton assesses one of Burke’s important legacies, placing him beside another eighteenth-century thinker so loved by the right—Adam Smith. Ideology of the Aesthetic is premised on the view that “Aesthetics is born as a discourse of the body”; that the aesthetic gives form to the “primitive materialism” of human passions and organises “the whole of our sensate life together… a society’s somatic, sensational life” (13). Reading Smith’s Theory of Moral Sentiments, Eagleton discerns that society appears as “an immense machine, whose regular and harmonious movements produce a thousand agreeable effects”, like “any production of human art”. In Smith’s work, the “whole of social life is aestheticized” and people inhabit “a social order so spontaneously cohesive that its members no longer need to think about it.” In Burke, Eagleton discovers that the aesthetics of “manners” can be understood in terms of Gramscian hegemony: “in the aesthetics of social conduct, or ‘culture’ as it would later be called, the law is always with us, as the very unconscious structure of our life”, and as a result conformity to a dominant ideological order is deeply felt as pleasurable and beautiful (37, 42). When this conservative aesthetic enters the realm of politics, Eagleton contends, the “right turn, from Burke” onwards follows a dark trajectory: “forget about theoretical analysis… view society as a self-grounding organism, all of whose parts miraculously interpenetrate without conflict and require no rational justification. Think with the blood and the body. Remember that tradition is always wiser and richer than one’s own poor, pitiable ego.
It is this line of descent, in one of its tributaries, which will lead to the Third Reich” (368–9).

2. Jean Baudrillard, the Nazis and Public Memory

In 1937, during the Spanish Civil War, the Third Reich’s Condor Legion of the Luftwaffe was on loan to Franco’s forces. On 26 April that year, the Condor Legion bombed the market-town of Guernica: the first deliberate attempt to obliterate an entire town from the air and the first experiment in what became known as “terror bombing”—the targeting of civilians. A legacy of this violence was Pablo Picasso’s monumental canvas Guernica – the best-known anti-war painting in art history.

When US Secretary of State Colin Powell addressed the United Nations on 5 February 2003 to make the case for war on Iraq, he stopped to face the press in the UN building’s lobby. The doorstop was globally televised, packaged as a moment of incredible significance: history in the making. It was also theatre: a moment in which history was staged as “event” and the real traces of history were carefully erased. Millions of viewers world-wide were undoubtedly unaware that the blue backdrop before which Powell stood was specifically designed to cover the full-scale tapestry copy of Picasso’s Guernica. This one-act, agitprop drama was a splendid example of politics as aesthetic action: a “performance” of history in the making which required the loss of actual historical memory enshrined in Guernica. Powell’s performance took its cues from the culture wars, which require the ceaseless erasure of history and public memory—on this occasion enacted on a breathtaking global, rather than national, scale.

Inside the UN chamber, Powell’s performance was equally stage-crafted. As he brandished vials of ersatz anthrax, the power-point behind him (the theatrical set) showed artists’ impressions of imaginary mobile chemical weapons laboratories. Powell was playing lead role in a kind of populist, hyperreal production.
It was Jean Baudrillard’s postmodernism, no less, as the media space in which Powell acted out the drama was not a secondary representation of reality but a reality of its own; the overheads of mobile weapons labs were simulacra, “models of a real without origins or reality”, pictures referring to nothing but themselves (2). In short, Powell’s performance was anchored in a “semiurgic” aesthetic; and it was a dreadful real-life enactment of Walter Benjamin’s maxim that “All efforts to render politics aesthetic culminate in one thing: war” (241).

For Benjamin, “Fascism attempts to organize the newly created proletarian masses without affecting the property structure which the masses strive to eliminate.” Fascism gave “these masses not their right, but instead a chance to express themselves.” In turn, this required “the introduction of aesthetics into politics”, the objective of which was “the production of ritual values” (241). Under Adolf Hitler’s Reich, people were able to express themselves but only via the rehearsal of officially produced ritual values: by their participation in the disquisition on what Germany meant and what it meant to be German, by the aesthetic regulation of their passions. As Frederic Spotts’ fine study Hitler and the Power of Aesthetics reveals, this passionate disquisition permeated public and private life, through the artfully constructed total field of national narratives, myths, symbols and iconographies. And the ritualistic reiteration of national values in Nazi Germany hinged on two things: contempt and memory loss.

By April 1945, as Berlin fell, Hitler’s contempt for the German people was at its apogee. Hitler ordered a scorched earth operation: the destruction of everything from factories to farms to food stores. The Russians would get nothing, the German people would perish. Albert Speer refused to implement the plan and remembered that “Until then… Germany and Hitler had been synonymous in my mind. 
But now I saw two entities opposed… A passionate love of one’s country… a leader who seemed to hate his people” (Sereny 472). But Hitler’s contempt for the German people was betrayed in the blusterous pages of Mein Kampf years earlier: “The receptivity of the great masses is very limited, their intelligence is small, but their power of forgetting is enormous” (165). On the back of this belief, Hitler launched what today would be called a culture war, with its Jewish folk devils, loathsome Marxist intellectuals, incitement of popular passions, invented traditions, historical erasures and constant iteration of values.

When Theodor Adorno and Max Horkheimer fled Fascism, landing in the United States, their view of capitalist democracy borrowed from Benjamin and anticipated both Baudrillard and Guy Debord. In their well-known essay on “The Culture Industry”, in Dialectic of Enlightenment, they applied Benjamin’s insight on mass self-expression and the maintenance of property relations and ritual values to American popular culture: “All are free to dance and enjoy themselves”, but the freedom to choose how to do so “proves to be the freedom to choose what is always the same”, manufactured by monopoly capital (161–162). Anticipating Baudrillard, they found a society in which “only the copy appears: in the movie theatre, the photograph; on the radio, the recording” (143). And anticipating Debord’s “perfected denial of man” they found a society where work and leisure were structured by the repetition-compulsion principles of capitalism: where people became consumers who appeared as “statistics on research organization charts” (123). 
“Culture” came to do people’s thinking for them: “Pleasure always means not to think about anything, to forget suffering even where it is shown” (144).

In this mass-mediated environment, a culture of repetitions, simulacra, billboards and flickering screens, Adorno and Horkheimer concluded that language lost its historical anchorages: “Innumerable people use words and expressions which they have either ceased to understand or employ only because they trigger off conditioned reflexes” in precisely the same way that the illusory “free” expression of passions in Germany operated, where words were “debased by the Fascist pseudo-folk community” (166).

I know that the turf of the culture wars, the US and Australia, are not Fascist states; and I know that “the first one to mention the Nazis loses the argument”. I know, too, that there are obvious shortcomings in Adorno and Horkheimer’s reactions to popular culture and these have been widely criticised. However, I would suggest that there is a great deal of value still in Frankfurt School analyses of what we might call the “authoritarian popular” which can be applied to the conservative prosecution of populist culture wars today. Think, for example, how the concept of a “pseudo folk community” might well describe the earthy, common-sense public constructed and interpellated by right-wing culture warriors: America’s Joe Six-Pack, John Howard’s battlers or Kevin Rudd’s working families.

In fact, Adorno and Horkheimer’s observations on language go to the heart of a contemporary culture war strategy. Words lose their history, becoming ciphers and “triggers” in a politicised lexicon. Later, Roland Barthes would write that this is a form of myth-making: “myth is constituted by the loss of the historical quality of things.” Barthes reasoned further that “Bourgeois ideology continuously transforms the products of history into essential types”, generating a “cultural logic” and an ideological re-ordering of the world (142). 
Types such as “neo-Marxist”, “postmodernist” and “Burkean conservative”.

Surely, Benjamin’s assessment that Fascism gives “the people” the occasion to express itself, but only through “values”, describes the right’s pernicious incitement of the mythic “dispossessed mainstream” to reclaim its voice: to shout down the noisy minorities—the gays, greenies, blacks, feminists, multiculturalists and neo-Marxist postmodernists—who’ve apparently been running the show. Even more telling, Benjamin’s insight that the incitement to self-expression is connected to the maintenance of property relations, to economic power, is crucial to understanding the contemptuous conduct of culture wars.

3. Jesus Dunked in Urine from Kansas to Cronulla

American commentator Thomas Frank bases his study What’s the Matter with Kansas? on this very point. Subtitled How Conservatives Won the Heart of America, Frank’s book is a striking analysis of the indexation of Chicago School free-market reform and the mobilisation of “explosive social issues—summoning public outrage over everything from busing to un-Christian art—which it then marries to pro-business policies”; but it is the “economic achievements” of free-market capitalism, “not the forgettable skirmishes of the never-ending culture wars” that are conservatism’s “greatest monuments.” Nevertheless, the culture wars are necessary as Chicago School economic thinking consigns American communities to the rust belt. The promise of “free-market miracles” fails ordinary Americans, Frank reasons, leaving them in “backlash” mode: angry, bewildered and broke. 
And in this context, culture wars are a convenient form of anger management: “Because some artist decides to shock the hicks by dunking Jesus in urine, the entire planet must remake itself along the lines preferred” by nationalist, populist moralism and free-market fundamentalism (5).

When John Howard received the neo-conservative American Enterprise Institute’s Irving Kristol Award, on 6 March 2008, he gave a speech in Washington titled “Sharing Our Common Values”. The nub of the speech was Howard’s revelation that he understood the index of neo-liberal economics and culture wars precisely as Thomas Frank does. Howard told the AEI audience that under his prime ministership Australia had “pursued reform and further modernisation of our economy” and that this inevitably meant “dislocation for communities”. This “reform-dislocation” package needed the palliative of a culture war, with his government preaching the “consistency and reassurance” of “our nation’s traditional values… pride in her history”; his government “became assertive about the intrinsic worth of our national identity. In the process we ended the seemingly endless seminar about that identity which had been in progress for some years.” Howard’s boast that his government ended the “seminar” on national identity insinuates an important point. “Seminar” is a culture-war cipher for intellection, just as “pride” is code for passion; so Howard’s self-proclaimed achievement, in Terry Eagleton’s terms, was to valorise “the blood and the body” over “theoretical analysis”. This speaks stratospheric contempt: ordinary people have their identity fashioned for them; they need not think about it, only feel it deeply and passionately according to “ritual values”. Undoubtedly this paved the way to Cronulla.

The rubric of Howard’s speech—“Sharing Our Common Values”—was both a homage to international neo-conservatism and a reminder that culture wars are a trans-national phenomenon. 
In his address, Howard said that in all his “years in politics” he had not heard a “more evocative political slogan” than Ronald Reagan’s “Morning in America”—the rhetorical catch-cry for moral re-awakening that launched the culture wars. According to Lawrence Grossberg, America’s culture wars were predicated on the perception that the nation was afflicted by “a crisis of our lack of passion, of not caring enough about the values we hold… a crisis of nihilism which, while not restructuring our ideological beliefs, has undermined our ability to organise effective action on their behalf”; and this “New Right” alarmism “operates in the conjuncture of economics and popular culture” and “a popular struggle by which culture can lead politics” in the passionate pursuit of ritual values (31–2). When popular culture leads politics in this way we are in the zone of the image, myth and Adorno and Horkheimer’s “trigger words” that have lost their history. In this context, McKenzie Wark observes that “radical writers influenced by Marx will see the idea of culture as compensation for a fragmented and alienated life as a con. Guy Debord, perhaps the last of the great revolutionary thinkers of Europe, will call it ‘the spectacle’” (20). Adorno and Horkheimer might well have called it “the authoritarian popular”. As Jonathan Charteris-Black’s work capably demonstrates, all politicians have their own idiolect: their personally coded language, preferred narratives and myths; their own vision of who “the people” might or should be that is conjured in their words. But the language of the culture wars is different. It is not a personal idiolect. It is a shared vocabulary, a networked vernacular, a pervasive trans-national aesthetic that pivots on the fact that words like “neo-Marxist”, “postmodern” and “Edmund Burke” have no historical or intellectual context or content: they exist as the ciphers of “values”. 
And the fact that culture warriors continually mouth them is a supreme act of contempt: it robs the public of its memory. And that’s why, as Lucy and Mickler’s War on Democracy so wittily argues, if there are any postmodernists left they’ll be on the right.

Benjamin, Adorno, Horkheimer and, later, Debord and Grossberg understood how the political activation of the popular constitutes a hegemonic project. The result is nothing short of persuading “the people” to collaborate in its own oppression. The activation of the popular is perfectly geared to an age where the main stage of political life is the mainstream media; an age in which, Charteris-Black notes, political classes assume the general antipathy of publics to social change and act on the principle that the most effective political messages are sold to “the people” by an appeal “to familiar experiences”—market populism (10). In her substantial study The Persuaders, Sally Young cites an Australian Labor Party survey, conducted by pollster Rod Cameron in the late 1970s, in which the party’s message machine was finely tuned to this populist position. The survey also dripped with contempt for ordinary people: their “Interest in political philosophy… is very low… They are essentially the products (and supporters) of mass market commercialism”. Young observes that this view of “the people” was the foundation of a new order of political advertising and the conduct of politics on the mass-media stage. Cameron’s profile of “ordinary people” went on to assert that they are fatally attracted to “a moderate leader who is strong… but can understand and represent their value system” (47): a prescription for populist discourse which begs the question of whether the values a politician or party represent via the media are ever really those of “the people”. More likely, people are hegemonised into a value system which they take to be theirs. 
Writing of the media side of the equation, David Salter raises the point that when media “moguls thunder about ‘the public interest’ what they really mean is ‘what we think the public is interested in’, which is quite another matter… Why this self-serving deception is still so sheepishly accepted by the same public it is so often used to violate remains a mystery” (40).

Sally Young’s Persuaders retails a story that she sees as “symbolic” of the new world of mass-mediated political life. The story concerns Mark Latham and his “revolutionary” journeys to regional Australia to meet the people. “When a political leader who holds a public meeting is dubbed a ‘revolutionary’”, Young rightly observes, “something has gone seriously wrong”. She notes how Latham’s “use of old-fashioned ‘meet-and-greet’ campaigning methods was seen as a breath of fresh air because it was unlike the type of packaged, stage-managed and media-dependent politics that have become the norm in Australia.” Except that it wasn’t. “A media pack of thirty journalists trailed Latham in a bus”, meaning that he was not meeting the people at all (6–7). He was traducing the people as participants in a media spectacle, as his “meet and greet” was designed to fill the image-banks of print and electronic media. Even meeting the people becomes a media pseudo-event in which the people impersonate the people for the camera’s benefit; a spectacle as artfully deceitful as Colin Powell’s UN performance on Iraq.

If the success of this kind of “self-serving deception” is a mystery to David Salter, it would not be so to the Frankfurt School. For them, an understanding of the processes of mass-mediated politics sits somewhere near the core of their analysis of the culture industries in the “democratic” world. I think the Frankfurt school should be restored to a more important role in the project of cultural studies. 
Apart from an aversion to jazz and other supposedly “elitist” heresies, thinkers like Adorno, Benjamin, Horkheimer and their progeny Debord have a functional claim to provide the theory for us to expose the machinations of the politics of contempt and its aesthetic ruses.

References

Adorno, Theodor, and Max Horkheimer. “The Culture Industry: Enlightenment as Mass Deception.” Dialectic of Enlightenment. London: Verso, 1979. 120–167.
Barthes, Roland. “Myth Today.” Mythologies. Trans. Annette Lavers. St Albans: Paladin, 1972. 109–58.
Baudrillard, Jean. Simulations. New York: Semiotext(e), 1983.
Benjamin, Walter. “The Work of Art in the Age of Mechanical Reproduction.” Illuminations. Ed. Hannah Arendt. Trans. Harry Zorn. New York: Schocken Books, 1969. 217–251.
Burke, Edmund. Reflections on the Revolution in France. Ed. Conor Cruise O’Brien. Harmondsworth: Penguin, 1969.
Charteris-Black, Jonathan. Politicians and Rhetoric: The Persuasive Power of Metaphor. Houndmills: Palgrave Macmillan, 2006.
Debord, Guy. The Society of the Spectacle. Trans. Donald Nicholson-Smith. New York: Zone Books, 1994.
Eagleton, Terry. The Ideology of the Aesthetic. Oxford: Basil Blackwell, 1990.
Frank, Thomas. What’s the Matter with Kansas?: How Conservatives Won the Heart of America. New York: Henry Holt and Company, 2004.
Grossberg, Lawrence. “It’s a Sin: Politics, Post-Modernity and the Popular.” It’s a Sin: Essays on Postmodern Politics & Culture. Eds. Tony Fry, Ann Curthoys and Paul Patton. Sydney: Power Publications, 1988. 6–71.
Hewett, Jennifer. “The Opportunist.” The Weekend Australian Magazine. 25–26 October 2008. 16–22.
Hitler, Adolf. Mein Kampf. Trans. Ralph Manheim. London: Pimlico, 1993.
Howard, John. “Sharing Our Common Values.” Washington: Irving Kristol Lecture, American Enterprise Institute. 5 March 2008. ‹http://www.theaustralian.news.com.au/story/0,25197,233328945-5014047,00html›.
Lucy, Niall, and Steve Mickler. The War on Democracy: Conservative Opinion in the Australian Press. 
Crawley: University of Western Australia Press, 2006.
Pearson, Christopher. “Pray for Sense to Prevail.” The Weekend Australian. 25–26 October 2008. 30.
Salter, David. The Media We Deserve: Underachievement in the Fourth Estate. Melbourne: Melbourne UP, 2007.
Sereny, Gitta. Albert Speer: His Battle with Truth. London: Picador, 1996.
Spotts, Frederic. Hitler and the Power of Aesthetics. London: Pimlico, 2003.
Wark, McKenzie. The Virtual Republic: Australia’s Culture Wars of the 1990s. St Leonards: Allen & Unwin, 1997.
Young, Sally. The Persuaders: Inside the Hidden Machine of Political Advertising. Melbourne: Pluto Press, 2004.
35

Paull, John. "Beyond Equal: From Same But Different to the Doctrine of Substantial Equivalence." M/C Journal 11, no. 2 (June 1, 2008). http://dx.doi.org/10.5204/mcj.36.

Full text
Abstract:
A same-but-different dichotomy has recently been encapsulated within the US Food and Drug Administration’s ill-defined concept of “substantial equivalence” (USFDA, FDA). By invoking this concept the genetically modified organism (GMO) industry has escaped the rigors of safety testing that might otherwise apply. The curious concept of “substantial equivalence” grants a presumption of safety to GMO food. This presumption has yet to be earned, and has been used to constrain labelling of both GMO and non-GMO food. It is an idea that well serves corporatism. It enables the claim of difference to secure patent protection, while upholding the contrary claim of sameness to avoid labelling and safety scrutiny. It offers the best of both worlds for corporate food entrepreneurs, and delivers the worst of both worlds to consumers. The term “substantial equivalence” has established its currency within the GMO discourse. As the opportunities for patenting food technologies expand, the GMO recruitment of this concept will likely be a dress rehearsal for the developing debates on the labelling and testing of other techno-foods – including nano-foods and clone-foods.

“Substantial Equivalence”

“Are the Seven Commandments the same as they used to be, Benjamin?” asks Clover in George Orwell’s “Animal Farm”. By way of response, Benjamin “read out to her what was written on the wall. There was nothing there now except a single Commandment. It ran: ALL ANIMALS ARE EQUAL BUT SOME ANIMALS ARE MORE EQUAL THAN OTHERS”. After this reductionist revelation, further novel and curious events at Manor Farm, “did not seem strange” (Orwell, ch. X). Equality is a concept at the very core of mathematics, but beyond the domain of logic, equality becomes a hotly contested notion – and the domain of food is no exception. 
A novel food has a regulatory advantage if it can claim to be the same as an established food – a food that has proven its worth over centuries, perhaps even millennia – and thus does not trigger new, perhaps costly and onerous, testing, compliance, and even new and burdensome regulations. On the other hand, such a novel food has an intellectual property (IP) advantage only in terms of its difference. And thus there is an entrenched dissonance for newly technologised foods, between claiming sameness, and claiming difference. The same/different dilemma is erased, so some would have it, by appeal to the curious new dualist doctrine of “substantial equivalence” whereby sameness and difference are claimed simultaneously, thereby creating a win/win for corporatism, and a loss/loss for consumerism. This ground has been pioneered, and to some extent conquered, by the GMO industry. The conquest has ramifications for other cryptic food technologies – that is, technologies that are invisible to the consumer and not evident other than via labelling. Cryptic technologies pertaining to food include GMOs, pesticides, hormone treatments, irradiation and, most recently, manufactured nano-particles introduced into the food production and delivery stream. Genetic modification of plants was reported as early as 1984 by Horsch et al. The case of Diamond v. Chakrabarty resulted in a US Supreme Court decision that upheld the prior decision of the US Court of Customs and Patent Appeal that “the fact that micro-organisms are alive is without legal significance for purposes of the patent law”, and ruled that the “respondent’s micro-organism plainly qualifies as patentable subject matter”. This was a 5–4 decision of the nine-judge Court, with four judges dissenting (Burger). 
It was this Chakrabarty judgement that has seriously opened the Pandora’s box of GMOs because patenting rights makes GMOs an attractive corporate proposition by offering potentially unique monopoly rights over food. The rearguard action against GMOs has most often focussed on health repercussions (Smith, Genetic), food security issues, and also the potential for corporate malfeasance to hide behind a cloak of secrecy citing commercial confidentiality (Smith, Seeds). Others have tilted at the foundational plank on which the economics of the GMO industry sits: “I suggest that the main concern is that we do not want a single molecule of anything we eat to contribute to, or be patented and owned by, a reckless, ruthless chemical organisation” (Grist 22). The GMO industry exhibits bipolar behaviour, invoking the concept of “substantial difference” to claim patent rights by way of “novelty”, and then claiming “substantial equivalence” when dealing with other regulatory authorities including food, drug and pesticide agencies; a case of “having their cake and eating it too” (Engdahl 8). This is a clever sleight-of-rhetoric, laying claim to the best of both worlds for corporations, and the worst of both worlds for consumers. Corporations achieve patent protection and no concomitant specific regulatory oversight; while consumers pay the cost of patent monopolization, and are not necessarily apprised, by way of labelling or otherwise, that they are purchasing and eating GMOs, and thereby financing the GMO industry. The lemma of “substantial equivalence” does not bear close scrutiny. It is a fuzzy concept that lacks a tight testable definition. It is exactly this fuzziness that allows lots of wriggle room to keep GMOs out of rigorous testing regimes. Millstone et al. argue that “substantial equivalence is a pseudo-scientific concept because it is a commercial and political judgement masquerading as if it is scientific. 
It is moreover, inherently anti-scientific because it was created primarily to provide an excuse for not requiring biochemical or toxicological tests. It therefore serves to discourage and inhibit informative scientific research” (526). “Substantial equivalence” grants GMOs the benefit of the doubt regarding safety, and thereby leaves unexamined the ramifications for human consumer health, for farm labourer and food-processor health, for the welfare of farm animals fed a diet of GMO grain, and for the well-being of the ecosystem, both in general and in its particularities. “Substantial equivalence” was introduced into the food discourse by an Organisation for Economic Co-operation and Development (OECD) report: “safety evaluation of foods derived by modern biotechnology: concepts and principles”. It is from this document that the ongoing mantra of assumed safety of GMOs derives: “modern biotechnology … does not inherently lead to foods that are less safe … . Therefore evaluation of foods and food components obtained from organisms developed by the application of the newer techniques does not necessitate a fundamental change in established principles, nor does it require a different standard of safety” (OECD, “Safety” 10). This was at the time, and remains, an act of faith, a pro-corporatist and a post-cautionary approach. The OECD motto reveals where their priorities lean: “for a better world economy” (OECD, “Better”). The term “substantial equivalence” was preceded by the 1992 USFDA concept of “substantial similarity” (Levidow, Murphy and Carr) and was adopted from a prior usage by the US Food and Drug Administration (USFDA), where it was used pertaining to medical devices (Miller). Even GMO proponents accept that “Substantial equivalence is not intended to be a scientific formulation; it is a conceptual tool for food producers and government regulators” (Miller 1043). 
And there’s the rub – there is no scientific definition of “substantial equivalence”, no scientific test of proof of concept, and nor is there likely to be, since this is a ‘spinmeister’ term. And yet this is the cornerstone on which rests the presumption of safety of GMOs. Absence of evidence is taken to be evidence of absence. History suggests that this is a fraught presumption. By way of contrast, the patenting of GMOs depends on the antithesis of assumed ‘sameness’. Patenting rests on proven, scrutinised, challengeable and robust tests of difference and novelty. Lightfoot et al. report that transgenic plants exhibit “unexpected changes [that] challenge the usual assumptions of GMO equivalence and suggest genomic, proteomic and metanomic characterization of transgenics is advisable” (1).

GMO Milk and Contested Labelling

Pesticide company Monsanto markets the genetically engineered hormone rBST (recombinant Bovine Somatotropin; also known as: rbST; rBGH, recombinant Bovine Growth Hormone; and the brand name Prosilac) to dairy farmers who inject it into their cows to increase milk production. This product is not approved for use in many jurisdictions, including Europe, Australia, New Zealand, Canada and Japan. Even Monsanto accepts that rBST leads to mastitis (inflammation and pus in the udder) and other “cow health problems”, however, it maintains that “these problems did not occur at rates that would prohibit the use of Prosilac” (Monsanto). A European Union study identified an extensive list of health concerns of rBST use (European Commission). The US Dairy Export Council however entertain no doubt. In their background document they ask “is milk from cows treated with rBST safe?” and answer “Absolutely” (USDEC). Meanwhile, Monsanto’s website raises and answers the question: “Is the milk from cows treated with rbST any different from milk from untreated cows? No” (Monsanto). 
Injecting cows with genetically modified hormones to boost their milk production remains a contested practice, banned in many countries. It is the claimed equivalence that has kept consumers of US dairy products in the dark, shielded rBST dairy farmers from having to declare that their milk production is GMO-enhanced, and has inhibited non-GMO producers from declaring their milk as non-GMO, non rBST, or not hormone enhanced. This is a battle that has simmered, and sometimes raged, for a decade in the US. Finally there is a modest victory for consumers: the Pennsylvania Department of Agriculture (PDA) requires all labels used on milk products to be approved in advance by the department. The standard issued in October 2007 (PDA, “Standards”) signalled to producers that any milk labels claiming rBST-free status would be rejected. This advice was rescinded in January 2008 with new, specific, department-approved textual constructions allowed, and ensuring that any “no rBST” style claim was paired with a PDA-prescribed disclaimer (PDA, “Revised Standards”). However, parsimonious labelling is prohibited: No labeling may contain references such as ‘No Hormones’, ‘Hormone Free’, ‘Free of Hormones’, ‘No BST’, ‘Free of BST’, ‘BST Free’, ‘No added BST’, or any statement which indicates, implies or could be construed to mean that no natural bovine somatotropin (BST) or synthetic bovine somatotropin (rBST) are contained in or added to the product. (PDA, “Revised Standards” 3) Difference claims are prohibited: In no instance shall any label state or imply that milk from cows not treated with recombinant bovine somatotropin (rBST, rbST, RBST or rbst) differs in composition from milk or products made with milk from treated cows, or that rBST is not contained in or added to the product. 
If a product is represented as, or intended to be represented to consumers as, containing or produced from milk from cows not treated with rBST any labeling information must convey only a difference in farming practices or dairy herd management methods. (PDA, “Revised Standards” 3) The PDA-approved labelling text for non-GMO dairy farmers is specified as follows: ‘From cows not treated with rBST. No significant difference has been shown between milk derived from rBST-treated and non-rBST-treated cows’ or a substantial equivalent. Hereinafter, the first sentence shall be referred to as the ‘Claim’, and the second sentence shall be referred to as the ‘Disclaimer’. (PDA, “Revised Standards” 4) It is onto the non-GMO dairy farmer alone, that the costs of compliance fall. These costs include label preparation and approval, proving non-usage of GMOs, and of creating and maintaining an audit trail. In nearby Ohio a similar consumer versus corporatist pantomime is playing out. This time with the Ohio Department of Agriculture (ODA) calling the shots, and again serving the GMO industry. The ODA prescribed text allowed to non-GMO dairy farmers is “from cows not supplemented with rbST” and this is to be conjoined with the mandatory disclaimer “no significant difference has been shown between milk derived from rbST-supplemented and non-rbST supplemented cows” (Curet). These are “emergency rules”: they apply for 90 days, and are proposed as permanent. Once again, the onus is on the non-GMO dairy farmers to document and prove their claims. GMO dairy farmers face no such governmental requirements, including no disclosure requirement, and thus an asymmetric regulatory impost is placed on the non-GMO farmer which opens up new opportunities for administrative demands and technocratic harassment. Levidow et al. 
argue, somewhat Eurocentrically, that from its 1990s adoption “as the basis for a harmonized science-based approach to risk assessment” (26) the concept of “substantial equivalence” has “been recast in at least three ways” (58). It is true that the GMO debate has evolved differently in the US and Europe, and with other jurisdictions usually adopting intermediate positions, yet the concept persists. Levidow et al. nominate their three recastings as: firstly an “implicit redefinition” by the appending of “extra phrases in official documents”; secondly, “it has been reinterpreted, as risk assessment processes have … required more evidence of safety than before, especially in Europe”; and thirdly, “it has been demoted in the European Union regulatory procedures so that it can no longer be used to justify the claim that a risk assessment is unnecessary” (58). Romeis et al. have proposed a decision tree approach to GMO risks based on cascading tiers of risk assessment. However what remains is that the defects of the concept of “substantial equivalence” persist. Schauzu identified that: such decisions are a matter of “opinion”; that there is “no clear definition of the term ‘substantial’”; that because genetic modification “is aimed at introducing new traits into organisms, the result will always be a different combination of genes and proteins”; and that “there is no general checklist that could be followed by those who are responsible for allowing a product to be placed on the market” (2).

Benchmark for Further Food Novelties?

The discourse, contestation, and debate about “substantial equivalence” have largely focussed on the introduction of GMOs into food production processes. GM can best be regarded as the test case, and proof of concept, for establishing “substantial equivalence” as a benchmark for evaluating new and forthcoming food technologies. 
This is of concern, because the concept of “substantial equivalence” is scientific hokum, and yet its persistence, even entrenchment, within regulatory agencies may be a harbinger of forthcoming same-but-different debates for nanotechnology and other future bioengineering. The appeal of “substantial equivalence” has been a brake on the creation of GMO-specific regulations and on rigorous GMO testing. The food nanotechnology industry can be expected to look to the precedent of the GMO debate to head off specific nano-regulations and nano-testing. As cloning becomes economically viable, then this may be another wave of food innovation that muddies the regulatory waters with the confused – and ultimately self-contradictory – concept of “substantial equivalence”. Nanotechnology engineers particles in the size range 1 to 100 nanometres – a nanometre is one billionth of a metre. This is interesting for manufacturers because at this size chemicals behave differently, or as the Australian Office of Nanotechnology expresses it, “new functionalities are obtained” (AON). Globally, government expenditure on nanotechnology research reached US$4.6 billion in 2006 (Roco 3.12). While there are now many patents (ETC Group; Roco), regulation specific to nanoparticles is lacking (Bowman and Hodge; Miller and Senjen). The USFDA advises that nano-manufacturers “must show a reasonable assurance of safety … or substantial equivalence” (FDA). A recent inventory of nano-products already on the market identified 580 products. Of these 11.4% were categorised as “Food and Beverage” (WWICS). This is at a time when public confidence in regulatory bodies is declining (HRA). In an Australian consumer survey on nanotechnology, 65% of respondents indicated they were concerned about “unknown and long term side effects”, and 71% agreed that it is important “to know if products are made with nanotechnology” (MARS 22). 
Cloned animals are currently more expensive to produce than traditional animal progeny. In the course of 678 pages, the USFDA’s Animal Cloning: A Draft Risk Assessment has not a single mention of “substantial equivalence”. However, the Federation of Animal Science Societies (FASS), in its single-page “Statement in Support of USFDA’s Risk Assessment Conclusion That Food from Cloned Animals Is Safe for Human Consumption”, states that “FASS endorses the use of this comparative evaluation process as the foundation of establishing substantial equivalence of any food being evaluated. It must be emphasized that it is the food product itself that should be the focus of the evaluation rather than the technology used to generate cloned animals” (FASS 1). Contrary to the FASS derogation of the importance of process in food production, for consumers both the process and provenance of production are important and integral aspects of a food product’s value and identity. Some consumers will legitimately insist that their Kalamata olives are from Greece, or that their balsamic vinegar is from Modena. It was the British public’s growing awareness that their sugar was being produced by slave labour that enabled the boycotting of the product, and ultimately the outlawing of slavery (Hochschild). When consumers boycott Nestlé because of past or present marketing practices, or boycott produce of the USA because of, for example, US foreign policy or animal welfare concerns, they are distinguishing the food based on its narrative: the production process and/or production context that are part of the identity of the food. Consumers attribute value to food based on production process and provenance information (Paull). Products produced by slave labour, by child labour, by political prisoners, or by means of torture, theft, or immoral, unethical or unsustainable practices are different from their alternatives. 
The process of production is a part of the identity of a product, and consumers are increasingly interested in food narrative. It requires vigilance to ensure that these narratives are delivered with the product to the consumer, and are neither lost nor suppressed. Throughout the GM debate, the organic sector has successfully skirted the “substantial equivalence” debate by excluding GMOs from the certified organic food production process. This GMO-exclusion from the organic food stream is the one reprieve available to consumers worldwide who are keen to avoid GMOs in their diet. The organic industry carries the expectation of providing food produced without artificial pesticides and fertilizers, and, by extension, without GMOs. Most recently, the Soil Association, the leading organic certifier in the UK, claims to be the first organisation in the world to exclude manufactured nanoparticles from its products (Soil Association). There have been calls for engineered nanoparticles to be excluded from organic standards worldwide, given that there is no mandatory safety testing and no compulsory labelling in place (Paull and Lyons). The twisted rhetoric of oxymorons does not make the ideal foundation for policy. Setting food policy on the shifting sands of “substantial equivalence” seems foolhardy when we consider the potentially profound ramifications of globally mass-marketing a dysfunctional food. Consider the 2×2 matrix of terms: “substantial equivalence”, substantial difference, insubstantial equivalence, insubstantial difference. While only one corner of this matrix is engaged for food policy, and while its elements remain matters of opinion rather than being testable by science, or by some other regime, the public is the dupe, and potentially the victim. 
“Substantial equivalence” has served the GMO corporates well and the public poorly, and this asymmetry is slated to escalate if nano-food and clone-food are also folded into the “substantial equivalence” paradigm. Only in Orwellian Newspeak is war peace, or is same different. It is time to jettison the pseudo-scientific doctrine of “substantial equivalence”, as a convenient oxymoron, and embrace full disclosure of provenance, process and difference, so that consumers are not collateral in a continuing asymmetric knowledge war.
References
Australian Office of Nanotechnology (AON). Department of Industry, Tourism and Resources (DITR), 6 Aug. 2007. 24 Apr. 2008 <http://www.innovation.gov.au/Section/Innovation/Pages/AustralianOfficeofNanotechnology.aspx>.
Bowman, Diana, and Graeme Hodge. “A Small Matter of Regulation: An International Review of Nanotechnology Regulation.” Columbia Science and Technology Law Review 8 (2007): 1-32.
Burger, Warren. “Sidney A. Diamond, Commissioner of Patents and Trademarks v. Ananda M. Chakrabarty, et al.” Supreme Court of the United States, decided 16 June 1980. 24 Apr. 2008 <http://caselaw.lp.findlaw.com/cgi-bin/getcase.pl?court=US&vol=447&invol=303>.
Curet, Monique. “New Rules Allow Dairy-Product Labels to Include Hormone Info.” The Columbus Dispatch 7 Feb. 2008. 24 Apr. 2008 <http://www.dispatch.com/live/content/business/stories/2008/02/07/dairy.html>.
Engdahl, F. William. Seeds of Destruction. Montréal: Global Research, 2007.
ETC Group. Down on the Farm: The Impact of Nano-Scale Technologies on Food and Agriculture. Ottawa: Action Group on Erosion, Technology and Conservation, Nov. 2004.
European Commission. Report on Public Health Aspects of the Use of Bovine Somatotropin. Brussels: European Commission, 15-16 Mar. 1999.
Federation of Animal Science Societies (FASS). Statement in Support of FDA’s Risk Assessment Conclusion That Cloned Animals Are Safe for Human Consumption. 2007. 24 Apr. 2008 <http://www.fass.org/page.asp?pageID=191>.
Grist, Stuart. “True Threats to Reason.” New Scientist 197.2643 (16 Feb. 2008): 22-23.
Hochschild, Adam. Bury the Chains: The British Struggle to Abolish Slavery. London: Pan Books, 2006.
Horsch, Robert, Robert Fraley, Stephen Rogers, Patricia Sanders, Alan Lloyd, and Nancy Hoffman. “Inheritance of Functional Foreign Genes in Plants.” Science 223 (1984): 496-498.
HRA. Awareness of and Attitudes toward Nanotechnology and Federal Regulatory Agencies: A Report of Findings. Washington: Peter D. Hart Research Associates, 25 Sep. 2007.
Levidow, Les, Joseph Murphy, and Susan Carr. “Recasting ‘Substantial Equivalence’: Transatlantic Governance of GM Food.” Science, Technology, and Human Values 32.1 (Jan. 2007): 26-64.
Lightfoot, David, Rajsree Mungur, Rafiqa Ameziane, Anthony Glass, and Karen Berhard. “Transgenic Manipulation of C and N Metabolism: Stretching the GMO Equivalence.” American Society of Plant Biologists Conference: Plant Biology, 2000.
MARS. “Final Report: Australian Community Attitudes Held about Nanotechnology – Trends 2005-2007.” Report prepared for the Department of Industry, Tourism and Resources (DITR). Miranda, NSW: Market Attitude Research Services, 12 June 2007.
Miller, Georgia, and Rye Senjen. “Out of the Laboratory and on to Our Plates: Nanotechnology in Food and Agriculture.” Friends of the Earth, 2008. 24 Apr. 2008 <http://nano.foe.org.au/node/220>.
Miller, Henry. “Substantial Equivalence: Its Uses and Abuses.” Nature Biotechnology 17 (7 Nov. 1999): 1042-1043.
Millstone, Erik, Eric Brunner, and Sue Mayer. “Beyond ‘Substantial Equivalence’.” Nature 401 (7 Oct. 1999): 525-526.
Monsanto. “Posilac, Bovine Somatotropin by Monsanto: Questions and Answers about bST from the United States Food and Drug Administration.” 2007. 24 Apr. 2008 <http://www.monsantodairy.com/faqs/fda_safety.html>.
Organisation for Economic Co-operation and Development (OECD). “For a Better World Economy.” Paris: OECD, 2008. 24 Apr. 2008 <http://www.oecd.org/>.
———. “Safety Evaluation of Foods Derived by Modern Biotechnology: Concepts and Principles.” Paris: OECD, 1993.
Orwell, George. Animal Farm. Adelaide: ebooks@Adelaide, 2004 (1945). 30 Apr. 2008 <http://ebooks.adelaide.edu.au/o/orwell/george>.
Paull, John. “Provenance, Purity and Price Premiums: Consumer Valuations of Organic and Place-of-Origin Food Labelling.” Research Masters thesis, University of Tasmania, Hobart, 2006. 24 Apr. 2008 <http://eprints.utas.edu.au/690/>.
Paull, John, and Kristen Lyons. “Nanotechnology: The Next Challenge for Organics.” Journal of Organic Systems (in press).
Pennsylvania Department of Agriculture (PDA). “Revised Standards and Procedure for Approval of Proposed Labeling of Fluid Milk.” Milk Labeling Standards (2.0.1.17.08). Bureau of Food Safety and Laboratory Services, Pennsylvania Department of Agriculture, 17 Jan. 2008.
———. “Standards and Procedure for Approval of Proposed Labeling of Fluid Milk, Milk Products and Manufactured Dairy Products.” Milk Labeling Standards (2.0.1.17.08). Bureau of Food Safety and Laboratory Services, Pennsylvania Department of Agriculture, 22 Oct. 2007.
Roco, Mihail. “National Nanotechnology Initiative – Past, Present, Future.” In William Goddard, Donald Brenner, Sergy Lyshevski and Gerald Iafrate, eds. Handbook of Nanoscience, Engineering and Technology. 2nd ed. Boca Raton, FL: CRC Press, 2007.
Romeis, Jorg, Detlef Bartsch, Franz Bigler, Marco Candolfi, Marco Gielkins, et al. “Assessment of Risk of Insect-Resistant Transgenic Crops to Nontarget Arthropods.” Nature Biotechnology 26.2 (Feb. 2008): 203-208.
Schauzu, Marianna. “The Concept of Substantial Equivalence in Safety Assessment of Food Derived from Genetically Modified Organisms.” AgBiotechNet 2 (Apr. 2000): 1-4.
Soil Association. “Soil Association First Organisation in the World to Ban Nanoparticles – Potentially Toxic Beauty Products That Get Right under Your Skin.” London: Soil Association, 17 Jan. 2008. 24 Apr. 2008 <http://www.soilassociation.org/web/sa/saweb.nsf/848d689047cb466780256a6b00298980/42308d944a3088a6802573d100351790!OpenDocument>.
Smith, Jeffrey. Genetic Roulette: The Documented Health Risks of Genetically Engineered Foods. Fairfield, Iowa: Yes! Books, 2007.
———. Seeds of Deception. Melbourne: Scribe, 2004.
U.S. Dairy Export Council (USDEC). Bovine Somatotropin (BST) Backgrounder. Arlington, VA: U.S. Dairy Export Council, 2006.
U.S. Food and Drug Administration (USFDA). Animal Cloning: A Draft Risk Assessment. Rockville, MD: Center for Veterinary Medicine, U.S. Food and Drug Administration, 28 Dec. 2006.
———. FDA and Nanotechnology Products. U.S. Department of Health and Human Services, U.S. Food and Drug Administration, 2008. 24 Apr. 2008 <http://www.fda.gov/nanotechnology/faqs.html>.
Woodrow Wilson International Center for Scholars (WWICS). “A Nanotechnology Consumer Products Inventory.” Data set as at Sep. 2007. Woodrow Wilson International Center for Scholars, Project on Emerging Technologies, Sep. 2007. 24 Apr. 2008 <http://www.nanotechproject.org/inventories/consumer>.
36

Simpson, Catherine. "Communicating Uncertainty about Climate Change: The Scientists’ Dilemma." M/C Journal 14, no. 1 (January 26, 2011). http://dx.doi.org/10.5204/mcj.348.

Full text
Abstract:
Photograph by Gonzalo Echeverria (2010)
We need to get some broad-based support, to capture the public’s imagination … so we have to offer up scary scenarios, make simplified, dramatic statements and make little mention of any doubts … each of us has to decide what the right balance is between being effective and being honest (Hulme 347). Acclaimed climate scientist, the late Stephen Schneider, made this comment in 1988. Later he regretted it and said that there are ways of using metaphors that can “convey both urgency and uncertainty” (Hulme 347). What Schneider encapsulates here is the great conundrum for those attempting to communicate climate change to the everyday public. How do scientists capture the public’s imagination and convey the desperation they feel about climate change, but do it ethically? If scientific findings are presented carefully, in boring technical jargon that few can understand, then they are unlikely to attract audiences or provide an impetus for behavioural change. “What can move someone to act?” ask communication theorists Susan Moser and Lisa Dilling (37). “If a red light blinks on in a cockpit”, asks Donella Meadows, “should the pilot ignore it until it speaks in an unexcited tone? … Is there any way to say [it] sweetly? Patiently? If one did, would anyone pay attention?” (Moser and Dilling 37). In 2010 Tim Flannery was appointed Panasonic Chair in Environmental Sustainability at Macquarie University. His main teaching role remains within the new science communication programme. One of the first things Flannery was emphatic about was acquainting students with Karl Popper and the origin of the scientific method. “There is no truth in science”, he proclaimed in his first lecture to students, “only theories, hypotheses and falsifiabilities”. 
In other words, science’s epistemological limits are framed such that, as Michael Lemonick argues, “a statement that cannot be proven false is generally not considered to be scientific” (n.p., my emphasis). The impetus for the following paper emanates precisely from this issue of scientific uncertainty — more specifically from teaching a course with Tim Flannery called Communicating climate change to a highly motivated group of undergraduate science communication students. I attempt to illuminate how uncertainty is constructed differently by different groups, and how the “public” does not necessarily interpret uncertainty in the same way the sciences do. This paper also analyses how doubt has been politicised and operates polemically in media coverage of climate change. As Andrew Gorman-Murray and Gordon Waitt highlight in an earlier issue of M/C Journal that focused on the climate-culture nexus, an understanding of the science alone is not adequate to deal with the cultural change necessary to address the challenges climate change brings (n.p.). Far from being redundant in debates around climate change, the humanities have much to offer.
Erosion of Trust in Science
The objectives of Macquarie’s science communication program are far more ambitious than the program can ever hope to achieve. But this is not necessarily a bad thing. The initiative is a response to declining student numbers in maths and science programmes around the country and is designed to address the perceived lack of communication skills in science graduates that the Australian Council of Deans of Science identified in its 2001 report. According to Macquarie Vice-Chancellor Steven Schwartz’s blog, a broader, and much more ambitious, aim of the program is to “restore public trust in science and scientists in the face of widespread cynicism” (n.p.). 
In recent times the erosion of public trust in science was exacerbated through the theft of e-mails from East Anglia University’s Climate Research Unit and the so-called “climategate scandal” which ensued. With the illegal publication of the e-mails came claims against the Research Unit that climate experts had been manipulating scientific data to suit a pro-global warming agenda. Three inquiries later, all the scientists involved were cleared of any wrongdoing; however, the damage had already been done. To the public, what this scandal revealed was a certain level of scientific hubris around the uncertainties of the science and an unwillingness to explain the nature of these uncertainties. The prevailing notion remained that the experts were keeping information from public scrutiny and not being totally honest with them, which, at least in the short term, damaged the scientists’ credibility. Many argued that this signalled a shift in public opinion and media portrayal on the issue of climate change in late 2009. University of Sydney academic Rod Tiffen claimed in the Sydney Morning Herald that the climategate scandal was “one of the pivotal moments in changing the politics of climate change” (n.p.). In Australia this had profound implications and meant that the bipartisan agreement on an emissions trading scheme (ETS) that had almost been reached subsequently collapsed with (climate sceptic) Tony Abbott’s defeat of (ETS advocate) Malcolm Turnbull to become opposition leader (Tiffen). Not long after the reputation of science received this almighty blow, albeit unfairly, the federal government released a report in February 2010, Inspiring Australia – A National Strategy for Engagement with the Sciences, as part of the country’s innovation agenda. 
The report outlines a commitment from the Australian government and universities around the country to address the challenges of not only communicating science to the broader community but, in the process, renewing public trust and engagement in science. The report states that: in order to achieve a scientifically engaged Australia, it will be necessary to develop a culture where the sciences are recognized as relevant to everyday life … Our science institutions will be expected to share their knowledge and to help realize full social, economic, health and environmental benefits of scientific research and in return win ongoing public support. (xiv-xv) After launching the report, Innovation Minister Kim Carr went so far as to conflate “hope” with “science” and in the process elevate a discourse of technological determinism: “it’s time for all true friends of science to step up and defend its values and achievements” adding that, "when you denigrate science, you destroy hope” (n.p.). Forever gone is our naïve post-war world when scientists were held in such high esteem that they could virtually use humans as guinea pigs to test out new wonder chemicals; such as organochlorines, of which DDT is the most widely known (Carson). Thanks to government-sponsored nuclear testing programs, if you were born in the 1950s, 1960s or early 1970s, your brain carries a permanent nuclear legacy (Flannery, Here On Earth 158). So surely, for the most part, questioning the authority and hubristic tendencies of science is a good thing. And I might add, it’s not just scientists who bear this critical burden, the same scepticism is directed towards journalists, politicians and academics alike – something that many cultural theorists have noted is characteristic of our contemporary postmodern world (Lyotard). 
So, far from destroying hope, as the former Innovation Minister Kim Carr (now Minister for Innovation, Industry, Science and Research) suggests, surely we need to use the criticisms of science as a vehicle upon which to initiate hope and humility.
Different Ways of Knowing: Bayesian Beliefs and Matters of Concern
At best, [science] produces a robust consensus based on a process of inquiry that allows for continued scrutiny, re-examination, and revision. (Oreskes 370)
In an attempt to capitalise on the Macquarie Science Faculty’s expertise in climate science, I convened a course in second semester 2010 called SCOM201 Science, Media, Community: Communicating Climate Change, with invaluable assistance from Penny Wilson, Elaine Kelly and Liz Morgan. Mike Hulme’s provocative text, Why we disagree about climate change: Understanding controversy, inaction and opportunity, provided an invaluable framework for the course. Hulme’s book brings other types of knowledge, beyond the scientific, to bear on our attitudes towards climate change. Climate change, he claims, has moved from being just a physical, scientific, and measurable phenomenon to becoming a social and cultural phenomenon. In order to understand the contested nature of climate change we need to acknowledge the dynamic and varied meanings climate has played in different cultures throughout history, as well as the role that our own subjective attitudes and judgements play. Climate change has become a battleground between different ways of knowing, alternative visions of the future, competing ideas about what’s ethical and what’s not. Hulme makes the point that one of the reasons we disagree about climate change is because we disagree about the role of science in today’s society. He encourages readers to use climate change as a tool to rigorously question the basis of our beliefs, assumptions and prejudices. 
Since uncertainty was the course’s raison d’être, I was fortunate to have an extraordinary cohort of students who readily engaged with a course that forced them to confront their own epistemological limits — both personally and in a disciplinary sense. (See their blog: https://scom201.wordpress.com/.) Science is often associated with objective realities. It thus tends to distinguish itself from the post-structuralist vein of critique that dominates much of the contemporary humanities. At the core of post-structuralism is scepticism about everyday, commonly accepted “truths” or what some call “meta-narratives”, as well as an acknowledgement of the role that subjectivity plays in the pursuit of knowledge (Lyotard). However, if we can’t rely on objective truths or impartial facts, then where does this leave us when it comes to generating policy or encouraging behavioural change around the issue of climate change? Controversial philosophy of science scholar Bruno Latour sits squarely in the post-structuralist camp. In his 2004 article, “Why has critique run out of steam? From matters of fact to matters of concern”, he laments the way the right wing has managed to gain ground in the climate change debate through arguing that uncertainty and lack of proof is reason enough to deny demands for action. Or, to use his turn of phrase, “dangerous extremists are using the very same argument of social construction to destroy hard-won evidence that could save our lives” (Latour n.p.). Through co-opting (the Left’s dearly held notion of) scepticism and even calling themselves “climate sceptics”, they exploited doubt as a rationale for why we should do nothing about climate change. Uncertainty is not only an important part of science, but also of the human condition. 
However, as sociologist Sheila Jasanoff explains in her Nature article, “Technologies of Humility”, uncertainty has become like a disease: Uncertainty has become a threat to collective action, the disease that knowledge must cure. It is the condition that poses cruel dilemmas for decision makers; that must be reduced at all costs; that is tamed with scenarios and assessments; and that feeds the frenzy for new knowledge, much of it scientific. (Jasanoff 33) If we move from talking about climate change as “a matter of fact” to “a matter of concern”, argues Bruno Latour, then we can start talking about useful ways to combat it, rather than talking about whether the science is “in” or not. Facts certainly matter, claims Latour, but they can’t give us the whole story, rather “they assemble with other ingredients to produce a matter of concern” (Potter and Oster 123). Emily Potter and Candice Oster suggest that climate change can’t be understood through either natural or cultural frames alone and, “unlike a matter of fact, matters of concern cannot be explained through a single point of view or discursive frame” (123). This makes a lot of what Hulme argues far more useful because it enables the debate to be taken to another level. Those of us with non-scientific expertise can centre debates around the kinds of societies we want, rather than being caught up in the scientific (un)certainties. If we translate Latour’s concept of climate change being “a matter of concern” into the discourse of environmental management then what we come up with, I think, is the “precautionary principle”. In the YouTube clip, “Stephen Schneider vs Skeptics”, Schneider argues that when in doubt about the potential environmental impacts of climate change, we should always apply the precautionary principle. This principle emerged from the UN conference on Environment and Development in Rio de Janeiro in 1992 and concerns the management of scientific risk. 
However its origins are evident much earlier in documents such as the “Use of Pesticides” report from the US President’s Science Advisory Committee in 1962. Unlike in criminal and other types of law, where the burden of proof is on the prosecutor to show that the person charged is guilty of a particular offence, in environmental law the onus of proof is on the manufacturers to demonstrate the safety of their product. For instance, a pesticide should be restricted or disapproved for use if there is “reasonable doubt” about its safety (Oreskes 374). Principle 15 of the Rio Declaration on Environment and Development in 1992 has its foundations in the precautionary principle: “Where there are threats of serious or irreversible environmental damage, lack of full scientific certainty should not be used as a reason for postponing measures to prevent environmental degradation” (n.p.). According to Environmental Law Online, the Rio declaration suggests that, “The precautionary principle applies where there is a ‘lack of full scientific certainty’ – that is, when science cannot say what consequences to expect, how grave they are, or how likely they are to occur” (n.p.). In order to make predictions about the likelihood of an event occurring, scientists employ a level of subjectivity, or need to “reveal their degree of belief that a prediction will turn out to be correct … [S]omething has to substitute for this lack of certainty”, otherwise “the only alternative is to admit that absolutely nothing is known” (Hulme 85). These statements of “subjective probabilities or beliefs” are called Bayesian, after the eighteenth-century English mathematician the Reverend Thomas Bayes, who developed the theory of evidential probability. These “probabilities” are estimates, or in other words subjective, informed judgements that draw upon evidence and experience about the likelihood of an event occurring. 
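For readers unfamiliar with Bayesian updating, the mechanics can be sketched in a few lines of code. The numbers below are purely hypothetical illustrations (they are not drawn from this article or from the IPCC): a prior degree of belief in a hypothesis is revised according to how probable the observed evidence is under that hypothesis versus its alternative.

```python
# Illustrative sketch of Bayes' rule: a subjective prior belief P(H)
# is updated by evidence E to give a posterior belief P(H|E).
def bayes_update(prior, likelihood, false_alarm_rate):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|not H)P(not H)]."""
    evidence = likelihood * prior + false_alarm_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical numbers: a 50% prior, and evidence that is nine times
# more probable if the hypothesis is true than if it is false.
posterior = bayes_update(prior=0.5, likelihood=0.9, false_alarm_rate=0.1)
print(round(posterior, 2))  # prints 0.9
```

The point of the sketch is simply that the output is a revised degree of belief, not a proof: the same evidence moves different priors to different posteriors, which is why such probabilities remain informed judgements rather than certainties.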
The Intergovernmental Panel on Climate Change (IPCC) uses Bayesian beliefs to determine the risk or likelihood of an event occurring. The IPCC provides the largest international scientific assessment of climate change and often adopts a consensus model where the viewpoint reached by the majority of scientists is used to establish knowledge amongst an interdisciplinary community of scientists and then communicate it to the public (Hulme 88). According to the IPCC, this consensus is reached amongst more than 450 lead authors, more than 800 contributing authors, and 2500 scientific reviewers. While it is an advisory body and is not policy-prescriptive, the IPCC adopts particular linguistic conventions to indicate the probability of a statement being correct. Stephen Schneider convinced the IPCC to use this approach to systemise uncertainty (Lemonick). So for instance, in the IPCC reports, the term “likely” denotes a chance of 66%-90% of the statement being correct, while “very likely” denotes more than a 90% chance. Note the change from the Third Assessment Report (2001), indicating that “most of the observed warming over the last fifty years is likely to have been due to the increase in greenhouse gas emissions”, to the Fourth Assessment (February 2007), which more strongly states: “Most of the observed increase in global average temperatures since the mid twentieth century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations” (Hulme 51, my italics). A fiery attack on Tim Flannery by Andrew Bolt on Steve Price’s talkback radio show in June 2010 illustrates just how misunderstood scientific uncertainty is in the broader community. When Price introduces Flannery as former Australian of the Year, Bolt intercedes, claiming Flannery is “Alarmist of the Year”, then goes on to chastise Flannery for making various forecasts which didn’t eventuate, such as that Perth and Brisbane might run out of water by 2009. 
“How much are you to blame for the swing in sentiment, the retreat from global warming policy and rise of scepticism?” demands Bolt. In the context of the events of late 2009 and early 2010, the fact that these events didn’t materialise made Flannery, and others, seem unreliable. And what Bolt had to say on talkback radio, I suspect, resonated with a good proportion of its audience. What Bolt was trying to do was discredit Flannery’s scientific credentials and in the process erode trust in the expert. Flannery’s response was that what he had said was that these events might eventuate. In much the same way that the climate sceptics have managed to co-opt scepticism and use it as a rationale for inaction on climate change, Andrew Bolt here either misunderstands basic scientific method or quite consciously misleads and manipulates the public. As Naomi Oreskes argues, “proof does not play the role in science that most people think it does (or should), and therefore it cannot play the role in policy that skeptics demand it should” (Oreskes 370).
Doubt and ‘Situated’ Hope
Uncertainty and ambiguity then emerge here as resources because they force us to confront those things we really want–not safety in some distant, contested future but justice and self-understanding now. (Sheila Jasanoff, cited in Hulme, back cover)
In his last published book before his death in mid-2010, Science as a contact sport, Stephen Schneider’s advice to aspiring science communicators is that they should engage with the media “not at all, or a lot”. Climate scientist Ann Henderson-Sellers adds that there are very few scientists “who have the natural ability, and learn or cultivate the talents, of effective communication with and through the media” (430). In order to attract the public’s attention, it was once commonplace for scientists to write editorials and exploit fear-provoking measures by including a “useful catastrophe or two” (Moser and Dilling 37). But are these tactics effective? 
Susanne Moser thinks not. She argues that “numerous studies show that … fear may change attitudes … but not necessarily increase active engagement or behaviour change” (Moser 70). Furthermore, risk psychologists argue that danger is always context specific (Hulme 196). If the risk or danger is “situated” and “tangible” (such as lead toxicity levels in children in Mt Isa from the Xstrata mine) then the public will engage with it. However if it is “un-situated” (distant, intangible and diffuse), like climate change, the audience is less likely to. In my SCOM201 class we examined the impact of two climate change-related campaigns. The first was a short film used to promote the 2009 Copenhagen Climate Change Summit (“Scary”), and the second was the State Government of Victoria’s “You have the power: Save Energy” public awareness campaign (“You”). Using Moser’s article to guide them, students evaluated each campaign’s effectiveness. Their conclusions were that the “You have the power” campaign had far more impact because it a) had very clear objectives (to cut domestic power consumption), b) provided a very clear visualisation of carbon dioxide through the metaphor of black balloons wafting up into the atmosphere, c) gave viewers a sense of empowerment and hope through describing simple measures to cut power consumption, and d) used simple but effective metaphors to convey a world progressed beyond human control, such as household appliances robotically operating themselves in the absence of humans. Despite its high production values, in comparison the Copenhagen Summit promotion was worse than ineffective and bordered on propaganda. It actually turned viewers off with its whining, righteous appeal of “please help the world”. Its message and objectives were ambiguous, it conveyed environmental catastrophe through hackneyed images, exploited children through a narrative based on fear, and gave no real sense of hope or empowerment. 
In contrast, the Victorian Government’s campaign focused on just one aspect of climate change that was made both tangible and situated. Doubt and uncertainty are productive tools in the pursuit of knowledge. Whether it is scientific or otherwise, uncertainty will always be the motivation that “feeds the frenzy for new knowledge” (Jasanoff 33). Articulating the importance of Hulme’s book, Sheila Jasanoff indicates we should make doubt our friend: “Without downplaying its seriousness, Hulme demotes climate change from ultimate threat to constant companion, whose murmurs unlock in us the instinct for justice and equality” (Hulme back cover). The “murmurs” that Jasanoff gestures to here, I think, can also be articulated as hope. And it is in this discussion of climate change that doubt and hope sit side-by-side as bedfellows, mutually entangled. Since the “failed” Copenhagen Summit, there has been a distinct shift in climate change discourse from “experts”. We have moved away from doom and gloom discourses and into the realm of what I shall call “situated” hope. “Situated” hope is not based on blind faith alone, but rather hope grounded in evidence, informed judgements and experience. For instance, in distinct contrast to his cautionary tale The Weather Makers: The History & Future Impact of Climate Change, Tim Flannery’s latest book, Here on Earth, is a biography of our Earth; a planet that throughout its history has oscillated between Gaian and Medean impulses. However, Flannery’s wonder about the natural world and our potential to mitigate the impacts of climate change is not founded on empty rhetoric but rather tempered by evidence; he presents a series of case studies where humanity has managed to come together for a global good.
Whether it’s the 1987 Montreal ban on CFCs (chlorofluorocarbons) or the lesser-known 2001 Stockholm Convention on POPs (persistent organic pollutants), what Flannery envisions is an emerging global civilisation, a giant, intelligent super-organism glued together through social bonds. He says: If that is ever achieved, the greatest transformation in the history of our planet would have occurred, for Earth would then be able to act as if it were, as Francis Bacon put it all those centuries ago, ‘one entire, perfect living creature’. (Here on Earth, 279) While science might give us “our most reliable understanding of the natural world” (Oreskes 370), “situated” hope is the only productive and ethical currency we have. References Australian Council of Deans of Science. What Did You Do with Your Science Degree? A National Study of Employment Outcomes for Science Degree Holders 1990-2000. Melbourne: Centre for the Study of Higher Education, University of Melbourne, 2001. Australian Government Department of Innovation, Industry, Science and Research. Inspiring Australia – A National Strategy for Engagement with the Sciences. Executive summary. Canberra: DIISR, 2010. 24 May 2010 ‹http://www.innovation.gov.au/SCIENCE/INSPIRINGAUSTRALIA/Documents/InspiringAustraliaSummary.pdf›. “Andrew Bolt with Tim Flannery.” Steve Price. Hosted by Steve Price. Melbourne: Melbourne Talkback Radio, 2010. 9 June 2010 ‹http://www.mtr1377.com.au/index2.php?option=com_newsmanager&task=view&id=6209›. Carson, Rachel. Silent Spring. London: Penguin, 1962 (2000). Carr, Kim. “Celebrating Nobel Laureate Professor Elizabeth Blackburn.” Canberra: DIISR, 2010. 19 Feb. 2010 ‹http://minister.innovation.gov.au/Carr/Pages/CELEBRATINGNOBELLAUREATEPROFESSORELIZABETHBLACKBURN.aspx›. Environmental Law Online. “The Precautionary Principle.” N.d. 19 Jan 2011 ‹http://www.envirolaw.org.au/articles/precautionary_principle›. Flannery, Tim. The Weather Makers: The History & Future Impact of Climate Change.
Melbourne: Text Publishing, 2005. ———. Here on Earth: An Argument for Hope. Melbourne: Text Publishing, 2010. Gorman-Murray, Andrew, and Gordon Waitt. “Climate and Culture.” M/C Journal 12.4 (2009). 9 Mar 2011 ‹http://journal.media-culture.org.au/index.php/mcjournal/article/viewArticle/184/0›. Harrison, Karey. “How ‘Inconvenient’ Is Al Gore’s Climate Change Message?” M/C Journal 12.4 (2009). 9 Mar 2011 ‹http://journal.media-culture.org.au/index.php/mcjournal/article/viewArticle/175›. Henderson-Sellers, Ann. “Climate Whispers: Media Communication about Climate Change.” Climatic Change 40 (1998): 421–456. Hulme, Mike. Why We Disagree about Climate Change: Understanding, Controversy, Inaction and Opportunity. Cambridge: Cambridge UP, 2009. Intergovernmental Panel on Climate Change. A Picture of Climate Change: The Current State of Understanding. 2007. 11 Jan 2011 ‹http://www.ipcc.ch/pdf/press-ar4/ipcc-flyer-low.pdf›. Jasanoff, Sheila. “Technologies of Humility.” Nature 450 (2007): 33. Latour, Bruno. “Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern.” Critical Inquiry 30.2 (2004). 19 Jan 2011 ‹http://criticalinquiry.uchicago.edu/issues/v30/30n2.Latour.html›. Lemonick, Michael D. “Climate Heretic: Judith Curry Turns on Her Colleagues.” Nature News 1 Nov. 2010. 9 Mar 2011 ‹http://www.nature.com/news/2010/101101/full/news.2010.577.html›. Lyotard, Jean-Francois. The Postmodern Condition: A Report on Knowledge. Minneapolis: U of Minnesota P, 1984. Moser, Susanne, and Lisa Dilling. “Making Climate Hot: Communicating the Urgency and Challenge of Global Climate Change.” Environment 46.10 (2004): 32-46. Moser, Susie. “More Bad News: The Risk of Neglecting Emotional Responses to Climate Change Information.” In Susanne Moser and Lisa Dilling (eds.), Creating a Climate for Change: Communicating Climate Change and Facilitating Social Change. Cambridge: Cambridge UP, 2007. 64-81. Oreskes, Naomi. 
“Science and Public Policy: What’s Proof Got to Do with It?” Environmental Science and Policy 7 (2004): 369-383. Potter, Emily, and Candice Oster. “Communicating Climate Change: Public Responsiveness and Matters of Concern.” Media International Australia 127 (2008): 116-126. President’s Science Advisory Committee. “Use of Pesticides”. Washington, D.C.: The White House, 1963. United Nations Declaration on Environment and Development. Rio de Janeiro, 1992. 19 Jan 2011 ‹http://www.unep.org/Documents.Multilingual/Default.asp?DocumentID=78&ArticleID=1163›. “Scary Global Warming Propaganda Video Shown at the Copenhagen Climate Meeting – 7 Dec. 2009.” YouTube. 21 Mar. 2011‹http://www.youtube.com/watch?v=jzSuP_TMFtk&feature=related›. Schneider, Stephen. Science as a Contact Sport: Inside the Battle to Save Earth’s Climate. National Geographic Society, 2010. ———. “Stephen Schneider vs. the Sceptics”. YouTube. 21 Mar. 2011 ‹http://www.youtube.com/watch?v=7rj1QcdEqU0›. Schwartz, Steven. “Science in Search of a New Formula.” 2010. 20 May 2010 ‹http://www.vc.mq.edu.au/blog/2010/03/11/science-in-search-of-a-new-formula/›. Tiffen, Rodney. "You Wouldn't Read about It: Climate Scientists Right." Sydney Morning Herald 26 July 2010. 19 Jan 2011 ‹http://www.smh.com.au/environment/climate-change/you-wouldnt-read-about-it-climate-scientists-right-20100727-10t5i.html›. “You Have the Power: Save Energy.” YouTube. 21 Mar. 2011 ‹http://www.youtube.com/watch?v=SCiS5k_uPbQ›.
APA, Harvard, Vancouver, ISO, and other styles
37

Simpson, Catherine. "Cars, Climates and Subjectivity: Car Sharing and Resisting Hegemonic Automobile Culture?" M/C Journal 12, no. 4 (September 3, 2009). http://dx.doi.org/10.5204/mcj.176.

Full text
Abstract:
Al Gore brought climate change into … our living rooms. … The 2008 oil price hikes [and the global financial crisis] awakened the world to potential economic hardship in a rapidly urbanising world where the petrol-driven automobile is still king. (Mouritz 47) Six hundred million cars (Urry, “Climate Change” 265) traverse the world’s roads or sit idly in garages, clogging city streets. The West’s economic progress has been built in part around the success of the automotive industry, where the private car rules the spaces and rhythms of daily life. The problem of “automobile dependence” (Newman and Kenworthy) is often cited as one of the biggest challenges facing countries attempting to combat anthropogenic climate change. Sociologist John Urry has claimed that automobility is an “entire culture” that has re-defined movement in the contemporary world (Urry Mobilities 133). As such, it is the single most significant environmental challenge “because of the intensity of resource use, the production of pollutants and the dominant culture which sustains the major discourses of what constitutes the good life” (Urry Sociology 57-8). Climate change has forced a re-thinking of not only how we produce and dispose of cars, but also how we use them. What might a society not dominated by the private, petrol-driven car look like? Some of the pre-eminent writers on climate change futures, such as Gwynne Dyer, James Lovelock and John Urry, discuss one possibility that might emerge when oil becomes scarce: societies will descend into civil chaos, “a Hobbesian war of all against all” where “regional warlordism” and the most brutish, barbaric aspects of human nature come to the fore (Urry, “Climate Change” 261).
Discussing a post-car society, John Urry also proffers another scenario in his “sociologies of the future:” an Orwellian “digital panopticon” in which other modes of transport, far more suited to a networked society, might emerge on a large scale and, in the long run, “might tip the system” into a post-car one before it is too late (Urry, “Climate Change” 261). Amongst the many options he discusses is car sharing. Since car sharing’s introduction in Germany more than 30 years ago, most of the critical literature has been devoted to its planning, environmental and business innovation aspects; however, very little has been written on its cultural dimensions. This paper analyses this small but developing trend in many Western countries, but more specifically its emergence in Sydney. The convergence of climate change discourse with that of the global financial crisis has resulted in a focus in the mainstream media, over the last few months, on technologies and practices that might save us money and also help the environment. For instance, a Channel 10 News story in May 2009 focused on the boom in car sharing in Sydney (see: http://www.youtube.com/watch?v=EPTT8vYVXro). Car sharing is an adaptive technology that doesn’t do away with the car altogether, but rather transforms the ways in which cars are used, thought about and promoted. I argue that car sharing provides a challenge to the dominant consumerist model of the privately owned car that has sustained capitalist structures for at least the last 50 years. In addition, through looking at some marketing and promotion tactics of car sharing in Australia, I examine some emerging car sharing subjectivities that both extend and subvert the long-established discourses of the automobile’s flexibility and autonomy to tempt monogamous car buyers into becoming philandering car sharers. Much literature has emerged over the last decade devoted to the ubiquitous phenomenon of automobility.
“The car is the literal ‘iron cage’ of modernity, motorised, moving and domestic,” claims Urry (“Connections” 28). Over the course of the twentieth century, automobility became “the dominant form of daily movement over much of the planet (dominating even those who do not move by cars)” (Paterson 132). Underpinning Urry’s prolific production of literature is his concept of automobility. This he defines as a complex system of “intersecting assemblages” that is not only about driving cars but the nexus between “production, consumption, machinic complexes, mobility, culture and environmental resource use” (Urry, “Connections” 28). In addition, Matthew Paterson, in his Automobile Politics, asserts that “automobility” should be viewed as everything that makes driving around in a car possible: highways, parking structures and traffic rules (87). While the private car seems an inevitable outcome of a capitalistic, individualistic modern society, much work has gone into the process of naturalising a dominant notion of automobility on drivers’ horizons. Through art, literature, popular music and brand advertising, the car has long been associated with seductive forms of identity, and societies have been built around a hegemonic culture of car ownership and driving as the pre-eminent, modern mode of self-expression. And more than 50 years of a popular Hollywood film genre—road movies—has been devoted to glorifying the car as total freedom, or in its more nihilistic version, “freedom on the road to nowhere” (Corrigan). As Paterson claims, “autonomous mobility of car driving is socially produced … by a range of interventions that have made it possible” (18). One of the main reasons automobility has been so successful, he claims, is its ability to reproduce capitalist society. It provided a commodity around which a whole set of symbols, images and discourses could be constructed which served to effectively legitimise capitalist society.
(30) Once the process is locked in, it then becomes difficult to reverse as billions of agents have adapted to it and built their lives around “automobility’s strange mixture of coercion and flexibility” (Urry, “Climate Change” 266). The Decline of the Car Globally, the greatest recent rupture in the automobile’s meta-narrative of success came about in October 2008 when three CEOs from the major US car firms (General Motors, Ford and Chrysler) begged the United States Senate for emergency loan funds to avoid going bankrupt. To put the economic significance of this into context, Emma Rothschild notes “when the listing of the ‘Fortune 500’ began in 1955, General Motors was the largest American corporation, and it was one of the three largest, measured in revenues, every year until 2007” (Rothschilds, “Can we transform”). Curiously, instead of focusing on the death of the car (industry) as we know it that this scenario might inevitably herald, much of the media attention focused on the hypocrisy and environmental hubris of the fact that all the CEOs had flown in private luxury jets to Washington. “Couldn’t they have at least jet-pooled?” complained one Democrat Senator (Wutkowski). On their next visit to Washington, most of them drove up in experimental vehicles still in pre-production, including plug-in hybrids. Up until that point no other manufacturing industry had been bailed out in the current financial crisis. Of course it’s not the first time the automobile industries have been given government assistance. The Australian automotive industry has received on-going government subsidies since the 1980s. Most recently, PM Kevin Rudd granted a $6.2 billion ‘green car’ package to Australian automotive manufacturers.
His justification to the growing chorus of doubts about the economic legitimacy of such a move was: “Some might say it's not worth trying to have a car industry, that is not my view, it is not the view of the Australian government and it never will be the view of any government which I lead” (The Australian). Amongst the many reasons for government support of these industries is the extraordinary interweaving of discourses of nationhood and progress with the success of the car industry. As the last few months reveal, the mantra of “what’s good for the country is good for GM and vice versa” evidently still prevails, as the former CEO of General Motors, Charles “Engine” Wilson, argued back in 1952 (Hirsch). In post-industrial societies like Australia it’s not only the economic aspects of the automotive industries that are criticised. Cars seem to be slowly losing the grip on identity-formation that they managed to maintain throughout “the century of the car” (Gilroy). They are no longer unproblematically associated with progress, freedom, youthfulness and absolute autonomy. The decline and eventual death of the automobile as we know it will be long, arduous and drawn-out. But there are some signs of a post-automobile society emerging, perhaps where cars will still be used but they will not dominate our society, urban space and culture in quite the same way that they have over the last 50 years. Urry discusses six transformations that might ‘tip’ the hegemonic system of automobility into a post-car one. He mentions new fuel systems, new materials for car construction, the de-privatisation of cars, development of communications technologies and integration of networked public transport through smart card technology and systems (Urry, Mobilities 281-284). As Paterson and others have argued, computers and mobile phones have somehow become “more genuine symbols of mobility and in turn progress” than the car (157).
As a result, much automobile advertising now intertwines communications technologies with brand to valorise mobility. Car sharing goes some way in not only de-privatising cars but also using smart card technology and networked systems enabling an association with mobility futures. In Automobile Politics Paterson asks, “Is the car fundamentally unsustainable? Can it be greened? Has the car been so naturalised on our mobile horizons that we can’t imagine a society without it?” (27). From a sustainability perspective, one of the biggest problems with cars is still the amount of space devoted to them: highways, garages, car parks. About one-quarter of the land in London and nearly one-half of that in Los Angeles is devoted to car-only environments (Urry, “Connections” 29). In Sydney, it is more like a quarter. We have to reduce the numbers of cars on our roads to make our societies livable (Newman and Kenworthy). Car sharing provokes a re-thinking of urban space. If one quarter of Sydney’s population car shared and we converted this space into green use or local market gardens, then we’d have a radically transformed city. Car sharing, not to be confused with ‘ride sharing’ or ‘car pooling,’ involves a number of people using cars that are parked centrally in dedicated car bays around the inner city. After becoming a member (much like a six- or twelve-month gym membership), members can book (and extend) the cars by the hour via the web or phone. They can then be accessed via a smart card. In Sydney there are three car sharing organisations operating: Flexicar (http://www.flexicar.com.au/), CharterDrive (http://www.charterdrive.com.au/) and GoGet (http://www.goget.com.au/).[1] The largest of these, GoGet, has been operating for six years and has over 5000 members and 200 cars located predominantly in the inner city suburbs. Anecdotally, GoGet claims its membership is primarily drawn from professionals living in the inner-urban ring.
Their motivation for joining is, firstly, the convenience that car sharing provides in a congested, public transport-challenged city like Sydney; secondly, the financial savings derived; and thirdly, members consider the environmental and social benefits axiomatic. [2] The promotion tactics of car sharing seem to reflect this by barely mentioning the environment but focusing on those aspects which link car sharing to futuristic and flexible subjectivities which I outline in the next section. Unlike traditional car rental, the vehicles in car sharing are scattered through local streets in a network allowing local residents and businesses access to the vehicles mostly on foot. One car share vehicle is used by 22-24 members and gets about seven cars off the street (Mehlman 22). With lots of different makes and models of vehicles in each of their fleets, Flexicar’s website claims, “around the corner, around the clock”: “Flexicar offers you the freedom of driving your own car without the costs and hassles of owning one,” while GoGet asserts, “like owning a car only better.” Due to the initial lack of interest from government, all the car sharing organisations in Australia are privately owned. This is very different to the situation in Europe where governments grant considerable financial assistance and have often integrated car sharing into pre-existing public transport networks. Urry discusses the spread of car sharing across the Western world: Six hundred plus cities across Europe have developed car-sharing schemes involving 50,000 people (Cervero, 2001). Prototype examples are found such as Liselec in La Rochelle, and in northern California, Berlin and Japan (Motavalli, 2000: 233). In Deptford there is an on-site car pooling service organized by Avis attached to a new housing development, while in Jersey electric hire cars have been introduced by Toyota.
(Urry, “Connections” 34) ‘Collaborative Consumption’ and Flexible, Philandering Subjectivities Car sharing shifts the dominant conception of a car from being a ‘commodity’, which people purchase and subsequently identify with, to a ‘service’ or network of vehicles that are collectively used. It does this through breaking down the one car = one person (or one family) ratio with one car instead servicing 20 or more people. One of Paterson’s biggest criticisms concerns car driving as “a form of social exclusion” (44). Car sharing goes some way in subverting the model of hyper-individualism that supports both hegemonic automobility and capitalist structures, whereby the private motorcar produces a “separation of individuals from one another driving in their own private universes with no account for anyone else” (Paterson 90). As a car sharer, the driver has to acknowledge that this is not their private domain, and the car no longer becomes an extension of their living room or bedroom, as is noted in much literature around car cultures (Morris, Sheller, Simpson). There is a community of people using the car, so the driver needs to be attentive to things like keeping the car clean and bringing it back on time so another person can use it. So while car sharing may change the affective relationship and self-identification with the vehicle itself, it doesn’t necessarily change the phenomenological dimensions of car driving, such as the nostalgic pleasure of driving on the open road, or perhaps more realistically in Sydney, the frustration of being caught in a traffic jam. However, the fact that the driver doesn’t own the vehicle does alter their relationship to the space and the commodity in a literal as well as a figurative way. Like car ownership, evidently car sharing also produces its own set of limitations on freedom and convenience.
That mobility and car ownership equals freedom—the ‘freedom to drive’—is one imaginary which car firms were able to successfully manipulate and perpetuate throughout the twentieth century. However, car sharing also attaches itself to the same discourses of freedom and pervasive individualism and then thwarts them. For instance, GoGet in Sydney has run numerous marketing campaigns that attempt to contest several ‘self-evident truths’ about automobility. One is flexibility. Flexibility (and associated convenience) was one thing that ownership of a car in the late twentieth century was firmly able to affiliate itself with. However, car ownership is now more often associated with being expensive, a hassle and a long-term commitment, through things like buying, licensing, service and maintenance, cleaning, fuelling, parking permits, etc. Cars have also long been linked with sexuality. When, in the 1970s, financial challenges to the car were coming as a result of the oil shocks, the Chair of General Motors, James Roche, stated that “America’s romance with the car is not over. Instead it has blossomed into a marriage” (Rothschilds, Paradise Lost). In one marketing campaign GoGet asked, ‘Why buy a car when all you need is a one-night stand?’, implying that owning a car is much like a monogamous relationship that engenders particular commitments and responsibilities, whereas car sharing can just be a ‘flirtation’ or a ‘one-night stand’ and you don’t have to come back if you find it a hassle. Car sharing produces a philandering subjectivity that gives individuals the freedom to have lots of different types of cars, and therefore relationships with each of them: I can be a Mini Cooper driver one day and a Falcon driver the next. This disrupts the whole kind of identification with one type of car that ownership encourages. It also breaks down a stalwart of capitalism—brand loyalty to a particular make of car with models changing throughout a person’s lifetime.
Car sharing engenders far more fluid types of subjectivities as opposed to those rigid identities associated with ownership of one car. Car sharing can also be regarded as part of an emerging phenomenon of what Rachel Botsman and Roo Rogers have called “collaborative consumption”—when a community gets together “through organized sharing, swapping, bartering, trading, gifting and renting to get the same pleasures of ownership with reduced personal cost and burden, and lower environmental impact” (www.collaborativeconsumption.com). As Urry has stated, these developments indicate a gradual transformation in current economic structures from ownership to access, as shown more generally by many services offered and accessed via the web (Urry Mobilities 283). Rogers and Botsman maintain that this has come about through the “convergence of online social networks, increasing cost consciousness and environmental necessity.” In the future we might expect an increasing shift to payment for ‘access’ to mobility services, rather than the outright private ownership of vehicles (Urry, “Connections”). Networked-Subjectivities or a ‘Digital Panopticon’? Cars, no longer able on their own to signify progress in either technical or social terms, attain their symbolic value through their connection to other, now more prevalently ‘progressive’ technologies. (Paterson 155) The term ‘digital panopticon’ has often been used to describe a dystopian world of virtual surveillance through such things as web-enabled social networking sites where much information is public, or alternatively, for example, the traffic surveillance system in London whereby the public can be constantly scrutinised through the centrally monitored cameras that track people’s/vehicle’s movements on city streets. In his “sociologies of the future,” Urry maintains that one thing which might save us from descending into post-car civil chaos is a “digital panopticon” mobility system.
This would be governed by a nexus system “that orders, regulates, tracks and relatively soon would ‘drive’ each vehicle and monitor each driver/passenger” (Urry, “Connections” 33). The transformation of mobile technologies over the last decade has made car sharing, as a viable business model, possible. Through car sharing’s exploitation of an online booking system, and cars that can be tracked, monitored and traced, the seeds of a mobile “networked-subjectivity” are emerging. But it’s not just the technology people are embracing; a cultural shift is occurring in the way that people understand mobility, their own subjectivity, and more importantly, the role of cars. NETT Magazine did a feature on car sharing, and advertised it on its front cover as “GoGet’s web and mobile challenge to car owners” (May 2009). Car sharing seems to be able to tap into more contemporary understandings of what mobility and flexibility might mean in the twenty-first century. In their marketing and promotion tactics, car sharing organisations often discursively exploit science fiction terminology and generate a subjectivity much more dependent on networks and accessibility (158). In the suburbs people park their cars in garages. In car sharing, the vehicles are parked not in car bays or car parks, but in publicly accessible ‘pods’, which promotes a futuristic, sci-fi experience. Even the phenomenological dimension of swiping a smart card over the front of the windscreen to open the car, instead of using a key, engenders a transformation in access to the car. This is the service-technology of the future, while those stuck in car ownership are from the old economy and the “century of the car” (Gilroy). The connections between car sharing and the mobile phone and other communications technologies are part of the notion of a networked, accessible vehicle. However, the more problematic side to this is the car under surveillance.
Nic Lowe of car sharing organisation GoGet says, “Because you’re tagged on and we know it’s you, you are able to drive the car… every event you do is logged, so we know what time you turned the key, what time you turned it off and we know how far you drove … if a car is lost we can sound the horn to disable it remotely to prevent theft. We can track how fast you were going and even how fast you accelerated … track the kilometres for billing purposes and even find out when people are using the car when they shouldn’t be” (Mehlman 27). The possibility with the GPS technology installed in the car is being able to monitor the speeds at which people drive, thereby fining them for every minute spent going over the speed limit. While this conjures up the notion of the car under surveillance, it is also a much less bleak scenario than “a Hobbesian war of all against all”. Conclusion: “Hundreds of Cars, No Garage” The prospect of climate change is provoking innovation at a whole range of levels, as well as providing a re-thinking of how we use taken-for-granted technologies. Sometime this century the one tonne, privately owned, petrol-driven car will become an artefact, much like Sydney trams did last century. At this point in time, car sharing can be regarded as an emerging transitional technology to a post-car society that provides a challenge to hegemonic automobile culture. It is evidently not a radical departure from the car’s vast machinic complex and still remains a part of what Urry calls the “system of automobility”. From a pro-car perspective, its networked surveillance places constraints on the free agency of the car, while for those of the deep green variety it is, no doubt, a compromise. Nevertheless, it provides a starting point for re-thinking the foundations of the privately-owned car.
While Urry makes an important point in relation to a society moving from ownership to access, he doesn’t take into account the cultural shifts occurring that are enabling car sharing to be attractive to prospective members: the notion of networked subjectivities, the discursive constructs used to establish car sharing as a thing of the future with pods and smart cards instead of garages and keys. If car sharing became mainstream it could have radical environmental impacts on things like urban space and pollution, as well as the dominant culture of “automobile dependence” (Newman and Kenworthy), as Australia attempts to move to a low carbon economy. Notes [1] My partner Bruce Jeffreys, together with Nic Lowe, founded Newtown Car Share in 2002, which is now called GoGet. [2] Several layers down in the ‘About Us’ link on GoGet’s website is the following information about the environmental benefits of car sharing: “GoGet's aim is to provide a reliable, convenient and affordable transport service that: allows people to live car-free, decreases car usage, improves local air quality, removes private cars from local streets, increases patronage for public transport, allows people to lead more active lives” (http://www.goget.com.au/about-us.html). References The Australian. “Kevin Rudd Throws $6.2bn Lifeline to Car Industry.” 10 Nov. 2008. <http://www.theaustralian.news.com.au/business/story/0,28124,24628026-5018011,00.html>. Corrigan, Tim. “Genre, Gender, and Hysteria: The Road Movie in Outer Space.” A Cinema Without Walls: Movies, Culture after Vietnam. New Jersey: Rutgers University Press, 1991. Dyer, Gwynne. Climate Wars. North Carlton: Scribe, 2008. Featherstone, Mike. “Automobilities: An Introduction.” Theory, Culture and Society 21.4-5 (2004): 1-24. Gilroy, Paul. “Driving while Black.” Car Cultures. Ed. Daniel Miller. Oxford: Berg, 2000. Hirsch, Michael. “Barack the Saviour.” Newsweek 13 Nov. 2008. <http://www.newsweek.com/id/168867>. Lovelock, James.
The Revenge of Gaia: Earth’s Climate Crisis and the Fate of Humanity. Penguin, 2007.

Lovelock, James. The Vanishing Face of Gaia. Penguin, 2009.

Mehlman, Josh. “Community Driven Success.” NETT Magazine (May 2009): 22-28.

Morris, Meaghan. “Fate and the Family Sedan.” East West Film Journal 4.1 (1989): 113-134.

Mouritz, Mike. “City Views.” Fast Thinking Winter 2009: 47-50.

Newman, P., and J. Kenworthy. Sustainability and Cities: Overcoming Automobile Dependence. Washington DC: Island Press, 1999.

Paterson, Matthew. Automobile Politics: Ecology and Cultural Political Economy. Cambridge: Cambridge University Press, 2007.

Rothschild, Emma. Paradise Lost: The Decline of the Auto-Industrial Age. New York: Random House, 1973.

Rothschild, Emma. “Can We Transform the Auto-Industrial Society?” New York Review of Books 56.3 (2009). <http://www.nybooks.com/articles/22333>.

Sheller, Mimi. “Automotive Emotions: Feeling the Car.” Theory, Culture and Society 21 (2004): 221-42.

Simpson, Catherine. “Volatile Vehicles: When Women Take the Wheel.” Womenvision. Ed. Lisa French. Melbourne: Damned Publishing, 2003. 197-210.

Urry, John. Sociology Beyond Societies: Mobilities for the 21st Century. London: Routledge, 2000.

Urry, John. “Connections.” Environment and Planning D: Society and Space 22 (2004): 27-37.

Urry, John. Mobilities. Cambridge and Malden, MA: Polity Press, 2008.

Urry, John. “Climate Change, Travel and Complex Futures.” British Journal of Sociology 59.2 (2008): 261-279.

Watts, Laura, and John Urry. “Moving Methods, Travelling Times.” Environment and Planning D: Society and Space 26 (2008): 860-874.

Wutkowski, Karey. “Auto Execs' Private Flights to Washington Draw Ire.” Reuters News Agency 19 Nov. 2008. <http://www.reuters.com/article/newsOne/idUSTRE4AI8C520081119>.
38

Hackett, Lisa J. "Addressing Rage: The Fast Fashion Revolt." M/C Journal 22, no. 1 (March 13, 2019). http://dx.doi.org/10.5204/mcj.1496.

Full text
Abstract:
Wearing clothing from the past is all the rage now. Different styles and aesthetics of vintage and historical clothing, original or appropriated, are popular with fashion wearers and home sewers. Social media is rich with images of anachronistic clothing, and the major pattern companies have a large range of historical sewing patterns available. Butterick McCall, for example, have a Making History range of patterns for sewers of clothing from a range of historical periods up to the 1950s. The 1950s styled fashion is particularly popular with pattern producers. Yet little research exists that explains why anachronistic clothing is all the rage. Drawing on 28 interviews conducted by the author with women who wear/make 1950s style clothing and a survey of 229 people who wear/make historical clothing, this article outlines four key reasons that help explain the popularity of wearing/making anachronistic clothing. It argues that there exists rage against four ‘fast fashion’ practices: environmental disregard, labour breaches, poor quality, and poor fit. Ethical consumption practices, such as home sewing quality clothes that fit, seek to ameliorate this rage. That much of what is being made is anachronistic speaks to past sewing techniques that were ethical and produced quality, well-fitting garments, in contrast to fashion today that doesn’t fit, is of poor quality, and is unethical in its production.

Fig. 1: Craftivist Collective Rage: Protesting Fast Fashion

Rage against Fast Fashion

Rage against fast fashion is not new. Controversies over Disney’s and Nike’s use of child labour in the 1990s, the anti-fur campaigns of the 1980s, the widespread condemnation of factory conditions in Bangladesh in the wake of the 2013 Rana Plaza collapse, and Tess Holliday’s Eff Your Beauty Standards campaign are evidence of this. Fast fashion is “cheap, trendy clothing, that samples ideas from the catwalk or celebrity culture and turns them into garments … at breakneck speed” (Rauturier).
It is produced cheaply in short turnarounds, manufactured offshore by slave labour, with the industry hiding these exploitative practices behind, and in, complex supply chains. The clothing is made from poor quality material, meaning it doesn’t last, and the material is not environmentally sustainable. Because of this, fast fashion is generally not recycled and ends up as waste in landfills. This for Rauturier is what fast fashion is: “cheap, low quality materials, where clothes degrade after just a few wears and get thrown away”. The fast fashion industry engages in two discrete forms of obsolescence: planned and perceived. Planned obsolescence is where clothes are designed to have a short lifespan, thus coercing the consumer into buying a replacement item sooner than intended. Claims that clothes now last only a few washes before falling apart are common in the media (Dunbar). This is due to conscious manufacturing techniques that reduce the lifespan of the clothes, including using mixed fibres, poor-quality interfacing, and polyester threads, to name a few. Perceived obsolescence is where the consumer believes an otherwise functioning item of clothing to be no longer of value. This is borne out in the idea that an item is deemed to be “in vogue” or “in fashion” and its value to the consumer is thus embedded in that quality. Once it falls out of fashion, it is deemed worthless. Laver’s “fashion cycle” elucidated this idea over eighty years ago. Since the 1980s the fashion industry has sped up, moving from the traditional twice-annual fashion seasons to the fast fashion system of constantly manufacturing new styles, sometimes weekly. The technologies that have allowed the rapid manufacturing of fast fashion mean that the clothes are cheaper and more readily available. The average price of clothing has dropped accordingly. An item that cost US$100 in 1993 cost only US$59.10 in 2013, a drop of 41 per cent (Perry, Chart).
The average person in 2014 bought 60 per cent more clothing than they did in 2000. Fast fashion is generally unsaleable in the second-hand market, due to its volume and poor design and manufacture. Green notes that many charity clothing stores bin a large percentage of the fast fashion items they receive.

Environmental Rage

Consumers are increasingly expressing rage about the environmental impact of fast fashion. The production of different textiles places different stresses on the environment. Cotton, for example, accounts for one third of the fibres found in all textiles, yet it requires high levels of water. A single cotton shirt needs 2,700 litres of water alone, the equivalent to “what one person drinks in two-and-a-half years” (Drew & Yehounme). Synthetics don’t represent an environmentally friendly alternative. While they may need less water, they are more carbon-intensive, and polyester has twice the carbon footprint of cotton (Drew & Yehounme). Criticisms of fast fashion also include “water pollution, the use of toxic chemicals and increasing levels of textile waste”. Textile dyeing is the “second largest polluter of clean water globally”. The inclusion of chemicals in the manufacturing of textiles is “disruptive to hormones and carcinogenic” (Perry, Cost). Naomi Klein’s exposure of the past problems of fast fashion, and revelations such as these, inform why consumers are enraged by the fast fashion system. The State of Fashion 2019 Report found many of the issues Klein interrogated remain of concern to consumers. Consumers continue to feel enraged at the industry’s disregard for the environment (Shaw et al.) and many are seeking alternative sources of sustainable fashion. For some consumers, the ethical dilemmas are overcome by purchasing second-hand or recycled clothing, or by participating in Clothing Exchanges. Another way of ameliorating the rage is to stop buying new clothes and to make and wear one’s own.
A recent article in The Guardian, “‘Don’t Feed the Monster!’ The People Who Have Stopped Buying New Clothes”, highlights the “growing movement” of people seeking to make a “personal change” in response to the ethical dilemmas fast fashion poses to the environment. While political groups like Fashion of Tomorrow argue for collective legislative changes to ensure environmental sustainability in the industry, consumers are also finding their own individual ways of ameliorating their rage against fast fashion. Over recent decades Australians have consistently shown concern over environmental issues. A 2016 national survey found that 63 per cent of Australians considered themselves to be environmentalists, and this is echoed in the ABC’s War on Waste programme, which examined attitudes to, and effects of, clothing waste in Australia. In my interviews with women wearing 1950s style clothing, almost 65 per cent indicated a distinct dissatisfaction with mainstream fashion and frustration particularly with pernicious ‘fast fashion’. One participant offered: “seeing the War on Waste and all the fast fashion … I really like if I can get it second hand … you know I feel like I am helping a little bit” [Gabrielle]. Traid, a network of UK charity clothes shops that diverts 3,000 tonnes of clothes from landfill to the second-hand market annually, reported a 30 per cent increase in its second-hand clothes sales for 2017-18 (Cocozza). The Internet has helped expand the second-hand clothing market. Two participants offered these insights: “I am completely addicted to the Review Buy Swap and Sell Page” [Anna] and “Instagram is huge for girls like us to communicate and get ideas” [Ashleigh].

Slave Rage

The history of fashion is replete with examples of exploitation of workers.
From the seamstresses of France in the eighteenth century, who had to turn to prostitution to supplement their meagre wages (Jones 16), to the twenty-first century sweatshop workers earning less than a living wage in developing nations, poor work conditions have plagued the industry. For Karl Marx fashion represented a contradiction within capitalism where labour was exploited to create a mass-produced item. He lambasted the fashion industry and its “murderous caprices”, and despite his dream that the invention of the sewing machine would alleviate the stress placed on garment workers, technology has only served to intensify its demands on its poor workers (Sullivan 36-37). The 2013 Rana Plaza factory disaster shows just how far some sections of the industry are willing to go in their race to the bottom. In the absence of enforceable, global fair-trade initiatives, it is hard for consumers to purchase goods that reflect their ethos (Shaw et al. 428). While there is now much more focus on better labour practices in the fashion industry, as the Baptist World Aid Australia’s annual Ethical Fashion Report shows, consumers are still critical of the industry and its labour practices. A significant number of participants in my research indicated that they actively sought to purchase products that were produced free from worker exploitation. For some participants, the purchasing of second-hand clothing allowed them to circumvent the fast fashion system. For others, mid-century reproduction fashion was sourced from markets with strong labour laws and “ethically made” without the use of sweatshop labour [Emma]. Alternatively, another participant rejected buying new vintage fashion and instead purchased originally made fashion, in this case clothing made 50 to 60 years ago. This was one way of ensuring “some poor … person has [not] had to work really hard for very little money … [while the] shop is gaining all the profits” [Melissa].
Quality Rage

Planned obsolescence in fashion has existed at least since the 1940s, when DuPont ensured their nylon stockings were thin enough to ladder to ensure repeat custom (Meynen). Since then manufacturers have deliberately used poor techniques and poor materials – blended fabrics, unfinished seams, unfixed dyes, for example – to ensure that clothes fail quickly. A 2015 UK Barnardo’s survey found clothes were worn an average of just seven times, which is not surprising given that clothes can last as little as two washes before being worn out (Dunbar). Extreme planned obsolescence in concert with perceived obsolescence can lead to clothes being discarded before even their short lifespan has expired. The War on Waste interviewed young women who wore clothes sometimes only once before discarding them. Not all women are concerned with keeping up to date with fashion; many instead want to create their own identity through clothes and therefore look for durability. Many of the women interviewed for this research were aware of the declining quality of clothes, often referring to those made before the fast fashion era as evidence of quality clothing. For many in this study, the manufacturing of classically styled clothing was of higher concern than mimicking the latest fashion trend. Some indicated their “disgust” at the poor quality of fast fashion [Gabrielle]. Others expressed outrage at the cost of poorly made fast fashion: “I don’t like spending a lot of money on clothing that I know may not necessarily be well made” [Skye] and “I got sick of dresses just being see through … you know, seeing my bras under things” [Becky]. For another: “I don’t like the whole mass-produced thing. I don’t think that they are particularly well made … Sometimes they are made with a tiny waist but big boobs, there’s no seams on them, they’re just overlocked together …” [Vicky].
For other participants in this research, fast fashion items were considered inferior to original items. One put it this way: “[On using vintage wares] If something broke, you fixed it. You didn’t throw it away and go down to [the shop] and buy a new one ... You look at stuff from these days … you could buy a handbag today and you are like ‘is this going to be here in two years? Or is it going to fall apart in my hands?’ … there’s that strength and durability that I do like” [Ashleigh]. For another, “vintage reproduction stuff is so well made, it’s not like fast fashion, like Vivien of Holloway and Pin Up Girl Clothing, their pieces last forever, they don’t fall apart after five washes like fast fashion” [Emma]. The following encapsulates the rage felt in response to fast fashion: I think a lot of people are wearing true vintage clothing more often as a kind of backlash to the whole fast fashion scene … you could walk into any shop and you could see a lot of clothing that is very, very cheap, but it’s also very cheaply made. You are going to wear it and it’s going to fall apart in six months and that is not something that I want to invest in. [Melissa]

Fit Rage

Fit is a multi-faceted issue that affects consumers in several ways: body size, body shape, and height. Body size refers to the actual physical size of the body, whether one is underweight, slim, average, muscular or fat. Fast fashion body size labelling reflects what the industry considers to be ‘normal’ sizes, ranging from a size 8 through to a size 16 (Hackett & Rall). Body shape is a separate, if not entirely discrete, issue. Women differ widely in the ratios between their hips, bust and waist. Body shape distribution varies widely within populations; for example, the ‘Size USA’ study identified 11 different female body shapes, with wide variations between populations (Lee et al.). Even this doesn’t consider bodies with physical disabilities.
Clothing is designed to fit women of ‘average’ height, thus bodies that are taller or shorter are often excluded from fast fashion (Valtonen). Even though Australian sizing practices are based on erroneous historical data (Hackett and Rall; Kennedy), the fast fashion system continues to manufacture for average body shapes and average body heights, to the exclusion of others. Discrimination through clothing sizes represents one way in which social norms are reinforced. Garments for larger women are generally regarded as less fashionable (Peters 48). Enraged consumers label some of the offerings ‘fat sacks’, ‘tents’ and ‘camouflage wear’ (Colls 591-592). Further, plus-size clothing is often more expensive and, having been ‘sized up’ from smaller sizes, the result is poor fit. Larger bodies therefore have less autonomy in fashioning their identity (Peters 45). Size restrictions can lead to consumers having to choose between going without a desired item or wearing a size too small for them as no larger alternative is available (Laitala et al. 33-34). The ideology behind the thin aesthetic is that it is framed as aspirational (Barry), and thus consumers are motivated to purchase clothes based upon a desire to fit in with this beauty ideal. This is a false dichotomy (Halliwell and Dittmar 105; Bian and Wang). For participants in this research, rage at fast fashion’s persistence in producing for ‘average’ sized women was clearly evident. For a plus-size participant: “I don’t suit modern stuff. I’m a bigger girl and that’s not what style is these days. And so, I find it just doesn’t work for me” [Ashleigh]. For non-plus participants, sizing rage was also evident: I’m just like a praying mantis, a long string bean. I’m slim, tall … I do have the body shape … that fast fashion catered for, and I can still dress in fast fashion, but I think the idea that so many women feel excluded by that kind of fashion, I just want to distance myself from it.
So, so many women have struggles in the change rooms in shopping centres because things don’t fit them nicely. [Emma] For this participant, reproduction fashion wasn’t vanity sized. That is, a dress from the 1950s had the body measurements on the label rather than a number reflecting an arbitrary and erroneous sizing system. Some noted their disregard for the standardised sizing systems used exclusively by fast fashion: “I have very non-standard measurements … I don’t buy dresses for that reason … My bust and my waist and my hips don’t fit a standard. You know I can’t go ‘ooh that’s a 12, that’s an 18’. You know, I don’t believe in standard sizing basically” [Skye]. Variations in sizing between brands add to the frustration of fashion consumers: “if someone says 'I’m a size 16' that means absolutely nothing. If you go between brands … [shop A] XXL to a [shop B] to a [shop C] XXL to a [shop D] XXL, you know … they’re not the same. They won’t fit the same, they don’t have the same fit” [Skye]. These women recognise that their body shape, size and/or height is not catered for by fast fashion. This frees them to look for alternatives beyond the product offerings of the mainstream fashion industry. Although the rage against aspects of fast fashion discussed here – environmental, labour, quality and fit – is not seeing people in the streets protesting, people are actively choosing to find alternatives to the problem of sourcing clothes that fit their ethos.

References

ABC Television. "Coffee Cups and Fast Fashion." War on Waste. 30 May 2017.

Barnardo's. "Once Worn, Thrice Shy – British Women’s Wardrobe Habits Exposed!" 11 June 2015. 1 Mar. 2019 <http://www.barnardos.org.uk/news/press_releases.htm?ref=105244>.

Barry, Ben. "Selling Whose Dream? A Taxonomy of Aspiration in Fashion Imagery." Fashion, Style & Popular Culture 1.2 (2014): 175-92.

Cocozza, Paula.
“‘Don’t Feed The Monster!’ The People Who Have Stopped Buying New Clothes”. The Guardian 19 Feb. 2019. 20 Feb. 2019 <http://www.theguardian.com/fashion/2019/feb/19/dont-feed-monster-the-people-who-have-stopped-buying-new-clothes#comment-126048716>.

Colls, Rachel. "‘Looking Alright, Feeling Alright’: Emotions, Sizing and the Geographies of Women's Experiences of Clothing Consumption." Social & Cultural Geography 5.4 (2004): 583-96.

Drew, Deborah, and Genevieve Yehounme. "The Apparel Industry’s Environmental Impact in 6 Graphics." World Resources Institute July 2017. 24 Feb. 2018 <http://www.wri.org/blog/2017/07/apparel-industrys-environmental-impact-6-graphics>.

Dunbar, Polly. "How Your Clothes Are Designed to Fall Apart: From Dodgy Stitching to Cheap Fabrics, Today's Fashions Are Made Not to Last – So You Have to Buy More." Daily Mail 18 Aug. 2016. 25 Feb. 2018 <http://www.dailymail.co.uk/femail/article-3746186/Are-clothes-fall-apart-dodgy-stitching-cheap-fabrics-today-s-fashions-designed-not-buy-more.html>.

Hackett, Lisa J., and Denise N. Rall. "The Size of the Problem with the Problem of Sizing: How Clothing Measurement Systems Have Misrepresented Women’s Bodies from the 1920s – Today." Clothing Cultures 5.2 (2018): 263-83.

Kennedy, Kate. "What Size Am I? Decoding Women's Clothing Standards." Fashion Theory 13.4 (2009): 511-30.

Klein, Naomi. No Logo, No Space, No Choice, No Jobs: Taking Aim at the Brand Bullies. London: Flamingo, 2000.

Laitala, Kirsi, Ingun Grimstad Klepp, and Benedict Hauge. "Materialised Ideals: Sizes and Beauty." Culture Unbound: Journal of Current Cultural Research 3 (2011): 19-41.

Laver, James. Taste and Fashion. London: George G. Harrap, 1937.

Lee, Jeong Yim, Cynthia L. Istook, Yun Ja Nam, and Sun Mi Pak. "Comparison of Body Shape between USA and Korean Women."
International Journal of Clothing Science and Technology 19.5 (2007): 374-91.

Perry, Mark J. "Chart of the Day: The CPI for Clothing Has Fallen by 3.3% over the Last 20 Years, while Overall Prices Increased by 63.5%." AEIdeas 12 Oct. 2013. 4 Jan. 2019 <http://www.aei.org/publication/chart-of-the-day-the-cpi-for-clothing-has-fallen-by-3-3-over-the-last-20-years-while-overall-prices-increased-by-63-5/>.

Perry, Patsy. “The Environmental Cost of Fast Fashion.” Independent 8 Jan. 2018. 1 Mar. 2019 <https://www.independent.co.uk/life-style/fashion/environment-costs-fast-fashion-pollution-waste-sustainability-a8139386.html>.

Peters, Lauren Downing. "You Are What You Wear: How Plus-Size Fashion Figures in Fat Identity Formation." Fashion Theory 18.1 (2014): 45-71.

Rauturier, Solene. “What Is Fast Fashion?” 1 Aug. 2010. 1 Mar. 2019 <https://goodonyou.eco/what-is-fast-fashion/>.

Shaw, Deirdre, Gillian Hogg, Edward Shui, and Elaine Wilson. "Fashion Victim: The Impact of Fair Trade Concerns on Clothing Choice." Journal of Strategic Marketing 14.4 (2006): 427-40.

Sullivan, Anthony. "Karl Marx: Fashion and Capitalism." Thinking through Fashion. Eds. Agnès Rocamora and Anneke Smelik. London: I.B. Tauris, 2016. 28-45.

Valtonen, Anu. "Height Matters: Practicing Consumer Agency, Gender, and Body Politics." Consumption Markets & Culture 16.2 (2013): 196-221.
39

Jethani, Suneel. "Lists, Spatial Practice and Assistive Technologies for the Blind." M/C Journal 15, no. 5 (October 12, 2012). http://dx.doi.org/10.5204/mcj.558.

Full text
Abstract:
Introduction

Supermarkets are functionally challenging environments for people with vision impairments. A supermarket is likely to house an average of 45,000 products in a median floor-space of 4,529 square metres, and many visually impaired people are unable to shop without assistance, which greatly impedes personal independence (Nicholson et al.). The task of selecting goods in a supermarket is an “activity that is expressive of agency, identity and creativity” (Sutherland) from which many vision-impaired persons are excluded. In response to this, a number of proof of concept (demonstrating feasibility) and prototype assistive technologies are being developed which aim to use smart phones as potential sensorial aides for vision impaired persons. In this paper, I discuss two such prototypic technologies, Shop Talk and BlindShopping. I engage with this issue’s list theme by suggesting that, on the one hand, list making is a uniquely human activity that demonstrates our need for order, reliance on memory, reveals our idiosyncrasies, and provides insights into our private lives (Keaggy 12). On the other hand, lists feature in the creation of spatial inventories that represent physical environments (Perec 3-4, 9-10). The use of lists in the architecture of assistive technologies for shopping illuminates the interaction between these two modalities of list use, where items contained in a list are not only textual but also cartographic elements that link the material and immaterial in space and time (Haber 63). I argue that despite the emancipatory potential of assistive shopping technologies, their efficacy in practical situations is highly dependent on the extent to which they can integrate a number of lists to produce representations of space that are meaningful for vision impaired users.
I suggest that the extent to which these prototypes may translate into commercially viable, widely adopted technologies is heavily reliant upon commercial and institutional infrastructures, data sources, and regulation. Thus, their design, manufacture and adoption-potential are shaped by the extent to which certain data inventories are accessible and made interoperable. To overcome such constraints, it is important to better understand the “spatial syntax” associated with the shopping task for a vision impaired person; that is, the connected ordering of real and virtual spatial elements that result in a supermarket as a knowable space within which an assisted “spatial practice” of shopping can occur (Kellerman 148, Lefebvre 16). In what follows, I use the concept of lists to discuss the production of supermarket-space in relation to the enabling and disabling potentials of assistive technologies. First, I discuss mobile digital technologies relative to disability and impairment and describe how the shopping task produces a disabling spatial practice. Second, I present a case study showing how assistive technologies function in aiding vision impaired users in completing the task of supermarket shopping. Third, I discuss various factors that may inhibit the liberating potential of technology assisted shopping by vision-impaired people.

Addressing Shopping as a Disabling Spatial Practice

Consider how a shopping list might inform one’s experience of supermarket space. The way shopping lists are written demonstrates the variability in the logic that governs list writing. As Bill Keaggy demonstrates in his found shopping list Web project and subsequent book, Milk, Eggs, Vodka, a shopping list may be written on a variety of materials, be arranged in a number of orientations, and the writer may use differing textual attributes, such as size or underlining, to show emphasis.
The writer may use longhand, abbreviate, write neatly, scribble, and use an array of alternate spelling and naming conventions. For example, items may be listed based on knowledge of the location of products, they may be arranged on a list as a result of an inventory of a pantry or fridge, or they may be copied in the order they appear in a recipe. Whilst shopping, some may follow strictly the order of their list, crossing back and forth between aisles. Some may work through their list item-by-item, perhaps forward scanning to achieve greater economies of time and space. As a person shops, their memory may be stimulated by visual cues reminding them of products they need that may not be included on their list. For the vision impaired, this task is near impossible to complete without the assistance of a relative, friend, agency volunteer, or store employee. Such forms of assistance are often unsatisfactory, as delays may be caused by the unavailability of an assistant, or the assistant having limited literacy, knowledge, or patience to adequately meet the shopper’s needs. Home delivery services, though readily available, impede personal independence (Nicholson et al.). Katie Ellis and Mike Kent argue that “an impairment becomes a disability due to the impact of prevailing ableist social structures” (3). It can be said, then, that supermarkets function as a disability producing space for the vision impaired shopper.
For the vision impaired, a supermarket is a “hegemonic modern visual infrastructure” where, for example, merchandisers may reposition items regularly to induce customers to explore areas of the shop that they wouldn’t usually, a move which adds to the difficulty faced by those customers with impaired vision who work on the assumption that items remain as they usually are (Schillmeier 161). In addressing this issue, much emphasis has been placed on the potential of mobile communications technologies in affording vision impaired users greater mobility and flexibility (Jolley 27). However, as Gerard Goggin argues, the adoption of mobile communication technologies has not necessarily “gone hand in hand with new personal and collective possibilities” given the limited access to standard features, even if the device is text-to-speech enabled (98). Issues with Digital Rights Management (DRM) limit the way a device accesses and reproduces information, and confusion over whether audio rights are needed to convert text-to-speech impedes the accessibility of mobile communications technologies for vision impaired users (Ellis and Kent 136). Accessibility and functionality issues like these arise out of the needs, desires, and expectations of the visually impaired as a user group being considered as an afterthought, as opposed to a significant factor in the early phases of design and prototyping (Goggin 89). Thus, the development of assistive technologies for the vision impaired has been left to third parties who must adapt their solutions to fit within certain technical parameters. It is valuable to consider what is involved in the task of shopping in order to appreciate the considerations that must be made in the design of shopping intended assistive technologies.
Shopping generally consists of five sub-tasks: travelling to the store; finding items in-store; paying for and bagging items at the register; exiting the store and getting home; and, the often overlooked task of putting items away once at home. In this process supermarkets exhibit a “trichotomous spatial ontology” consisting of locomotor space through which a shopper moves around the store, haptic space in the immediate vicinity of the shopper, and search space where individual products are located (Nicholson et al.). In completing these tasks, a shopper will constantly be moving through and switching between all three of these spaces. In the next section I examine how assistive technologies function in producing supermarkets as both enabling and disabling spaces for the vision impaired.

Assistive Technologies for Vision Impaired Shoppers

Jason Farman (43) and Adriana de Souza e Silva both argue that in many ways spaces have always acted as information interfaces where data of all types can reside. Global Positioning System (GPS), Radio Frequency Identification (RFID), and Quick Response (QR) codes all allow for practically every spatial encounter to be an encounter with information. Site-specific and location-aware technologies address the desire for meaningful representations of space for use in everyday situations by the vision impaired. Further, the possibility of an “always-on” connection to spatial information via a mobile phone with WiFi or 3G connections transforms spatial experience by “enfolding remote [and latent] contexts inside the present context” (de Souza e Silva). A range of GPS navigation systems adapted for vision-impaired users are currently on the market. Typically, these systems convert GPS information into text-to-speech instructions and are either standalone devices, such as the Trekker Breeze, or they use the compass, accelerometer, and 3G or WiFi functions found on most smart phones, such as Loadstone.
Whilst both these products are adequate in guiding a vision-impaired user from their home to a supermarket, there are significant differences in their interfaces and data architectures. Trekker Breeze is a standalone hardware device that produces talking menus, maps, and GPS information. While its navigation functionality relies on a worldwide radio-navigation system that uses a constellation of 24 satellites to triangulate one’s position (May and LaPierre 263-64), its map and text-to-speech functionality relies on data on a DVD provided with the unit. Loadstone is an open source software system for Nokia devices that has been developed within the vision-impaired community. Loadstone is built on GNU General Public License (GPL) software and is developed from private and user-based funding; this overcomes the issue of Trekker Breeze’s reliance on the trading policies and pricing models of the few global vendors of satellite navigation data. Both products have significant shortcomings if viewed in the broader context of the five sub-tasks involved in shopping described above. Trekker Breeze and Loadstone require that additional devices be connected to them: in the case of Trekker Breeze a tactile keypad, and with Loadstone an aftermarket screen reader. To function optimally, Trekker Breeze requires that routes be pre-recorded and, according to a review conducted by the American Foundation for the Blind, it requires a 30-minute warm-up time to properly orient itself. Both Trekker Breeze and Loadstone allow users to create and share Points of Interest (POI) databases showing the location of various places along a given route. Non-standard or duplicated user-generated content in POI databases may, however, have a negative effect on usability (Ellis and Kent 2).
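The usability problem of non-standard or duplicated user-generated POIs can be illustrated with a simple merging routine. This is a hypothetical sketch only, not part of Trekker Breeze or Loadstone: it assumes POIs are plain (name, latitude, longitude) tuples and treats two entries as duplicates when their normalised labels match and they lie within roughly ten metres of each other.

```python
import math

def normalise(name):
    """Canonicalise a user-entered POI label (case, whitespace)."""
    return " ".join(name.lower().split())

def distance_m(a, b):
    """Approximate distance in metres between two (lat, lon) pairs."""
    lat = math.radians((a[0] + b[0]) / 2)
    dy = (a[0] - b[0]) * 111_320                  # metres per degree of latitude
    dx = (a[1] - b[1]) * 111_320 * math.cos(lat)  # scaled by latitude
    return math.hypot(dx, dy)

def merge_pois(*databases, tolerance_m=10):
    """Merge shared POI lists, dropping near-duplicate entries."""
    merged = []
    for db in databases:
        for name, lat, lon in db:
            key = normalise(name)
            if not any(key == normalise(m[0]) and
                       distance_m((lat, lon), (m[1], m[2])) <= tolerance_m
                       for m in merged):
                merged.append((name, lat, lon))
    return merged

# Hypothetical shared databases from two users; the second repeats a POI.
user_a = [("Pedestrian Crossing", -33.87000, 151.21000)]
user_b = [("pedestrian  crossing", -33.87001, 151.21001),   # duplicate
          ("Supermarket Entrance", -33.87050, 151.21080)]
print(merge_pois(user_a, user_b))
```

A real system would need fuzzier matching (abbreviations, misspellings), which is precisely why free-form user-generated POI content degrades usability.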
Furthermore, GPS-based navigation systems are accurate to approximately ten metres, which means that users must rely on their own mobility skills when they are required to change direction or stop for traffic. This issue with GPS accuracy is more pronounced when a vision-impaired user is approaching a supermarket, where they are likely to encounter environmental hazards with greater frequency, and both pedestrian and vehicular traffic in greater density. Here the relations between space defined and spaces poorly defined or undefined by the GPS device interact to produce the supermarket surrounds as a disabling space (Galloway).

Prototype Systems for Supermarket Navigation and Product Selection

In the discussion to follow, I look at two prototype systems using QR codes and RFID that are designed to be used in-store by vision-impaired shoppers. ShopTalk is a proof-of-concept system developed by researchers at Utah State University that uses synthetic verbal route directions to assist vision-impaired shoppers with supermarket navigation, product search, and selection (Nicholson et al.). Its hardware consists of a portable computational unit, a numeric keypad, a wireless barcode scanner and base station, headphones for the user to receive the synthetic speech instructions, a USB hub to connect all the components, and a backpack to carry them (with the exception of the barcode scanner), which has been slightly modified with a plastic stabiliser to assist in correct positioning. ShopTalk represents the supermarket environment using two data structures. The first is comprised of two elements: a topological map of locomotor space that allows directional labels of “left,” “right,” and “forward” to be added to the supermarket floor plan; and, for navigation of haptic space, the supermarket inventory management system, which is used to create verbal descriptions of product information.
The second data structure is a Barcode Connectivity Matrix (BCM), which associates each shelf barcode with several pieces of information, such as aisle, aisle side, section, shelf, position, Universal Product Code (UPC) barcode, product description, and price. Nicholson et al. suggest that one of their “most immediate objectives for future work is to migrate the system to a more conventional mobile platform” such as a smart phone (see Mobile Shopping). The Personalisable Interactions with Resources on AMI-Enabled Mobile Dynamic Environments (PRIAmIDE) research group at the University of Deusto is also approaching Ambient Assisted Living (AAL) by exploring the smart phone’s sensing, communication, computing, and storage potential. As part of their work, the prototype system, BlindShopping, was developed to address the issue of assisted shopping using entirely off-the-shelf technology, with minimal environmental adjustments, to navigate the store and search, browse, and select products (López-de-Ipiña et al. 34). BlindShopping’s architecture is based on three components. Firstly, a navigation system provides users with synthetic verbal instructions, via headphones connected to the smart phone, to guide them around the store. This requires an RFID reader to be attached to the tip of the user’s white cane and road-marking-like RFID tag lines to be distributed throughout the aisles. A smart phone application processes the RFID data received via Bluetooth, generating the verbal navigation commands as a result. Products are recognised by pointing a QR-code-reader-enabled smart phone at an embossed code located on a shelf. The system is managed by a Rich Internet Application (RIA) interface, which operates by Web browser and is used to register the RFID tags situated in the aisles and the QR codes located on shelves (López-de-Ipiña et al. 37-38).
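Returning to ShopTalk, the Barcode Connectivity Matrix described above maps each shelf barcode to a set of location and product fields. A plausible in-memory model using the fields Nicholson et al. list can be sketched as follows; the Python names, sample values, and the describe helper are my own illustration, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class ShelfEntry:
    aisle: int
    aisle_side: str      # "left" or "right" relative to aisle direction
    section: int
    shelf: int
    position: int        # slot along the shelf
    upc: str             # Universal Product Code of the product stocked here
    description: str
    price_cents: int

# The matrix itself: shelf barcode -> entry (sample data, invented)
bcm = {
    "S-0001": ShelfEntry(1, "left", 2, 3, 4, "012345678905", "Rolled oats 750g", 389),
}

def describe(barcode):
    """A verbal description of the kind ShopTalk speaks to the user."""
    e = bcm[barcode]
    return (f"{e.description}, aisle {e.aisle}, {e.aisle_side} side, "
            f"section {e.section}, shelf {e.shelf}, "
            f"${e.price_cents // 100}.{e.price_cents % 100:02d}")
```

Scanning a shelf barcode thus yields both a position in locomotor space and a product description for haptic-space search, which is exactly the dual role the BCM plays in the prototype.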
A typical use-scenario for BlindShopping involves a user activating the system by tracing an “L” on the screen or issuing the “Location” voice command, which starts the supermarket navigation system; the system then asks the user either to touch an RFID floor marking with their cane or to scan a QR code on a nearby shelf to orient itself. The application then asks the user to dictate the product or category of product that they wish to locate. The smart phone maintains a continuous Bluetooth connection with the RFID reader to keep track of user location at all times. By drawing a “P” or issuing the “Product” voice command, a user can switch the device into product recognition mode, where the smart phone camera is pointed at an embossed QR code on a shelf to retrieve information about a product, such as manufacturer, name, weight, and price, via synthetic speech (López-de-Ipiña et al. 38-39). Despite both systems aiming to operate with as little environmental adjustment as possible, and to minimise the extent to which a supermarket would need to allocate infrastructural, administrative, and human resources to implementing assistive technologies for vision-impaired shoppers, there will undoubtedly be significant establishment and maintenance costs associated with the adoption of production versions of systems resembling either prototype described in this paper. As both systems rely on data obtained from a server by invoking Web services, supermarkets would need to provide in-store WiFi. Further, both systems’ dependence on store inventory data means that commercial versions of either system are likely to be supermarket-specific or exclusive, given that policies will be in place forbidding third-party access to inventory systems, which contain pricing information.
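The gesture and voice commands in this use-scenario amount to a small mode-switching dispatcher. The following sketch mirrors the "L"/"Location" and "P"/"Product" mappings described by López-de-Ipiña et al.; the class and method names are mine, and real gesture recognition and speech input are of course far more involved than a dictionary lookup.

```python
class BlindShoppingUI:
    """Maps traced gestures or voice commands to application modes,
    mirroring the BlindShopping use-scenario described above."""
    GESTURES = {"L": "location", "P": "product"}
    VOICE = {"Location": "location", "Product": "product"}

    def __init__(self):
        self.mode = "idle"

    def on_gesture(self, letter):
        # unrecognised gestures leave the current mode unchanged
        self.mode = self.GESTURES.get(letter.upper(), self.mode)
        return self.mode

    def on_voice(self, command):
        self.mode = self.VOICE.get(command.capitalize(), self.mode)
        return self.mode
```

Keeping gesture and voice paths pointed at the same mode table is what lets either input channel drive the application, a redundancy that matters when one channel is unavailable in a noisy or crowded store.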
Secondly, an assumption in the design of both prototypes is that the shopping task ends with the user arriving at home; this overlooks the important task of being able to recognise products in order to put them away or to use them at a later time. The BCM and QR product recognition components of the respective prototype systems associate information with products in order to assist users in the product search and selection sub-tasks. However, information such as use-by dates, discount offers, country of manufacture, country of manufacturer’s origin, nutritional information, and the labelling of products as Halal, Kosher, or containing alcohol, nuts, gluten, lactose, phenylalanine, and so on, creates further challenges for how different data sources are managed within the devices’ software architecture. The reliance of both systems on existing smart phone technology is also problematic. Changes in the production and uptake of mobile communication devices, and in the software they run, occur rapidly. Once a retail space has been fitted out with the instrumentation necessary to accommodate a particular system, that system is unlikely to be able to cater to the requirement for frequent upgrades, as built environments are less flexible in the upgrading of their technological infrastructure (Kellerman 148). This sets up a scenario in which the supermarket may persist as a disabling space due to a gap between the functional capacities of applications designed for mobile communication devices and the environments in which they are to be used.

Lists and Disabling Spatial Practice

The development and provision of access to assistive technologies, and the data they rely upon, is a commercial issue (Ellis and Kent 7). The use of assistive technologies in supermarket-spaces that rely on the inter-functional coordination of multiple inventories may have the unintended effect of excluding people with disabilities from access to legitimate content (Ellis and Kent 7).
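One way to manage the heterogeneous data sources just described (inventory pricing, dietary and religious labelling, use-by dates) is to merge per-source product records while retaining provenance for each field, so that conflicts between sources remain auditable rather than silently overwritten. The following is a hypothetical sketch, not drawn from either prototype; the field and source names are invented.

```python
def merge_product_records(records):
    """Merge (source_name, fields) pairs into one product record,
    tracking which source supplied each field and flagging conflicts."""
    merged, provenance = {}, {}
    for source, data in records:
        for field, value in data.items():
            if field not in merged:
                merged[field] = value
                provenance[field] = source
            elif merged[field] != value:
                # keep the first value but record the disagreement
                provenance[field] = f"{provenance[field]} (conflicts with {source})"
    return merged, provenance
```

First-source-wins is only one policy; a real system would need per-field rules (e.g. allergen labelling should never be resolved by precedence alone), which is precisely the architectural challenge the paragraph above identifies.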
With de Certeau, we can ask of supermarket-space, “What spatial practices correspond, in the area where discipline is manipulated, to these apparatuses that produce a disciplinary space?” (96). In designing assistive technologies, such as those discussed in this paper, developers must strive to achieve integration across multiple data inventories. Software architectures must be optimised to overcome issues relating to intellectual property, cross-platform access, standardisation, fidelity, potential duplication, and mass storage. This need for “cross sectioning,” however, “merely adds to the muddle” (Lefebvre 8). This is a predicament that only intensifies as space and objects in space become increasingly “representable” (Galloway), and as the impetus for the project of spatial politics for the vision impaired moves beyond representation to centre on access and meaning-making.

Conclusion

Supermarkets act as sites of hegemony, resistance, difference, and transformation, where the vision impaired and their allies resist the “repressive socialization of impaired bodies” through their own social movements relating to environmental accessibility and the technology-assisted spatial practice of shopping (Gleeson 129). It is undeniable that the prototype technologies described in this paper, and those like them, have a great deal of emancipatory potential. However, it should be understood that these devices produce representations of supermarket-space as a simulation within a framework that attempts to mimic the real, and these representations are pre-determined by the industrial, technological, and regulatory forces that govern their production (Lefebvre 8).
Thus, the potential of assistive technologies is dependent upon a range of constraints relating to data accessibility, and upon the interaction of various kinds of lists across the geographic area that surrounds the supermarket, the locomotor, haptic, and search spaces of the supermarket, the home-space, and the internal spaces of a shopper’s imaginary. These interactions are important in contributing to the reproduction of disability in supermarkets through the use of assistive shopping technologies. The ways by which people make and read shopping lists complicate the relations between supermarket-space as location data and product inventories and that which is intuited and experienced by a shopper (Sutherland). Not only should we be creating inventories of supermarket locomotor, haptic, and search spaces; the attention of developers working in this area of assistive technologies should also look beyond the challenges of spatial representation and move towards a focus on issues of interoperability and expanded access to spatial inventory databases and data within and beyond supermarket-space.

References

De Certeau, Michel. The Practice of Everyday Life. Berkeley: University of California Press, 1984.
De Souza e Silva, A. “From Cyber to Hybrid: Mobile Technologies as Interfaces of Hybrid Spaces.” Space and Culture 9.3 (2006): 261-78.
Ellis, Katie, and Mike Kent. Disability and New Media. New York: Routledge, 2011.
Farman, Jason. Mobile Interface Theory: Embodied Space and Locative Media. New York: Routledge, 2012.
Galloway, Alexander. “Are Some Things Unrepresentable?” Theory, Culture and Society 28 (2011): 85-102.
Gleeson, Brendan. Geographies of Disability. London: Routledge, 1999.
Goggin, Gerard. Cell Phone Culture: Mobile Technology in Everyday Life. London: Routledge, 2006.
Haber, Alex. “Mapping the Void in Perec’s Species of Spaces.” Tattered Fragments of the Map. Ed. Adam Katz and Brian Rosa. S.l.: Thelimitsoffun.org, 2009.
Jolley, William M. When the Tide Comes In: Towards Accessible Telecommunications for People with Disabilities in Australia. Sydney: Human Rights and Equal Opportunity Commission, 2003.
Keaggy, Bill. Milk Eggs Vodka: Grocery Lists Lost and Found. Cincinnati, Ohio: HOW Books, 2007.
Kellerman, Aharon. Personal Mobilities. London: Routledge, 2006.
Kleege, Georgia. “Blindness and Visual Culture: An Eyewitness Account.” The Disability Studies Reader. 2nd ed. Ed. Lennard J. Davis. New York: Routledge, 2006. 391-98.
Lefebvre, Henri. The Production of Space. Oxford: Blackwell, 1991.
López-de-Ipiña, Diego, Tania Lorido, and Unai López. “Indoor Navigation and Product Recognition for Blind People Assisted Shopping.” Ambient Assisted Living. Ed. J. Bravo, R. Hervás, and V. Villarreal. Berlin: Springer-Verlag, 2011. 25-32.
May, Michael, and Charles LaPierre. “Accessible Global Position System (GPS) and Related Orientation Technologies.” Assistive Technology for Visually Impaired and Blind People. Ed. Marion A. Hersh and Michael A. Johnson. London: Springer-Verlag, 2008. 261-88.
Nicholson, John, Vladimir Kulyukin, and Daniel Coster. “ShopTalk: Independent Blind Shopping through Verbal Route Directions and Barcode Scans.” The Open Rehabilitation Journal 2.1 (2009): 11-23.
Perec, Georges. Species of Spaces and Other Pieces. Trans. and ed. John Sturrock. London: Penguin Books, 1997.
Schillmeier, Michael W. J. Rethinking Disability: Bodies, Senses, and Things. New York: Routledge, 2010.
Sutherland, I. “Mobile Media and the Socio-Technical Protocols of the Supermarket.” Australian Journal of Communication 36.1 (2009): 73-84.
40

Pearce, Lynne. "Diaspora." M/C Journal 14, no. 2 (May 1, 2011). http://dx.doi.org/10.5204/mcj.373.

Full text
Abstract:
For the past twenty years, academics and other social commentators have, by and large, shared the view that the phase of modernity through which we are currently passing is defined by two interrelated catalysts of change: the physical movement of people and the virtual movement of information around the globe. As we enter the second decade of the new millennium, it is certainly a timely moment to reflect upon the ways in which the prognoses of the scholars and scientists writing in the late twentieth century have come to pass, especially since—during the time this special issue has been in press—the revolutions that are gathering pace in the Arab world appear to be realising the theoretical prediction that the ever-increasing “flows” of people and information would ultimately bring about the end of the nation-state and herald an era of transnationalism (Appadurai, Urry). For writers like Arjun Appadurai, moreover, the concept of diaspora was key to grasping how this new world order would take shape, and how it would operate: Diasporic public spheres, diverse amongst themselves, are the crucibles of a postnational political order. The engines of their discourse are mass media (both interactive and expressive) and the movement of refugees, activists, students, laborers. It may be that the emergent postnational order proves not to be a system of homogeneous units (as with the current system of nation-states) but a system based on relations between heterogeneous units (some social movements, some interest groups, some professional bodies, some non-governmental organizations, some armed constabularies, some judicial bodies) ... In the short run, as we can see already, it is likely to be a world of increased incivility and violence. In the longer run, free from the constraints of the nation form, we may find that cultural freedom and sustainable justice in the world do not presuppose the uniform and general existence of the nation-state. 
This unsettling possibility could be the most exciting dividend of living in modernity at large. (23) In this editorial, we would like to return to the “here and now” of the late 1990s in which theorists like Arjun Appadurai, Ulrich Beck, John Urry, Zygmunt Bauman, Roland Robertson and others were “imagining” the consequences of both globalisation and glocalisation for the twenty-first century in order that we may better assess what is, indeed, coming to pass. While most of their prognoses for this “second modernity” have proven remarkably accurate, it is their—self-confessed—inability to forecast either the nature or the extent of the digital revolution that most vividly captures the distance between the mid-1990s and now; and it is precisely the consequences of this extraordinary technological revolution on the twin concepts of “glocality” and “diaspora” that the research featured in this special issue seeks to capture.

Glocal Imaginaries

Appadurai’s endeavours to show how globalisation was rapidly making itself felt as a “structure of feeling” (Williams in Appadurai 189) as well as a material “fact” was also implicit in our conceptualisation of the conference, “Glocal Imaginaries: Writing/Migration/Place,” which gave rise to this special issue. This conference, which was the culmination of the AHRC-funded project “Moving Manchester: Literature/Migration/Place (2006-10)”, constituted a unique opportunity to gain an international, cross-disciplinary perspective on urgent and topical debates concerning mobility and migration in the early twenty-first century and the strand “Networked Diasporas” was one of the best represented on the program. Attracting papers on broadcast media as well as the new digital technologies, the strand was strikingly international in terms of the speakers’ countries of origin, as is this special issue which brings together research from six European countries, Australia and the Indian subcontinent.
The “case-studies” represented in these articles may therefore be seen to constitute something of a “state-of-the-art” snapshot of how Appadurai’s “glocal imaginary” is being lived out across the globe in the early years of the twenty-first century. In this respect, the collection proves that his hunch with regards to the signal importance of the “mass-media” in redefining our spatial and temporal coordinates of being and belonging was correct: The third and final factor to be addressed here is the role of the mass-media, especially in its electronic forms, in creating new sorts of disjuncture between spatial and virtual neighborhoods. This disjuncture has both utopian and dystopian potentials, and there is no easy way to tell how these may play themselves out in the future of the production of locality. (194) The articles collected here certainly do serve as testament to the “bewildering plethora of changes in ... media environments” (195) that Appadurai envisaged, and yet it can clearly also be argued that this agent of glocalisation has not yet brought about the demise of the nation-state in the way (or at the speed) that many commentators predicted.

Digital Diasporas in a Transnational World

Reviewing the work of the leading social science theorists working in the field during the late 1990s, it quickly becomes evident that: (a) the belief that globalisation presented a threat to the nation-state was widely held; and (b) that the “jury” was undecided as to whether this would prove a good or bad thing in the years to come. While the commentators concerned did their best to complexify both their analysis of the present and their view of the future, it is interesting to observe, in retrospect, how the rhetoric of both utopia and dystopia invaded their discourse in almost equal measure.
We have already seen how Appadurai, in his 1996 publication, Modernity at Large, looks beyond the “increased incivility and violence” of the “short term” to a world “free from the constraints of the nation form,” while Roger Bromley, following Agamben and Deleuze as well as Appadurai, typifies a generation of literary and cultural critics who have paid tribute to the way in which the arts (and, in particular, storytelling) have enabled subjects to break free from their national (af)filiations (Pearce, Devolving 17) and discover new “de-territorialised” (Deleuze and Guattari) modes of being and belonging. Alongside this “hope,” however, the forces and agents of globalisation were also regarded with a good deal of suspicion and fear, as is evidenced in Ulrich Beck’s What is Globalization? In his overview of the theorists who were then perceived to be leading the debate, Beck draws distinctions between what was perceived to be the “engine” of globalisation (31), but is clearly most exercised by the manner in which the transformation has taken shape: Without a revolution, without even any change in laws or constitutions, an attack has been launched “in the normal course of business”, as it were, upon the material lifelines of modern national societies. First, the transnational corporations are to export jobs to parts of the world where labour costs and workplace obligations are lowest. Second, the computer-generation of worldwide proximity enables them to break down and disperse goods and services, and produce them through a division of labour in different parts of the world, so that national and corporate labels inevitably become illusory. 
(3; italics in the original) Beck’s concern is clearly that all these changes have taken place without the nation-states of the world being directly involved in any way: transnational corporations began to take advantage of the new “mobility” available to them without having to secure the agreement of any government (“Companies can produce in one country, pay taxes in another and demand state infrastructural spending in yet another”; 4-5); the export of the labour market through the use of digital communications (stereotypically, call centres in India) was similarly unregulated; and the world economy, as a consequence, was in the process of becoming detached from the processes of either production or consumption (“capitalism without labour”; 5-7). Vis-à-vis the dystopian endgame of this effective “bypassing” of the nation-state, Beck is especially troubled about the fate of the human rights legislation that nation-states around the world have developed, with immense effort and over time (e.g. employment law, trade unions, universal welfare provision) and cites Zygmunt Bauman’s caution that globalisation will, at worst, result in widespread “global wealth” and “local poverty” (31). Further, he ends his book with a fully apocalyptic vision, “the Brazilianization of Europe” (161-3), which unapologetically calls upon the conventions of science fiction to imagine a worst-case scenario for a Europe without nations. While fourteen or fifteen years is evidently not enough time to put Beck’s prognosis to the test, most readers would probably agree that we are still some way away from such a Europe. 
Although the material wealth and presence of the transnational corporations strikes a chord, especially if we include the world banks and finance organisations in their number, the financial crisis that has rocked the world for the past three years, along with the wars in Iraq and Afghanistan, and the ascendancy of Al-Qaida (all things yet to happen when Beck was writing in 1997), has arguably resulted in the nations of Europe reinforcing their (respective and collective) legal, fiscal, and political might through rigorous new policing of their physical borders and regulation of their citizens through “austerity measures” of an order not seen since World War Two. In other words, while the processes of globalisation have clearly been instrumental in creating the financial crisis that Europe is presently grappling with and does, indeed, expose the extent to which the world economy now operates outside the control of the nation-state, the nation-state still exists very palpably for all its citizens (whether permanent or migrant) as an agent of control, welfare, and social justice. This may, indeed, cause us to conclude that Bauman’s vision of a world in which globalisation would make itself felt very differently for some groups than others came closest to what is taking shape: true, the transnationals have seized significant political and economic power from the nation-state, but this has not meant the end of the nation-state; rather, the change is being experienced as a re-trenching of whatever power the nation-state still has (and this, of course, is considerable) over its citizens in their “local”, everyday lives (Bauman 55). If we now turn to the portrait of Europe painted by the articles that constitute this special issue, we see further evidence of transglobal processes and practices operating in a realm oblivious to local (including national) concerns. 
While our authors are generally more concerned with the flows of information and “identity” than business or finance (Appadurai’s “ethnoscapes,” “technoscapes,” and “ideoscapes”: 33-7), there is the same impression that this “circulation” (Latour) is effectively bypassing the state at one level (the virtual), whilst remaining very materially bound by it at another. In other words, and following Bauman, we would suggest that it is quite possible for contemporary subjects to be both the agents and subjects of globalisation: a paradox that, as we shall go on to demonstrate, is given particularly vivid expression in the case of diasporic and/or migrant peoples who may be able to bypass the state in the manufacture of their “virtual” identities/communities but who (Cohen) remain very much its subjects (or, indeed, “non-subjects”) when attempting movement in the material realm. Two of the articles in the collection (Leurs & Ponzanesi and Marcheva) deal directly with the exponential growth of “digital diasporas” (sometimes referred to as “e-diasporas”) since the inception of Facebook in 2004, and both provide specific illustrations of the way in which the nation-state both has, and has not, been transcended. First, it quickly becomes clear that for the (largely) “youthful” (Leurs & Ponzanesi) participants of nationally inscribed networking sites (e.g. “discovernikkei” (Japan), “Hyves” (Netherlands), “Bulgarians in the UK” (Bulgaria)), shared national identity is a means and not an end. In other words, although the participants of these sites might share in and actively produce a fond and nostalgic image of their “homeland” (Marcheva), they are rarely concerned with it as a material or political entity, and an expression of their national identities is rapidly supplemented by the sharing of other (global) identity markers.
Leurs & Ponzanesi invoke Deleuze and Guattari’s concept of the “rhizome” to describe the way in which social networkers “weave” a “rhizomatic path” to identity, gradually accumulating a hybrid set of affiliations. Indeed, the extent to which the “nation” disappears on such sites can be remarkable as was also observed in our investigation of the digital storytelling site, “Capture Wales” (BBC) (Pearce, "Writing"). Although this BBC site was set up to capture the voices of the Welsh nation in the early twenty-first century through a collection of (largely) autobiographical stories, very few of the participants mention either Wales or their “Welshness” in the stories that they tell. Further, where the “home” nation is (re)imagined, it is generally in an idealised, or highly personalised, form (e.g. stories about one’s own family) or through a sharing of (perceived and actual) cultural idiosyncrasies (Marcheva on “You know you’re a Bulgarian when …”) rather than an engagement with the nation-state per se. As Leurs & Ponzanesi observe: “We can see how the importance of the nation-state gets obscured as diasporic youth, through cultural hybridisation of youth culture and ethnic ties initiate subcultures and offer resistance to mainstream cultural forms.” Both the articles just discussed also note the shading of the “national” into the “transnational” on the social networking sites they discuss, and “transnationalism”—in the sense of many different nations and their diasporas being united through a common interest or cause—is also a focus of Pikner’s article on “collective actions” in Europe (notably, “EuroMayDay” and “My Estonia”) and Harb’s highly topical account of the role of both broadcast media (principally, Al-Jazeera) and social media in the revolutions and uprisings currently sweeping through the Arab world (spring 2011). 
On this point, it should be noted that Harb identifies this as the moment when Facebook’s erstwhile predominantly social function was displaced by a manifestly political one. From this we must conclude that both transnationalism and social media sites can be put to very different ends: while young people in relatively privileged democratic countries might embrace transnationalism as an expression of their desire to “rise above” national politics, the youth of the Arab world have engaged it as a means of generating solidarity for nationalist insurgency and liberation. Another instance of “g/local” digital solidarity exceeding national borders is to be found in Johanna Sumiala’s article on the circulatory power of the Internet in the Kauhajoki school shooting, which took place in Finland in 2008. As well as using the Internet to “stage manage” his rampage, the Kauhajoki shooter (whose name the author chose to withhold for ethical reasons) was subsequently found to have been a member of numerous Web-based “hate groups”, many of them originating in the United States, and, as a consequence, may be understood to have committed his crime on behalf of a transnational community: what Sumiala has defined as a “networked community of destruction.” It must also be noted, however, that the school shootings were experienced as a very local tragedy in Finland itself and, although the shooter may have been psychically located in a transnational hyper-reality when he undertook the killings, it is his nation-state that has had to deal with the trauma and shame in the long term. Woodward and Brown & Rutherford, meanwhile, show that it remains the tendency of public broadcast media to uphold the raison d’être of the nation-state at the same time as embracing change.
Woodward’s feature article (which reports on the AHRC-sponsored “Tuning In” project which has researched the BBC World Service) shows how the representation of national and diasporic “voices” from around the world, either in opposition to or in dialogue with the BBC’s own reporting, is key to the way in which the Commission has changed and modernised in recent times; however, she is also clear that many of the objectives that defined the service in its early days—such as its commitment to a distinctly “English” brand of education—still remain. Similarly, Brown & Rutherford’s article on the innovative Australian ABC children’s television series, My Place (which has combined traditional broadcasting with online, interactive websites) may be seen to be positively promoting the Australian nation by making visible its commitment to multiculturalism. Both articles nevertheless reveal the extent to which these public service broadcasters have recognised the need to respond to their nations’ changing demographics and, in particular, the fact that “diaspora” is a concept that refers not only to their English and Australian audiences abroad but also to their now manifestly multicultural audiences at home. When it comes to commercial satellite television, however, the relationship between broadcasting and national and global politics is rather harder to pin down. Subramanian exposes a complex interplay of national and global interests through her analysis of the Malayalee “reality television” series, Idea Star Singer. Exported globally to the Indian diaspora, the show is shamelessly exploitative in the way in which it combines residual and emergent ideologies (i.e. nostalgia for a traditional Keralayan way of life vs aspirational “western lifestyles”) in pursuit of its (massive) audience ratings. 
Further, while the ISS series is ostensibly a g/local phenomenon (the export of Kerala to the rest of the world rather than “India” per se), Subramanian passionately laments all the progressive national initiatives (most notably, the campaign for “women’s rights”) that the show is happy to ignore: an illustration of one of the negative consequences of globalisation predicted by Beck (31) and noted at the start of this editorial. Harb, meanwhile, reflects upon a rather different set of political concerns with regards to commercial satellite broadcasting in her account of the role of Al-Jazeera and Al Arabiya in the recent (2011) Arab revolutions. Despite Al-Jazeera’s reputation for “two-sided” news coverage, recent events have exposed its complicity with the Qatari government; further, the uprisings have revealed the speed with which social media—in particular Facebook and Twitter—are replacing broadcast media. It is now possible for “the people” to bypass both governments and news corporations (public and private) in relaying the news. Taken together, then, what our articles would seem to indicate is that, while the power of the nation-state has notionally been transcended via a range of new networking practices, this has yet to undermine its material power in any guaranteed way (witness recent counter-insurgencies in Libya, Bahrain, and Syria). True, the Internet may be used to facilitate transnational “actions” against the nation-state (individual or collective) through a variety of non-violent or violent actions, but nation-states around the world, and especially in Western Europe, are currently wielding immense power over their subjects through aggressive “austerity measures” which have the capacity to severely compromise the freedom and agency of the citizens concerned through widespread unemployment and cuts in social welfare provision. This said, several of our articles provide evidence that Appadurai’s more utopian prognoses are also taking shape.
Alongside the troubling possibility that globalisation, and the technologies that support it, is effectively eroding “difference” (be this national or individual), there are the ever-increasing (and widely reported) instances of how digital technology is actively supporting local communities and actions around the world in ways that bypass the state. These range from the relatively modest collective action, “My Estonia”, featured in Pikner’s article, to the ways in which the Libyan diaspora in Manchester have made use of social media to publicise and support public protests in Tripoli (Harb). In other words, there is compelling material evidence that the heterogeneity that Appadurai predicted and hoped for has come to pass through the people’s active participation in (and partial ownership of) media practices. Citizens are now able to “interfere” in the representation of their lives as never before and, through the digital revolution, communicate with one another in ways that circumvent state-controlled broadcasting. We are therefore pleased to present the articles that follow as a lively, interdisciplinary and international “state-of-the-art” commentary on how the ongoing revolution in media and communication is responding to, and bringing into being, the processes and practices of globalisation predicted by Appadurai, Beck, Bauman, and others in the 1990s. The articles also speak to the changing nature of the world’s “diasporas” during this fifteen year time frame (1996-2011) and, we trust, will activate further debate (following Cohen) on the conceptual tensions that now manifestly exist between “virtual” and “material” diasporas and also between the “transnational” diasporas whose objective is to transcend the nation-state altogether and those that deploy social media for specifically local or national/ist ends. Acknowledgements With thanks to the Arts and Humanities Research Council (UK) for their generous funding of the “Moving Manchester” project (2006-10). 
Special thanks to Dr Kate Horsley (Lancaster University) for her invaluable assistance as ‘Web Editor’ in the production of this special issue (we could not have managed without you!) and also to Gail Ferguson (our copy-editor) for her expertise in the preparation of the final typescript. References Appadurai, Arjun. Modernity at Large: Cultural Dimensions of Globalisation. Minneapolis: U of Minnesota P, 1996. Bauman, Zygmunt. Globalization. Cambridge: Polity, 1998. Beck, Ulrich. What is Globalization? Trans. Patrick Camiller. Cambridge: Polity, 2000 (1997). Bromley, Roger. Narratives for a New Belonging: Diasporic Cultural Fictions. Edinburgh: Edinburgh UP, 2000. Cohen, Robin. Global Diasporas. 2nd ed. London and New York: Routledge, 2008. Deleuze, Gilles, and Felix Guattari. A Thousand Plateaus: Capitalism and Schizophrenia. Trans. Brian Massumi. Minneapolis: U of Minnesota P, 1987. Latour, Bruno. Reassembling the Social: An Introduction to Actor-Network Theory. Oxford: Oxford UP, 2005. Pearce, Lynne, ed. Devolving Identities: Feminist Readings in Home and Belonging. London: Ashgate, 2000. Pearce, Lynne. “‘Writing’ and ‘Region’ in the Twenty-First Century: Epistemological Reflections on Regionally Located Art and Literature in the Wake of the Digital Revolution.” European Journal of Cultural Studies 13.1 (2010): 27-41. Robertson, Roland. Globalization: Social Theory and Global Culture. London: Sage, 1992. Urry, John. Sociology beyond Societies. London: Routledge, 2000. Williams, Rosalind. Dream Worlds: Mass Consumption in Late Nineteenth-Century France. Berkeley: U of California P, 1982.
41

McNair, Brian. "Vote!" M/C Journal 10, no. 6 (April 1, 2008). http://dx.doi.org/10.5204/mcj.2714.

Full text
Abstract:
The twentieth was, from one perspective, the democratic century — a span of one hundred years which began with no fully functioning democracies in existence anywhere on the planet (if one defines democracy as a political system in which there is both universal suffrage and competitive elections), and ended with 120 countries out of 192 classified by the Freedom House think tank as ‘democratic’. There are of course still many societies where democracy is denied or effectively neutered — the remaining outposts of state socialism, such as China, Cuba, and North Korea; most if not all of the Islamic countries; exceptional states such as Singapore, unapologetically capitalist in its economic system but resolutely authoritarian in its political culture. Many self-proclaimed democracies, including those of the UK, Australia and the US, are procedurally or conceptually flawed. Countries emerging out of authoritarian systems and now in a state of democratic transition, such as Russia and the former Soviet republics, are immersed in constant, sometimes violent struggle between reformers and reactionaries. Russia’s recent parliamentary elections were accompanied by the intimidation of parties and politicians who opposed Vladimir Putin’s increasingly populist and authoritarian approach to leadership. The same Freedom House report which describes the rise of democracy in the twentieth century acknowledges that many self-styled democracies are, at best, only ‘partly free’ in their political cultures (for detailed figures on the rise of global democracy, see the Freedom House website Democracy’s Century). Let’s not for a moment downplay these important qualifications to what can nonetheless be fairly characterised as a century-long expansion and globalisation of democracy, and the acceptance of popular sovereignty, expressed through voting for the party or candidate of one’s choice, as a universally recognised human right. 
That such a process has occurred, and continues in these early years of the twenty-first century, is irrefutable. In the Gaza strip, Hamas appeals to the legitimacy of a democratic election victory in its campaign to be recognised as the voice of the Palestinian people. However one judges the messianic tendencies and Islamist ideology of Mahmoud Ahmadinejad, it must be acknowledged that the Iranian people elected him, and that they have the power to throw him out of government next time they vote. That was never true of the Shah. The democratic resurgence in Latin America, taking in Venezuela, Peru and Bolivia among others has been a much-noted feature of international politics in recent times (Alves), presenting a welcome contrast to the dictatorships and death squads of the 1980s, even as it creates some uncomfortable dilemmas for the Bush administration (which must champion democratic government at the same time as it resents some of the choices people may make when they have the opportunity to vote). Since 9/11 a kind of democracy has expanded even to Afghanistan and Iraq, albeit at the point of a gun, and with no guarantees of survival beyond the end of military occupation by the US and its coalition allies. As this essay was being written, Pakistan’s state of emergency was ending and democratic elections scheduled, albeit in the shadow cast by the assassination of Benazir Bhutto in December 2007. Democracy, then — imperfect and limited as it can be; grudgingly delivered though it is by political elites in many countries, and subject to attack and roll back at any time — has become a global universal to which all claim allegiance, or at least pay lip service. The scale of this transformation, which has occurred in little more than one quarter of the time elapsed since the Putney debates of 1647 and the English revolution first established the principle of the sovereignty of parliament, is truly remarkable. 
(Tristram Hunt quotes lawyer Geoffrey Robertson in the Guardian to the effect that the Putney debates, staged in St Mary’s church in south-west London towards the end of the English civil war, launched “the idea that government requires the consent of freely and fairly elected representatives of all adult citizens irrespective of class or caste or status or wealth” – “A Jewel of Democracy”, Guardian, 26 Oct. 2007) Can it be true that less than one hundred years ago, in even the most advanced capitalist societies, 50 per cent of the people — women — did not have the right to vote? Or that black populations, indigenous or migrant, in countries such as the United States and Australia were deprived of basic citizenship rights until the 1960s and even later? Will future generations wonder how on earth it could have been that the vast majority of the people of South Africa were unable to vote until 1994, and that they were routinely imprisoned, tortured and killed when they demanded basic democratic rights? Or will they shrug and take it for granted, as so many of us who live in settled democracies already do? (In so far as ‘we’ includes the community of media and cultural studies scholars, I would argue that where there is reluctance to concede the scale and significance of democratic change, this arises out of continuing ambivalence about what ‘democracy’ means, a continuing suspicion of globalisation (in particular the globalisation of democratic political culture, still associated in some quarters with ‘the west’), and of the notion of ‘progress’ with which democracy is routinely associated. The intellectual roots of that ambivalence were various. Marxist-leninist inspired authoritarianism gripped much of the world until the fall of the Berlin Wall and the end of the cold war. 
Until that moment, it was still possible for many marxians in the scholarly community to view the idea of democracy with disdain — if not quite a dirty word, then a deeply flawed, highly loaded concept which masked and preserved underlying social inequalities more than it helped resolve them. Until 1989 or thereabouts, it was possible for ‘bourgeois democracy’ to be regarded as just one kind of democratic polity by the liberal and anti-capitalist left, which often regarded the ‘proletarian’ or ‘people’s’ democracy prevailing in the Soviet Union, China, Cuba or Vietnam as legitimate alternatives to the emerging capitalist norm of one person, one vote, for constituent assemblies which had real power and accountability. In terms not very different from those used by Marx and Engels in The German Ideology, belief in the value of democracy was conceived by this materialist school as a kind of false consciousness. It still is, by Noam Chomsky and others who continue to view democracy as a ‘necessary illusion’ (1989) without which capitalism could not be reproduced. From these perspectives voting gave, and gives us merely the illusion of agency and power in societies where capital rules as it always did. For democracy read ‘the manufacture of consent’; its expansion read not as progressive social evolution, but the universalisation of the myth of popular sovereignty, mobilised and utilised by the media-industrial-military complex to maintain its grip.) There are those who dispute this reading of events. In the 1960s, Habermas’s hugely influential Structural Transformation of the Public Sphere critiqued the manner in which democracy, and the public sphere underpinning it, had been degraded by public relations, advertising, and the power of private interests. 
In the period since, critical scholarly research and writing on political culture has been dominated by the Habermasian discourse of democratic decline, and the pervasive pessimism of those who see democracy, and the media culture which supports it, as fatally flawed, corrupted by commercialisation and under constant threat. Those, myself included, who challenged that view with a more positive reading of the trends (McNair, Journalism and Democracy; Cultural Chaos) have been denounced as naïve optimists, panglossian, utopian and even, in my own case, a ‘neo-liberal apologist’. (See an unpublished paper by David Miller, “System Failure: It’s Not Just the Media, It’s the Whole Bloody System”, delivered at Goldsmith’s College in 2003.) Engaging as they have been, I venture to suggest that these are the discourses and debates of an era now passing into history. Not only is it increasingly obvious that democracy is expanding globally into places where it never previously reached; it is also extending inwards, within nation states, driven by demands for greater local autonomy. In the United Kingdom, for example, the citizen is now able to vote not just in Westminster parliamentary elections (which determine the political direction of the UK government), but for European elections, local elections, and elections for devolved assemblies in Scotland, Wales and Northern Ireland. The people of London can vote for their mayor. There would by now have been devolved assemblies in the regions of England, too, had the people of the North East not voted against it in a November 2004 referendum. 
Notwithstanding that result, which surprised many in the New Labour government who held it as axiomatic that the more democracy there was, the better for all of us, the importance of enhancing and expanding democratic institutions, of allowing people to vote more often (and also in more efficient ways — many of these expansions of democracy have been tied to the introduction of systems of proportional representation) has become consensual, from the Mid West of America to the Middle East. The Democratic Paradox And yet, as the wave of democratic transformation has rolled on through the late twentieth and into the early twenty first century it is notable that, in many of the oldest liberal democracies at least, fewer people have been voting. In the UK, for example, in the period between 1945 and 2001, turnout at general elections never fell below 70 per cent. In 1992, the last general election won by the Conservatives before the rise of Tony Blair and New Labour, turnout was 78 per cent, roughly where it had been in the 1950s. In 2001, however, as Blair’s government sought re-election, turnout fell to an historic low for the UK of 59.4 per cent, and rose only marginally to 61.4 per cent in the most recent general election of 2005. In the US presidential elections of 1996 and 2000 turnouts were at historic lows of 47.2 and 49.3 per cent respectively, rising just above 50 per cent again in 2004 (figures by International Institute for Democracy and Electoral Assistance). At local level things are even worse. In only the second election for a devolved parliament in Scotland (2003) turnout was a mere 48.5 per cent, rising to 50.5 in 2007. These trends are not universal. In countries with compulsory voting, they mean very little — in Australia, where voting in parliamentary elections is compulsory, turnout averages in the 90s per cent. 
In France, while turnouts for parliamentary elections show a similar downward trend to the UK and the US, presidential contests achieve turnouts of 80-plus per cent. In the UK and US, as noted, the most recent elections show modest growth in turnout from those historic lows of the late 1990s and early Noughties. There has grown, nonetheless, the perception, commonplace amongst academic commentators as well as journalists and politicians themselves, that we are living through a ‘crisis’ of democratic participation, a dangerous decline in the tendency to vote in elections which undermines the legitimacy of democracy itself. In communication scholarship a significant body of research and publication has developed around this theme, from Blumler and Gurevitch’s Crisis of Public Communication (1996), through Barnett and Gaber’s Westminster Tales (2000), to more recent studies such as Lewis et al.’s Citizens or Consumers (2005). All presume a problem of some kind with the practice of democracy and the “old fashioned ritual” of voting, as Lewis et al. describe it (2). Most link alleged inadequacies in the performance of the political media to what is interpreted as popular apathy (or antipathy) towards democracy. The media are blamed for the lack of public engagement with democratic politics which declining turnouts are argued to signal. Political journalists are said to be too aggressive and hyper-adversarial (Lloyd), behaving like the “feral beast” spoken of by Tony Blair in his 2007 farewell speech to the British people as prime minister. They are corrosively cynical and a “disaster for democracy”, as Steven Barnett and others argued in the first years of the twenty first century. They are not aggressive or adversarial enough, as the propaganda modellers allege, citing what they interpret as supine media coverage of Coalition policy in Iraq. The media put people off, rather than turn them on to democracy by being, variously, too nice or too nasty to politicians. 
What then, is the solution to the apparent paradox represented by the fact that there is more democracy, but less voting in elections than ever before; and that after centuries of popular struggle democratic assemblies proliferate, but in some countries barely half of the eligible voters can be bothered to participate? And what role have the media played in this unexpected phenomenon? If the scholarly community has been largely critical on this question, and pessimistic in its analyses of the role of the media, it has become increasingly clear that the one arena where people do vote more than ever before is that presented by the media, and entertainment media in particular. There has been, since the appearance of Big Brother and the subsequent explosion of competitive reality TV formats across the world, evidence of a huge popular appetite for voting on such matters as which amateur contestant on Pop Idol, or X Factor, or Fame Academy, or Operatunity goes on to have a chance of a professional career, a shot at the big time. Millions of viewers of the most popular reality TV strands queue up to register their votes on premium phone lines, the revenue from which makes up a substantial and growing proportion of the income of commercial TV companies. This explosion of voting behaviour has been made possible by the technology-driven emergence of new forms of participatory, interactive, digitised media channels which allow millions to believe that they can have an impact on the outcome of what are, at essence, game and talent shows. At the height of anxiety around the ‘crisis of democratic participation’ in the UK, observers noted that nearly 6.5 million people had voted in the Big Brother UK final in 2004. More than eight million voted during the 2004 run of the BBC’s Fame Academy series. 
While these numbers do not, contrary to popular belief, exceed the numbers of British citizens who vote in a general election (27.2 million in 2005), they do indicate an enthusiasm for voting which seems to contradict declining rates of democratic participation. People who will never get out and vote for their local councillor often appear more than willing to pick up the telephone or the laptop and cast a vote for their favoured reality TV contestant, even if it costs them money. It would be absurd to suggest that voting for a contestant on Big Brother is directly comparable to the act of choosing a government or a president. The latter is recognised as an expression of citizenship, with potentially significant consequences for the lives of individuals within their society. Voting on Big Brother, on the other hand, is unmistakeably entertainment, game-playing, a relatively risk-free exercise of choice — a bit of harmless fun, fuelled by office chat and relentless tabloid coverage of the contestants’ strengths and weaknesses. There is no evidence that readiness to participate in a telephone or online vote for entertainment TV translates into active citizenship, where ‘active’ means casting a vote in an election. The lesson delivered by the success of participatory media in recent years, however — first reality TV, and latterly a proliferation of online formats which encourage user participation and voting for one thing or another — is that people will vote, when they are able and motivated to do so. Voting is popular, in short, and never more so, irrespective of the level of popular participation recorded in recent elections. And if they will vote in their millions for a contestant on X Factor, or participate in competitions to determine the best movies or books on Facebook, they can presumably be persuaded to do so when an election for parliament comes around. 
This fact has been recognised by both media producers and politicians, and reflected in attempts to adapt the evermore sophisticated and efficient tools of participatory media to the democratic process, to engage media audiences as citizens by offering the kinds of voting opportunities in political debates, including election processes, which entertainment media have now made routinely available. ITV’s Vote for Me strand, broadcast in the run-up to the UK general election of 2005, used reality TV techniques to select a candidate who would actually take part in the forthcoming poll. The programme was broadcast in a late night, low audience slot, and failed to generate much interest, but it signalled a desire by media producers to harness the appeal of participatory media in a way which could directly impact on levels of democratic engagement. The honourable failure of Vote for Me (produced by the same team which made the much more successful live debate shows featuring prime minister Tony Blair — Ask Tony Blair, Ask the Prime Minister) might be viewed as evidence that readiness to vote in the context of a TV game show does not translate directly into voting for parties and politicians, and that the problem in this respect — the crisis of democratic participation, such that it exists — is located elsewhere. People can vote in democratic elections, but choose not to, perhaps because they feel that the act is meaningless (because parties are ideologically too similar), or ineffectual (because they see no impact of voting in their daily lives or in the state of the country), or irrelevant to their personal priorities and life styles. Voting rates have increased in the US and the UK since September 11 2001, suggesting perhaps that when the political stakes are raised, and the question of who is in government seems to matter more than it did, people act accordingly. 
Meantime, media producers continue to make money by developing formats and channels on the assumption that audiences wish to participate, to interact, and to vote. Whether this form of participatory media consumption for the purposes of play can be translated into enhanced levels of active citizenship, and whether the media can play a significant contributory role in that process, remains to be seen. References Alves, R.C. “From Lapdog to Watchdog: The Role of the Press in Latin America’s Democratisation.” In H. de Burgh, ed., Making Journalists. London: Routledge, 2005. 181-202. Anderson, P.J., and G. Ward (eds.). The Future of Journalism in the Advanced Democracies. Aldershot: Ashgate Publishing, 2007. Barnett, S. “The Age of Contempt.” Guardian 28 October 2002. <http://politics.guardian.co.uk/media/comment/0,12123,820577,00.html>. Barnett, S., and I. Gaber. Westminster Tales. London: Continuum, 2001. Blumler, J., and M. Gurevitch. The Crisis of Public Communication. London: Routledge, 1996. Habermas, J. The Structural Transformation of the Public Sphere. Cambridge: Polity Press, 1989. Lewis, J., S. Inthorn, and K. Wahl-Jorgensen. Citizens or Consumers? What the Media Tell Us about Political Participation. Milton Keynes: Open University Press, 2005. Lloyd, John. What the Media Are Doing to Our Politics. London: Constable, 2004. McNair, B. Journalism and Democracy: A Qualitative Evaluation of the Political Public Sphere. London: Routledge, 2000. ———. Cultural Chaos: News, Journalism and Power in a Globalised World. London: Routledge, 2006. Citation reference for this article MLA Style McNair, Brian. “Vote!” M/C Journal 10.6/11.1 (2008). <http://journal.media-culture.org.au/0804/01-mcnair.php>. APA Style McNair, B. (Apr. 2008). “Vote!” M/C Journal, 10(6)/11(1). <http://journal.media-culture.org.au/0804/01-mcnair.php>.
42

McNair, Brian. "Vote!" M/C Journal 11, no. 1 (April 1, 2008). http://dx.doi.org/10.5204/mcj.21.

Full text
Abstract:
The twentieth was, from one perspective, the democratic century — a span of one hundred years which began with no fully functioning democracies in existence anywhere on the planet (if one defines democracy as a political system in which there is both universal suffrage and competitive elections), and ended with 120 countries out of 192 classified by the Freedom House think tank as ‘democratic’. There are of course still many societies where democracy is denied or effectively neutered — the remaining outposts of state socialism, such as China, Cuba, and North Korea; most if not all of the Islamic countries; exceptional states such as Singapore, unapologetically capitalist in its economic system but resolutely authoritarian in its political culture. Many self-proclaimed democracies, including those of the UK, Australia and the US, are procedurally or conceptually flawed. Countries emerging out of authoritarian systems and now in a state of democratic transition, such as Russia and the former Soviet republics, are immersed in constant, sometimes violent struggle between reformers and reactionaries. Russia’s recent parliamentary elections were accompanied by the intimidation of parties and politicians who opposed Vladimir Putin’s increasingly populist and authoritarian approach to leadership. The same Freedom House report which describes the rise of democracy in the twentieth century acknowledges that many self-styled democracies are, at best, only ‘partly free’ in their political cultures (for detailed figures on the rise of global democracy, see the Freedom House website Democracy’s Century). Let’s not for a moment downplay these important qualifications to what can nonetheless be fairly characterised as a century-long expansion and globalisation of democracy, and the acceptance of popular sovereignty, expressed through voting for the party or candidate of one’s choice, as a universally recognised human right. 
That such a process has occurred, and continues in these early years of the twenty-first century, is irrefutable. In the Gaza strip, Hamas appeals to the legitimacy of a democratic election victory in its campaign to be recognised as the voice of the Palestinian people. However one judges the messianic tendencies and Islamist ideology of Mahmoud Ahmadinejad, it must be acknowledged that the Iranian people elected him, and that they have the power to throw him out of government next time they vote. That was never true of the Shah. The democratic resurgence in Latin America, taking in Venezuela, Peru and Bolivia among others has been a much-noted feature of international politics in recent times (Alves), presenting a welcome contrast to the dictatorships and death squads of the 1980s, even as it creates some uncomfortable dilemmas for the Bush administration (which must champion democratic government at the same time as it resents some of the choices people may make when they have the opportunity to vote). Since 9/11 a kind of democracy has expanded even to Afghanistan and Iraq, albeit at the point of a gun, and with no guarantees of survival beyond the end of military occupation by the US and its coalition allies. As this essay was being written, Pakistan’s state of emergency was ending and democratic elections scheduled, albeit in the shadow cast by the assassination of Benazir Bhutto in December 2007. Democracy, then — imperfect and limited as it can be; grudgingly delivered though it is by political elites in many countries, and subject to attack and roll back at any time — has become a global universal to which all claim allegiance, or at least pay lip service. The scale of this transformation, which has occurred in little more than one quarter of the time elapsed since the Putney debates of 1647 and the English revolution first established the principle of the sovereignty of parliament, is truly remarkable. 
(Tristram Hunt quotes lawyer Geoffrey Robertson in the Guardian to the effect that the Putney debates, staged in St Mary’s church in south-west London towards the end of the English civil war, launched “the idea that government requires the consent of freely and fairly elected representatives of all adult citizens irrespective of class or caste or status or wealth” – “A Jewel of Democracy”, Guardian, 26 Oct. 2007) Can it be true that less than one hundred years ago, in even the most advanced capitalist societies, 50 per cent of the people — women — did not have the right to vote? Or that black populations, indigenous or migrant, in countries such as the United States and Australia were deprived of basic citizenship rights until the 1960s and even later? Will future generations wonder how on earth it could have been that the vast majority of the people of South Africa were unable to vote until 1994, and that they were routinely imprisoned, tortured and killed when they demanded basic democratic rights? Or will they shrug and take it for granted, as so many of us who live in settled democracies already do? (In so far as ‘we’ includes the community of media and cultural studies scholars, I would argue that where there is reluctance to concede the scale and significance of democratic change, this arises out of continuing ambivalence about what ‘democracy’ means, a continuing suspicion of globalisation (in particular the globalisation of democratic political culture, still associated in some quarters with ‘the west’), and of the notion of ‘progress’ with which democracy is routinely associated. The intellectual roots of that ambivalence were various. Marxist-leninist inspired authoritarianism gripped much of the world until the fall of the Berlin Wall and the end of the cold war. 
Until that moment, it was still possible for many marxians in the scholarly community to view the idea of democracy with disdain — if not quite a dirty word, then a deeply flawed, highly loaded concept which masked and preserved underlying social inequalities more than it helped resolve them. Until 1989 or thereabouts, it was possible for ‘bourgeois democracy’ to be regarded as just one kind of democratic polity by the liberal and anti-capitalist left, which often regarded the ‘proletarian’ or ‘people’s’ democracy prevailing in the Soviet Union, China, Cuba or Vietnam as legitimate alternatives to the emerging capitalist norm of one person, one vote, for constituent assemblies which had real power and accountability. In terms not very different from those used by Marx and Engels in The German Ideology, belief in the value of democracy was conceived by this materialist school as a kind of false consciousness. It still is, by Noam Chomsky and others who continue to view democracy as a ‘necessary illusion’ (1989) without which capitalism could not be reproduced. From these perspectives voting gave, and gives us merely the illusion of agency and power in societies where capital rules as it always did. For democracy read ‘the manufacture of consent’; its expansion read not as progressive social evolution, but the universalisation of the myth of popular sovereignty, mobilised and utilised by the media-industrial-military complex to maintain its grip.) There are those who dispute this reading of events. In the 1960s, Habermas’s hugely influential Structural Transformation of the Public Sphere critiqued the manner in which democracy, and the public sphere underpinning it, had been degraded by public relations, advertising, and the power of private interests. 
In the period since, critical scholarly research and writing on political culture has been dominated by the Habermasian discourse of democratic decline, and the pervasive pessimism of those who see democracy, and the media culture which supports it, as fatally flawed, corrupted by commercialisation and under constant threat. Those, myself included, who challenged that view with a more positive reading of the trends (McNair, Journalism and Democracy; Cultural Chaos) have been denounced as naïve optimists, panglossian, utopian and even, in my own case, a ‘neo-liberal apologist’. (See an unpublished paper by David Miller, “System Failure: It’s Not Just the Media, It’s the Whole Bloody System”, delivered at Goldsmith’s College in 2003.) Engaging as they have been, I venture to suggest that these are the discourses and debates of an era now passing into history. Not only is it increasingly obvious that democracy is expanding globally into places where it never previously reached; it is also extending inwards, within nation states, driven by demands for greater local autonomy. In the United Kingdom, for example, the citizen is now able to vote not just in Westminster parliamentary elections (which determine the political direction of the UK government), but for European elections, local elections, and elections for devolved assemblies in Scotland, Wales and Northern Ireland. The people of London can vote for their mayor. There would by now have been devolved assemblies in the regions of England, too, had the people of the North East not voted against it in a November 2004 referendum. 
Notwithstanding that result, which surprised many in the New Labour government who held it as axiomatic that the more democracy there was, the better for all of us, the importance of enhancing and expanding democratic institutions, of allowing people to vote more often (and also in more efficient ways — many of these expansions of democracy have been tied to the introduction of systems of proportional representation) has become consensual, from the Midwest of America to the Middle East.
The Democratic Paradox
And yet, as the wave of democratic transformation has rolled on through the late twentieth and into the early twenty-first century, it is notable that, in many of the oldest liberal democracies at least, fewer people have been voting. In the UK, for example, in the period between 1945 and 2001, turnout at general elections never fell below 70 per cent. In 1992, the last general election won by the Conservatives before the rise of Tony Blair and New Labour, turnout was 78 per cent, roughly where it had been in the 1950s. In 2001, however, as Blair’s government sought re-election, turnout fell to an historic low for the UK of 59.4 per cent, and rose only marginally to 61.4 per cent in the most recent general election of 2005. In the US presidential elections of 1996 and 2000, turnouts were at historic lows of 47.2 and 49.3 per cent respectively, rising just above 50 per cent again in 2004 (figures from the International Institute for Democracy and Electoral Assistance). At local level things are even worse. In only the second election for a devolved parliament in Scotland (2003), turnout was a mere 48.5 per cent, rising to 50.5 per cent in 2007. These trends are not universal. In countries with compulsory voting they mean very little — in Australia, where voting in parliamentary elections is compulsory, turnout averages above 90 per cent. 
In France, while turnouts for parliamentary elections show a similar downward trend to the UK and the US, presidential contests achieve turnouts of 80-plus per cent. In the UK and US, as noted, the most recent elections show modest growth in turnout from those historic lows of the late 1990s and early Noughties. There has grown, nonetheless, the perception, commonplace amongst academic commentators as well as journalists and politicians themselves, that we are living through a ‘crisis’ of democratic participation, a dangerous decline in the tendency to vote in elections which undermines the legitimacy of democracy itself. In communication scholarship a significant body of research and publication has developed around this theme, from Blumler and Gurevitch’s Crisis of Public Communication (1996), through Barnett and Gaber’s Westminster Tales (2001), to more recent studies such as Lewis et al.’s Citizens or Consumers (2005). All presume a problem of some kind with the practice of democracy and the “old fashioned ritual” of voting, as Lewis et al. describe it (2). Most link alleged inadequacies in the performance of the political media to what is interpreted as popular apathy (or antipathy) towards democracy. The media are blamed for the lack of public engagement with democratic politics which declining turnouts are argued to signal. Political journalists are said to be too aggressive and hyper-adversarial (Lloyd), behaving like the “feral beast” spoken of by Tony Blair in his 2007 farewell speech to the British people as prime minister. They are corrosively cynical and a “disaster for democracy”, as Steven Barnett and others argued in the first years of the twenty-first century. They are not aggressive or adversarial enough, as the propaganda modellists allege, citing what they interpret as supine media coverage of Coalition policy in Iraq. The media put people off, rather than turn them on to democracy, by being, variously, too nice or too nasty to politicians. 
What, then, is the solution to the apparent paradox represented by the fact that there is more democracy, but less voting in elections than ever before; and that after centuries of popular struggle democratic assemblies proliferate, but in some countries barely half of the eligible voters can be bothered to participate? And what role have the media played in this unexpected phenomenon? While the scholarly community has been largely critical on this question, and pessimistic in its analyses of the role of the media, it has become increasingly clear that the one arena where people do vote more than ever before is that presented by the media, and entertainment media in particular. There has been, since the appearance of Big Brother and the subsequent explosion of competitive reality TV formats across the world, evidence of a huge popular appetite for voting on such matters as which amateur contestant on Pop Idol, or X Factor, or Fame Academy, or Operatunity goes on to have a chance of a professional career, a shot at the big time. Millions of viewers of the most popular reality TV strands queue up to register their votes on premium phone lines, the revenue from which makes up a substantial and growing proportion of the income of commercial TV companies. This explosion of voting behaviour has been made possible by the technology-driven emergence of new forms of participatory, interactive, digitised media channels which allow millions to believe that they can have an impact on the outcome of what are, in essence, game and talent shows. At the height of anxiety around the ‘crisis of democratic participation’ in the UK, observers noted that nearly 6.5 million people had voted in the Big Brother UK final in 2004. More than eight million voted during the 2004 run of the BBC’s Fame Academy series. 
While these numbers do not, contrary to popular belief, exceed the numbers of British citizens who vote in a general election (27.2 million in 2005), they do indicate an enthusiasm for voting which seems to contradict declining rates of democratic participation. People who will never get out and vote for their local councillor often appear more than willing to pick up the telephone or the laptop and cast a vote for their favoured reality TV contestant, even if it costs them money. It would be absurd to suggest that voting for a contestant on Big Brother is directly comparable to the act of choosing a government or a president. The latter is recognised as an expression of citizenship, with potentially significant consequences for the lives of individuals within their society. Voting on Big Brother, on the other hand, is unmistakeably entertainment, game-playing, a relatively risk-free exercise of choice — a bit of harmless fun, fuelled by office chat and relentless tabloid coverage of the contestants’ strengths and weaknesses. There is no evidence that readiness to participate in a telephone or online vote for entertainment TV translates into active citizenship, where ‘active’ means casting a vote in an election. The lesson delivered by the success of participatory media in recent years, however — first reality TV, and latterly a proliferation of online formats which encourage user participation and voting for one thing or another — is that people will vote, when they are able and motivated to do so. Voting is popular, in short, and never more so, irrespective of the level of popular participation recorded in recent elections. And if they will vote in their millions for a contestant on X Factor, or participate in competitions to determine the best movies or books on Facebook, they can presumably be persuaded to do so when an election for parliament comes around. 
This fact has been recognised by both media producers and politicians, and reflected in attempts to adapt the ever more sophisticated and efficient tools of participatory media to the democratic process, to engage media audiences as citizens by offering the kinds of voting opportunities in political debates, including election processes, which entertainment media have now made routinely available. ITV’s Vote for Me strand, broadcast in the run-up to the UK general election of 2005, used reality TV techniques to select a candidate who would actually take part in the forthcoming poll. The programme was broadcast in a late-night, low-audience slot, and failed to generate much interest, but it signalled a desire by media producers to harness the appeal of participatory media in a way which could directly impact on levels of democratic engagement. The honourable failure of Vote for Me (produced by the same team which made the much more successful live debate shows featuring prime minister Tony Blair — Ask Tony Blair, Ask the Prime Minister) might be viewed as evidence that readiness to vote in the context of a TV game show does not translate directly into voting for parties and politicians, and that the problem in this respect — the crisis of democratic participation, such as it exists — is located elsewhere. People can vote in democratic elections, but choose not to, perhaps because they feel that the act is meaningless (because parties are ideologically too similar), or ineffectual (because they see no impact of voting in their daily lives or in the state of the country), or irrelevant to their personal priorities and lifestyles. Voting rates have increased in the US and the UK since September 11, 2001, suggesting perhaps that when the political stakes are raised, and the question of who is in government seems to matter more than it did, people act accordingly. 
Meantime, media producers continue to make money by developing formats and channels on the assumption that audiences wish to participate, to interact, and to vote. Whether this form of participatory media consumption for the purposes of play can be translated into enhanced levels of active citizenship, and whether the media can play a significant contributory role in that process, remains to be seen. References Alves, R.C. “From Lapdog to Watchdog: The Role of the Press in Latin America’s Democratisation.” In H. de Burgh, ed., Making Journalists. London: Routledge, 2005. 181-202. Anderson, P.J., and G. Ward (eds.). The Future of Journalism in the Advanced Democracies. Aldershot: Ashgate Publishing, 2007. Barnett, S. “The Age of Contempt.” Guardian 28 October 2002. < http://politics.guardian.co.uk/media/comment/0,12123,820577,00.html >. Barnett, S., and I. Gaber. Westminster Tales. London: Continuum, 2001. Blumler, J., and M. Gurevitch. The Crisis of Public Communication. London: Routledge, 1996. Habermas, J. The Structural Transformation of the Public Sphere. Cambridge: Polity Press, 1989. Lewis, J., S. Inthorn, and K. Wahl-Jorgensen. Citizens or Consumers? What the Media Tell Us about Political Participation. Milton Keynes: Open University Press, 2005. Lloyd, John. What the Media Are Doing to Our Politics. London: Constable, 2004. McNair, B. Journalism and Democracy: A Qualitative Evaluation of the Political Public Sphere. London: Routledge, 2000. ———. Cultural Chaos: News, Journalism and Power in a Globalised World. London: Routledge, 2006.
43

Champion, Katherine M. "A Risky Business? The Role of Incentives and Runaway Production in Securing a Screen Industries Production Base in Scotland." M/C Journal 19, no. 3 (June 22, 2016). http://dx.doi.org/10.5204/mcj.1101.

Full text
Abstract:
Introduction
Despite claims that the importance of distance has been reduced due to technological and communications improvements (Cairncross; Friedman; O’Brien), the ‘power of place’ still resonates, often intensifying the role of geography (Christopherson et al.; Morgan; Pratt; Scott and Storper). Within the film industry, there has been a decentralisation of production from Hollywood, but there remains a spatial logic which has favoured particular centres, such as Toronto, Vancouver, Sydney and Prague, often led by a combination of incentives (Christopherson and Storper; Goldsmith and O’Regan; Goldsmith et al.; Miller et al.; Mould). The emergence of high-end television, television programming for which the production budget is more than £1 million per television hour, has presented new opportunities for screen hubs sharing a very similar value chain to the film industry (OlsbergSPI with Nordicity). In recent years, interventions have proliferated with the aim of capitalising on the decentralisation of certain activities in order to attract international screen industries production and embed it within local hubs. Tools for building capacity and expertise have proliferated, including support for studio complex facilities, infrastructural investments, tax breaks and other economic incentives (Cucco; Goldsmith and O’Regan; Jensen; Goldsmith et al.; McDonald; Miller et al.; Mould). Yet experience tells us that these will not succeed everywhere. There is a need for a better understanding of both the capacity for places to build a distinctive and competitive advantage within a highly globalised landscape and the relative merits of alternative interventions designed to generate a sustainable production base. This article first sets out the rationale for the appetite identified in the screen industries for co-location, or clustering and concentration in a tightly drawn physical area, in global hubs of production. 
It goes on to explore the latest trends of decentralisation and examines the upturn in interventions aimed at attracting mobile screen industries capital and labour. Finally it introduces the Scottish screen industries and explores some of the ways in which Scotland has sought to position itself as a recipient of screen industries activity. The paper identifies some key gaps in infrastructure, most notably a studio, and calls for closer examination of the essential ingredients of, and possible interventions needed for, a vibrant and sustainable industry.
A Compulsion for Proximity
It has been argued that particular spatial and place-based factors are central to the development and organisation of the screen industries. The film and television sector, the particular focus of this article, exhibits an extraordinarily high degree of spatial agglomeration, especially favouring centres with global status. It is worth noting that the computer games sector, not explored in this article, diverges slightly from this trend, displaying more decentralised spatial patterns (Vallance), although key physical hubs of activity have been identified (Champion). Creative products often possess a cachet that is directly associated with their point of origin, for example fashion from Paris, films from Hollywood and country music from Nashville – although it can also be acknowledged that these are often strategic commercial constructions (Pecknold). The place of production represents a unique component of the final product as well as an authentication of substantive and symbolic quality (Scott, “Creative Cities”). Place can act as part of a brand or image for creative industries, often reinforcing the advantage of being based in particular centres of production. Very localised historical, cultural, social and physical factors may also influence the success of creative production in particular places. 
Place-based factors relating to the built environment, including cheap space, a public-sector support framework, connectivity, local identity, institutional environment and availability of amenities, are seen as possible influences in the locational choices of creative industry firms (see, for example, Drake; Helbrecht; Hutton; Leadbeater and Oakley; Markusen). Employment trends are notoriously difficult to measure in the screen industries (Christopherson, “Hollywood in Decline?”), but the sector does contain large numbers of very small firms and freelancers. This allows them to be flexible but poses certain problems that can be somewhat offset by co-location. The findings of Antcliff et al.’s study of workers in the audiovisual industry in the UK suggested that individuals sought to reconstruct stable employment relations through their involvement in and use of networks. The trust and reciprocity engendered by stable networks, built up over time, were used to offset the risk associated with the erosion of stable employment. These findings are echoed by a study of TV content production in two media regions in Germany by Sydow and Staber, who found that, although firms come together to work on particular projects, typically their business relations extend for a much longer period than this. Commonly, firms and individuals who have worked together previously will reassemble for further project work, aided by their past experiences and expectations. Co-location allows the development of shared structures: language, technical attitudes, interpretative schemes and ‘communities of practice’ (Bathelt et al.). Grabher describes this process as ‘hanging out’. Deep local pools of creative and skilled labour are advantageous both to firms and employees (Reimer et al.) by allowing flexibility, developing networks and offsetting risk (Banks et al.; Scott, “Global City Regions”). 
For example, in Cook and Pandit’s study comparing the broadcasting industry in three city-regions, London was found to be hugely advantaged by its unrivalled talent pool, high financial rewards and prestigious projects. As Barnes and Hutton assert in relation to the wider creative industries, “if place matters, it matters most to them” (1251). This is certainly true for the screen industries, whose spatial logic points towards a compulsion for proximity in large global hubs.
Decentralisation and ‘Sticky’ Places
Despite the attraction of global production hubs, there has been a decentralisation of screen industries from key centres, starting with the film industry and the vertical disintegration of the Hollywood studios (Christopherson and Storper). There are instances of ‘runaway production’ from the 1920s onwards, with around 40 per cent of all features being accounted for by offshore production in 1960 (Miller et al., 133). This trend has been increasing significantly over the last 20 years, leading to the genesis of new hubs of screen activity such as Toronto, Vancouver, Sydney and Prague (Christopherson, “Project Work in Context”; Goldsmith et al.; Mould; Miller et al.; Szczepanik). This development has been prompted by a multiplicity of reasons, including favourable currency value differentials and economic incentives. Subsidies and tax breaks have been offered to secure international productions, with most countries demanding that, in order to qualify for tax relief, productions spend a certain amount of their budget within the local economy, employ local crew and use domestic creative talent (Hill). Extensive infrastructure has been developed, including studio complexes, to attempt to lure productions with the advantage of a full service offering (Goldsmith and O’Regan). Internationally, Canada has been the greatest beneficiary of ‘runaway production’, with a state-led enactment of generous film incentives since the late 1990s (McDonald). 
Vancouver and Toronto are the busiest locations for North American screen production after Los Angeles and New York, due to exchange rates and tax rebates on labour costs (Miller et al., 141). Eighty per cent of Vancouver’s production is attributable to runaway production (Jensen, 27) and the city is considered by some to have crossed a threshold, as: “It now possesses sufficient depth and breadth of talent to undertake the full array of pre-production, production and post-production services for the delivery of major motion pictures and TV programmes” (Barnes and Coe, 19). Similarly, Toronto is considered to have established a “comprehensive set of horizontal and vertical media capabilities” to ensure its status as a “full function media centre” (Davis, 98). These cities have successfully engaged in entrepreneurial activity to attract production (Christopherson, “Project Work in Context”) and in Vancouver the proactive role of provincial government and labour unions is, in part, credited with the city’s success (Barnes and Coe). Studio-complex infrastructure has also been used to lure global productions, with Toronto, Melbourne and Sydney all seen as key examples of where such developments have been used as a strategic priority to take local production capacity to the next level (Goldsmith and O’Regan). Studies which provide a historiography of the development of screen-industry hubs emphasise a complex interplay of social, cultural and physical conditions. In the complex and global flows of the screen industries, ‘sticky’ hubs have emerged with the ability to attract and retain capital and skilled labour. Despite being principally organised to attract international production, most studio complexes, especially those outside of global centres, need to have a strong relationship to local or national film and television production to ensure the sustainability and depth of the labour pool (Goldsmith and O’Regan, 2003). 
Many have a broadcaster on site as well as a range of companies with a media orientation and training facilities (Goldsmith and O’Regan, 2003; Picard, 2008). The emergence of film studio complexes on the Australian Gold Coast and in Vancouver was accompanied by an increasing role for television production, and this multi-purpose nature was important for the continuity of production. Fostering a strong community of below-the-line workers, such as set designers, locations managers, make-up artists and props manufacturers, can also be a clear advantage in attracting international productions. For example, the expertise of set designers at Cinecittà in Italy and of experienced crews at the Barrandov Studios in Prague are regarded as major selling points of those studio complexes (Goldsmith and O’Regan; Miller et al.; Szczepanik). Natural and built environments are also considered very important for film and television firms, and it is a useful advantage for capturing international production when cities can double for other locations, as in the cases of Toronto, Vancouver and Prague (Evans; Goldsmith and O’Regan; Szczepanik). Toronto, for instance, has doubled for New York in over 100 films and, with regard to television, Due South’s (1994-1998) use of Toronto as Chicago was estimated to have saved 40 per cent in costs (Miller et al., 141).
The Scottish Screen Industries
Within mobile flows of capital and labour, Scotland has sought to position itself as a recipient of screen industries activity through multiple interventions, including investment in institutional frameworks, direct and indirect economic subsidies and the development of physical infrastructure. Traditionally, creative industry activity in the UK has been concentrated in London and the South East, which together account for 43% of the creative economy workforce (Bakhshi et al.). 
In order, in part, to redress this imbalance and, more generally, to encourage the attraction and retention of international production, a range of policies focused on the screen industries has been introduced. A revised Film Tax Relief was introduced in 2007 to encourage inward investment and prevent offshoring of indigenous production, and this has since been extended to high-end television, animation and children’s programming. Broadcasting has also experienced a push for decentralisation, led by public funding with a responsibility to be regionally representative. The BBC (“BBC Annual Report and Accounts 2014/15”) is currently exceeding its target of 50% network spend outside London by 2016, with 17% spent in Scotland, Wales and Northern Ireland. Channel 4 has similarly committed to commission at least 9% of its original spend from the nations by 2020. Studios have also been developed across the UK, including at Roath Lock (Cardiff), Titanic Studios (Belfast), MediaCityUK (Salford) and The Sharp Project (Manchester). The creative industries have been identified as one of seven growth sectors for Scotland by the government (Scottish Government). In 2010, the film and video sector employed 3,500 people and contributed £120 million GVA and £120 million adjusted GVA to the economy, and the radio and TV sector employed 3,500 people and contributed £50 million GVA and £400 million adjusted GVA (The Scottish Parliament). Beyond the direct economic benefits of these sectors, the on-screen representation of Scotland has been claimed to boost visitor numbers to the country (EKOS), and high-profile international film productions have been attracted, including Skyfall (2012) and WWZ (2013). Scotland has historically attracted international film and TV productions due to its natural locations (VisitScotland) and, on average, between 2009 and 2014, six big-budget films a year used Scottish locations, both urban and rural (BOP Consulting, 2014). 
In all, a total of £20 million was generated by film-making in Glasgow during 2011 (Balkind), with the city representing Philadelphia in WWZ (2013) and San Francisco in Cloud Atlas (2013), as well as doubling for Edinburgh in the recent acclaimed Scottish films Filth (2013) and Sunshine on Leith (2013). Sanson (80) asserts that the use of the city as a site for international productions not only brings in direct revenue from production money but also promotes the city as a “fashionable place to live, work and visit. Creativity makes the city both profitable and ‘cool’”. Nonetheless, issues persist and it has been suggested that Scotland lacks a stable and sustainable film industry, with low indigenous production levels and variable success from year to year in attracting inward investment (BOP Consulting). With regard to crew, an insufficient production base has been identified as an issue in maintaining a pipeline of skills (BOP Consulting). Developing ‘talent’ is a central aspect of the Scottish Government’s Strategy for the Creative Industries, yet there remains the core challenge of retaining skills and encouraging new talent into the industry (BOP Consulting). With regard to film, a lack of substantial funding incentives and the absence of a studio have been identified as key concerns for the sector. For example, within the film industry the majority of inward investment filming in Scotland is location work, as it lacks the studio facilities that would enable it to sustain a big-budget production in its entirety (BOP Consulting). The absence of such infrastructure has been seen as contributing to a drain of Scottish talent from these industries to other areas and countries where there is a more vibrant sector (BOP Consulting). 
The loss of Scottish talent to Northern Ireland was attributed to the longevity of the work provided by Game of Thrones (2011-), which has now completed its sixth series at the Titanic Studios in Belfast (EKOS), although this may have been stemmed somewhat recently by the attraction of the US high-end TV series Outlander (2014-), which has been based at Wardpark in Cumbernauld since 2013. Television, both high-end production and local broadcasting, appears crucial to the sustainability of screen production in Scotland. Outlander has been estimated to have contributed to Scotland’s production spend figures reaching a historic high of £45.8 million in 2014 (Creative Scotland, “Creative Scotland Screen Strategy Update”). The arrival of the programme has almost doubled production spend in Scotland, offering the chance of increased stability for screen industries workers. Qualifying for UK High-End Television Tax Relief, Outlander has engaged a crew of approximately 300 across props, filming and set build, and cast over 2,000 supporting artist roles from within Scotland and the UK. Long-running drama, in particular, offers key opportunities both for those cutting their teeth in the screen industries and for providing more consistent and longer-term employment to existing workers. BBC television soap River City (2002-) has been identified as a key example of such an opportunity, and the programme has been credited with providing a springboard for developing the skills of local actors, writers and production crew (Hibberd). This kind of pipeline of production is critical given the work patterns of the sector. According to Creative Skillset, of the 4,000 people employed in Scotland’s film and television industries, 40% of television workers are freelance and 90% of film production work is freelance (EKOS). In an attempt to address skills gaps, the Outlander Trainee Placement Scheme has been devised in collaboration with Creative Scotland and Creative Skillset. 
During filming of Season One, thirty-eight trainees were supported across a range of production and craft roles, followed by a further twenty-five in Season Two. Encouragingly, Outlander, and the books it is based on, is set in Scotland, so the authenticity of place has been a strong component in the decision to locate production there. Producer David Brown began his career on the Bill Forsyth films Gregory’s Girl (1981), Local Hero (1983) and Comfort and Joy (1984) and has a strong existing relationship with Scotland. He has been very vocal in his support for the trainee programme, contending that “training is the future of our industry and we at Outlander see the growth of talent and opportunities as part of our mission here in Scotland” (“Outlander Fast Tracks Next Generation of Skilled Screen Talent”).
Conclusions
This article has aimed to explore the relationship between place and the screen industries and, taking Scotland as its focus, has outlined a need to more closely examine the ways in which the sector can be supported. Despite the possible gains in terms of building a sustainable industry, the state-led funding of the global screen industries is contested. The use of tax breaks and incentives has been problematised, and critiques range from the use of public funding to attract footloose media industries to the increasingly zero-sum game of competition between places (Morawetz; McDonald). In relation to broadcasting, there have been critiques of a ‘lift and shift’ approach to policy in the UK, with TV production companies moving to the nations and regions temporarily to meet the quota and leaving once a production has finished (House of Commons). 
Further to this, issues have been raised regarding how far such interventions can seed and develop a rich production ecology that offers opportunities for indigenous talent (Christopherson and Rightor). Nonetheless, recent success for the screen industries in Scotland can, at least in part, be attributed to interventions including the increased decentralisation of broadcasting and the high-end television tax incentives. This article has identified gaps in infrastructure which continue to stymie growth and have led to production drain to other centres. Important gaps in knowledge can also be acknowledged that warrant further investigation and unpacking, including the relationship between film, high-end television and broadcasting, especially in terms of the opportunities they offer for screen industries workers to build a career in Scotland, and notable gaps in infrastructure and the impact they have on the loss of production.
References
Antcliff, Valerie, Richard Saundry, and Mark Stuart. Freelance Worker Networks in Audio-Visual Industries. University of Central Lancashire, 2004. Bakhshi, Hasan, John Davies, Alan Freeman, and Peter Higgs. “The Geography of the UK’s Creative and High-Tech Economies.” 2015. Balkind, Nicola. World Film Locations: Glasgow. Intellect Books, 2013. Banks, Mark, Andy Lovatt, Justin O’Connor, and Carlo Raffo. “Risk and Trust in the Cultural Industries.” Geoforum 31.4 (2000): 453-464. Barnes, Trevor, and Neil M. Coe. “Vancouver as Media Cluster: The Cases of Video Games and Film/TV.” Media Clusters: Spatial Agglomeration and Content Capabilities (2011): 251-277. Barnes, Trevor, and Thomas Hutton. “Situating the New Economy: Contingencies of Regeneration and Dislocation in Vancouver’s Inner City.” Urban Studies 46.5-6 (2009): 1247-1269. Bathelt, Harald, Anders Malmberg, and Peter Maskell. “Clusters and Knowledge: Local Buzz, Global Pipelines and the Process of Knowledge Creation.” 
Progress in Human Geography 28.1 (2004): 31-56.BBC Annual Report and Accounts 2014/15 London: BBC (2015)BOP Consulting Review of the Film Sector in Glasgow: Report for Creative Scotland. Edinburgh: BOP Consulting, 2014.Champion, Katherine. "Problematizing a Homogeneous Spatial Logic for the Creative Industries: The Case of the Digital Games Industry." Changing the Rules of the Game. Palgrave Macmillan UK, 2013. 9-27.Cairncross, Francis. The Death of Distance London: Orion Business, 1997.Channel 4. Annual Report. London: Channel 4, 2014.Christopherson, Susan. "Project Work in Context: Regulatory Change and the New Geography of Media." Environment and Planning A 34.11 (2002): 2003-2015.———. "Hollywood in Decline? US Film and Television Producers beyond the Era of Fiscal Crisis." Cambridge Journal of Regions, Economy and Society 6.1 (2013): 141-157.Christopherson, Susan, and Michael Storper. "The City as Studio; the World as Back Lot: The Impact of Vertical Disintegration on the Location of the Motion Picture Industry." Environment and Planning D: Society and Space 4.3 (1986): 305-320.Christopherson, Susan, and Ned Rightor. "The Creative Economy as “Big Business”: Evaluating State Strategies to Lure Filmmakers." Journal of Planning Education and Research 29.3 (2010): 336-352.Christopherson, Susan, Harry Garretsen, and Ron Martin. "The World Is Not Flat: Putting Globalization in Its Place." Cambridge Journal of Regions, Economy and Society 1.3 (2008): 343-349.Cook, Gary A.S., and Naresh R. Pandit. "Service Industry Clustering: A Comparison of Broadcasting in Three City-Regions." The Service Industries Journal 27.4 (2007): 453-469.Creative Scotland Creative Scotland Screen Strategy Update. 2016. <http://www.creativescotland.com/__data/assets/pdf_file/0008/33992/Creative-Scotland-Screen-Strategy-Update-Feb2016.pdf>.———. Outlander Fast Tracks Next Generation of Skilled Screen Talent. 2016. 
<http://www.creativescotland.com/what-we-do/latest-news/archive/2016/02/outlander-fast-tracks-next-generation-of-skilled-screen-talent>.Cucco, Marco. "Blockbuster Outsourcing: Is There Really No Place like Home?" Film Studies 13.1 (2015): 73-93.Davis, Charles H. "Media Industry Clusters and Public Policy." Media Clusters: Spatial Agglomeration and Content Capabilities (2011): 72-98.Drake, Graham. "‘This Place Gives Me Space’: Place and Creativity in the Creative Industries." Geoforum 34.4 (2003): 511-524.EKOS. “Options for a Film and TV Production Space: Report for Scottish Enterprise.” Glasgow: EKOS, March 2014.Evans, Graeme. "Creative Cities, Creative Spaces and Urban Policy." Urban Studies 46.5-6 (2009): 1003-1040.Freidman, Thomas. "The World Is Flat." New York: Farrar, Straus and Giroux, 2006.Goldsmith, Ben, and Tom O’Regan. “Cinema Cities, Media Cities: The Contemporary International Studio Complex.” Screen Industry, Culture and Policy Research Series. Sydney: Australian Film Commission, Sep. 2003.Goldsmith, Ben, Susan Ward, and Tom O’Regan. "Global and Local Hollywood." InMedia. The French Journal of Media and Media Representations in the English-Speaking World 1 (2012).Grabher, Gernot. "The Project Ecology of Advertising: Tasks, Talents and Teams." Regional Studies 36.3 (2002): 245-262.Helbrecht, Ilse. "The Creative Metropolis Services, Symbols and Spaces." Zeitschrift für Kanada Studien 18 (1998): 79-93.Hibberd, Lynne. "Devolution in Policy and Practice: A Study of River City and BBC Scotland." Westminster Papers in Communication and Culture 4.3 (2007): 107-205.Hill, John. "'This Is for the Batmans as Well as the Vera Drakes': Economics, Culture and UK Government Film Production Policy in the 2000s." Journal of British Cinema and Television 9.3 (2012): 333-356.House of Commons Scottish Affairs Committee. “Creative Industries in Scotland.” Second Report of Session 2015–16. London: House of Commons, 2016.Hutton, Thomas A. "The New Economy of the Inner City." 
Cities 21.2 (2004): 89-108.Jensen, Rodney J.C. "The Spatial and Economic Contribution of Sydney's Visual Entertainment Industries." Australian Planner 48.1 (2011): 24-36.Leadbeater, Charles, and Kate Oakley. Surfing the Long Wave: Knowledge Entrepreneurship in Britain. London: Demos, 2001.McDonald, Adrian H. "Down the Rabbit Hole: The Madness of State Film Incentives as a 'Solution' to Runaway Production." University of Pennsylvania Journal of Business Law 14.85 (2011): 85-163.Markusen, Ann. "Sticky Places in Slippery Space: A Typology of Industrial Districts." Economic Geography (1996): 293-313.———. "Urban Development and the Politics of a Creative Class: Evidence from a Study of Artists." Environment and Planning A 38.10 (2006): 1921-1940.Miller, Toby, N. Govil, J. McMurria, R. Maxwell, and T. Wang. Global Hollywood 2. London: BFI, 2005.Morawetz, Norbert, et al. "Finance, Policy and Industrial Dynamics—The Rise of Co‐productions in the Film Industry." Industry and Innovation 14.4 (2007): 421-443.Morgan, Kevin. "The Exaggerated Death of Geography: Learning, Proximity and Territorial Innovation Systems." Journal of Economic Geography 4.1 (2004): 3-21.Mould, Oli. "Mission Impossible? Reconsidering the Research into Sydney's Film Industry." Studies in Australasian Cinema 1.1 (2007): 47-60.O’Brien, Richard. "Global Financial Integration: The End of Geography." London: Royal Institute of International Affairs, Pinter Publishers, 2002.OlsbergSPI with Nordicity. “Economic Contribution of the UK’s Film, High-End TV, Video Game, and Animation Programming Sectors.” Report presented to the BFI, Pinewood Shepperton plc, Ukie, the British Film Commission and Pact. London: BFI, Feb. 2015.Pecknold, Diane. "Heart of the Country? The Construction of Nashville as the Capital of Country Music." Sounds and the City. London: Palgrave Macmillan UK, 2014. 19-37.Picard, Robert G. Media Clusters: Local Agglomeration in an Industry Developing Networked Virtual Clusters. 
Jönköping International Business School, 2008.Pratt, Andy C. "New Media, the New Economy and New Spaces." Geoforum 31.4 (2000): 425-436.Reimer, Suzanne, Steven Pinch, and Peter Sunley. "Design Spaces: Agglomeration and Creativity in British Design Agencies." Geografiska Annaler: Series B, Human Geography 90.2 (2008): 151-172.Sanson, Kevin. Goodbye Brigadoon: Place, Production, and Identity in Global Glasgow. Diss. University of Texas at Austin, 2011.Scott, Allen J. "Creative Cities: Conceptual Issues and Policy Questions." Journal of Urban Affairs 28.1 (2006): 1-17.———. Global City-Regions: Trends, Theory, Policy. Oxford University Press, 2002.Scott, Allen J., and Michael Storper. "Regions, Globalization, Development." Regional Studies 41.S1 (2007): S191-S205.The Scottish Government. The Scottish Government Economic Strategy. Edinburgh: Scottish Government, 2015.———. Growth, Talent, Ambition – the Government’s Strategy for the Creative Industries. Edinburgh: Scottish Government, 2011.The Scottish Parliament Economy, Energy and Tourism Committee. The Economic Impact of the Film, TV and Video Games Industries. Edinburgh: Scottish Parliament, 2015.Sydow, Jörg, and Udo Staber. "The Institutional Embeddedness of Project Networks: The Case of Content Production in German Television." Regional Studies 36.3 (2002): 215-227.Szczepanik, Petr. "Globalization through the Eyes of Runners: Student Interns as Ethnographers on Runaway Productions in Prague." Media Industries 1.1 (2014).Vallance, Paul. "Creative Knowing, Organisational Learning, and Socio-Spatial Expansion in UK Videogame Development Studios." Geoforum 51 (2014): 15-26.Visit Scotland. “Scotland Voted Best Cinematic Destination in the World.” 2015. <https://www.visitscotland.com/blog/films/scotland-voted-best-cinematic-destination-in-the-world/>.
44

Maxwell, Richard, and Toby Miller. "The Real Future of the Media." M/C Journal 15, no. 3 (June 27, 2012). http://dx.doi.org/10.5204/mcj.537.

Full text
Abstract:
When George Orwell encountered ideas of a technological utopia sixty-five years ago, he acted the grumpy middle-aged man:

Reading recently a batch of rather shallowly optimistic “progressive” books, I was struck by the automatic way in which people go on repeating certain phrases which were fashionable before 1914. Two great favourites are “the abolition of distance” and “the disappearance of frontiers”. I do not know how often I have met with the statements that “the aeroplane and the radio have abolished distance” and “all parts of the world are now interdependent” (1944).

It is worth revisiting the old boy’s grumpiness, because the rhetoric he so niftily skewers continues in our own time. Facebook features “Peace on Facebook” and even claims that it can “decrease world conflict” through inter-cultural communication. Twitter has announced itself as “a triumph of humanity” (“A Cyber-House” 61). Cue George. In between Orwell and latter-day hoody cybertarians, a whole host of excitable public intellectuals announced the impending end of materiality through emergent media forms. Marshall McLuhan, Neil Postman, Daniel Bell, Ithiel de Sola Pool, George Gilder, Alvin Toffler—the list of 1960s futurists goes on and on. And this wasn’t just a matter of punditry: the OECD decreed the coming of the “information society” in 1975 and the European Union (EU) followed suit in 1979, while IBM merrily declared an “information age” in 1977. Bell theorized this technological utopia as post-ideological, because class would cease to matter (Mattelart). Polluting industries seemingly no longer represented the dynamic core of industrial capitalism; instead, market dynamism radiated from a networked, intellectual core of creative and informational activities. The new information and knowledge-based economies would rescue First World hegemony from an “insurgent world” that lurked within as well as beyond itself (Schiller).
Orwell’s others and the Cold-War futurists propagated one of the most destructive myths shaping both public debate and scholarly studies of the media, culture, and communication. They convinced generations of analysts, activists, and arrivistes that the promises and problems of the media could be understood via metaphors of the environment, and that the media were weightless and virtual. The famous medium they wished us to see as the message —a substance as vital to our wellbeing as air, water, and soil—turned out to be no such thing. Today’s cybertarians inherit their anti-Marxist, anti-materialist positions, as a casual glance at any new media journal, culture-industry magazine, or bourgeois press outlet discloses. The media are undoubtedly important instruments of social cohesion and fragmentation, political power and dissent, democracy and demagoguery, and other fraught extensions of human consciousness. But talk of media systems as equivalent to physical ecosystems—fashionable among marketers and media scholars alike—is predicated on the notion that they are environmentally benign technologies. This has never been true, from the beginnings of print to today’s cloud-covered computing. Our new book Greening the Media focuses on the environmental impact of the media—the myriad ways that media technology consumes, despoils, and wastes natural resources. We introduce ideas, stories, and facts that have been marginal or absent from popular, academic, and professional histories of media technology. Throughout, ecological issues have been at the core of our work and we immodestly think the same should apply to media communications, and cultural studies more generally. We recognize that those fields have contributed valuable research and teaching that address environmental questions. For instance, there is an abundant literature on representations of the environment in cinema, how to communicate environmental messages successfully, and press coverage of climate change. 
That’s not enough. You may already know that media technologies contain toxic substances. You may have signed an on-line petition protesting the hazardous and oppressive conditions under which workers assemble cell phones and computers. But you may be startled, as we were, by the scale and pervasiveness of these environmental risks. They are present in and around every site where electronic and electric devices are manufactured, used, and thrown away, poisoning humans, animals, vegetation, soil, air and water. We are using the term “media” as a portmanteau word to cover a multitude of cultural and communications machines and processes—print, film, radio, television, information and communications technologies (ICT), and consumer electronics (CE). This is not only for analytical convenience, but because there is increasing overlap between the sectors. CE connect to ICT and vice versa; televisions resemble computers; books are read on telephones; newspapers are written through clouds; and so on. Cultural forms and gadgets that were once separate are now linked. The currently fashionable notion of convergence doesn’t quite capture the vastness of this integration, which includes any object with a circuit board, scores of accessories that plug into it, and a global nexus of labor and environmental inputs and effects that produce and flow from it. In 2007, a combination of ICT/CE and media production accounted for between 2 and 3 percent of all greenhouse gases emitted around the world (“Gartner Estimates”; International Telecommunication Union; Malmodin et al.). Between twenty and fifty million tonnes of electronic waste (e-waste) are generated annually, much of it via discarded cell phones and computers, which affluent populations throw out regularly in order to buy replacements. (Presumably this fits the narcissism of small differences that distinguishes them from their own past.)
E-waste is historically produced in the Global North—Australasia, Western Europe, Japan, and the US—and dumped in the Global South—Latin America, Africa, Eastern Europe, Southern and Southeast Asia, and China. It takes the form of a thousand different, often deadly, materials for each electrical and electronic gadget. This trend is changing as India and China generate their own media detritus (Robinson; Herat). Enclosed hard drives, backlit screens, cathode ray tubes, wiring, capacitors, and heavy metals pose few risks while these materials remain encased. But once discarded and dismantled, ICT/CE have the potential to expose workers and ecosystems to a morass of toxic components. Theoretically, “outmoded” parts could be reused or swapped for newer parts to refurbish devices. But items that are defined as waste undergo further destruction in order to collect remaining parts and valuable metals, such as gold, silver, copper, and rare-earth elements. This process causes serious health risks to bones, brains, stomachs, lungs, and other vital organs, in addition to birth defects and disrupted biological development in children. Medical catastrophes can result from lead, cadmium, mercury, other heavy metals, poisonous fumes emitted in search of precious metals, and such carcinogenic compounds as polychlorinated biphenyls, dioxin, polyvinyl chloride, and flame retardants (Maxwell and Miller 13). The United States’ Environmental Protection Agency estimates that by 2007 US residents owned approximately three billion electronic devices, with an annual turnover rate of 400 million units, and well over half such purchases made by women. Overall CE ownership varied with age—adults under 45 typically boasted four gadgets; those over 65 made do with one. The Consumer Electronics Association (CEA) says US$145 billion was expended in the sector in 2006 in the US alone, up 13% on the previous year. 
The CEA refers joyously to a “consumer love affair with technology continuing at a healthy clip.” In the midst of a recession, 2009 saw $165 billion in sales, and households owned between fifteen and twenty-four gadgets on average. By 2010, US$233 billion was spent on electronic products, three-quarters of the population owned a computer, nearly half of all US adults owned an MP3 player, and 85% had a cell phone. By all measures, the amount of ICT/CE on the planet is staggering. As investigative science journalist Elizabeth Grossman put it: “no industry pushes products into the global market on the scale that high-tech electronics does” (Maxwell and Miller 2). In 2007, “of the 2.25 million tons of TVs, cell phones and computer products ready for end-of-life management, 18% (414,000 tons) was collected for recycling and 82% (1.84 million tons) was disposed of, primarily in landfill” (Environmental Protection Agency 1). Twenty million computers fell obsolete across the US in 1998, and the rate was 130,000 a day by 2005. It has been estimated that the five hundred million personal computers discarded in the US between 1997 and 2007 contained 6.32 billion pounds of plastics, 1.58 billion pounds of lead, three million pounds of cadmium, 1.9 million pounds of chromium, and 632,000 pounds of mercury (Environmental Protection Agency; Basel Action Network and Silicon Valley Toxics Coalition 6). The European Union is expected to generate upwards of twelve million tons annually by 2020 (Commission of the European Communities 17). While refrigerators and dangerous refrigerants account for the bulk of EU e-waste, about 44% of the most toxic e-waste measured in 2005 came from medium-to-small ICT/CE: computer monitors, TVs, printers, ink cartridges, telecommunications equipment, toys, tools, and anything with a circuit board (Commission of the European Communities 31-34).
Understanding the enormity of the environmental problems caused by making, using, and disposing of media technologies should arrest our enthusiasm for them. But intellectual correctives to the “love affair” with technology, or technophilia, have come and gone without establishing much of a foothold against the breathtaking flood of gadgets and the propaganda that proclaims their awe-inspiring capabilities.[i] There is a peculiar enchantment with the seeming magic of wireless communication, touch-screen phones and tablets, flat-screen high-definition televisions, 3-D IMAX cinema, mobile computing, and so on—a totemic, quasi-sacred power that the historian of technology David Nye has named the technological sublime (Nye Technological Sublime 297).[ii] We demonstrate in our book why there is no place for the technological sublime in projects to green the media. But first we should explain why such symbolic power does not accrue to more mundane technologies; after all, for the time-strapped cook, a pressure cooker does truly magical things. Three important qualities endow ICT/CE with unique symbolic potency—virtuality, volume, and novelty. The technological sublime of media technology is reinforced by the “virtual nature of much of the industry’s content,” which “tends to obscure their responsibility for a vast proliferation of hardware, all with high levels of built-in obsolescence and decreasing levels of efficiency” (Boyce and Lewis 5). Planned obsolescence entered the lexicon as a new “ethics” for electrical engineering in the 1920s and ’30s, when marketers, eager to “habituate people to buying new products,” called for designs to become quickly obsolete “in efficiency, economy, style, or taste” (Grossman 7-8).[iii] This defines the short lifespan deliberately constructed for computer systems (drives, interfaces, operating systems, batteries, etc.) 
by making tiny improvements incompatible with existing hardware (Science and Technology Council of the American Academy of Motion Picture Arts and Sciences 33-50; Boyce and Lewis). With planned obsolescence leading to “dizzying new heights” of product replacement (Rogers 202), there is an overstated sense of the novelty and preeminence of “new” media—a “cult of the present” is particularly dazzled by the spread of electronic gadgets through globalization (Mattelart and Constantinou 22). References to the symbolic power of media technology can be found in hymnals across the internet and the halls of academe: technologies change us, the media will solve social problems or create new ones, ICTs transform work, monopoly ownership no longer matters, journalism is dead, social networking enables social revolution, and the media deliver a cleaner, post-industrial, capitalism. Here is a typical example from the twilight zone of the technological sublime (actually, the OECD):

A major feature of the knowledge-based economy is the impact that ICTs have had on industrial structure, with a rapid growth of services and a relative decline of manufacturing. Services are typically less energy intensive and less polluting, so among those countries with a high and increasing share of services, we often see a declining energy intensity of production … with the emergence of the Knowledge Economy ending the old linear relationship between output and energy use (i.e. partially de-coupling growth and energy use) (Houghton 1).

This statement mixes half-truths and nonsense. In reality, old-time, toxic manufacturing has moved to the Global South, where it is ascendant; pollution levels are rising worldwide; and energy consumption is accelerating in residential and institutional sectors, due almost entirely to ICT/CE usage, despite advances in energy conservation technology (a neat instance of the age-old Jevons Paradox).
In our book we show how these are all outcomes of growth in ICT/CE, the foundation of the so-called knowledge-based economy. ICT/CE are misleadingly presented as having little or no material ecological impact. In the realm of everyday life, the sublime experience of electronic machinery conceals the physical work and material resources that go into them, while the technological sublime makes the idea that more-is-better palatable, axiomatic; even sexy. In this sense, the technological sublime relates to what Marx called “the Fetishism which attaches itself to the products of labour” once they are in the hands of the consumer, who lusts after them as if they were “independent beings” (77). There is a direct but unseen relationship between technology’s symbolic power and the scale of its environmental impact, which the economist Juliet Schor refers to as a “materiality paradox” —the greater the frenzy to buy goods for their transcendent or nonmaterial cultural meaning, the greater the use of material resources (40-41). We wrote Greening the Media knowing that a study of the media’s effect on the environment must work especially hard to break the enchantment that inflames popular and elite passions for media technologies. We understand that the mere mention of the political-economic arrangements that make shiny gadgets possible, or the environmental consequences of their appearance and disappearance, is bad medicine. It’s an unwelcome buzz kill—not a cool way to converse about cool stuff. But we didn’t write the book expecting to win many allies among high-tech enthusiasts and ICT/CE industry leaders. We do not dispute the importance of information and communication media in our lives and modern social systems. We are media people by profession and personal choice, and deeply immersed in the study and use of emerging media technologies. 
But we think it’s time for a balanced assessment with less hype and more practical understanding of the relationship of media technologies to the biosphere they inhabit. Media consumers, designers, producers, activists, researchers, and policy makers must find new and effective ways to move ICT/CE production and consumption toward ecologically sound practices. In the course of this project, we found in casual conversation, lecture halls, classroom discussions, and correspondence, consistent and increasing concern with the environmental impact of media technology, especially the deleterious effects of e-waste toxins on workers, air, water, and soil. We have learned that the grip of the technological sublime is not ironclad. Its instability provides a point of departure for investigating and criticizing the relationship between the media and the environment. The media are, and have been for a long time, intimate environmental participants. Media technologies are yesterday’s, today’s, and tomorrow’s news, but rarely in the way they should be. The prevailing myth is that the printing press, telegraph, phonograph, photograph, cinema, telephone, wireless radio, television, and internet changed the world without changing the Earth. In reality, each technology has emerged by despoiling ecosystems and exposing workers to harmful environments, a truth obscured by symbolic power and the power of moguls to set the terms by which such technologies are designed and deployed. Those who benefit from ideas of growth, progress, and convergence, who profit from high-tech innovation, monopoly, and state collusion—the military-industrial-entertainment-academic complex and multinational commandants of labor—have for too long ripped off the Earth and workers. As the current celebration of media technology inevitably winds down, perhaps it will become easier to comprehend that digital wonders come at the expense of employees and ecosystems. 
This will return us to Max Weber’s insistence that we understand technology in a mundane way as a “mode of processing material goods” (27). Further to understanding that ordinariness, we can turn to the pioneering conversation analyst Harvey Sacks, who noted three decades ago “the failures of technocratic dreams [:] that if only we introduced some fantastic new communication machine the world will be transformed.” Such fantasies derived from the very banality of these introductions—that every time they took place, one more “technical apparatus” was simply “being made at home with the rest of our world” (548). Media studies can join in this repetitive banality. Or it can withdraw the welcome mat for media technologies that despoil the Earth and wreck the lives of those who make them. In our view, it’s time to green the media by greening media studies.

References

“A Cyber-House Divided.” Economist 4 Sep. 2010: 61-62.
“Gartner Estimates ICT Industry Accounts for 2 Percent of Global CO2 Emissions.” Gartner press release. 6 April 2007. ‹http://www.gartner.com/it/page.jsp?id=503867›.
Basel Action Network and Silicon Valley Toxics Coalition. Exporting Harm: The High-Tech Trashing of Asia. Seattle: Basel Action Network, 25 Feb. 2002.
Benjamin, Walter. “Central Park.” Trans. Lloyd Spencer with Mark Harrington. New German Critique 34 (1985): 32-58.
Biagioli, Mario. “Postdisciplinary Liaisons: Science Studies and the Humanities.” Critical Inquiry 35.4 (2009): 816-33.
Boyce, Tammy, and Justin Lewis, eds. Climate Change and the Media. New York: Peter Lang, 2009.
Commission of the European Communities. “Impact Assessment.” Commission Staff Working Paper accompanying the Proposal for a Directive of the European Parliament and of the Council on Waste Electrical and Electronic Equipment (WEEE) (recast). COM (2008) 810 Final. Brussels: Commission of the European Communities, 3 Dec. 2008.
Environmental Protection Agency. Management of Electronic Waste in the United States. Washington, DC: EPA, 2007.
Environmental Protection Agency. Statistics on the Management of Used and End-of-Life Electronics. Washington, DC: EPA, 2008.
Grossman, Elizabeth. Tackling High-Tech Trash: The E-Waste Explosion & What We Can Do about It. New York: Demos, 2008. ‹http://www.demos.org/pubs/e-waste_FINAL.pdf›.
Herat, Sunil. “Review: Sustainable Management of Electronic Waste (e-Waste).” Clean 35.4 (2007): 305-10.
Houghton, J. “ICT and the Environment in Developing Countries: Opportunities and Developments.” Paper prepared for the Organization for Economic Cooperation and Development, 2009.
International Telecommunication Union. ICTs for Environment: Guidelines for Developing Countries, with a Focus on Climate Change. Geneva: ICT Applications and Cybersecurity Division, Policies and Strategies Department, ITU Telecommunication Development Sector, 2008.
Malmodin, Jens, Åsa Moberg, Dag Lundén, Göran Finnveden, and Nina Lövehagen. “Greenhouse Gas Emissions and Operational Electricity Use in the ICT and Entertainment & Media Sectors.” Journal of Industrial Ecology 14.5 (2010): 770-90.
Marx, Karl. Capital: Vol. 1: A Critical Analysis of Capitalist Production. 3rd ed. Trans. Samuel Moore and Edward Aveling. Ed. Frederick Engels. New York: International Publishers, 1987.
Mattelart, Armand, and Costas M. Constantinou. “Communications/Excommunications: An Interview with Armand Mattelart.” Trans. Amandine Bled, Jacques Guot, and Costas Constantinou. Review of International Studies 34.1 (2008): 21-42.
Mattelart, Armand. “Cómo nació el mito de Internet.” Trans. Yanina Guthman. El mito internet. Ed. Victor Hugo de la Fuente. Santiago: Editorial aún creemos en los sueños, 2002. 25-32.
Maxwell, Richard, and Toby Miller. Greening the Media. New York: Oxford University Press, 2012.
Nye, David E. American Technological Sublime. Cambridge, Mass.: MIT Press, 1994.
Nye, David E. Technology Matters: Questions to Live With. Cambridge, Mass.: MIT Press, 2007.
Orwell, George. “As I Please.” Tribune 12 May 1944.
Richtel, Matt. “Consumers Hold on to Products Longer.” New York Times 26 Feb. 2011: B1.
Robinson, Brett H. “E-Waste: An Assessment of Global Production and Environmental Impacts.” Science of the Total Environment 408.2 (2009): 183-91.
Rogers, Heather. Gone Tomorrow: The Hidden Life of Garbage. New York: New Press, 2005.
Sacks, Harvey. Lectures on Conversation. Vols. I and II. Ed. Gail Jefferson. Malden: Blackwell, 1995.
Schiller, Herbert I. Information and the Crisis Economy. Norwood: Ablex Publishing, 1984.
Schor, Juliet B. Plenitude: The New Economics of True Wealth. New York: Penguin, 2010.
Science and Technology Council of the American Academy of Motion Picture Arts and Sciences. The Digital Dilemma: Strategic Issues in Archiving and Accessing Digital Motion Picture Materials. Los Angeles: Academy Imprints, 2007.
Weber, Max. “Remarks on Technology and Culture.” Trans. Beatrix Zumsteg and Thomas M. Kemple. Ed. Thomas M. Kemple. Theory, Culture & Society.

Notes

[i] The global recession that began in 2007 has been the main reason for some declines in Global North energy consumption, slower turnover in gadget upgrades, and longer periods of consumer maintenance of electronic goods (Richtel).
[ii] The emergence of the technological sublime has been attributed to the Western triumphs in the post-Second World War period, when technological power supposedly supplanted the power of nature to inspire fear and astonishment (Nye Technology Matters 28). Historian Mario Biagioli explains how the sublime permeates everyday life through technoscience: "If around 1950 the popular imaginary placed science close to the military and away from the home, today’s technoscience frames our everyday life at all levels, down to our notion of the self" (818).
[iii] This compulsory repetition is seemingly undertaken each time as a novelty, governed by what German cultural critic Walter Benjamin called, in his awkward but occasionally illuminating prose, "the ever-always-the-same" of "mass-production" cloaked in "a hitherto unheard-of significance" (48).
45

Inglis, David. "On Oenological Authenticity: Making Wine Real and Making Real Wine." M/C Journal 18, no. 1 (January 20, 2015). http://dx.doi.org/10.5204/mcj.948.

Full text
Abstract:
IntroductionIn the wine world, authenticity is not just desired, it is actively required. That demand comes from a complex of producers, distributors and consumers, and other interested parties. Consequently, the authenticity of wine is constantly created, reworked, presented, performed, argued over, contested and appreciated.At one level, such processes have clear economic elements. A wine deemed to be an authentic “expression” of something—the soil and micro-climate in which it was grown, the environment and culture of the region from which it hails, the genius of the wine-maker who nurtured and brought it into being, the quintessential characteristics of the grape variety it is made from—will likely make much more money than one deemed inauthentic. In wine, as in other spheres, perceived authenticity is a means to garner profits, both economic and symbolic (Beverland).At another level, wine animates a complicated intertwining of human tastes, aesthetics, pleasures and identities. Discussions as to the authenticity, or otherwise, of a wine often involve a search by the discussants for meaning and purpose in their lives (Grahm). To discover and appreciate a wine felt to “speak” profoundly of the place from whence it came possibly involves a sense of superiority over others: I drink “real” wine, while you drink mass-market trash (Bourdieu). It can also create reassuring senses of ontological security: in discovering an authentic wine, expressive of a certain aesthetic and locational purity (Zolberg and Cherbo), I have found a cherishable object which can be reliably traced to one particular place on Earth, therefore possessing integrity, honesty and virtue (Fine). Appreciation of wine’s authenticity licenses the self-perception that I am sophisticated and sensitive (Vannini and Williams). 
My judgement of the wine is also a judgement upon my own aesthetic capacities (Hennion). In wine drinking, and the production, distribution and marketing processes underpinning it, much is at stake as regards authenticity. The social system of the wine world requires the category of authenticity in order to keep operating. This paper examines how and why this has come to be so. It considers the crafting of authenticity in long-term historical perspective. Demand for authentic wine by drinkers goes back many centuries. Self-conscious performance of authenticity by producers is of more recent provenance, and was elaborated above all in France. French innovations then spread to other parts of Europe and the world. The paper reviews these developments, showing that wine authenticity is constituted by an elaborate complex of environmental, cultural, legal, political and commercial factors. The paper both draws upon the social science literature concerning the construction of authenticity and also points out its limitations as regards understanding wine authenticity.

The History of Authenticity

It is conventional in the social science literature (Peterson, Authenticity) to claim that authenticity as a folk category (Lu and Fine), and actors’ desires for authentic things, are wholly “modern,” being unknown in pre-modern contexts (Cohen). Consideration of wine shows that such a view is historically uninformed. Demands by consumers for “authentic” wine, in the sense that it really came from the location it was sold as being from, can be found in the West well before the 19th century, having ancient roots (Wengrow). In ancient Rome, there was demand by elites for wine that was both really from the location it was billed as being from, and was verifiably of a certain vintage (Robertson and Inglis).
More recently, demand has existed in Western Europe for “real” Tokaji (sweet wine from Hungary), Port and Bordeaux wines since at least the 17th century (Marks). Conventional social science (Peterson, Authenticity) is on more solid ground when demonstrating how a great deal of social energy goes into constructing people’s perceptions—not just those of consumers, but of wine producers and sellers too—that particular wines are somehow authentic expressions of the places where they were made. The creation of perceived authenticity by producers and sales-people has a long historical pedigree, beginning in early modernity. For example, in the 17th and 18th centuries, wine-makers in Bordeaux could not compete on price grounds with burgeoning Spanish, Portuguese and Italian production areas, so they began to compete with them on the grounds of perceived quality. Multiple small plots were reorganised into much bigger vineyards. The latter were now associated with a chateau in the neighbourhood, giving the wines connotations of aristocratic gravity and dignity (Ulin). Product-makers in other fields have used the assertion of long-standing family lineages as apparent guarantors of tradition and quality in production (Peterson, Authenticity). The early modern Bordelais did the same, augmenting their wines’ value by calling upon aristocratic accoutrements like chateaux, coats-of-arms, alleged long-term family ownership of vineyards, and suchlike. Such early modern entrepreneurial efforts remain the foundations of the very high prestige and prices associated with elite wine-making in the region today, with Chinese companies and consumers particularly keen on the grand crus of the region.
Globalization of the wine world today is strongly rooted in forms of authenticity performance invented several hundred years ago.

Enter the State

Another notable issue is the long-term role that governments and legislation have played, both in the construction and presentation of authenticity to publics, and in attempts to guarantee—through regulative measures and taxation systems—that what is sold really has come from where it purports to be from. The west European State has a long history of being concerned with the fraudulent selling of “fake” wines (Anderson, Norman, and Wittwer). Thus Cosimo III, Medici Grand Duke of Florence, was responsible for an edict of 1716 which drew up legal boundaries for Tuscan wine-producing regions, restricting the use of regional names like Chianti to wine that actually came from there (Duguid). These 18th-century Tuscan regulations are the distant ancestors of quality-control rules centred upon the need to guarantee the authenticity of wines from particular geographical regions and sub-regions, which are today ubiquitous, especially in the European Union (DeSoucey). But more direct progenitors of today’s Geographical Indicators (GIs)—enforced by the GATT international treaties—and Protected Designations of Origin (PDOs)—promulgated and monitored by the EU—are French in origin (Barham). The famous 1855 quality-level classification of Bordeaux vineyards and their wines was the first attempt in the world explicitly to proclaim that the quality of a wine was a direct consequence of its defined place of origin. This move significantly helped to create the later highly influential notion that place of origin is the essence of a wine’s authenticity.
This innovation was initially wholly commercial, rather than governmental, being carried out by wine-brokers to promote Bordeaux wines at the Paris Exposition Universelle, but was later elaborated by State officials. In Champagne, another luxury wine-producing area, small-scale growers of grapes worried that national and international perceptions of their wine were becoming wholly determined by big brands such as Dom Perignon, which advertised the wine as a luxury product but made no reference to the grapes, the soil, or the (supposedly) traditional methods of production used by growers (Guy). The latter turned to the idea of “locality,” which implied that the character of the wine was an essential expression of the Champagne region itself—something ignored in brand advertising—and that the soil itself was the marker of locality. The idea of “terroir”—referring to the alleged properties of soil and micro-climate, and their apparent expression in the grapes—was mobilised by one group, smaller growers, against another, the large commercial houses (Guy). The terroir notion was a means of constructing authenticity, and denouncing de-localised, homogenizing inauthenticity, a strategy favouring some types of actors over others. The relatively highly industrialized wine-making process was later represented for public consumption as being consonant with both tradition and nature. The interplay of commerce, government, law, and the presentation of authenticity also appeared in Burgundy. In that region between WWI and WWII, the wine world was transformed by two new factors: the development of tourism and the rise of an ideology of “regionalism” (Laferté). The latter was invented circa WWI by metropolitan intellectuals who believed that each of the French regions possessed an intrinsic cultural “soul,” particularly expressed through its characteristic forms of food and drink.
Previously despised peasant cuisine was reconstructed as a culturally worthy and true expression of place. Small-scale artisanal wine production was no longer seen as an embarrassment, producing wines far more “rough” than those of Bordeaux and Champagne. Instead, such production was taken as ground and guarantor of authenticity (Laferté). Location, at regional, village and vineyard level, was taken as the primary quality indicator. For tourists lured to the French regions by the newly-established Guide Michelin, and for influential national and foreign journalists, an array of new promotional devices was created, such as gastronomic festivals and folkloric brotherhoods devoted to celebrations of particular foodstuffs and agricultural events like the wine-harvest (Laferté). The figure of the wine-grower was presented as an exemplary custodian of tradition, relatively free of modern capitalist exchange relations. These are the beginnings of an important facet of later wine companies’ promotional literatures worldwide—the “decoupling” of their supposed commitments to tradition, and their “passion” for wine-making beyond material interests, from everyday contexts of industrial production and profit-motives (Beverland). Yet the work of making the wine-maker and their wines authentically “of the soil” was originally stimulated in response to international wine markets and the tourist industry (Laferté). Against this background, in 1935 the French government enacted legislation which created the Institut National des Appellations d’Origine (INAO) and its Appellation d’Origine Contrôlée (AOC) system (Barham). Its goal was, and is, to protect what it defines as terroir, encompassing both natural and human elements. This legislation went well beyond previous laws, as it did more than indicate that wine must be honestly labelled as deriving from a given place of origin, for it included guarantees of authenticity too.
An authentic wine was defined as one which truly “expresses” the terroir from which it comes, where terroir means both soil and micro-climate (nature) and wine-making techniques “traditionally” associated with that area. Thus French law came to enshrine a relatively recently invented cultural assumption: that places create distinctive tastes, the value of this state of affairs requiring strong State protection. Terroir must be protected from the untrammelled free market. Land and wine, symbiotically connected, are de-commodified (Kopytoff). Wine is embedded in land; land is embedded in what is regarded as regional culture; the latter is embedded in national history (Polanyi). But in line with the fact that the cultural underpinnings of the INAO/AOC system were strongly commercially oriented, at a more subterranean level the de-commodified product also has economic value added to it. A wine worthy of AOC protection must, it is assumed, be special relative to wines un-deserving of that classification. The wine is taken out of the market, attributed special status, and released, economically enhanced, back onto the market. Consequently, State-guaranteed forms of authenticity embody ambivalent but ultimately efficacious economic processes. Wine pioneered this Janus-faced situation, the AOC system in the 1990s being generalized to all types of agricultural product in France. A huge bureaucratic apparatus underpins and makes possible the AOC system. For a region and product to gain AOC protection, much energy is expended by collectives of producers and other interested parties like regional development and tourism officials. The French State employs a wide range of experts—oenological, anthropological, climatological, etc.—who police the AOC classificatory mechanisms (Barham).

Terroirisation Processes

French forms of legal classification, and the broader cultural classifications which underpin and generated them, very much influenced the EU’s PDO system.
The latter uses a language of authenticity rooted in place first developed in France (DeSoucey). The French model has been generalized, both from wine to other foodstuffs, and around many parts of Europe and the world. An Old World idea has spread to the New World—paradoxically so, because it was the perceived threat posed by the “placeless” wines and decontextualized grapes of the New World which stimulated many of the European legislative measures to protect terroir (Marks). Paxson shows how artisanal cheese-makers in the US appropriate the idea of terroir to represent places of production, and by extension the cheeses made there, that have no prior history of being constructed as terroir areas. Here terroir is invented at the same time as it is naturalised, made to seem as if it simply points to how physical place is directly expressed in a manufactured product. By defining wine or cheese as a natural product, claims to authenticity are themselves naturalised (Ulin). Successful terroirisation brings commercial benefits for those who engage in it, creating brand distinctiveness (no one else can claim their product expresses that particular location), a value-enhancing aura around the product, and promotion of food tourism (Murray and Overton). Terroirisation can also render producers into virtuous custodians of the land who are opposed to the depredations of the industrial food and agriculture systems, the categories associated with terroir classifying the world through a binary opposition: traditional, small-scale production on the virtuous side, and large-scale, “modern” harvesting methods on the other. Such a situation has prompted large-scale, industrial wine-makers to adopt marketing imagery that implies the “place-based” nature of their offerings, even when the grapes can come from radically different areas within a region or from other regions (Smith Maguire).
Like smaller producers, large companies also decouple the advertised imagery of terroir from the mundane realities of industry and profit-margins (Beverland). The global transportability of the terroir concept—ironic, given the rhetorical stress on the uniqueness of place—depends on its flexibility and ambiguity. In the French context before WWII, the phrase referred specifically to the soil and micro-climate of vineyards. Slowly it started to mean a markedly wider symbolic complex involving persons and personalities, techniques and knowhow, traditions, community, and expressions of local and regional heritage (Smith Maguire). Over the course of the 20th century, terroir became an ever broader concept “encompassing the physical characteristics of the land (its soil, climate, topography) and its human dimensions (culture, history, technology)” (Overton 753). It is thought to be both natural and cultural, both physical and human, the potentially contradictory ramifications of such understanding necessitating subtle distinctions to ward off confusion or paradox. Thus human intervention on the land and the vines is often represented as simply “letting the grapes speak for themselves” and “allowing the land to express itself,” as if the wine-maker were midwife rather than fabricator. Terroir talk operates with an awkward verbal balancing act: wine-makers’ “signature” styles are expressions of their cultural authenticity (e.g. using what are claimed as “traditional” methods), yet their stylistic capacities do not interfere with the soil and micro-climate’s natural tendencies (i.e. the terroir’s physical authenticity). The wine-making process is a case par excellence of a network of humans and objects, or human and non-human actants (Latour). The concept of terroir today acknowledges that fact, but occludes it at the same time.
It glosses over the highly problematic nature of what is “real,” “true,” “natural.” The roles of human agents and technologies are sequestered, ignoring the inevitably changing nature of knowledges and technologies over time, recognition of which jeopardises claims about an unchanging physical, social and technical order. Harvesting by machine is representationally disavowed, yet often pragmatically embraced. The role of “foreign” experts acting as advisors—so-called “flying wine-makers,” often from New World production cultures—has to be treated gingerly or covered up. Because of the effects of climate change on micro-climates and growing conditions, the taste of wines from a particular terroir changes over time, but the terroir imaginary cannot recognise that, being based on projections of timelessness (Brabazon). The authenticity referred to, and constructed, by terroir imagery must constantly be performed to diverse audiences, convincing them that time stands still in the terroir. If consumers are to continue perceiving authenticity in a wine or winery, then a wide range of cultural intermediaries—critics, journalists and other self-proclaimed experts—must continue telling convincing stories about provenance. Effective authenticity story-telling rests on the perceived sincerity and knowledgeability of the teller. Such tales stress romantic imagery and colourful, highly personalised accounts of the quirks of particular wine-makers, omitting mundane details of production and commercial activities (Smith Maguire). Such intermediaries must seek to interest their audience in undiscovered regions and “quirky” styles, demonstrating their insider knowledge. But once such regions and styles start to become more well-known, their rarity value is lost, and intermediaries must find ever newer forms of authenticity, which in turn will lose their burnished aura when they become objects of mundane consumption.
An endless cycle of discovering and undermining authenticity is constantly enacted.

Conclusion

Authenticity is a category held by different sorts of actors in the wine world, and is the means by which that world is held together. This situation has developed over a long time-frame and is now globalized. Yet I will end this paper with a volte-face. Authenticity in the wine world can never be regarded as wholly and simply a social construction. One cannot directly import into the analysis of that world assumptions—about the wholly socially constructed nature of phenomena—which social scientific studies of other domains, most notably culture industries, work with (Peterson, Authenticity). Ways of thinking which are indeed useful for understanding the construction of authenticity in some specific contexts cannot just be applied in simplistic manners to the wine world. When they are applied in direct and unsophisticated ways, such an operation misses the specificities and particularities of wine-making processes. These are always simultaneously “social” and “natural,” involving multiple forms of complex intertwining of human actions, environmental and climatological conditions, and the characteristics of the vines themselves—a situation markedly beyond any straightforward notion of “social construction.” The wine world has many socially constructed objects. But wine is not just like any other product. Its authenticity cannot be fabricated in the manner of, say, country music (Peterson, Country). Wine is never in itself only a social construction, nor is its authenticity, because the taste, texture and chemical elements of wine derive from complex human interactions with the physical environment. Wine is partly about packaging, branding and advertising—phenomena standard social science accounts of authenticity focus on—but its organic properties are irreducible to those factors.
Terroir is an invention, a label put onto certain things, meaning they are perceived to be authentic. But the things that label refers to—ranging from the slope of a vineyard and the play of sunshine on it, to how grapes grow and when they are picked—are entwined with human semiotics but not completely created by them. A truly comprehensive account of wine authenticity remains to be written.

References

Anderson, Kym, David Norman, and Glyn Wittwer. “Globalization and the World’s Wine Markets: Overview.” Discussion Paper No. 0143, Centre for International Economic Studies. Adelaide: U of Adelaide, 2001.
Barham, Elizabeth. “Translating Terroir: The Global Challenge of French AOC Labelling.” Journal of Rural Studies 19 (2003): 127–38.
Beverland, Michael B. “Crafting Brand Authenticity: The Case of Luxury Wines.” Journal of Management Studies 42.5 (2005): 1003–29.
Bourdieu, Pierre. Distinction: A Social Critique of the Judgement of Taste. London: Routledge, 1992.
Brabazon, Tara. “Colonial Control or Terroir Tourism? The Case of Houghton’s White Burgundy.” Human Geographies 8.2 (2014): 17–33.
Cohen, Erik. “Authenticity and Commoditization in Tourism.” Annals of Tourism Research 15.3 (1988): 371–86.
DeSoucey, Michaela. “Gastronationalism: Food Traditions and Authenticity Politics in the European Union.” American Sociological Review 75.3 (2010): 432–55.
Duguid, Paul. “Developing the Brand: The Case of Alcohol, 1800–1880.” Enterprise and Society 4.3 (2003): 405–41.
Fine, Gary A. “Crafting Authenticity: The Validation of Identity in Self-Taught Art.” Theory and Society 32.2 (2003): 153–80.
Grahm, Randall. “The Soul of Wine: Digging for Meaning.” Wine and Philosophy: A Symposium on Thinking and Drinking. Ed. Fritz Allhoff. Oxford: Blackwell, 2008. 219–24.
Guy, Kolleen M. When Champagne Became French: Wine and the Making of a National Identity. Baltimore: Johns Hopkins UP, 2003.
Hennion, Antoine. “The Things That Bind Us Together.” Cultural Sociology 1.1 (2007): 65–85.
Kopytoff, Igor. “The Cultural Biography of Things: Commoditization as a Process.” The Social Life of Things: Commodities in Cultural Perspective. Ed. Arjun Appadurai. Cambridge: Cambridge UP, 1986. 64–91.
Laferté, Gilles. “End or Invention of Terroirs? Regionalism in the Marketing of French Luxury Goods: The Example of Burgundy Wines in the Inter-War Years.” Working Paper, Centre d’Economie et Sociologie Appliquées à l’Agriculture et aux Espaces Ruraux, Dijon.
Latour, Bruno. We Have Never Been Modern. Cambridge, MA: Harvard UP, 1993.
Lu, Shun, and Gary A. Fine. “The Presentation of Ethnic Authenticity: Chinese Food as a Social Accomplishment.” The Sociological Quarterly 36.3 (1995): 535–53.
Marks, Denton. “Competitiveness and the Market for Central and Eastern European Wines: A Cultural Good in the Global Wine Market.” Journal of Wine Research 22.3 (2011): 245–63.
Murray, Warwick E., and John Overton. “Defining Regions: The Making of Places in the New Zealand Wine Industry.” Australian Geographer 42.4 (2011): 419–33.
Overton, John. “The Consumption of Space: Land, Capital and Place in the New Zealand Wine Industry.” Geoforum 41.5 (2010): 752–62.
Paxson, Heather. “Locating Value in Artisan Cheese: Reverse Engineering Terroir for New-World Landscapes.” American Anthropologist 112.3 (2010): 444–57.
Peterson, Richard A. Creating Country Music: Fabricating Authenticity. Chicago: U of Chicago P, 2000.
———. “In Search of Authenticity.” Journal of Management Studies 42.5 (2005): 1083–98.
Polanyi, Karl. The Great Transformation. Boston: Beacon Press, 1957.
Robertson, Roland, and David Inglis. “The Global Animus: In the Tracks of World Consciousness.” Globalizations 1.1 (2006): 72–92.
Smith Maguire, Jennifer. “Provenance and the Liminality of Production and Consumption: The Case of Wine Promoters.” Marketing Theory 10.3 (2010): 269–82.
Trubek, Amy. The Taste of Place: A Cultural Journey into Terroir. Los Angeles: U of California P, 2008.
Ulin, Robert C. “Invention and Representation as Cultural Capital.” American Anthropologist 97.3 (1995): 519–27.
Vannini, Phillip, and Patrick J. Williams. Authenticity in Culture, Self and Society. Farnham: Ashgate, 2009.
Wengrow, David. “Prehistories of Commodity Branding.” Current Anthropology 49.1 (2008): 7–34.
Zolberg, Vera, and Joni Maya Cherbo. Outsider Art: Contesting Boundaries in Contemporary Culture. Cambridge: Cambridge UP, 1997.
