Journal articles on the topic 'International market scanning'




Consult the top 50 journal articles for your research on the topic 'International market scanning.'




1

Diaz Ruiz, Carlos A., Jonathan J. Baker, Katy Mason, and Kieran Tierney. "Market-scanning and market-shaping: why are firms blindsided by market-shaping acts?" Journal of Business & Industrial Marketing 35, no. 9 (June 15, 2020): 1389–401. http://dx.doi.org/10.1108/jbim-03-2019-0130.

Abstract:
Purpose This paper aims to investigate two seminal market-scanning frameworks – the five-forces analysis and PESTEL environmental scanning tool – to assess their readiness for anticipating market-shaping acts. Design/methodology/approach Drawing on the market-shaping literature that conceptualizes markets as complex adaptive systems, this conceptual paper interrogates the underlying assumptions and “blind spots” in two seminal market-scanning frameworks. The paper showcases three illustrative vignettes in which non-industry actors catalyzed market change in ways that these market-scanning frameworks would not be able to anticipate. Findings Marketing strategists can be “blindsided” as seminal market-scanning frameworks have either too narrow an interpretation of market change or are too broad to anticipate specific types of market-shaping acts. The assumptions about markets that underpin these market-scanning frameworks contribute to incumbents being slow to realize market-shaping acts are taking place. Research limitations/implications The authors extend market-scanning to include a type of managerial myopia that fails to register the socially embedded, systemic nature of complex contemporary markets. Furthermore, the paper provides an “actors-agendas-outcomes” scanning framework that offers awareness of market-shaping acts. Originality/value This paper is the first to consider market-scanning frameworks from a market-shaping perspective.
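The paper's "actors-agendas-outcomes" scanning framework lends itself to a simple record structure for logging potential market-shaping acts. Below is a minimal sketch of that idea; the field names and example entries are hypothetical illustrations, not the authors' specification:

```python
from dataclasses import dataclass

@dataclass
class MarketShapingSignal:
    """One observed market-shaping act, logged along three scanning dimensions."""
    actor: str     # who is acting, including non-industry actors (regulators, NGOs, platforms)
    agenda: str    # what market change the actor is pursuing
    outcome: str   # the systemic market change the act could produce

# Hypothetical scanning log illustrating why non-industry actors matter.
signals = [
    MarketShapingSignal("city regulator", "cap short-term rentals", "shrinks addressable market"),
    MarketShapingSignal("standards body", "open charging standard", "commoditizes a proprietary niche"),
]
for s in signals:
    print(f"{s.actor}: {s.agenda} -> {s.outcome}")
```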
2

Meredith, Lindsay. "Scanning for market threats." Journal of Business & Industrial Marketing 22, no. 4 (June 19, 2007): 211–19. http://dx.doi.org/10.1108/08858620710754478.

3

Belich, Thom J., and Alan J. Dubinsky. "The integration of market‐scanning activities: effects of market distance." Journal of Business & Industrial Marketing 13, no. 2 (April 1998): 166–85. http://dx.doi.org/10.1108/08858629810213397.

4

Lin, Hsiu-Fen, and Kai-Lin Chang. "Key success factors of international market development." Maritime Business Review 2, no. 2 (June 15, 2017): 79–98. http://dx.doi.org/10.1108/mabr-09-2016-0025.

Abstract:
Purpose The purpose of this paper is to develop an evaluation model to determine the relative weights of key factors influencing international market development (IMD) success through the analytic network process (ANP) in group decision-making. An empirical case of the Taiwan bulk shipping industry is used to illustrate the feasibility of the proposed approach. Design/methodology/approach A literature review is performed to generate 20 key success factors (KSFs) in IMD, grouped into four factor categories (organizational capability, environmental scanning, international strategy and internationalization behavior). Then, ANP is applied to develop an evaluation model that prioritizes the relative importance of the four factor categories and the 20 evaluated KSFs. Findings With respect to the final weights for factor categories, “international strategy” and “environmental scanning” are the two most important criteria, followed by “organizational capability” and “internationalization behavior”. The results also showed that, by the global weights of the 20 KSFs of IMD, “service as competitive advantage”, “market potential” and “risk taking” have the highest rankings. Practical implications The findings indicate that firm expansion into international markets typically depends on a successful international strategy. Hence, to enhance their global market competitiveness, Taiwan bulk shipping firms should focus their efforts on planning international market entry strategy and prioritizing shipping services for high-potential target markets. Originality/value Theoretically, the study results provide both a theoretical basis and empirical evidence indicating the relative weights and priorities of KSFs of IMD for the Taiwan bulk shipping industry. From the managerial perspective, the analytical results can help managers focus on the main factors and identify the best policy to improve their IMD practice and performance.
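As a rough illustration of how ANP-style priority weights emerge from pairwise comparisons, the sketch below derives weights for the four factor categories from a single hypothetical reciprocal judgment matrix. A full ANP additionally models interdependencies via a supermatrix; the judgment values here are invented, not the paper's data:

```python
import numpy as np

# Hypothetical pairwise comparisons (Saaty 1-9 scale) among the four factor categories.
# Order: international strategy, environmental scanning, organizational capability,
# internationalization behavior. A[i, j] = importance of category i relative to j.
A = np.array([
    [1.0, 2.0, 3.0, 4.0],
    [1/2, 1.0, 2.0, 3.0],
    [1/3, 1/2, 1.0, 2.0],
    [1/4, 1/3, 1/2, 1.0],
])

# Priority weights = normalized principal eigenvector of the judgment matrix.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()

labels = ["international strategy", "environmental scanning",
          "organizational capability", "internationalization behavior"]
for name, w in sorted(zip(labels, weights), key=lambda t: -t[1]):
    print(f"{name}: {w:.3f}")
```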
5

Vos, Bart, and Edwin van den Berg. "Assessing International Allocation Strategies." International Journal of Logistics Management 7, no. 2 (July 1, 1996): 69–84. http://dx.doi.org/10.1108/09574099610805539.

Abstract:
Allocating the operations of multinational enterprises to geographic locations where performance can be optimized has become an important strategic issue. In view of the continuing growth of international trade and foreign direct investment, managers need systematic procedures to determine global allocation strategies. Available frameworks on global business strategy are typically abstract and generalized, making them less suited for the development of tailor‐made allocation strategies. Quantitative allocation models in operations research tend to be biased towards optimizing mathematical algorithms, making them less suited to support managerial decision making. This paper bridges the gap between generic strategy frameworks and highly quantitative operations research models by presenting a scanning tool to support decision making on strategic allocation issues. An important feature of this tool is that it systematically filters available data to qualify and quantify critical product, process and market characteristics for specific product classes. The scanning tool has been applied in two cases involving allocation decisions of a European multinational in the fast-moving consumer goods industry.
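The filtering idea, scoring candidate locations on quantified product, process and market characteristics, can be sketched as a simple weighted-scoring pass. All criteria, weights and scores below are hypothetical placeholders, not the authors' tool:

```python
# Hypothetical weighted scoring of candidate allocation locations on
# product, process and market criteria (scores on a 1-10 scale).
weights = {"labor_cost": 0.3, "market_proximity": 0.4, "process_capability": 0.3}

candidates = {
    "Location A": {"labor_cost": 8, "market_proximity": 5, "process_capability": 7},
    "Location B": {"labor_cost": 5, "market_proximity": 9, "process_capability": 6},
}

def weighted_score(scores: dict) -> float:
    """Aggregate a location's criterion scores into one weighted score."""
    return sum(weights[c] * scores[c] for c in weights)

for name, scores in sorted(candidates.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
```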
6

JENSSEN, JAN INGE, and ERLEND NYBAKK. "INTER-ORGANIZATIONAL INNOVATION PROMOTERS IN SMALL, KNOWLEDGE-INTENSIVE FIRMS." International Journal of Innovation Management 13, no. 03 (September 2009): 441–66. http://dx.doi.org/10.1142/s1363919609002376.

Abstract:
This paper examines the relationship between external relations and innovation in small, knowledge-intensive Norwegian firms. Our findings indicate that external relations are beneficial for innovation. The analysis shows that it is necessary to treat innovation as more than one concept: our independent variables related differently to product innovation, process innovation, and market innovation. We found that market participation in product development has a positive impact on product, process and market innovation. We also found that top management interaction with other firms had a positive effect on market innovation and that top management interaction with external R&D had a positive effect on product innovation. This finding probably indicates that access to R&D resources is vital for product development in the context of knowledge-intensive products. The results also show that participation in conferences and courses positively influences process and market innovation and that systematic environmental scanning positively influences product innovation.
7

Asikhia, Olalekan, and Vannie Naidoo. "Assessment of the moderating effects of Nigerian market environment on the relationship between management success determinants and SMEs’ performance." Problems and Perspectives in Management 18, no. 4 (December 18, 2020): 388–401. http://dx.doi.org/10.21511/ppm.18(4).2020.31.

Abstract:
A reported eighty-five percent failure rate of SMEs in Nigeria within five years of operation has been ascribed to a lack of knowledge of the market environment. Hence, this study investigated the moderating effects of the Nigerian market environment on the relationship between management success determinants and SMEs’ performance to see how the environment has affected SMEs’ performance. The study employed a survey research design, the population of the study comprised chief executive officers (CEOs) of registered SMEs, and a sample size of 1,102 was used. Probability sampling methods of stratified, proportionate, and random sampling were adopted. Responses were collected through a predetermined set of questions in a self-administered questionnaire. Data were analyzed using descriptive and inferential statistics. The study found that the Nigerian market environment had moderating effects on the relationship between management success determinants and SMEs’ performance (R = 0.817, adjusted R2 = 0.664, R2 change = 0.041, and F-change = 19.694 at p = 0.000). Most of the Nigerian market environment’s components have significant moderating effects on the relationships between the management success determinants and SMEs’ performance: management skills (β = 0.220, 0.182; p < 0.05), innovation (β = 0.147, 0.135; p < 0.05), operating system (β = 0.083, 0.061; p < 0.05), organizational structure (β = 0.290, 0.303; p < 0.05), business reporting system (β = 0.142, 0.137; p < 0.05), system flexibility (β = 0.110, 0.107; p < 0.05), and environmental scanning (β = 0.091, 0.062; p < 0.05). Only decision-making is not statistically significant (β = 0.037, 0.004; p > 0.05). These results imply that Nigerian SMEs’ decisions under intense environmental turbulence are mostly ineffective, and that the effects of management success determinants in facilitating performance, as well as firms’ system flexibility, are drastically reduced. The study has the practical value of identifying the effect of the Nigerian market environment on the relationship between management success determinants and SMEs’ performance, thus revealing gaps in Nigerian SMEs’ management factors. Acknowledgment(s): To the Small and Medium Enterprises Development Agency of Nigeria and the Small Scale Enterprises Association of Nigeria for their support in ensuring participation of their members.
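The reported R2 change and F-change come from standard hierarchical moderated regression: fit a main-effects model, add predictor-by-moderator interaction terms, and test whether R2 rises significantly. A minimal sketch of that procedure on simulated data (not the study's dataset; variable names are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)          # a management success determinant (e.g., management skills)
m = rng.normal(size=n)          # moderator: market environment
y = 0.5 * x + 0.3 * m + 0.2 * x * m + rng.normal(size=n)  # outcome: SME performance

def fit_r2(columns, y):
    """OLS fit with intercept; returns (R-squared, number of parameters)."""
    X = np.column_stack([np.ones(len(y)), *columns])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean())), X.shape[1]

r2_base, p_base = fit_r2([x, m], y)            # main effects only
r2_full, p_full = fit_r2([x, m, x * m], y)     # plus interaction (moderation) term

# F-change for the added interaction block (df1 = added terms, df2 = n - p_full).
df1, df2 = p_full - p_base, n - p_full
f_change = ((r2_full - r2_base) / df1) / ((1 - r2_full) / df2)
p_value = stats.f.sf(f_change, df1, df2)
print(f"R2 change = {r2_full - r2_base:.3f}, F-change = {f_change:.2f}, p = {p_value:.4f}")
```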
8

Frigan, Chevalier, Zhang, and Spies. "Is a Zirconia Dental Implant Safe When It Is Available on the Market?" Ceramics 2, no. 4 (October 12, 2019): 568–77. http://dx.doi.org/10.3390/ceramics2040044.

Abstract:
The market share of zirconia (ZrO2) dental implants is steadily increasing. This material comprises a polymorphous character with three temperature-dependent crystalline structures, namely monoclinic (m), tetragonal (t) and cubic (c) phases. Special attention is given to the tetragonal phase when maintained in a metastable state at room temperature. Metastable tetragonal grains allow for the beneficial phenomenon of Phase Transformation Toughening (PTT), resulting in a high fracture resistance, but may lead to an undesired surface transformation to the monoclinic phase in a humid environment (low-temperature degradation, LTD, often referred to as ‘ageing’). Today, the clinical safety of zirconia dental implants by means of long-term stability is being addressed by two international ISO standards. These standards impose different experimental setups concerning the dynamic fatigue resistance of the final product (ISO 14801) or the ageing behavior of a standardized sample (ISO 13356) separately. However, when evaluating zirconia dental implants pre-clinically, oral environmental conditions should be simulated to the extent possible by combining a hydrothermal treatment and dynamic fatigue. For failure analysis, phase transformation might be quantified by non-destructive techniques, such as X-Ray Diffraction (XRD) or Raman spectroscopy, whereas Scanning Electron Microscopy (SEM) of cross-sections or Focused Ion Beam (FIB) sections might be used for visualization of the monoclinic layer growth in depth. Finally, a minimum load should be defined for static loading to fracture. The purpose of this communication is to contribute to the current discussion on how to optimize the aforementioned standards in order to guarantee clinical safety for the patients.
9

Hill, Simon L., Michael Dunn, Céline Cano, Suzannah J. Harnor, Ian R. Hardcastle, Johann Grundlingh, Paul I. Dargan, et al. "Human Toxicity Caused by Indole and Indazole Carboxylate Synthetic Cannabinoid Receptor Agonists: From Horizon Scanning to Notification." Clinical Chemistry 64, no. 2 (February 1, 2018): 346–54. http://dx.doi.org/10.1373/clinchem.2017.275867.

Abstract:
BACKGROUND The emergence of novel psychoactive substances (NPS), particularly synthetic cannabinoid receptor agonists (SCRA), has involved hundreds of potentially harmful chemicals in a highly dynamic international market, challenging users', clinicians', and regulators' understanding of what circulating substances are causing harm. We describe a toxicovigilance system for NPS that predicted the UK emergence and identified the clinical toxicity caused by novel indole and indazole carboxylate SCRA. METHODS To assist early accurate identification, we synthesized 5 examples of commercially unavailable indole and indazole carboxylate SCRA (FUB-NPB-22, 5F-NPB-22, 5F-SDB-005, FUB-PB-22, NM-2201). We analyzed plasma and urine samples from 160 patients presenting to emergency departments with severe toxicity after suspected NPS use during 2015 to 2016 for these and other NPS using data-independent LC-MS/MS. RESULTS We successfully synthesized 5 carboxylate SCRAs using established synthetic and analytical chemistry methodologies. We identified at least 1 SCRA in samples from 49 patients, including an indole or indazole carboxylate SCRA in 17 (35%), specifically 5F-PB-22 (14%), FUB-PB-22 (6%), BB-22 (2%), 5F-NPB-22 (20%), FUB-NPB-22 (2%), and 5F-SDB-005 (4%). In these 17 patients, there was analytical evidence of other substances in 16. Clinical features included agitation and aggression (82%), reduced consciousness (76%), acidosis (47%), hallucinations and paranoid features (41%), tachycardia (35%), hypertension (29%), raised creatine kinase (24%), and seizures (12%). CONCLUSIONS This toxicovigilance system predicted the emergence of misuse of indole and indazole carboxylate SCRA, documented associated clinical harms, and notified relevant agencies. Toxicity appears consistent with other SCRA, including mental state disturbances and reduced consciousness.
10

Vishnevskiy, Konstantin, and Andrei Yaroslavtsev. "Russian S&T Foresight 2030: case of nanotechnologies and new materials." foresight 19, no. 2 (April 10, 2017): 198–217. http://dx.doi.org/10.1108/fs-08-2016-0041.

Abstract:
Purpose The purpose of this paper is to apply Foresight methodology to the area of nanotechnologies and new materials within the framework of Russian S&T Foresight 2030, aimed at revealing major trends and the most promising products and technologies. Design/methodology/approach To achieve this goal, best international practice was analyzed, providing a solid basis for Russian S&T Foresight 2030 (section “Nanotechnology and new materials”). The study used a wide range of advanced Foresight methods adapted to Russian circumstances. During the Foresight study, the authors integrated “market pull” and research “technology push” approaches, including both traditional methods (priority-setting, roadmaps, global challenges analysis) and relatively new approaches (horizon scanning, weak signals, wild cards, etc.). Findings Using Foresight methods, the authors identified the trends with the greatest impact on the sphere of nanotechnology and new materials, along with promising markets, product groups and potential areas of demand for Russian innovation technologies and developments in this field. The authors assessed the state of the art of domestic research in the area of nanotechnologies and new materials to identify “white spots”, as well as zones of parity and leadership, which can be the basis for integration into international alliances and positioning of Russia as a center of global technological development in this field. Originality/value The results of applying Foresight methodology to identify the most promising S&T areas in the field of nanotechnologies and new materials can be used by a variety of stakeholders, including federal and regional authorities, technology platforms, innovation and industrial clusters, and leading universities and scientific organizations, in formulating their research and strategic agendas. Russian businesses, including both large companies and small and medium-sized enterprises, can use the results of the study in creating their strategic R&D programs and finding appropriate partners.
11

Ogodescu, Alexandru Simion, Alexandru Attila Morvay, Adriana Balan, Laura Gavrila, Ana Petcu, and Carmen Savin. "Comparative Study on the Effect of Three Disinfection Procedure on the Streptococcus pyogenes Biofilm Formed on Plastic Materials Used in Paedodontics and Orthodontics." Materiale Plastice 54, no. 1 (March 30, 2017): 116–18. http://dx.doi.org/10.37358/mp.17.1.4797.

Abstract:
Plastic materials are widely used today in Paedodontics and Orthodontics for manufacturing preventive and therapeutic devices. Since these are worn for long periods in the oral cavity, biofilm forms on the smooth acrylic surfaces of those appliances. The biofilm must be removed so as not to disturb the oral microbiology. The aim of this study was to research the possibility of removing the microbial biofilm and disinfecting retainers using the photodynamic effect of toluidine blue O with the Fotosan System (CMS Dental, Copenhagen, Denmark), in comparison to two products available on the market: Corega Denture Cleanser Tablets (GlaxoSmithKline) and Retainer Brite® Cleaning Tablets (DENTSPLY International Raintree Essix, FL, USA). The plastic material used in this experiment was the cold-cure acrylic Palapress® vario (Heraeus-Kulzer GmbH, Hanau, Germany). Images of the biofilm formed by Streptococcus pyogenes were obtained using a confocal laser scanning microscope. The images were analyzed using Comstat 2 software. The results showed that all three investigated methods had a disinfectant effect. Corega Denture Cleanser Tablets reduced most of the biofilm formed on the plastic substrate.
12

MURIMBIKA, McEDWARD, and BORIS URBAN. "STRATEGIC INNOVATION AT THE FIRM LEVEL: THE IMPACT OF STRATEGIC MANAGEMENT PRACTICES ON ENTREPRENEURIAL ORIENTATION." International Journal of Innovation Management 18, no. 02 (April 2014): 1450016. http://dx.doi.org/10.1142/s1363919614500169.

Abstract:
The study combines the research domains of strategic management and corporate innovation by examining the impact of strategic management practices on entrepreneurial orientation (EO). Recognising the importance of internal business processes that enable firm entrepreneurial behaviour, it is hypothesised that higher levels of EO are positively associated with the strategic management practices of (1) locus of planning, (2) scanning intensity, (3) planning flexibility, (4) planning horizon, and (5) strategy and financial control attributes. Empirical testing takes place in an under-researched emerging market context on a sample of 219 financial and business services firms. The results provide support for the positive impact that the different strategic management practices have on EO. A practical consideration is for managers to leverage the strategic management practices so that the firm's position on the conservative-entrepreneurial continuum is increased by its propensity to be innovative, proactive, and be willing to take risks when confronted by uncertainty.
13

Ruggero, Federica, Riccardo Gori, and Claudio Lubello. "Methodologies to assess biodegradation of bioplastics during aerobic composting and anaerobic digestion: A review." Waste Management & Research 37, no. 10 (June 20, 2019): 959–75. http://dx.doi.org/10.1177/0734242x19854127.

Abstract:
Bioplastics are emerging on the market as sustainable materials which rise to the challenge to improve the lifecycle of plastics from the perspective of the circular economy. The article aims at providing a critical insight of research studies carried out in the last 20 years on the degradation of bioplastics under aerobic composting and anaerobic digestion conditions. It mainly focuses on the various and different methodologies which have been proposed and developed to monitor the process of biodegradation of several bioplastic materials: CO2 and CH4 measurements, mass loss and disintegration degree, spectroscopy, visual analysis and scanning electron microscopy. Moreover, across the wide range of studies, the process conditions of the experimental setup, such as temperature, test duration and waste composition, often vary from author to author and in accordance with the international standard followed for the test. The different approaches, in terms of process conditions and monitoring methodologies, are pointed out in the review and highlighted to find significant correlations between the results obtained and the experimental procedures. These observed correlations allow critical considerations to be reached about the efficiency of the methodologies and the influence of the main abiotic factors on the process of biodegradation of bioplastics.
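Among the monitoring methods reviewed, cumulative CO2 measurement (as in ISO 14855-type respirometric tests) converts evolved CO2 into a biodegradation percentage against the theoretical CO2 of the sample. A minimal sketch of that calculation, with invented measurement values:

```python
# Theoretical CO2 (ThCO2): the sample's carbon fully oxidized to CO2 (44/12 g CO2 per g C).
def theoretical_co2(sample_mass_g: float, carbon_fraction: float) -> float:
    return sample_mass_g * carbon_fraction * 44.0 / 12.0

# Biodegradation (%) = net CO2 evolved by the sample vessel over the blank, relative to ThCO2.
def biodegradation_pct(co2_sample_g: float, co2_blank_g: float, thco2_g: float) -> float:
    return 100.0 * (co2_sample_g - co2_blank_g) / thco2_g

# Hypothetical bioplastic sample: 50 g, 50% carbon; 70 g CO2 from the test vessel, 10 g from the blank.
thco2 = theoretical_co2(50.0, 0.50)   # ~91.7 g CO2
print(f"biodegradation = {biodegradation_pct(70.0, 10.0, thco2):.1f}%")
```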
14

RAMIREZ, MATIAS, and PETER DICKENSON. "GATEKEEPERS, KNOWLEDGE BROKERS AND INTER-FIRM KNOWLEDGE TRANSFER IN BEIJING'S ZHONGGUANCUN SCIENCE PARK." International Journal of Innovation Management 14, no. 01 (February 2010): 93–122. http://dx.doi.org/10.1142/s1363919610002568.

Abstract:
An important part of industrial policy in China has been directed towards improving the degree and effectiveness of knowledge transfer between key firms in China's innovation system. Amongst these policies, the creation of regional science parks that encourage labour mobility and inter-firm collaboration on innovation projects has been central. Learning through inter-firm knowledge transfer focuses attention on at least two key factors: improving absorptive capability (Cohen and Levinthal, 1990), which relies on the development of specialised skills in the firm, and the establishment of inter-organisational networks through which knowledge is transferred. This paper contributes to this analysis through a detailed study of the relationship between learning and knowledge transfer of knowledge workers working on innovation projects in Chinese ICT companies located in Beijing's Zhongguancun (ZGC) high-technology park. A major advantage of analysing knowledge transfer through the activities of R&D employees is that it highlights the process by which specific competencies and network relations are built. A skills profile of R&D employees is developed that, amongst other features, includes three different networks Chinese knowledge workers use to access and share knowledge: formal organisational networks, personal networks and scanning networks. Empirical data based on two unique surveys in China of senior R&D managers and R&D employees were collected and analysed. The analysis suggests that a skills profile combining knowledge within and outside of the company with scanning activity positively impacts both the innovation projects and the labour market position of the knowledge workers. Policy recommendations in terms of training and development in R&D follow.
15

Civi, Emin, Elif S. Persinger, and Aziz Sunje. "Gaining Strength For A New Future: Bosnia And Herzegovinas Export Opportunities." Journal of Diversity Management (JDM) 2, no. 4 (October 1, 2007): 43–60. http://dx.doi.org/10.19030/jdm.v2i4.5022.

Abstract:
International trade is crucial for Bosnia and Herzegovina's (B&H) economic prosperity. In this study, guidance to B&H exporters is provided by identifying potential markets and products to focus on when designing future trade strategies. To this end, trends in world trade and trade patterns are examined using various approaches. The first approach to identifying potential markets for B&H exports called for identifying the countries with the highest general demand for Bosnia and Herzegovina's current export products: the products Bosnia and Herzegovina exports most, along with the countries that demand these products the most in the world, are identified. The second approach examines the import volumes of other countries in the world. A still more fruitful approach for B&H exporters, at least in the short term, is to target the markets with the fastest growth of import volumes (the third approach). In the fourth approach, untapped trade and highly untapped trade countries that should be targeted by B&H exporters are presented. The fifth approach is based on examining the products whose imports increased fastest in recent years and the countries that imported these products most. The products/product groups that have the highest potential for B&H export success are also identified. First, the most imported products, as well as the products/product groups whose exports increased the fastest in recent years, are examined. Second, the import volumes of the ten countries with the highest total imports are examined on a product basis to identify the products they import most as well as the products with the highest growth rate of imports. Third, product categories with untapped trade potential and highly untapped trade potential, along with their respective markets, are presented. Long-term sustainable success in the ever-changing global economy requires close and continuous scanning of trends in the environment. The analysis approaches described above provide B&H exporters a starting point in evaluating their product and market selection strategies and designing new ones for the future.
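Several of these approaches reduce to ranking candidate markets on import volume and import growth for a given product. A toy sketch of that ranking, with invented figures rather than real trade statistics:

```python
# Hypothetical import statistics for one B&H export product (values in USD millions).
markets = {
    "Germany": {"imports": 950.0, "growth_pct": 3.1},
    "Italy":   {"imports": 640.0, "growth_pct": 7.8},
    "Austria": {"imports": 210.0, "growth_pct": 12.4},
}

# Third-approach style: target the fastest-growing import markets first,
# breaking ties by absolute import volume.
ranked = sorted(markets.items(), key=lambda kv: (-kv[1]["growth_pct"], -kv[1]["imports"]))
for name, m in ranked:
    print(f"{name}: {m['imports']:.0f}M imports, {m['growth_pct']:.1f}% growth")
```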
16

Kumar, Jitender, Ashish Gupta, and Sweta Dixit. "Netflix: SVoD entertainment of next gen." Emerald Emerging Markets Case Studies 10, no. 3 (September 15, 2020): 1–36. http://dx.doi.org/10.1108/eemcs-04-2020-0108.

Abstract:
Learning outcomes The case study illustrates strategic, marketing, financial and operational challenges faced by Netflix in India's growing SVoD market. This case is appropriate for courses such as Strategic Management, Business Strategy, Marketing Management and International Marketing for postgraduate MBA students, other graduate-level management programs and undergraduate-level students. The case was developed to raise awareness among students of the complex nature of the technology-driven industry, of what it takes to survive in a highly competitive market, and of setting up a company that serves the huge Indian market. This case delves into the dynamics of marketing in the Indian market, characterized by unorganized players such as local cable television and torrent downloads alongside organized and established players, low digitalization rates, language barriers, low internet penetration, lack of infrastructure and price-sensitive consumers. Owing to upgraded technology, greater internet penetration and an increase in smartphone users, the market has undergone a notable amount of change, with many new entrants, competitors and substitutes. The case presents the various obstacles a multinational company faces while entering a market such as India and shows how it must strategize, mold its marketing mix, encash its strengths, overcome its weaknesses, take maximum advantage of opportunities and modify its strategies to face huge challenges. The specific learning outcome of the case will help students to understand the strategy that multinational companies can adopt to sustain and compete in emerging countries such as India, and within emerging markets such as streaming video on demand (SVoD). This case will help students to understand the importance of internal and external resources, which help multinational companies to build strategies based on these resources. The case study offers learners the opportunity to explore strategy in a dynamic environment. It also highlights the critical issues that multinational companies should address when entering a foreign market, and the importance of analyzing the competitive environment in which they are going to compete and sustain themselves. It can be used to introduce Ansoff's growth matrix, internal and external factor analysis and Porter's five forces in the delivery of the course for both regular and executive programs, and should be offered in the middle periods of the course. Additionally, the case could be used in marketing courses to indicate the importance of scanning the business environment in marketing activities for any organization. The case illustrates the strategies that companies can undertake to expand the market and introduce new products as the business environment requires, and the concerns linked with innovative approaches to help the organization satisfy a larger number of price-sensitive consumers from varied backgrounds. Case overview/synopsis Netflix has been optimistic about the potential growth of the Indian market: it will grow slowly and gradually and become profitable. The SVoD market in India has been price sensitive, and there are no plans for cheaper prices, so Netflix had a long way to go. The pricing model of Netflix was a hurdle in its growth, but the future of Netflix in India was bright. There have been numerous challenges in terms of government regulations, pricing structure and an increase in the number of competitive players in the market.
Netflix believed that Indian audiences enjoyed “Bollywood” film productions but watched low-quality soap opera content on television. Television audiences were a massive untapped market for its brand of original, exclusively produced content. Can Netflix come up with a marketing and growth strategy, or will it lose market share and revenue? Should a new product, such as its competitors' Amazon and Mi fire sticks, be introduced in the existing market? Should Netflix enter the existing market with existing products, or should it seek new markets in India, such as the rural market, the bottom-of-the-pyramid market, and Tier II and Tier III cities? Should it diversify into a new market with new products? How should Netflix plan its market communication if it wants to launch a new product or reposition its existing product? Netflix had to rethink its strategies and address these issues so that it could travel smoothly on Indian roads. A high marketing budget and aggressive promotions helped Netflix India to make a profit in its first year. Complexity academic level Postgraduate MBA students, other graduate-level management programs and undergraduate-level students. Supplementary materials Teaching notes are available for educators only. Subject code CSS 11: Strategy.
17

Píštěk, Václav, Pavel Kučera, Oleksij Fomin, and Alyona Lovska. "Effective Mistuning Identification Method of Integrated Bladed Discs of Marine Engine Turbochargers." Journal of Marine Science and Engineering 8, no. 5 (May 25, 2020): 379. http://dx.doi.org/10.3390/jmse8050379.

Abstract:
Radial turbine and compressor wheels form essential cornerstones of modern internal combustion engines in terms of economy, efficiency and, in particular, environmental compatibility. As a result of the introduction of exhaust gas turbochargers in the extremely important global market for diesel engines, higher engine efficiencies are possible, which in turn reduce fuel consumption. The associated reduced exhaust emissions can answer questions that result from environmentally relevant aspects of engine development. In shipping, the International Maritime Organization (IMO) prescribes the step-by-step reduction of nitrogen oxide and other types of emissions. To reduce these emissions, various systems are being developed, in which turbochargers are an important part. The requirements for the reliability and service life of turbochargers are constantly increasing. Turbocharger blade vibration is one of the most important problems to be solved when designing the rotors. In real rotors, so-called mistuning arises: a slight deviation of the properties of the individual blades from the design parameters. The article deals with an effective method of mistuning identification for integrated bladed discs of marine engine turbochargers. Unlike approaches that use costly scanning laser Doppler vibrometers, this method is based on using only a simple laser vibrometer in combination with a computational model of the integrated bladed disc. The added value of this method is, in particular, a significant reduction in the cost of laboratory equipment and in the time required to obtain the results.
18

Pezzuto, Ivo. "Turning globalization 4.0 into a real and sustainable success for all stakeholders." Journal of Governance and Regulation 8, no. 1 (2019): 8–18. http://dx.doi.org/10.22495/jgr_v8_i1_p1.

Abstract:
The paper aims to provide an overview of the major opportunities and challenges of the fourth phase of globalization in the current macro scenario characterized by a high level of economic and geopolitical complexity and uncertainty. The assumptions and results reported in this work are based mostly on the judgmental opinion of the author and on his critical analysis of macroeconomic data and global trends. The author of the paper is a seasoned chief economic advisor and professor of global economics and disruptive innovation. Forecasting global market trends and future scenarios in a highly unpredictable business environment is always a complex task which cannot be undertaken simply relying on quantitative research techniques based on historical datasets since the past is not always a good predictor of future events. The qualitative approach adopted for this research is based on multiple forms of data sources and the following activities: (1) identification of the key forces and trends in the environment (i.e. environmental scanning); (2) assessing the driving forces and trends by importance and uncertainty; (3) envisioning potential alternative scenarios; and (4) assessing the potential implications of each trend and scenario. The result of this analysis confirms the central role that technological development is likely to have in the near future as a major driver of disruptive change in the economic and social models of many countries and leads to the conclusion that the groundbreaking and disruptive innovations of the future should be perceived as a potential opportunity and not just as a threat by stakeholders in the international community.
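Step (2) of the qualitative approach, ranking driving forces by importance and uncertainty, is often operationalized by scoring each force on both axes and picking the high-importance, high-uncertainty pair as scenario axes. A toy sketch with invented scores (the author's actual assessments are judgmental, not computed):

```python
# Hypothetical driving forces scored 1-10 on (importance, uncertainty).
forces = {
    "technological disruption": (9, 8),
    "trade policy": (7, 9),
    "demographics": (8, 3),   # important but fairly certain -> a predetermined trend
}

# Candidate scenario axes: forces that are both highly important and highly uncertain.
axes = sorted(forces, key=lambda f: -(forces[f][0] * forces[f][1]))[:2]
print("candidate scenario axes:", axes)
```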
19

Würth, Ines, Laura Valldecabres, Elliot Simon, Corinna Möhrlen, Bahri Uzunoğlu, Ciaran Gilbert, Gregor Giebel, David Schlipf, and Anton Kaifel. "Minute-Scale Forecasting of Wind Power—Results from the Collaborative Workshop of IEA Wind Task 32 and 36." Energies 12, no. 4 (February 21, 2019): 712. http://dx.doi.org/10.3390/en12040712.

Abstract:
The demand for minute-scale forecasts of wind power is continuously increasing with the growing penetration of renewable energy into the power grid, as grid operators need to ensure grid stability in the presence of variable power generation. For this reason, IEA Wind Tasks 32 and 36 together organized a workshop on “Very Short-Term Forecasting of Wind Power” in 2018 to discuss different approaches for the implementation of minute-scale forecasts into the power industry. IEA Wind is an international platform for the research community and industry. Task 32 tries to identify and mitigate barriers to the use of lidars in wind energy applications, while IEA Wind Task 36 focuses on improving the value of wind energy forecasts to the wind energy industry. The workshop identified three applications that need minute-scale forecasts: (1) wind turbine and wind farm control, (2) power grid balancing, (3) energy trading and ancillary services. The forecasting horizons for these applications range from around 1 s for turbine control to 60 min for energy market and grid control applications. The methods that can be applied to generate minute-scale forecasts rely on upstream data from remote sensing devices such as scanning lidars or radars, or are based on point measurements from met masts, turbines or profiling remote sensing devices. Upstream data needs to be propagated with advection models and point measurements can either be used in statistical time series models or assimilated into physical models. All methods have advantages but also shortcomings. The workshop’s main conclusions were that there is a need for further investigations into the minute-scale forecasting methods for different use cases, and a cross-disciplinary exchange of different method experts should be established. Additionally, more efforts should be directed towards enhancing quality and reliability of the input measurement data.
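A common reference point for the statistical time-series route mentioned above is the persistence baseline: the forecast for the next minutes simply equals the latest measurement, and any advection- or lidar-based method must beat it to add value. A minimal sketch evaluating persistence on a synthetic wind-power series (all data invented):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 1-min normalized wind power series: slow trend plus turbulent noise.
t = np.arange(600)
power = np.clip(0.5 + 0.2 * np.sin(t / 90) + 0.05 * rng.normal(size=t.size), 0, 1)

horizon = 10                             # forecast 10 minutes ahead
persistence_forecast = power[:-horizon]  # "next value = last observed value"
actual = power[horizon:]

mae = np.mean(np.abs(actual - persistence_forecast))
print(f"persistence MAE at {horizon}-min horizon: {mae:.4f}")
```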
20

Yu, Ji-Min, Seen-Young Kang, Jun-Seok Lee, Ho-Sang Jeong, and Seung-Youl Lee. "Mechanical Properties of Dental Alloys According to Manufacturing Process." Materials 14, no. 12 (June 17, 2021): 3367. http://dx.doi.org/10.3390/ma14123367.

Abstract:
The purpose of this study is to investigate the effect of the fabrication method of dental prostheses on their mechanical properties. Cast specimens were produced using the lost-wax casting method, milled specimens were designed using a CAD/CAM program, and the 3D printing method used the SLS technique to create a three-dimensional structure by sintering metal powder with a laser. The specimens were built at orientations of 0, 30, 60, and 90 degrees. All test specimens complied with the requirements of the international standard ISO 22674 for dental alloys. Tensile tests measured yield strength, modulus of elasticity and elongation by applying a load until fracture of the specimen at a crosshead speed of 1.5 ± 0.5 mm/min (n = 6; modulus of elasticity, n = 3). After the tensile test, the cross-section of the fractured specimen was observed with a scanning electron microscope, and the data were analyzed with the statistical program SPSS (IBM Corp., released 2020; IBM SPSS Statistics for Windows, Version 27.0, Armonk, NY, USA) using ANOVA and multiple-comparison post hoc tests (Scheffé method). The yield strength was highest, at 1042 MPa, for the specimen produced by the 3D printing method at an angle of 0 degrees, and the elongation was highest, at 14%, for the specimen produced by the 3D printing method at an angle of 90 degrees. The modulus of elasticity was highest, at 235 GPa, in the milled specimen. In particular, the 3D printing group showed a difference in yield strength and elongation according to the build direction. The introduction of various advanced technologies and digital equipment is expected to bring high prospects for the growth of the dental market.
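The comparison of build orientations is a one-way ANOVA problem. A minimal sketch with invented yield-strength samples (not the study's measurements); the Scheffé post hoc step is left to a dedicated library:

```python
import numpy as np
from scipy import stats

# Hypothetical yield strengths (MPa) of 3D-printed specimens by build orientation, n = 6 each.
rng = np.random.default_rng(2)
groups = {deg: rng.normal(loc=mu, scale=25, size=6)
          for deg, mu in [(0, 1042), (30, 1000), (60, 980), (90, 950)]}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"one-way ANOVA across build orientations: F = {f_stat:.2f}, p = {p_value:.4f}")
```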
21

Palupi, Niken Widya, Yudi Pranoto, and Sutardi Sutardi. "Pembuatan nanopartikel pati jagung dengan teknik fotooksidasi menggunakan H2O2 dan lampu UV-C pada sistem tersirkulasi." Jurnal Aplikasi Teknologi Pangan 9, no. 3 (May 30, 2020): 118–25. http://dx.doi.org/10.17728/jatp.7254.

Abstract:
Formation of Corn Starch Nanoparticles Involving H2O2 and UV-C Lamp through Photo-oxidation in the Circulation System. In this study, starch nanoparticles (SNPs) were prepared by a simple method: photo-oxidation in a circulation system involving H2O2 and a UV-C lamp, with variations in H2O2 concentration and treatment duration. The aim was to optimize the H2O2 concentration and the duration of the photo-oxidation process for producing corn starch nanoparticles. The corn starch was obtained commercially and its properties were analyzed. Scanning Electron Microscopy (SEM) analysis revealed that the photo-oxidized particles were round, with diameters in the range of 100–1000 nm. The SEM results were corroborated by Dynamic Light Scattering (DLS) Zetasizer analysis, which showed a normally distributed particle-size curve in the 100–1000 nm range. The photo-oxidation process left carbonyl and carboxyl groups on the resulting starch nanoparticles. The carboxyl content and nano-scale size of the photo-oxidized particles increased paste clarity and suspension solubility but lowered suspension viscosity. Moreover, the photo-oxidized starch nanoparticles were able to reduce the interfacial tension between oil and water, so they could potentially act as an emulsifier. In conclusion, H2O2 treatment and UV-C exposure during photo-oxidation can yield starch nanoparticles with the desired properties: a normally distributed nano-scale particle size, paste clarity approaching that of water, and enough carboxyl groups attached to the particles to reduce the oil–water interfacial tension.
22

Chalapud, Juan, Pedro Sobrevilla-Calvo, Silvia Rivas-Vera, and Javier Altamirano-Levy. "Marked Improvement in Detecting the Number of Involved Nodal Areas in Lymphoma, Using 18F-FDG-PET and CT Scan." Blood 106, no. 11 (November 16, 2005): 1344. http://dx.doi.org/10.1182/blood.v106.11.1344.1344.

Abstract:
Positron emission tomography (PET) imaging with 18-fluoro-2-deoxyglucose (FDG) is used increasingly for the initial evaluation and staging of patients with Hodgkin’s lymphoma (HL) and non-Hodgkin’s lymphoma (NHL). However, the degree of concordance of PET and CT scanning for each nodal and extranodal site is not well defined. The number of nodal areas involved is a new prognostic factor in follicular lymphomas, as demonstrated in the Follicular Lymphoma International Prognostic Index (FLIPI), and may be useful for NHL and HL. In this study, we examined the performance of CT versus FDG-PET scanning, comparing each of the nodal and extranodal areas described in the FLIPI, in a retrospective cohort of lymphoma patients (pts) with HL and NHL. We reviewed the charts of 56 patients with a diagnosis of HL or NHL at initial and relapse staging in a single tertiary care center. All patients had an FDG-PET imaging study, clinical examination and CT scans. The Ann Arbor stage, each nodal site (cervical, mediastinal, axillary, mesenteric, para-aortic, inguinal) and extranodal sites were evaluated on the basis of FDG-PET scanning and compared with the findings derived from CT. Bone marrow biopsy results were excluded from this initial analysis. The histopathological diagnoses included diffuse large B-cell lymphoma in 20/56 pts (36%), HL in 15/56 pts (27%), anaplastic large cell lymphoma in 8/56 pts (14%), FL in 5/56 pts (9%), peripheral T-cell lymphoma in 4/56 pts (7%) and others in 7%. Among the 56 pts, 22 (39%) had discordant results between FDG-PET and CT scanning that led to a change in stage assignment. Among the discordant cases, FDG-PET resulted in upstaging in 18/56 pts (32%) and downstaging in 4/56 pts (7%). Forty-four pts (79%) had discordant results in the number of nodal areas; among the discordant cases, FDG-PET detected more nodal areas in 36/56 pts (64%) and CT in 8/56 pts (14%). The discordant cases were distributed as shown in the table below. In conclusion, PET and CT in combination detect more involved nodal areas than either method by itself.

Summary of PET/CT correlation with nodal areas:

Nodal areas             Cervical (n)  Axillary (n)  Mediastinal (n)  Para-aortic (n)  Inguinal (n)
Positive total          40            26            26               24               17
Only positive FDG-PET   16            12            13               7                11
Only positive CT        7             3             2                2                1
Positive FDG-PET + CT   17            11            11               15               5
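The table's "only PET / only CT / both" breakdown is a straightforward tally over paired site-level findings. A sketch with a few hypothetical patient records (field names and values invented):

```python
from collections import Counter

# Hypothetical site-level findings: (nodal_area, positive_on_pet, positive_on_ct).
findings = [
    ("cervical", True, False),
    ("cervical", True, True),
    ("axillary", False, True),
    ("inguinal", True, False),
]

tally = Counter()
for area, pet, ct in findings:
    if pet and ct:
        tally[(area, "both")] += 1
    elif pet:
        tally[(area, "only PET")] += 1
    elif ct:
        tally[(area, "only CT")] += 1

for (area, category), n in sorted(tally.items()):
    print(f"{area:9s} {category:9s} {n}")
```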
23

Zheng, Yun Xing, Hao Ding, Le Fu Mei, Xi Yao Zhu, and Meng Meng Wang. "Preparation and Property Characterization of Talc/Phthalocyanine Blue Composite Powder." Key Engineering Materials 512-515 (June 2012): 239–44. http://dx.doi.org/10.4028/www.scientific.net/kem.512-515.239.

Abstract:
To improve the dispersion of phthalocyanine blue powder, decrease its consumption and increase the use value of talc, TPBCP (talc/phthalocyanine blue composite powder) was prepared by a liquid-phase mechanochemical method. By means of paint performance tests, scanning electron microscopy (SEM) and Fourier transform infrared spectroscopy (FT-IR), the properties and microstructure of the composite powder were characterized. The hiding power and oil absorption value of the talc/phthalocyanine blue composite powder were 12.88 g/m2 and 32.50 g/100 g respectively, and the International Commission on Illumination (CIE) colorimetric data L, a and b were 44.95, 1.65 and -17.18 respectively. The hiding power of the composite powder was equivalent to 77.6% of that of pure phthalocyanine blue, with equivalent CIE values. The formation of TPBCP was marked by phthalocyanine blue particles uniformly coating the talc surface. The results showed that TPBCP performed similarly to phthalocyanine blue and could be applied in place of phthalocyanine blue in several fields.
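"Equivalent CIE" claims are usually quantified with a color difference such as CIE76 ΔE between composite and pure pigment. The composite's L, a, b values below are taken from the abstract; the pure-pigment values are hypothetical placeholders:

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in L*a*b* space."""
    return math.dist(lab1, lab2)

composite = (44.95, 1.65, -17.18)   # TPBCP colorimetric data reported in the abstract
pure_blue = (43.0, 2.0, -19.0)      # hypothetical values for pure phthalocyanine blue

print(f"dE76 = {delta_e76(composite, pure_blue):.2f}")
```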
24

Nannya, Yasuhito, Michi Kamei, Hiroki Torikai, Takakazu Kawase, Kenjiro Taura, Yoshihiro Inamoto, Taro Takahashi, et al. "HapMap Scanning of Novel Human Minor Histocompatibility Antigens." Blood 112, no. 11 (November 16, 2008): 3908. http://dx.doi.org/10.1182/blood.v112.11.3908.3908.

Abstract:
Minor histocompatibility antigens (mHags) are the molecular targets of allo-immunity associated with major anti-tumor activities in hematopoietic stem cell transplantation (HSCT), but are also involved in the pathogenesis of graft-versus-host disease (GVHD). They are typically defined by the host’s SNPs that are not shared by the donor and immunologically recognized by cytotoxic T-cells isolated from post-HSCT patients. However, despite their critical importance in transplantation medicine, fewer than 20 mHags have been identified during the past 20 years due to the lack of an efficient method for their isolation. Here we developed a novel method in which the large data set from the International HapMap Project can be directly used for genetic mapping of novel mHags. Concretely, immortalized B lymphoblastoid cell lines (LCLs) from a HapMap panel are grouped into mHag positive (mHag+) and negative (mHag−) subpanels according to their susceptibility to a cytotoxic T-cell (CTL) clone, as determined by conventional chromium release cytotoxicity assays (CRAs), and the target mHag locus can be directly identified by an association scan (indicated by the χ2 statistic) using the highly qualified HapMap data set of over 3,000,000 SNP markers. The major concern about this approach arises from the risk of overfitting observed phenotypes to one or more incidental SNPs from this large number of HapMap SNPs. To address this problem, we first estimated the maximum sizes of the test statistics under the null hypothesis (i.e., no associated SNPs within the HapMap set) empirically by simulating 10,000 case-control HapMap panels in different experimental conditions, and compared them with the expected sizes of the test statistics from marker SNPs associated with the target SNP under different degrees of linkage disequilibrium (LD). Except for those mHags having very low minor allele frequencies (MAF) below ~0.05, the possibility of overfitting is progressively reduced as the number of LCLs increases, allowing for unique identification of the target locus over a broad range of values. To demonstrate the feasibility of this method, we mapped the locus for the HA-1H mHag by immunophenotyping 58 LCLs from the JPT+CHB HapMap panel with CRAs using an HLA-A*0206-restricted CTL clone (CTL-4B1). As expected, the genome-wide scan clearly indicated a unique association within the HMHA1 gene, showing a peak χ2 statistic of 52.8 (not reached in 100,000 permutations) at rs10421359. Next, we applied this method to mapping novel mHags recognized by HLA-B*4002-restricted CTL-3B6 and HLA-A*0206-restricted CTL-1B2, both of whose target mHags had not been identified. The peak in chromosome 19q13.3 for the CTL-3B6 set showed the theoretically maximum χ2 value of 50 (not reached in 100,000 permutations) at rs3027952, which was mapped within a small LD block of ~182 kb containing a single candidate mHag gene, SLC1A5. In fact, when expressed in HEK293T cells with an HLA-B*4002 transgene, recipient-derived, but not donor-derived, SLC1A5 cDNA was able to stimulate interferon-γ secretion from CTL-3B6, indicating that SLC1A5 encodes the target mHag recognized by CTL-3B6. Conventional epitope mapping finally identified an undecameric peptide, AEATANGGLAL, which was further confirmed by epitope reconstitution assays.
The target mHag locus for CTL-1B2 was identified at the peak (max χ2 = 44, not reached in 100,000 permutations) within a 598 kb block on chromosome 4q13.1, and coincides with the locus for a previously reported mHag, UGT2B17. Our epitope mapping by using UGT2B17 cDNA deletion mutants, prediction of candidate epitopes by HLA-binding algorithms and epitope reconstitution assays successfully identified a novel nonameric peptide, CVATMIFMI. Our results demonstrate how effectively the HapMap resources could be used for genetic mapping of clinically relevant human traits. This method may be also applied to disclosing other relevant human variations, if an accurate bioassay is applied to discriminate them. We anticipate our method based on the HapMap scan greatly accelerates isolation of novel mHags, which could be used for the development of selective allo-immune therapies to intractable blood cancers, circumventing potentially life-threatening GVHD, while harnessing its anti-tumor effects. Such knowledge on mHags should also promote our understanding of allo-immunity.
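The core of the mapping step is a per-SNP case-control χ2 test over the immunophenotyped panel, scanning the markers and reporting the peak statistic. A minimal sketch on invented genotype data (a real analysis would also derive permutation-based significance thresholds, as the authors do):

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(3)
n_lcl, n_snp = 58, 1000                                # 58 LCLs, toy marker set
genotypes = rng.integers(0, 3, size=(n_lcl, n_snp))    # 0/1/2 copies of the minor allele
phenotype = rng.integers(0, 2, size=n_lcl)             # 1 = mHag+ (lysed by the CTL), 0 = mHag-

def snp_chi2(geno_col, pheno):
    """Chi-square of the 2x3 phenotype-by-genotype contingency table for one SNP."""
    table = np.zeros((2, 3))
    for g, p in zip(geno_col, pheno):
        table[p, g] += 1
    table = table[:, table.sum(axis=0) > 0]  # drop empty genotype columns
    if table.shape[1] < 2:
        return 0.0
    return chi2_contingency(table)[0]

chi2_scan = np.array([snp_chi2(genotypes[:, j], phenotype) for j in range(n_snp)])
peak = int(np.argmax(chi2_scan))
print(f"peak chi2 = {chi2_scan[peak]:.1f} at toy SNP index {peak}")
```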
25

Tomeleri, João Otávio Poletto, Luciano Donizeti Varanda, Leonardo Machado Pitombo, Fabio Minoru Yamaji, and Franciane Andrade de Pádua. "Influence of Non-Lignocellulosic Elements on the Combustion of Treated Wood and Wooden Panel." Sustainability 13, no. 9 (May 5, 2021): 5161. http://dx.doi.org/10.3390/su13095161.

Abstract:
Brazil stands out internationally in the production and commercialization of wood products. Although the external and internal demand for these products is met by the Brazilian forestry sector, challenges related to the internal management of lignocellulosic waste are evident, as the country has structural difficulties in the solid waste management sector. Therefore, the objective was to comparatively analyze the performance of the most abundant lignocellulosic materials in the Brazilian market with regard to energy recovery at the end of their life cycles. Pine wood treated with chromated copper arsenate (CCA), untreated pine wood, eucalypt wood treated with CCA, untreated eucalypt wood, uncoated medium-density fiberboard (MDF) panel, and MDF panel with melamine coating were sampled. The characterization included thermogravimetric analysis (TGA), scanning electron microscopy (SEM) with energy-dispersive X-ray spectroscopy (EDXA), and elementary analysis (EA). The presence of the CCA salts and the melamine coating reduced the energy potential of the biomass, altering the burning behavior and significantly increasing the amount of ash generated. They also caused an increase in the concentrations of copper (Cu), chromium (Cr), arsenic (As), and cadmium (Cd) in the wood ashes, as well as lead (Pb) and chromium in the panel ashes.
26

Gerl, Arthur, Adina Hauck, and Marcus Hentrich. "Can abdominal ultrasound replace CT scanning in long-term follow-up of male patients with germ cell tumor?" Journal of Clinical Oncology 30, no. 15_suppl (May 20, 2012): 4602. http://dx.doi.org/10.1200/jco.2012.30.15_suppl.4602.

Abstract:
Background: To date, there is no international consensus on how best to follow patients (pts) with GCT after their initial management. In the absence of a generally accepted follow-up schedule, the possible benefits of regular CT scanning must be weighed against their cost, potential contrast media reactions and the long-term risk of cumulative X-ray exposure. The present study focuses on the role of abdominal ultrasound in the follow-up of males with GCT and raises the question of whether CT scanning may be replaced by abdominal ultrasound. Methods: This retrospective single-center cohort study included 887 GCT pts followed between January 2001 and November 2011. The follow-up schedule was predominantly based on abdominal ultrasound performed by the same physician (A.G.). Patterns of recurrence and the long-term outcome were analyzed. Results: 462 of 887 pts (52.1%) had stage I, 258 (29.1%) stage II and 130 (14.7%) stage III disease (not evaluable in 37 pts). The median time between baseline and the most recent follow-up examination was 5.0 years. A total of 14,604 abdominal ultrasound examinations (16.5/pt), 1,170 CT scans (1.32/pt), and 956 chest X-rays (1.08/pt) were performed. A relapse occurred in 58 pts (6.5%), with 11 of 58 pts experiencing multiple relapses. 34 of 58 relapses (58.6%) were detected in pts with stage I GCT. The sites of relapse included the abdomen (n=42), other sites (n=10), and marker elevation only (n=10). 33 of 42 abdominal relapses (78.6%) were detected by abdominal ultrasound. The median size of abdominal lymph nodes was 25 mm (range, 6–67 mm). After a median follow-up of 5 years, the GCT-specific survival of the entire cohort was 98.4%; in the subgroup of pts with relapse, GCT-specific survival was 94.7%. Conclusions: Ultrasound appears to be an appropriate method to detect abdominal recurrences in pts with GCT. However, training and cumulative experience are necessary to detect retroperitoneal recurrences at an early stage. The number of expensive and potentially harmful CT scans may be markedly decreased by abdominal ultrasound.
APA, Harvard, Vancouver, ISO, and other styles
27

Dinas, Petros C., Christian Mueller, Nathan Clark, Tim Elgin, S. Ali Nasseri, Etai Yaffe, Scott Madry, Jonathan B. Clark, and Farhan Asrar. "Innovative Methods for the Benefit of Public Health Using Space Technologies for Disaster Response." Disaster Medicine and Public Health Preparedness 9, no. 3 (April 14, 2015): 319–28. http://dx.doi.org/10.1017/dmp.2015.29.

Full text
Abstract:
Space applications have evolved to play a significant role in disaster relief by providing services including remote sensing imagery for mitigation and disaster damage assessments; satellite communication to provide access to medical services; positioning, navigation, and timing services; and data sharing. Common issues identified in past disaster response and relief efforts include lack of communication, delayed ordering of actions (eg, evacuations), and low levels of preparedness by authorities during and after disasters. We briefly summarize the Space for Health (S4H) Team Project, which was prepared during the Space Studies Program 2014 within the International Space University. The S4H Project aimed to improve the way space assets and experiences are used in support of public health during disaster relief efforts. We recommend an integrated solution based on nano-satellites or a balloon communication system, mobile self-contained relief units, portable medical scanning devices, and micro-unmanned vehicles that could revolutionize disaster relief and disrupt different markets. The recommended new system of coordination and communication using space assets to support public health during disaster relief efforts is feasible. Nevertheless, further actions should be taken by governments and organizations in collaboration with the private sector to design, test, and implement this system. (Disaster Med Public Health Preparedness. 2015;9:319-328)
APA, Harvard, Vancouver, ISO, and other styles
28

Souza, Nathália P., Gordon C. Hard, Lora L. Arnold, Kirk W. Foster, Karen L. Pennington, and Samuel M. Cohen. "Epithelium Lining Rat Renal Papilla: Nomenclature and Association with Chronic Progressive Nephropathy (CPN)." Toxicologic Pathology 46, no. 3 (March 4, 2018): 266–72. http://dx.doi.org/10.1177/0192623318762694.

Full text
Abstract:
Chronic progressive nephropathy (CPN) occurs commonly in rats, more frequently and severely in males than females. High-grade CPN is characterized by increased layers of the renal papilla lining, designated as urothelial hyperplasia in the International Harmonization of Nomenclature and Diagnostic Criteria classification. However, urothelium lining the pelvis is not equivalent to the epithelium lining the papilla. To evaluate whether the epithelium lining the renal papilla is actually urothelial in nature and whether CPN-associated multicellularity represents proliferation, kidney tissues from aged rats with CPN, from rats with multicellularity of the renal papilla epithelium of either low-grade or marked severity, and from young rats with normal kidneys were analyzed and compared. Immunohistochemical staining for uroplakins (urothelial-specific proteins) was negative in the papilla epithelium of all rats, whether multicellularity was present or not, indicating these cells are not urothelial. Mitotic figures were rarely observed in this epithelium, even with multicellularity. Immunohistochemical staining for Ki-67 was negative. Papilla lining cells and true urothelium differed by scanning electron microscopy. Based on these findings, we recommend that the epithelium lining the papilla not be classified as urothelial, and that the CPN-associated lesion be designated as vesicular alteration of the renal papilla instead of hyperplasia and distinguished in diagnostic systems from kidney pelvis urothelial hyperplasia.
APA, Harvard, Vancouver, ISO, and other styles
29

Mawassi, M., O. Dror, M. Bar-Joseph, A. Piasezky, J. M. Sjölund, N. Levitzky, N. Shoshana, et al. "'Candidatus Liberibacter solanacearum' Is Tightly Associated with Carrot Yellows Symptoms in Israel and Transmitted by the Prevalent Psyllid Vector Bactericera trigonica." Phytopathology® 108, no. 9 (September 2018): 1056–66. http://dx.doi.org/10.1094/phyto-10-17-0348-r.

Full text
Abstract:
Carrot yellows disease has been associated for many years with the Gram-positive, insect-vectored bacteria, 'Candidatus Phytoplasma' and Spiroplasma citri. However, reports in the last decade also link carrot yellows symptoms with a different, Gram-negative, insect-vectored bacterium, 'Ca. Liberibacter solanacearum'. Our study shows that to date 'Ca. L. solanacearum' is tightly associated with carrot yellows symptoms across Israel. The genetic variant found in Israel is most similar to haplotype D, found around the Mediterranean Basin. We further show that the psyllid vector of 'Ca. L. solanacearum', Bactericera trigonica, is highly abundant in Israel and is an efficient vector for this pathogen. A survey conducted comparing conventional and organic carrot fields showed a marked reduction in psyllid numbers and disease incidence in the field practicing chemical control. Fluorescent in situ hybridization and scanning electron microscopy analyses further support the association of 'Ca. L. solanacearum' with disease symptoms and show that the pathogen is located in phloem sieve elements. Seed transmission experiments revealed that while approximately 30% of the tested carrot seed lots are positive for 'Ca. L. solanacearum', disease transmission was not observed. Possible scenarios that may have led to the change in association of the disease etiological agent with carrot yellows are discussed. Copyright © 2018 The Author(s). This is an open access article distributed under the CC BY-NC-ND 4.0 International license.
APA, Harvard, Vancouver, ISO, and other styles
30

EL-Banhawy, Ahmed, Iman H. Nour, Carmen Acedo, Ahmed ElKordy, Ahmed Faried, Widad AL-Juhani, Ahmed M. H. Gawhari, Asmaa O. Olwey, and Faten Y. Ellmouni. "Taxonomic Revisiting and Phylogenetic Placement of Two Endangered Plant Species: Silene leucophylla Boiss. and Silene schimperiana Boiss. (Caryophyllaceae)." Plants 10, no. 4 (April 9, 2021): 740. http://dx.doi.org/10.3390/plants10040740.

Full text
Abstract:
The genus Silene L. is one of the largest genera in Caryophyllaceae, and is distributed in the Northern Hemisphere and South America. The endemic species Silene leucophylla and the near-endemic S. schimperiana are native to the Sinai Peninsula, Egypt. They have reduced population sizes and are endangered on national and international scales. These two species have typically been disregarded in most studies of the genus Silene. This research integrates scanning electron microscopy (SEM) of species micromorphology with the phylogenetic analysis of four DNA markers: ITS, matK, rbcL and psbA/trnH. Trichomes were observed on the stem of Silene leucophylla, while S. schimperiana has a glabrous stem. Irregular epicuticle platelets with sinuate margins were found in S. schimperiana. Oblong, bone-shaped, and irregularly arranged epidermal cells were present on the leaf of S. leucophylla, while the Silene schimperiana leaf has "tetra-, penta-, hexa-, and polygonal" epidermal cells. Silene leucophylla and S. schimperiana have amphistomatic stomata. The Bayesian phylogenetic analysis of each marker, individually or in combination, represents the first phylogenetic study to reveal the generic and sectional classification of S. leucophylla and S. schimperiana. Two Silene complexes are proposed based on morphological and phylogenetic data. The Leucophylla complex was allied to section Siphonomorpha and the Schimperiana complex was related to section Sclerocalycinae. However, these two complexes need further investigation and more exhaustive sampling to infer their complex phylogenetic relationships.
APA, Harvard, Vancouver, ISO, and other styles
31

Otsuka, Jiro, and Sadaji Hayama. "Special Issue on Precision and Ultraprecision Positioning." International Journal of Automation Technology 3, no. 3 (May 5, 2009): 223. http://dx.doi.org/10.20965/ijat.2009.p0223.

Full text
Abstract:
I served as chairman of the technical committee of ultraprecision positioning at the Japan Society of Precision Engineers (JSPE) from 1993 to 1997. In November 2008, the 3rd International Conference on Positioning Technology (ICPT) was held in Shizuoka, Japan. After the conference, I, together with Dr. Sadaji Hayama, an adviser of the journal editorial board, asked by mail the most significant presenters and the members of the technical committee of ultraprecision positioning whether they would be willing to contribute their papers to this special issue. As a result, we received more than 20 manuscripts, among which 2 development reports, 2 reviews, and 14 papers have been selected for publication in this journal. The contents of these papers relate mainly to nano/subnanometer positioning technology, new control methods for ultraprecision positioning, guide ways for precision positioning, positioning for ultraprecision machining, new hard disk drive methods, etc. I would like to express my sincere gratitude to the authors for their interesting papers in this issue, and I would also like to deeply thank all the reviewers and editors for their invaluable effort. 1. Demarcation Between Precision Positioning and Ultraprecision Positioning The Technical Committee of Ultraprecision Positioning (TCUP) has polled randomly selected JSPE members on ultraprecision positioning technology every four years since 1986 [1]. Results indicate that most respondents felt that the maximum allowable positioning error and image resolution was 1 µm for precision positioning and 10 nm for ultraprecision positioning. After 2004, most respondents appeared to view 0.1 nm as the demarcation line between precision positioning and ultraprecision positioning. 2. Know-How for Achieving Ultraprecision Positioning The champion devices in ultraprecision positioning are always the stages of demagnification exposure devices for semiconductors. Exposure methods using stages have advanced from the steppers of the 1980s, shown in Fig. 1(a), to today's scanning stages, shown in Fig. 1(b), as LSI capacity has increased and higher throughput has been demanded. The stepper consists of X and Y stages. The XY stages of the 1980s consisted of a DC servomotor, either a ball or sliding screw, plus a linear guide way consisting of either rollers or a slide guide. The current scanning type consists of a linear motor and a pneumatic hydrostatic guide way (Fig. 1(b)). Reticle and wafer stages travel in opposite directions, and the relative positioning error is about 1 nm. Ultraprecision positioning of sub-µm accuracy is now achieved either by an AC servomotor and a ball screw or by using a linear motor. 2.1. Achieving High Positioning Resolution and Accuracy Positioning resolution and accuracy better than 0.1 µm generally depend on three factors: (1) displacement sensors for feedback, (2) mechanical structure, and (3) control, including software. Ultraprecision positioning is possible only when these three factors are well coordinated. (1) Displacement Sensors Ultraprecision positioning requires high-performance displacement sensors. About 10 sensor manufacturers in Japan alone currently achieve resolution under 1 nm [3]. To achieve higher resolution, laser interferometers must operate in thermostatic chambers controlling or monitoring temperature, humidity, and atmospheric pressure. Great effort is required to minimize or eliminate air turbulence and inhomogeneous air temperatures in the laser beam path.
To achieve nm-level resolution, operations must be conducted in a vacuum. Linear encoders, although somewhat less accurate than laser interferometers, are used in over 50% of ultraprecision positioning devices in Japan, and their market share continues to grow, according to the 2006 TCUP poll. Analog sensor performance in detecting microscopic displacement has steadily improved. The technical level of a precision positioning device is often assessed by how the designer considers Abbe's principle. (2) Mechanical Structure Overall structural rigidity should be maximized to ensure monolithic construction. Semiconductor aligners used in exposure are made from ceramics with a high specific rigidity, i.e., the quotient of Young's modulus divided by specific gravity. The 1990s arguments pitting linear actuators against ball screws subsided as their specific advantages and domains of preferred use became established. Linear guide ways using steel balls or rollers are becoming cheaper, and their accuracy and other aspects of performance are improving. When stage movement is reversed, the friction generated by preloads acts as a nonlinear spring, caused by elastic deformation of the balls and raceways over a moving stroke of several tens of µm, so stage vibration is easily generated. Another disadvantage, called waving, occurs when the table moves up and down at the sub-µm level, perpendicular to the stage travel direction, at twice the spacing of the roller separation. It has been found that waving is minimized by crowning the roller guide raceway; the error due to waving is thereby reduced to less than one tenth of the original error margin [4]. Nonlinear spring behavior is minimized by modifying the control method of the positioning device. For longitudinal travel, pneumatic-hydrostatic devices virtually unaffected by friction are an alternative but are prohibitively expensive. (3) Control, Including Software In precision positioning, control devices and systems have advanced significantly in the last two decades [5], changing from analog to digital with higher sampling frequencies. Current digital control enables devices to be operated in conceptually the same way as analog control. TCUP respondents [1] stated that 70% of positioning devices in Japan still depend on conventional PID control, with innovative contemporary control theory, fuzzy control, neural nets, etc. yet to be fully implemented. 2.2. Higher Positioning Speed Higher positioning speed is required, as well as higher positioning accuracy. In the scanning type of Fig. 1(b), maximum stage speed exceeds 2 m/s and maximum acceleration ranges from 3 G to 5 G. The corresponding speed and acceleration of the wafer stage are one fourth of these values. At such high acceleration, reaction dampers are used to prevent vibration [2]. About ten years ago, the maximum velocities of positioning stages tended to be limited by the speed of the feedback displacement sensor; at present, however, it is possible to operate in the range of speeds mentioned above. Note that velocities exceeding 2 m/s are possible even with ball screws, but noise and microvibration remain a problem. 3. Nanometer and Subnanometer Positioning [3, 5-7, 10] We are pursuing convergence of the positioning resolution toward the full resolution of the feedback displacement sensor. Bulletins [3, 6] have carried reports on experiments attaining positioning resolution with maximum error below 0.1 nm.
We introduce cases of positioning device development at nm and sub-nm resolution using both ball screw [7] and linear motor drives [8], including a commercialized ball screw drive product with 1 nm resolution [7]. 3.1. Combination of Ball Screw and Stepping Motors [7] The positioning devices have resolutions of 1 nm and 5 nm, respectively (the lengths of the travelling strokes for the stage are 20 m and 50 mm, respectively). Both compensate for the rolling friction between the ball screw and the roller guide way, and for the nonlinear spring behavior in the micro-displacement range, through control of the stepping motors at high, medium and low ranges of speed. As the detector of the displacement sensor is very small, the positioning devices can be made smaller, which makes them very robust against external disturbances. 3.2. New Type of Linear Motor Drive [8] The latest type of linear actuators, generally referred to as tunnel actuators (TAs) and used in ultraprecision positioning devices with a stage stroke of 200 nm (Fig. 2), are free from magnetic attractive force between stator magnets and armatures, generating less heat and having other advantages over conventional linear motors with cores. In experiments using a feedback displacement sensor with 0.034 nm resolution and a maximum velocity of 400 mm/s, we used ball guide ways to reduce cost and still achieved a positioning resolution of 0.2 nm (Fig. 3) [8]. Experiments confirmed that, to achieve even higher resolution, linear current amplifiers are 10 times more effective than PWM current amplifiers. 4. Conclusions We have discussed how nanometer- and sub-nm-level positioning resolution and accuracy became possible, greatly contributing to advances in nanotechnology. Nanometer and subnanometer positioning resolutions are currently verified by signals from the feedback displacement sensors. Considering changes in the positioning of stages, however, such positioning and resolution should be verified using displacement sensors that are more accurate. If possible, verification of the resolution and accuracy should be done using a laser interferometer in a vacuum in a temperature-controlled chamber. We feel that positioning resolution should be indicated by signals received directly from sensors without a low-pass filter.
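Since the editorial notes that roughly 70% of positioning devices in Japan still rely on conventional PID control closed around a displacement sensor, a minimal discrete-time PID position loop illustrates the idea. This is a sketch only: the plant model (a damped mass), the gains, and the sampling rate are invented for illustration and are not taken from the article.

    # Minimal discrete PID position controller. The plant (a damped mass
    # driven by an actuator force) and all gains are hypothetical.
    kp, ki, kd = 800.0, 200.0, 40.0    # hypothetical PID gains
    dt = 1e-4                          # 10 kHz sampling, a plausible rate
    mass, damping = 0.5, 5.0           # hypothetical stage parameters

    target = 1e-6                      # 1 µm step command
    pos = vel = integral = prev_err = 0.0

    for step in range(20000):          # simulate 2 s
        err = target - pos             # feedback from a displacement sensor
        integral += err * dt
        derivative = (err - prev_err) / dt
        force = kp * err + ki * integral + kd * derivative
        prev_err = err
        # Integrate the plant one time step (semi-implicit Euler).
        acc = (force - damping * vel) / mass
        vel += acc * dt
        pos += vel * dt

    print(f"final error: {target - pos:.2e} m")

In a real stage the same loop structure is closed around the laser interferometer or linear encoder discussed above; the achievable resolution is then bounded by the sensor, which is exactly the point the conclusions make.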
APA, Harvard, Vancouver, ISO, and other styles
32

Lin, Neng-Yu, Alfiya Distler, Christian Beyer, Ariella Philipi-Schöbinger, Silvia Breda, Clara Dees, Michael Stock, et al. "Inhibition of Notch1 promotes hedgehog signalling in a HES1-dependent manner in chondrocytes and exacerbates experimental osteoarthritis." Annals of the Rheumatic Diseases 75, no. 11 (February 5, 2016): 2037–44. http://dx.doi.org/10.1136/annrheumdis-2015-208420.

Full text
Abstract:
Objectives Notch ligands and receptors have recently been shown to be differentially expressed in osteoarthritis (OA). We aim to further elucidate the functional role of Notch signalling in OA using Notch1 antisense transgenic (Notch1 AS) mice. Methods Notch and hedgehog signalling were analysed by real-time PCR and immunohistochemistry. Notch1 AS mice were employed as a model of impaired Notch signalling in vivo. Experimental OA was induced by destabilisation of the medial meniscus (DMM). The extent of cartilage destruction and osteophyte formation was analysed by safranin-O staining with subsequent assessment of the Osteoarthritis Research Society International (OARSI) and Mankin scores and µCT scanning. Collagen X staining was used as a marker of chondrocyte hypertrophy. The role of hairy/enhancer of split 1 (Hes-1) was investigated with knockdown and overexpression experiments. Results Notch signalling was activated in human and murine OA, with increased expression of Jagged1 and Notch-1, accumulation of the Notch intracellular domain 1, and increased transcription of Hes-1. Notch1 AS mice showed exacerbated OA, with increases in OARSI scores, osteophyte formation, subchondral bone plate density, collagen X and osteocalcin expression, and elevated levels of Epas1 and ADAM-TS5 mRNA. Inhibition of the Notch pathway induced activation of hedgehog signalling, with induction of Gli-1 and Gli-2 and increased transcription of hedgehog target genes. The regulatory effects of Notch signalling on Gli expression were mimicked by Hes-1. Conclusions Inhibition of Notch signalling activates hedgehog signalling, enhances chondrocyte hypertrophy and exacerbates experimental OA, including osteophyte formation. These data suggest that the activation of the Notch pathway may limit aberrant hedgehog signalling in OA.
APA, Harvard, Vancouver, ISO, and other styles
33

Venter, Santa-Marie, Roopam Dey, Vikas Khanduja, Richard PB von Bormann, and Michael Held. "The management of acute knee dislocations: A global survey of orthopaedic surgeons’ strategies." SICOT-J 7 (2021): 21. http://dx.doi.org/10.1051/sicotj/2021017.

Full text
Abstract:
Purpose: Great variety and controversy surround the management strategies for acute multiligament knee injuries (aMLKIs), and no established guidelines exist for resource-limited practices. The aim of this study was to compare the management approaches to acute knee dislocations (AKDs) of orthopedic surgeons from nations with different economic status. Methods: This descriptive, cross-sectional, scenario-based survey compares the management strategies for aMLKIs of surgeons in developed economic nations (DEN) and emerging markets and developing nations (EMDN). The main areas of focus were operative versus non-operative management, timing and staging of surgery, graft choice, and vascular assessment strategies. The members of the Societe Internationale de Chirurgie Orthopedique et de Traumatologie (SICOT) were approached to participate, and information was collected regarding their demographics, experience, hospital setting, and management strategies for aMLKIs. These were analyzed after categorizing participants into DEN and EMDN based on gross domestic product (GDP) per capita. Results: One hundred and thirty-eight orthopedic surgeons from 47 countries participated in this study, 67 from DEN and 71 (51.4%) from EMDN. DEN surgeons had more years of experience and were older (p < 0.05). Surgeons from EMDN mostly worked in public sector hospitals, were general orthopedic surgeons, and treated patients from a low-income background. They preferred conservative management and delayed reconstruction with autograft (p < 0.05) if surgery was necessary. Surgeons from DEN favored early, single-stage arthroscopic ligament reconstruction. Selective computerized tomography angiography (CTA) was the most preferred choice of arterial examination for both groups. Significantly more EMDN surgeons preferred clinical examination (p < 0.05) and duplex Doppler scanning (p < 0.05) compared to DEN surgeons. More surgeons from EMDN did not have access to a physiotherapist for their patients. Conclusions: Treatment of aMLKIs varies significantly based on the economic status of the country. Surgeons from DEN prefer early, single-stage arthroscopic ligament reconstruction, while conservative management is favored in EMDN. Ligament surgery in EMDN is often delayed and staged. EMDN respondents utilize duplex Doppler scanning and clinical examination more readily in their vascular assessment of aMLKIs. These findings highlight very distinct approaches to MLKIs in low-resource settings, which are often neglected when guidelines are generated.
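The group comparisons reported above are proportions tested for significance between DEN and EMDN respondents. A minimal sketch of that kind of 2x2 test follows; the counts are invented placeholders, not the study's data, and only the test procedure is illustrative.

    # Hypothetical 2x2 comparison: preference for duplex Doppler scanning
    # among DEN vs EMDN surgeons. Counts are invented for illustration.
    from scipy.stats import fisher_exact

    #          prefers Doppler   does not
    table = [[18, 49],   # DEN  (hypothetical counts)
             [35, 36]]   # EMDN (hypothetical counts)

    odds_ratio, p_value = fisher_exact(table)
    print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")  # p < 0.05 would mirror the reported pattern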
APA, Harvard, Vancouver, ISO, and other styles
34

Chen, Yang, Guanlan Liu, Yaming Xu, Pai Pan, and Yin Xing. "PointNet++ Network Architecture with Individual Point Level and Global Features on Centroid for ALS Point Cloud Classification." Remote Sensing 13, no. 3 (January 29, 2021): 472. http://dx.doi.org/10.3390/rs13030472.

Full text
Abstract:
Airborne laser scanning (ALS) point clouds have been widely used in the fields of ground powerline surveying, forest monitoring, urban modeling, and so on because of the great convenience they bring to people's daily life. However, the sparsity and uneven distribution of point clouds increase the difficulty of setting uniform parameters for semantic classification. The PointNet++ network is an end-to-end learning network for irregular point data that is highly robust to small perturbations of the input points and to corruption. It eliminates the need to calculate costly handcrafted features and provides a new paradigm for 3D understanding. However, each local region in the output is abstracted by its centroid and a local feature that encodes the centroid's neighborhood. Because of random sampling, the feature learned on the centroid point may not contain relevant information about the centroid itself, especially in large-scale neighborhood balls. Moreover, the centroid point's global-level information in each sampling layer is also not marked. Therefore, this study proposed a modified PointNet++ network architecture which concentrates the point-level and global features of the centroid point into the local features to facilitate classification. The proposed approach also utilizes a modified Focal Loss function to address the extremely uneven category distribution of ALS point clouds. An elevation- and distance-based interpolation method is also proposed for the objects in ALS point clouds which exhibit discrepancies in elevation distributions. The experiments on the Vaihingen dataset of the International Society for Photogrammetry and Remote Sensing and the GML(B) 3D dataset demonstrate that the proposed method, which provides additional contextual information to support classification, achieves high accuracy with simple discriminative models and new state-of-the-art performance in the power line category.
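For readers unfamiliar with the class-imbalance device mentioned above, the sketch below implements the standard focal loss of Lin et al. (2017) in plain NumPy. The paper uses a modified variant whose details are not given in the abstract, so this is the baseline form only; the gamma and alpha values are common defaults, not the authors' settings.

    # Baseline focal loss: down-weights well-classified points so that
    # rare categories (e.g. power lines) dominate the gradient.
    import numpy as np

    def focal_loss(probs, labels, gamma=2.0, alpha=None):
        """probs: (N, C) softmax outputs; labels: (N,) integer classes."""
        n = probs.shape[0]
        p_t = np.clip(probs[np.arange(n), labels], 1e-12, 1.0)  # prob of true class
        weight = (1.0 - p_t) ** gamma           # down-weight easy examples
        if alpha is not None:                   # optional per-class weights
            weight = weight * np.asarray(alpha)[labels]
        return float(np.mean(-weight * np.log(p_t)))

    # Toy usage: a confident correct prediction contributes almost nothing.
    probs = np.array([[0.9, 0.05, 0.05],
                      [0.2, 0.7, 0.1]])
    print(focal_loss(probs, np.array([0, 1])))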
APA, Harvard, Vancouver, ISO, and other styles
35

Hernani, NFN, Tatang Hidayat, and NFN Risfaheri. "EVALUASI MUTU LADA PUTIH BUBUK YANG DIPERDAGANGKAN DI PASAR TRADISIONAL DAN MODERN DI BOGOR DAN JAKARTA." Jurnal Penelitian Pascapanen Pertanian 17, no. 3 (February 8, 2021): 126. http://dx.doi.org/10.21082/jpasca.v17n3.2020.126-133.

Full text
Abstract:
Evaluation of the Quality of White Pepper Powder Traded in Traditional and Modern Markets in Bogor and Jakarta. White pepper powder is highly hygroscopic and is therefore easily damaged, whether physically, chemically, or microbiologically. The purpose of this study was to determine the physical, chemical, and microbiological quality of white pepper powder from traditional and modern markets in the Bogor and Jakarta areas, and to obtain preliminary indications of the admixture of other materials into white pepper powder. Samples were taken by simple random sampling from traditional markets, where the product is sold in bulk, and from modern markets, where it is packaged in plastic bottles, at three locations with three replications each; each sample taken from the traditional and modern markets weighed 300 g. Physico-chemical testing followed the methods issued by the IPC (International Pepper Community), covering moisture, ash, acid-insoluble ash, essential oil, piperine, and lead (Pb). Microbiological testing comprised TPC (Total Plate Count), mold, fungus, Salmonella, and Escherichia coli. In addition, SEM (Scanning Electron Microscope) analysis was performed to examine the surface morphology profile. The results showed that the moisture, ash, and essential oil contents still met the criteria of SNI 01-3717-1995, whereas the acid-insoluble ash and piperine contents did not. Microbiological contamination (TPC, mold, and fungus) met the criteria, except for E. coli and fungus in samples from the Bogor traditional market. Salmonella met the SNI criterion (negative) for all samples. Heavy metal (Pb) contamination also met the SNI requirements. Detection of the admixture of white pepper powder with other materials using a combination of physico-chemical properties and SEM could so far only indicate that mixing had occurred, and could not yet determine the type of admixed material.
APA, Harvard, Vancouver, ISO, and other styles
36

Gratzinger, Dita, Justin I. Odegaard, Robert J. Marinelli, Kunju J. Sridhar, and Peter L. Greenberg. "In Situ Tissue Microarray Cell-Lineage Specific Analysis of Protein Expression In Intact Myelodysplastic Bone Marrow: Data on Putative Poor Prognosis Biomarkers." Blood 116, no. 21 (November 19, 2010): 1887. http://dx.doi.org/10.1182/blood.v116.21.1887.1887.

Full text
Abstract:
Myelodysplastic syndromes (MDS) are clonal bone marrow failure disorders with variably progressive bone marrow failure and risk of leukemic transformation. Risk stratification based on the International Prognostic Scoring System (IPSS) is a powerful tool, but much clinical variability remains within risk categories. Gene expression profiling (GEP) of disaggregated bone marrow mononuclear cells has yielded many potential prognostic biomarkers; such studies are limited by (1) the need for fresh bone marrow and labor-intensive analysis of individual specimens, leading to a bottleneck between identification of biomarkers and clinical implementation, and (2) the loss of cell-specific and architectural information. We have demonstrated the feasibility of quantitative cell-type-specific evaluation of biomarkers in intact archival MDS bone marrow (BM). Routinely processed archival core biopsy specimens were sampled to create tissue microarrays (TMAs). We chose to test the protein products of 4 genes overexpressed in a "poor risk" gene signature for early leukemic transformation in MDS (Sridhar K et al, Blood 114:4847, 2009). These include ribosomal subunit components RPS4 and RPL23 and proteases TPP2 and KLK3. A TMA was constructed from 1 mm cores of 5 normal, 4 MDS, and 5 acute myeloid leukemia (AML) BM core biopsies. Immunohistochemistry showed reproducible cytoplasmic staining of immature mononuclear cells by antibodies against RPL23, RPS4Y, TPP2, and KLK3; TPP2 also stained megakaryocytes. Reactivity with a correctly sized band was confirmed by Western blotting of frozen BM from normal and MDS subjects. Double immunofluorescence (IF) staining was then performed to allow simultaneous identification of cell types of interest [CD34+ progenitors, Glycophorin C (GPC)+ erythroid precursors] in combination with quantitative analysis of the marker of interest. A representative image is shown in Figure 1: ribosomal marker RPL23 (red) and erythroid marker GPC (green); DAPI nuclei (blue). After scanning of the TMA on the Ariol platform, CellProfiler image analysis software (Broad Institute) was used to identify primary objects (nuclei) in the DAPI-stained image and associated secondary objects (cells) in the green channel; fluorescence intensity was then quantified in the red channel. An example of object identification in CellProfiler is shown in Figure 2. When the mean per-cell intensities from pooled MDS specimens were compared with the pooled normal specimens, all four putative "poor prognosis" biomarkers were significantly more highly expressed in the CD34+ population of MDS as compared to normal BM; even higher expression was seen in AML (see table below; Kruskal-Wallis test with Bonferroni adjustment for multiple comparisons, p<0.001 for all comparisons except no significant difference in RPL23 between MDS and AML). These results are consistent with the prior GEP data performed on isolated CD34+ progenitors. By contrast, GPC+ erythroid precursors in MDS showed significantly lower expression of RPL23, RPS4 and TPP2 compared to normal BM (see Table 1 and Figure 3, below). KLK3 was overexpressed in the MDS erythroid compartment (p<0.001 for all comparisons except RPS4 for normal versus MDS, p=0.042). These data demonstrate the ability to localize and quantitatively detect specific gene products in intact archival BM using IF-stained TMAs in conjunction with image analysis. Differential expression of these gene products was shown in CD34+ versus erythroid precursors.
In addition, preliminary independent confirmation of a poor risk gene expression signature in CD34+ cells for MDS was demonstrated. Disclosures: No relevant conflicts of interest to declare.
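The per-cell quantification step described above (identify nuclei on the DAPI channel, then measure marker intensity per object in the red channel) can be sketched outside CellProfiler in a few lines of Python with scikit-image. This is a simplified stand-in, not the authors' pipeline: the file names are hypothetical, and plain Otsu thresholding replaces CellProfiler's more sophisticated object identification.

    # Segment nuclei on the DAPI channel, then quantify marker intensity
    # per object in the red channel. Filenames are hypothetical.
    import numpy as np
    from skimage import io, filters, measure

    dapi = io.imread("core_dapi.tif")   # nuclei channel (hypothetical file)
    red = io.imread("core_red.tif")     # e.g. RPL23 staining (hypothetical file)

    mask = dapi > filters.threshold_otsu(dapi)   # crude nuclear segmentation
    nuclei = measure.label(mask)                  # primary objects (nuclei)

    # Mean red-channel intensity per identified nucleus.
    props = measure.regionprops(nuclei, intensity_image=red)
    per_cell = np.array([p.mean_intensity for p in props])
    print(f"{nuclei.max()} cells, median intensity {np.median(per_cell):.1f}")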
APA, Harvard, Vancouver, ISO, and other styles
37

Benet, Claire, Emmanuelle Ferrant, Tony Petrella, Emilie Degrolard, Jean-Noel M. Bastie, Bernard Bonnotte, Rene-Olivier Casasnovas, and Laurent Martin. "Evaluation of the Prognostic Value of CD45RO+ and FOXP3+ Cells of the Micro-Environment In Classical Hodgkin Lymphomas Using Tissue Micro Array." Blood 116, no. 21 (November 19, 2010): 2687. http://dx.doi.org/10.1182/blood.v116.21.2687.2687.

Full text
Abstract:
Classical Hodgkin lymphomas (cHL) are B lympho-proliferative disorders that can be cured in 80% of cases. The goal of current research is to identify, at diagnosis, the 20% of patients who are likely to relapse or be refractory to treatment. As cHL have an extensive micro-environment implicated in tumour growth, the prognostic value (disease-free survival) of 2 important subsets of lymphoid cells (CD45RO+ effector memory T cells and FOXP3+ regulatory T cells) of the micro-environment has been evaluated using Tissue Micro Array (TMA). Patients (n=96) treated and followed for cHL between 1998 and 2005 were included in this study. Six TMAs with 3 spots per case were built (3 formalin and 3 Bouin fixative). Immunostaining with anti-CD45RO (Dako, UCHL1, 1/400) and anti-FOXP3 (Abcam, 236A/E7, 1/100) antibodies was performed on the Benchmark Ultra (Ventana). After scanning of the slides, the staining was quantified automatically using a function (pixel count) of the Scanscope software (Aperio). ROC curves were used in order to calculate the best threshold for each marker, depending on the number and intensity of pixels. Survival curves were built according to these thresholds using the Kaplan-Meier method and compared with the log-rank test. Then, a Cox model was used to estimate the risk of occurrence of an event in univariate and multivariate analyses. The demographic characteristics of patients and the sub-types of HL were not different in the "formalin" and "Bouin" groups. Median follow-up of patients was 69 months (1-135). In univariate analysis, a low quantity of CD45RO+ cells was associated with a 40% 5-year progression-free survival (PFS) compared to an 87% 5-year PFS for patients with a high number of CD45RO+ cells (p=0.003), and a low number of FOXP3+ cells was related to a 56% 5-year PFS compared to an 86% 5-year PFS for patients with a high number of FOXP3+ cells (p=0.026). In multivariate analysis, taking into account the International Prognostic Score (IPS) and the expression of CD15 by neoplastic cells, the association of CD45RO and FOXP3 was the only independent prognostic factor identified (p<0.02 and RR=5.75). This preliminary study underlines the importance of certain subsets of cells from the microenvironment of HL in the control of tumour growth. These results are encouraging. When used in association with validated criteria, they could make it possible to identify patients at risk of relapse and/or drug resistance. Nevertheless, due to the small number of patients in our study, these findings need to be validated in a larger and/or independent cohort. Disclosures: No relevant conflicts of interest to declare.
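The survival workflow described (dichotomize patients by a ROC-derived marker threshold, compare progression-free survival with a log-rank test, then fit a Cox model) can be sketched in Python with the lifelines package. The data file and column names below are hypothetical, and a simple median split stands in for the ROC-derived cut-off.

    # Sketch of the survival analysis described above, using lifelines.
    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter
    from lifelines.statistics import logrank_test

    df = pd.read_csv("hl_cohort.csv")  # hypothetical: months, relapsed, cd45ro, foxp3

    high = df["cd45ro"] >= df["cd45ro"].median()  # placeholder cut-off
    res = logrank_test(df.loc[high, "months"], df.loc[~high, "months"],
                       event_observed_A=df.loc[high, "relapsed"],
                       event_observed_B=df.loc[~high, "relapsed"])
    print(f"log-rank p = {res.p_value:.3f}")

    km = KaplanMeierFitter().fit(df.loc[high, "months"],
                                 df.loc[high, "relapsed"], label="CD45RO high")
    print(km.median_survival_time_)

    cox = CoxPHFitter()  # multivariate model, cf. the IPS-adjusted analysis
    cox.fit(df[["months", "relapsed", "cd45ro", "foxp3"]],
            duration_col="months", event_col="relapsed")
    cox.print_summary()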
APA, Harvard, Vancouver, ISO, and other styles
38

Rajan, Sandeep K. "Serum Thymidine Kinase (TK) Distinguishes Grade of Non Hodgkins Lymphoma (NHL) and Predicts Complete Response Rate (CRR) and Progression Free Survival (PFS)." Blood 104, no. 11 (November 16, 2004): 4558. http://dx.doi.org/10.1182/blood.v104.11.4558.4558.

Full text
Abstract:
Background: The International Prognostic Index (IPI), based on clinical plus lab criteria, helps determine CRR and prognosis in lymphoma patients. However, drawbacks exist and new markers to improve prognostication are sought. TK is an enzyme involved in the DNA synthesis salvage pathway and can serve as a proliferative marker. Methods: Serum TK enzyme activity was determined using the Prolifigen TK radioenzymatic assay (Sangtec Medical, Sweden) in 58 consecutive NHL patients. All cases had conventional histopathologic and staging procedures and evaluation of IPI. Treatment was per institutional standards. For intermediate-grade NHL: low IPI and limited-stage, non-bulky NHL received 3 cycles CHOP + IFRT; 6–8 CHOP-R for higher stage or bulky disease. For Burkitt's NHL, HYPERCVAD chemotherapy was given, and CVP for low-grade NHL (all were high stage). All patients had CT scanning after 3 cycles and at the end of planned therapy. CRR was noted at the end of therapy. Patients were followed up every 2 months for the first year and PFS noted. The TK assay was done at baseline, after 2 cycles (TK-2c), at the end of planned therapy (TK-endRx) and every 2 months thereafter. Statistical analysis was done with Sigmastat (v.2) software. Results: Histological subtypes of the 58 NHL were: 6 aggressive (Burkitt's NHL), 46 intermediate grade (41 DLBCL, 1 mantle cell, 2 ALCL & 2 NK-T) and 6 low grade (5 follicular and 1 marginal zone). Stage distribution was I=3, II=25, III=9 and IV=21. For DLBCL, IPI risk was 25 low, 10 low-intermediate, 4 high-intermediate and 2 high. Baseline TK was distinct in low-grade, intermediate and high-grade NHL by Kruskal-Wallis one-way ANOVA (median values 5.7 mU/μg, 14.5 and 88.1; P < 0.05). 19 patients did not reach CR with planned therapy; 6 of these reached CR with IFRT. A TK value > 10 mU/μg after 2 cycles (TK-2c) predicted failure to reach CR (Table 1, P < 0.001 by Fisher exact test). Patients with TK > 10 mU/μg at the end of planned chemotherapy (TK-endRx) were more likely to have a relapse within 1 year (Table 2, Fisher exact test P < 0.001). In DLBCL, CRR was correlated with stage, IPI, B-symptoms, high LDH and TK-2c; the highest correlation coefficient by Pearson product moment correlation was with TK-2c. The correlation coefficient (CC) for IPI was 0.36 and that for TK-2c was 0.56 (CC > 0.5 is statistically significant). By multiple linear regression, TK-2c was an independent predictor of CRR (P = 0.002) and IPI status was not a significant predictor. With a median follow-up of 16 months (range 14–22 months), 10 patients in CR had a relapse; in all, the TK value showed a rise at least 2 months before clinical relapse. Conclusions: In this study TK distinguished the grade of NHL, TK-2c predicted CRR and TK-endRx > 10 mU/μg predicted PFS < 1 year. At least in our predominantly low and low-intermediate IPI risk group patients, TK-2c was a better predictor of CRR and prognosis than conventional clinical factors. Further studies are needed to confirm this in all risk categories.

Table 1. TK after 2 cycles predicts CR rate (CRR); Fisher exact test P < 0.001
                       CR: Yes    CR: No
    TK < 10 mU/μg         36         3
    TK > 10 mU/μg          3        16

Table 2. TK-endRx and progression-free survival at 1 year; Fisher exact test P < 0.001
                       PFS at 1 y: Yes    PFS at 1 y: No
    TK < 10 mU/μg         43                  1
    TK > 10 mU/μg          6                  8
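The two contingency tables above can be re-tested directly; a short sketch with scipy (counts taken verbatim from the abstract) confirms that both Fisher exact tests are indeed significant at the quoted level.

    # Re-test the two reported 2x2 tables with Fisher's exact test.
    from scipy.stats import fisher_exact

    # Table 1: TK after 2 cycles vs complete response (CR yes / CR no)
    _, p_cr = fisher_exact([[36, 3], [3, 16]])

    # Table 2: TK at end of therapy vs progression-free at 1 year (yes / no)
    _, p_pfs = fisher_exact([[43, 1], [6, 8]])

    print(f"CR table:  p = {p_cr:.2e}")   # consistent with the reported P < 0.001
    print(f"PFS table: p = {p_pfs:.2e}")  # consistent with the reported P < 0.001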
APA, Harvard, Vancouver, ISO, and other styles
39

Goldhaber, Samuel. "Venous Thromboembolism Prophylaxis in Medical Patients." Thrombosis and Haemostasis 82, no. 08 (1999): 899–901. http://dx.doi.org/10.1055/s-0037-1615929.

Full text
Abstract:
Introduction Pharmacologic measures to prevent venous thromboembolism were first routinely incorporated into the practice of general surgeons, urologists, and orthopedic surgeons in 1975, after the landmark International Multicentre Trial was published.1 This randomized trial allocated 4,121 surgical patients either to unfractionated heparin 5,000 U, beginning 2 hours preoperatively and continuing every 8 hours for 7 days, or to no heparin. Among the heparin-treated group, two patients had massive pulmonary embolism (PE) verified upon autopsy, compared with 16 among the no-heparin group. These dramatic differences were reinforced by a subsequent meta-analysis of 15,598 surgical patients in randomized trials of venous thromboembolism prevention with low fixed-dose ("minidose") heparin.2 Those assigned to heparin prophylaxis had a two-thirds reduction in predominantly asymptomatic deep vein thrombosis (DVT), a one-third reduction in nonfatal pulmonary embolism, and a marked reduction in fatal PE (19 in heparin patients compared with 55 among controls). Based upon the results of these studies, unfractionated heparin in a dose of 5,000 U twice or three times daily, beginning 2 hours preoperatively, became the standard pharmacologic approach to perioperative prevention of DVT and PE. Despite the intensive study of venous thromboembolism in thousands of surgical patients, the investigation of DVT and PE developing as a complication among medical patients hospitalized for other primary conditions has languished, except in stroke and myocardial infarction patients. Several fundamental issues are apparent. First, the incidence of venous thromboembolism among hospitalized patients has not been precisely elucidated. Second, subsets of patients with potentially the greatest risk, such as those in medical intensive care units, warrant special attention. Third, the failure rates of conventional low-dose heparin prophylaxis and mechanical prophylaxis with intermittent pneumatic compression boots have not been adequately defined among contemporary hospitalized medical patients. Fourth, the Food and Drug Administration has not approved low molecular weight heparin (LMWH) for prophylaxis against venous thromboembolism in medical patients. Such approval awaits the design, execution, and analysis of appropriate clinical trials in this understudied population. An Israeli study undertaken more than two decades ago provided intriguing evidence to support the concept that mortality reduction could be achieved in hospitalized general medical patients with low-dose heparin prophylaxis.3 This hypothesis was tested in 1,358 consecutive patients greater than 40 years of age who were admitted through the emergency department to the medical wards of an acute care hospital. Eligible patients with even-numbered hospital records were assigned to receive 5,000 U low-dose heparin twice daily. Those with odd-numbered records served as controls. Among patients allocated to heparin, there was a 31% reduction in mortality, from 10.9% in the control group to 7.8% in the heparin group. The reduction in mortality in the heparin-treated group was evident from the first day, and the difference increased significantly and consistently with time until the end of the study period. Because the death rate was highest in the first 2 days in both groups, the reduction in mortality in absolute numbers was greatest on those 2 days.
However, the relative mortality reduction remained stable throughout the study period. While low-dose heparin was demonstrated in the 1970s to be effective and safe for the prevention of venous thromboembolism in many thousands of surgical patients, only minuscule studies were carried out among medical patients during that era. For example, the Royal Infirmary in Glasgow studied 100 medical patients hospitalized with heart failure or chest infection.4 Patients were randomized to receive either heparin 5,000 U every 8 hours or no specific prophylaxis measures. The diagnosis of DVT was established by iodine-125 fibrinogen leg scanning, which was undertaken in all study patients within 24 hours of hospitalization and repeated every other day for 14 days or until hospital discharge. The results in this group of hospitalized medical patients were dramatic. Among controls, 26% developed DVT, whereas the rate was only 4% among those receiving low-dose heparin. A placebo-controlled, randomized, double-blind trial in 1986 that focused on octogenarian medical inpatients5 utilized a once-daily low molecular weight heparin (Pharmuka 10169, subsequently renamed enoxaparin). The dose was 60 mg injected subcutaneously once daily. The potential development of DVT was assessed by iodine-125 fibrinogen leg scanning in all patients. The trial lasted 10 days, and 270 patients were enrolled. The majority of subjects suffered from heart failure, respiratory diseases, stroke, or cancer. Of 263 evaluable patients, 9% in the placebo group developed DVT, compared with 3% of those receiving LMWH prophylaxis. Except for injection-site hematomas, bleeding complications were not appreciably increased in the LMWH group. A trial involving 11,693 medical patients with infectious diseases randomized patients to receive either 5,000 U of heparin every 12 hours or no prophylaxis.6 Although patients were treated for a maximum of 3 weeks, follow-up was carried out for a maximum of 2 months. Heparin prophylaxis delayed the occurrence of fatal PE from a median of 12 days to a median of 28 days. There were far more nonfatal thromboembolic complications in the control group (116 vs. 70, p = 0.0012). However, the prespecified primary endpoint was clinically relevant, autopsy-verified PE. In this respect, there was virtually no difference between the two groups: 15 heparin-treated and 16 control-group patients had autopsy-verified fatal PEs. This large trial, which yielded disappointing results, may have had the following study design flaws: 1) a lack of statistical power to detect a difference between the two groups in the primary endpoint, 2) the restriction of heparin prophylaxis to 3 weeks, and 3) an inadequate dose of heparin. (Keep in mind that the International Multicentre trial1 used low-dose heparin every 8 hours, not every 12 hours.) In the past decade, low molecular weight heparin has supplanted unfractionated heparin for prophylaxis against venous thromboembolism in total hip replacement7 and has proved superior both to warfarin8,9 and to graduated compression stockings10 for total knee replacement. This does not necessarily mean, however, that low molecular weight heparin will prove superior to unfractionated heparin, warfarin, or graduated compression stockings for prophylaxis of hospitalized medical patients. The MEDENOX trial of enoxaparin prophylaxis in medical patients completed enrollment of approximately 1,100 subjects in July 1998.
Patients were randomized to one of three groups in a double-blind controlled trial: enoxaparin 20 mg once daily, enoxaparin 40 mg once daily, or placebo. The principal endpoint was the incidence of DVT as assessed by contrast venography on approximately day 10 of hospitalization. The results of this crucially important trial, which favored enoxaparin 40 mg once daily, will be presented at the August 1999 XVII Congress of the International Society on Thrombosis and Haemostasis. Also, the Veterans Affairs Cooperative Studies Program has organized a randomized trial to study the effect of low-dose heparin prophylaxis on mortality among hospitalized general medical patients.11 Results will be available in about 5 years. Intermittent pneumatic compression devices constitute an alternative, nonpharmacologic approach to prevent PE and DVT. Though effective, these devices require special care to ensure that they are worn as prescribed.12 Frequent removal and nonuse can be problematic, especially in patients outside of an intensive care unit. In addition to the mechanical effect of increasing venous blood flow in the legs, these devices appear to cause an increase in endogenous fibrinolysis, due to stimulation of the vascular endothelial wall.13-15 It is possible that for hospitalized medical patients, combined mechanical and pharmacologic prophylaxis will find a special niche. For example, in certain surgical subspecialties, combined prophylaxis modalities are routinely used. Urologists combine intermittent pneumatic compression boots and adjusted-dose warfarin following radical prostatectomy.16 Neurosurgeons employ compression boots plus fixed, low-dose heparin in craniotomy patients with malignancies.17 The medical intensive care unit setting remains one of the last frontiers where the culture of routine venous thromboembolism prophylaxis is not well developed. Prophylaxis should be part of the standard admission orders, just as H2-blockers or sucralfate are almost always ordered routinely to prevent stress ulcers. Intensive care unit patients pose special challenges when planning prophylaxis strategies. First, these patients are often bleeding overtly or are admitted with thrombocytopenia. Accordingly, heparin or warfarin is often contraindicated. Second, leg ulcers, wounds, or peripheral arterial occlusive disease will preclude the use of intermittent pneumatic compression devices. With these problems in mind, it is useful to examine the current state of prophylaxis among intensive care unit patients. In 1994, the Venous Thromboembolism Research Group at Brigham and Women's Hospital found that only one-third of consecutive patients admitted to the Medical Intensive Care Unit received prophylaxis against PE and DVT.18 In a subsequent survey of this population, one-third of patients developed DVT, and half of these were proximal leg DVTs. Overall, 56% received prophylaxis.19 Surprisingly, prophylaxis appeared to have little impact on DVT rates. The overall DVT rate in patients who had received either heparin or pneumatic compression prophylaxis was 34%, compared with 32% in patients who did not receive any prophylaxis.
This observation should be interpreted cautiously because these patients were not randomly allocated to prophylaxis. There is currently no consensus on optimal prophylaxis for medical intensive care unit patients.20 Two prior trials have failed to show the superiority of low molecular weight heparin compared with unfractionated low-dose heparin among hospitalized medical patients.21,22 These two trials may have administered subtherapeutic doses of LMWH. We have just completed a multicenter, randomized, controlled trial of heparin 5,000 U twice daily ("miniheparin") versus enoxaparin 30 mg twice daily among Medical Intensive Care Unit patients. This multicenter study has as its principal endpoint venous thrombosis proven by ultrasound examination. Almost 300 patients have been enrolled. We expect to present the results of this trial at the August 1999 XVII Congress of the International Society on Thrombosis and Haemostasis.
APA, Harvard, Vancouver, ISO, and other styles
40

Ocholla, Dennis N. "Review and Revision of Library and Information Science Curriculum in a South African University and the Usage of Follow-Up Study and Advertisement Scanning Methods." Proceedings of the Annual Conference of CAIS / Actes du congrès annuel de l'ACSI, October 15, 2013. http://dx.doi.org/10.29173/cais22.

Full text
Abstract:
Two methods are used to review and revise the Library and Information Science (LIS) curriculum at the University of Zululand, South Africa. Firstly, as an exercise in product analysis, a case study of the graduates of the University of Zululand between 1996 and 1999 was conducted. Graduates were traced to their current places of employment and interviewed together with their employers in order to determine whether the knowledge, skills and attitudes gained during training were adequate for their current job requirements. Secondly, a market-type analysis was conducted by scanning job advertisements in the field of library and information science appearing in a popular national weekly newspaper over a period of three years. Details regarding the date and location of the advertisement, type of employer, job details, and job specifications and requirements in terms of qualifications, experience, knowledge, skills and attitudes were captured from this source and analysed. Although the aforementioned two methods still enjoy popularity, they alone arguably do not provide an accurate picture of the demand-and-supply matrix that can enhance effective and beneficial LIS education for service and the employability of graduates. Evidently, the public sector, and in particular the public and academic libraries, dominates this specific segment of the employment market in South Africa. Sound education in the fields of management, information and communication technologies, and information searching, analysis and synthesis, as well as the ability to perform practical work, is regarded as essential. The two methods exploit techniques which play a crucial verification role and which effectively supplement other methods such as reviewing the existing curriculum and literature, consulting with colleagues, observing national and international trends, and the focus-group method for academic programme development. Other intervening variables in the study are discussed. The paper addresses issues that can benefit theoretical and methodological work in library and information science education and curriculum development.
APA, Harvard, Vancouver, ISO, and other styles
41

Iyengar, Raghavan J., and Malavika Sundararajan. "CEO pay sensitivities in innovative firms." Benchmarking: An International Journal ahead-of-print, ahead-of-print (February 16, 2021). http://dx.doi.org/10.1108/bij-09-2020-0491.

Full text
Abstract:
Purpose This study aims to investigate whether compensation committees provide chief executive officers (CEOs) with incentives to undertake "income-decreasing" but potentially "value-enhancing" innovation expenditures. The authors specifically analyze pay–performance relationships for innovative firms relative to all other firms. This study is critical because innovation is expensive and has uncertain outcomes. Design/methodology/approach Using alternative accounting performance measures and market performance measures, the authors estimate an econometric model of CEO compensation in innovative firms that incorporates the interaction of endogenous innovation and firm performance. Findings The authors document an incremental positive association between changes in accounting performance measures and CEO compensation changes in innovative firms relative to other firms. This sensitivity of executive pay to firm performance is higher for firms that innovate. These results support the hypothesis that compensation committees provide incentives to carry out risky innovation by tying executive compensation more closely to firm performance. This finding survives a battery of sensitivity tests. Practical implications The implications of this study are significant. The capital needed to support risky research and development investments (Tidd and Besant, 2018; Baldwin and Johnson, 1995) forms the basis of innovative firms' operations. Considering these expenses, if CEOs, who play a critical role in scanning for, adapting, and implementing innovative needs in a firm, are not protected and compensated for making risky choices, the entire investment itself will be threatened. Hence, the findings reiterate and support earlier findings that speak to the importance of compensating CEOs for making high-risk investments that will lead to long-term economic and financial gains for the firm when the innovative behaviors result in competitive market shares and profits. Originality/value The original work relates to the investigation of pay–performance sensitivity in the presence of innovation, which has not been fully investigated in prior literature.
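The interaction design described above (does pay–performance sensitivity rise for innovative firms?) boils down to a regression with an interaction term. A minimal sketch with statsmodels follows; the data file, column names, and clustering variable are hypothetical, not the authors' specification.

    # Pay-performance sensitivity with an innovation interaction term.
    # The coefficient on d_perf:innovator captures the incremental
    # sensitivity for innovative firms. All names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("ceo_panel.csv")  # hypothetical panel of firm-years

    model = smf.ols("d_log_pay ~ d_perf * innovator", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["firm_id"]})  # clustered SEs
    print(model.summary())  # focus: the d_perf:innovator coefficient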
APA, Harvard, Vancouver, ISO, and other styles
42

Rahikainen, Marjatta. "Zu alt für den Arbeitsmarkt. Arbeitslosenrenten als Rationalisierungsmaßnahme in Finnland in den 1980er Jahren in einer historischen Perspektive." Jahrbuch für Wirtschaftsgeschichte / Economic History Yearbook 49, no. 1 (January 1, 2008). http://dx.doi.org/10.1524/jbwg.2008.49.1.105.

Full text
Abstract:
This article argues that welfare schemes, which served to remove elderly workers from the labour force around the 1970s, foreshadowed the breakdown of the world of work in which permanent employment contracts and secure jobs until old-age retirement were the norm. As the global economy put heavier pressure on profitability and all sectors became subject to international shareholder scanning, the policy of 'lifelong jobs' began to give way to flexible labour throughout Europe. This change is discussed here using the empirical case of the Finnish unemployment pension schemes providing for the elderly labour force. The article asks how Finnish employers made use of unemployment pensions in the 1970s and 1980s in order to get rid of older employees. A large Finnish corporation, in particular its female labour force, is discussed in detail. Based on the working careers of about 100 women (manual workers and salaried employees), this article examines who kept their jobs and who were dismissed and pensioned off on unemployment pensions. The examples of women who succeeded in keeping their jobs at the time of mass dismissals imply that careers (internal labour market) and occupational sociability (social capital) may be as valuable for women as they are for men.
APA, Harvard, Vancouver, ISO, and other styles
43

Söilen, Klaus Solberg. "The impasse of competitive intelligence today is not a failure. A special issue for papers at the ICI 2020 Conference." Journal of Intelligence Studies in Business 10, no. 2 (June 30, 2020). http://dx.doi.org/10.37380/jisib.v10i2.579.

Full text
Abstract:
… seven military classics (Jiang Ziya, the methods of the Sima, Sun Tzu, Wu Qi, Wei Liaozi, the three strategies of Huang Shigong and the Questions and Replies between Tang Taizong and Li Weigong). The entities studied then were nation states. Later, corporations often became just as powerful as states and their leaders demanded similar strategic thinking. Many of the ideas came initially from geopolitics as developed in the 19th century, and later, with the spread of multinational companies at the end of the 20th century, with geoeconomics.

What is unique for intelligence studies is the focus on information — not primarily geography or natural resources — as a source for competitive advantage. Ideas of strategy and information developed into social intelligence with Stevan Dedijer in the 1960s and became the title of a course he gave at the University of Lund in the 1970s. In the US this direction came to be known as business intelligence. At a fast pace we then saw the introduction of corporate intelligence, strategic intelligence and competitive intelligence. Inspired by the writings of Michael Porter on strategy, as related to the notion of competitive advantage, the field of competitive intelligence produced a considerable body of articles and books in the 1980s and 1990s. This was primarily in the US, but interest spread to Europe and other parts of the world, much due to the advocacy of the Society of Competitive Intelligence Professionals (SCIP). In France there was a parallel development with "intelligence économique", "veille" and "guerre économique", in Germany with "Wettbewerbserkundung" and in Sweden with "omvärldsanalys", just to give some examples.

On the technological side, things were changing even faster, not only with computers but also software. Oracle Corporation landed a big contract with the CIA and showed how data analysis could be done efficiently. From then on, the software side of the development gained most of the interest from companies. Business intelligence was sometimes treated as enterprise resource planning (ERP), customer relations management (CRM) and supply chain management (SCM). Competitive intelligence was associated primarily with the management side of things as we entered the new millennium. Market intelligence became a more popular term during the first decade, knowledge management developed into its own field, financial intelligence became a specialty linked to the detection of fraud and crime, primarily in banks, and during the last decade we have seen a renewed interest in planning, in the form of future studies, or futurology, and foresight, but also environmental scanning. With the development of Big Data, data mining and artificial intelligence there is now a strong interest in collective intelligence, which is about how to make better decisions together. Collective intelligence and foresight were the main topics of the ICI 2020 conference. All articles published in this issue are from presentations at that conference.

The common denominator for the theoretical development described above is the Information Age, which is about one's ability to analyze large amounts of data with the help of computers. What is driving the development is first of all technical innovations in computer science (both hardware and software), while the management side is more concerned with questions about implementation and use. Management disciplines that did not follow up on new technical developments but defined themselves separately or independently from these transformations have become irrelevant.

Survival as a discipline is all about being relevant. It's the journey of all theory, and of all sciences, to go from "funeral to funeral", to borrow an often-used phrase: ideas are developed and tested against reality. Adjustments are made and new ideas developed based on the critique. It's the way we create knowledge and achieve progress. It's never a straight line but can be seen as a large number of trials and solutions to problems that change in shape, a process that never promises to be done, but is ever-changing, much like the human evolution we are a part of. This is also the development of the discipline of intelligence studies and, on a more basic level, of market research, which is about how to gather information and data to gain a competitive advantage.

Today intelligence studies and technology live in a true symbiosis, just like the disciplines of marketing and digital marketing. This means that it is no longer meaningful to study management practices alone while ignoring developments in hardware and software. The competitive intelligence (CI) field is one such discipline, to the extent that we can say that CI now is a chapter in the history of management thought, dated to around 1980-2010, equivalent to a generation. It is not so that it will disappear, but more likely be phased out. Some of the methods developed under its direction will continue to be used in other disciplines. Most of the ideas labeled as CI were never exclusive to CI in the first place, but borrowed from other disciplines. They were also copied in other disciplines, which is common practice in all management disciplines. Looking at everything that has been done under the CI label, the legacy of CI is considerable.

New directions will appear that better fit current business practices. Many of these will seem similar in content to previous contributions, but there will also be elements that are new. To be sure new suggestions are not mere buzzwords, we have to ask critical questions like: how is this discipline defined and how is it different from existing disciplines? It is the meaning that should interest us, not the labels we put on them. Unlike consultants, academics and researchers have a real obligation to bring clarity and order to the myriad of ideas.

The articles in this issue are no exception. They are on collective intelligence, decision making, Big Data, knowledge management and, above all, the software used to facilitate these processes. The first article, by Teubert, is entitled "Thinking methods as a lever to develop collective intelligence". It presents a methodology and framework for the use of thinking methods as a lever to develop collective intelligence. The article by Calof and Sewdass is entitled "On the relationship between competitive intelligence and innovation". The authors found that of the 95 competitive intelligence measures used in the study, 59% were significantly correlated with the study's measure of innovation. The third article is entitled "Atman: Intelligent information gap detection for learning organizations: First steps toward computational collective intelligence for decision making" and is written by Grèzes, Bonazzi, and Cimmino. The research project shows how companies can constantly adapt to their environment, how they can integrate a learning process in relation to what is happening and become a "learning company". The next article, by Calof and Viviers, entitled "Big data analytics and international market selection: An exploratory study", develops a multi-phase, big-data analytics model for how companies can perform international market selection. The last article, by Vegas Fernandez, entitled "Intelligent information extraction from scholarly document databases", presents a method that takes advantage of free desktop tools that are commonplace to perform systematic literature reviews, to retrieve, filter, and organize results, and to extract information to transform it into knowledge. The conceptual basis is a semantics-oriented concept definition and a relative importance index to measure concept relevance in the literature studied.

As always, we would above all like to thank the authors for their contributions to this issue of JISIB. Thanks to Dr. Allison Perrigo for reviewing English grammar and helping with layout design for all articles. Have a safe summer! On behalf of the Editorial Board,
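The Calof and Sewdass result quoted above is, at bottom, a correlation screen: each of many candidate intelligence measures is tested against one innovation measure and the significant ones are counted. The sketch below runs that kind of screen on simulated data; the Pearson test, the 0.05 threshold, and the data itself are assumptions for illustration, since the editorial does not describe the authors' actual procedure.

```python
# Illustrative screening of many CI measures against one innovation
# measure. Simulated data; Pearson correlation at p < 0.05 is an
# assumption, not necessarily the test Calof and Sewdass used.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_firms, n_measures = 120, 95

innovation = rng.normal(size=n_firms)
# Make roughly half the candidate measures genuinely related to innovation.
loadings = np.where(rng.random(n_measures) < 0.6, 0.5, 0.0)
ci_measures = (np.outer(innovation, loadings)
               + rng.normal(size=(n_firms, n_measures)))

# Count how many measures correlate significantly with innovation.
significant = sum(
    pearsonr(ci_measures[:, j], innovation)[1] < 0.05
    for j in range(n_measures)
)
print(f"{significant}/{n_measures} measures "
      f"({significant / n_measures:.0%}) significantly correlated")
```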
APA, Harvard, Vancouver, ISO, and other styles
44

Sacchini, Dario, and Ignacio Carrasco de Paula. "Alcune questioni etico-deontologiche nella Medicina di laboratorio." Medicina e Morale 57, no. 5 (October 30, 2008). http://dx.doi.org/10.4081/mem.2008.268.

Full text
Abstract:
This paper examines the scarce literature on the ethical foundations of work in the biomedical laboratory (BML), written mostly by laboratory specialists. The survey reveals the following main positions: 1. A procedural-ethics approach, which locates the ethics of laboratory medicine both in methodological scruple and transparency and in the optimization of relational dynamics, within the laboratory and between laboratory specialists and clinicians. This is, in a certain sense, an "intrinsic" ethics, concentrated predominantly on the material object of the activity. 2. A Jonasian perspective based on the ethics of responsibility, resting on two pillars: "awareness" of the consequences that examination requests carry for the clinical chemist, and verification of the intentionality of the operators. 3. Virtue ethics, which posits the virtuous habits of the laboratory operator as the condition for achieving the ultimate end of biomedical laboratory work: the patient's well-being. 4. Principlism, the well-known bioethical model, proposed at a global level by a working group of the International Federation of Clinical Chemistry (IFCC) as a basis of discussion for the national specialist societies. 5. Finally, a person-centred ethics, in which the anthropological reference accompanies and integrates the necessary technical tasks of the laboratory specialist. The article ends with a critical examination of the different positions found in the literature.
APA, Harvard, Vancouver, ISO, and other styles
45

Kelly, Michelle. "Eminent Library Figures." M/C Journal 8, no. 4 (August 1, 2005). http://dx.doi.org/10.5204/mcj.2396.

Full text
Abstract:
“K29.” One day it will be me (oh please let it be so). When I’m K29, it will mean that my book is on the shelf of a library which has a collection large enough to employ the Cutter-Sanborn Three Figure Author Table so that it might translate “Kelly” to code. K29 grates a little, sure—I’d prefer the visually softer, assonantal, sonorous J88 for Joy, or the zippiness of Laâbi’s L111—but that’s just a personal preference. K29, J88, L111: divested of their link to authors’ surnames, it can be argued that Cutter-Sanborn numbers have a particular relationship to the practice of “scanning” as a mode of reading. These numbers are available to two types of scanning (in fact, they are perhaps available only to scanning and not “reading”). On a superficial level, they promote the scan which is purely pragmatic: the brief glimpse or glance, a looking which does not know or care what the number represents. Or they may be subject to the analytical scan which is an act of scrutiny, or interrogation. That is to say, while the Cutter-Sanborn number is open to decipherment, it is constitutionally affective (“sonorous”, “zippy”) and effective (as a library tool) for everyone, even those disinterested in its deeper codified meaning. This essay considers what a superficial scan of the Cutter-Sanborn number could signify for all who encounter it, and offers an idiosyncratic account of the possibilities of deeper, scrutinising signification, in particular its ramifications for the author it contracts. The author number is the heart of the book number, and the Cutter-Sanborn number is a particular type—indeed a paradigm—of the author number. It is used especially by libraries employing Dewey Decimal Classification (Lehnus 76). The book number is designed to sub-arrange books which share the same classification number, and is thus formed by those letters and figures which follow the classification number. Abdellatif Laâbi’s L’arbre de fer fleurit, for example, is represented by the call number 848.9964 L111 E 1 at the University of Sydney Library: 848.9964 is a subdivision within the Dewey class of 848 for French miscellaneous writings; L111 E 1 is the book number, broadly conceived. Accordingly, the overall call number structure is worldly, then parochial. Book numbers thus create and express the singularity of books within an institution which, through classification, create and range a community of books. Book numbers are assigned on the basis of the library’s extant collection: new acquisitions are inserted around those numbers already bestowed. Lisa Zhao writes “We have to accept the shelflist (sic.) we have” (116), and thus numbers may vary for the same books at different libraries. Book numbers, it may be seen, are designations of philosophical, textual, and bibliographic consequence. The Cutter-Sanborn number is derived from a table that numerates letter combinations in order to maintain an alphabetical arrangement on the shelves. Charles Cutter printed the first of several versions of his author number scheme in 1880; Kate Emery Sanborn later revised it to produce the Table’s most popular edition (Lehnus 18, 37-42). The Cutter-Sanborn number’s familiar contemporary form is a first initial followed by two, three, or more digits. No matter what a patron knows about the Cutter-Sanborn number, it will be impossible to miss the number’s recurring formal feature of lopsidedness. The mnemonic initial is consistently overpowered by a splatter of integers. Numbers appear as the furthered refinement. 
The single letter becomes almost incidental—a blunted, rudimentary, and superseded signifier—against a run of figures which seem more attenuating, demanding, or sophisticated. The Cutter-Sanborn number seems to suggest that the numbers enhance the letters, but it is an enhancement which denies the patron easy intelligibility. It substitutes a number for a name it still hints at with a first initial, and the precision of this former device creates a designation that looks like a measure of the book. This conception is facilitated by the everyday scanning eye undertaking a traversing kind of interpretation, not a probing one. Why should the critic probe any deeper than this: why disturb the Cutter-Sanborn number beyond remarking on its simple utility and its affective scientism? Because of the Cutter-Sanborn number’s own pretensions. Conceived by Charles Cutter, the Cutter number was instrumental in the book number’s task of ensuring that “every volume has its own mark, shared with no other volume, its proper name, by which it is absolutely identified” (quoted in Lehnus, 9). The discourse surrounding the genesis of Cutter numbers was thus one of radical individuality. In spite of not being easily legible, the Cutter number hoped to be a kind of translation: Melvil Dewey, for instance, claimed that author numbers “are significant like our class numbers, and translate themselves into the name” (quoted in Lehnus, 27). The Cutter number is historically implicated by its optimistic aspirations of absolute identification, translation, and comprehensibility. This optimism has served it well—a Library Journal editorial blithely suggested that a new innovation “may be the best idea since Cutter numbers” (Berry III, 96)—but it has also obscured investigation of the way in which the Cutter-Sanborn number functions by presupposing its own adequacy. ‘Cuttered’, the author mark holds that said author may be satisfactorily equated with their name, which may be satisfactorily equated with a number. The author has their proper name converted for and contributed to another “proper name” (Cutter’s exact words), that of the volume. This latter proper name is claimed to be superior: “more exact,” suggests Dewey, “than a full written title, as it specifies the identical copy” (Dewey, 296). It is a proper name, then, which is motivated by a blinkered allegiance to the limitable unit and presence of the book. Jacques Derrida, in explaining the replacement of the proper name of a particular author with the designation “Sarl”—an acronym of Société à responsabilité limitée (Society with Limited Responsibility), bestowed so as to acknowledge all the named and unnamed signatures bearing upon the article under question—declares “I hope that the bearers of proper names will not be wounded by this technical or scientific device” (36). I would like to suggest that this is a sentiment that may also be applicable for book authors whose names have been “translated” into Cutter numbers, albeit that the library is more insouciant in expressing any repentance for its actions. The Cutter number format accounts for the book in particular standardising ways, which authors’ names have connotative apparatus (biography, contingency, etymology) to prevent. Derrida recognises his renaming may affront the author, but does not try in any way to mitigate this indignity. He does no more than express the hope that if he did in fact wound the author, that this wasn’t the case. 
The corollary of this position is that any injury is worthwhile, or has been compensated for elsewhere. The author number’s result is nothing less than an expression of confidence in the viability of transacting a human proper name. A “transaction” concludes something: that something would be concluded was inevitable from the moment that Cutter’s words “by which [the volume] is absolutely identified” established the book number’s precept of satisfaction. The Cutter-Sanborn number concludes a care for human susceptibility: the wound Derrida excises is an ego celebrated in paragraph one and now (I wish to say fully) relinquished. In these very particular book number places—on the shelf-marker, on the spine, and on the sticker—a reduced human authority is proposed. The Cutter-Sanborn number is a text with the express purpose to create an author who has limited ability to claim, and limited ability to connote. In the Cutter-Sanborn number, the book’s author is only just present. They may be able to be traced, but I would like to suggest that in the Cutter number the author is presented without spoil (that is, presented without the rot or reward attendant upon the contingencies and connotations of a human proper name). Consider, furthermore, the genesis of individual Cutter-Sanborn numbers themselves. Any Cutter-Sanborn number has Cutter and Sanborn as ur-authors, but individual authors—working in libraries everywhere—have no means of claiming the number they allocate as their own. The Cutter-Sanborn number simultaneously proposes reduced individual authority and enacts reduced individual authority. The Cutter-Sanborn number is thus available for use by critical textual practices sincerely and self-reflexively, both as an alternative authorial designation (traceable, connotative but standardising, international but relative), and as a model in the task of re-imagining authorship. There is, however, a complicating factor. The Cutter-Sanborn number has proven bibliographically mobile. Its form of an initial followed by digits has been adapted to denote not only authors but titles, topics, subjects, place names, and even publication dates. For example, in the call number of a book entitled Power Sales Presentations: Complete Sales Dialogues for Each Critical Step of the Sales Cycle, a Cutter number P74 stands for the topic “Presentations” (O’Neill). The Cutter-Sanborn number format assimilates book features, it is slippery. In these assorted adaptations, the Cutter-Sanborn number manifests bibliographic features indiscriminately. However incomprehensible the number may appear at each individual occurrence, as a fabrication it does indeed always broadcast various measures of the book. The author’s proper name is thus potentially reduced to just one factor among many: other factors may be given equal leverage. (It is only now that the full consequence of the Cutter-Sanborn number’s sophistication is becoming evident: for devotees of these factors, in particular the author, its totalising representation veers towards sophistry.) A single initial followed by a splatter of integers, which could refer to any bibliographic thing? The Cutter-Sanborn number is an agitator: imprecise in its target, but utterly confident in the genius of its own designative force. The Cutter-Sanborn number does not encourage the scanning, probing eye to look closely, but upon investigation one can discern its paradoxical attempt to challenge author authority while trying to cement its own. 
Subject to two different types of scanning eye, the Cutter-Sanborn number and its wider contextual environment of the book number destabilise and reconfigure ideas of authorship, simultaneously reducing and promoting it. These doubly scannable codes—these eminent library figures—have implications for the reading of books themselves. In textualising and deprioritising the author, in varying according to location, and in mitigating the grand narratives of classification, the book number has a stake in postmodern expression. And so this essay has been cautionary: it is wary of claiming or promoting book number literacy because of these very evidences of decentralisation. But this relativity is not a problem, as the book number is a thing so saturated in code that a degree of unintelligibility is in fact integral to its message. Unintelligibility need not be white noise. The book number is available to be read impressionistically—that is, available to be read in a manner somewhere between the two paradigmatic scanning cases of those indifferent and those intrigued. A fiction book from a scholarly archive stamped and stickered 853.91 C168 J8 T 1—the example is Italo Calvino's If on a Winter's Night a Traveller—is a different text to the version marked F-CAL from a local library. The first example's complex denotation and brute extent does not so well accommodate the accessible and leisured reading suggested by the second. Calvino from the local is on my time, and its direct address—F-CAL, Fiction: Calvino—is integral in facilitating this. This observation reveals that book number analysis cannot be trusted for any reason, other than that of the Cutter-Sanborn number's refusal to coalesce adequately across libraries and submit to investigation. Book number analysis is suspect too because, in explaining parts of the book number's code, analysis pollutes the same experience's affective value. The loss is significant, as innocence or ignorance is not easily regained. It is ironic that this essay—itself a measured study—must in the final analysis refuse the polarity of the two modes of scan initially posited as exemplary for encountering book numbers (the unaffected glance; the probing need to intuit and ramify), in order to reinstitute and advocate a mode of experience that the book number, within its stipulated self, excludes: susceptibility, a mere responsiveness to presence.

References

Berry III, John N. "Certification: Is It Worth the Price?" Editorial. Library Journal 15 Feb. 2001: 96. Cutter-Sanborn Three-Figure Author Table: Swanson-Swift Revision, 1969. Chicopee, Ma: H. R. Huntting, 1969. Derrida, Jacques. "Limited Inc a b c…" Trans. Samuel Weber. Glyph 2 (1977). Rpt. in Limited Inc. By Derrida. Evanston, Il: Northwestern UP, 1988. Dewey, Melvil. "Eclectic Book-Numbers." Library Journal 11 (1886): 296-301. Laâbi, Abdellatif. L'arbre de fer fleurit: Poémes (1972). Paris: Oswald, 1974. Lehnus, Donald J. Book Numbers: History, Principles, and Application. Chicago: ALA, 1980. O'Neill, Edward T. "Cuttering for the Library of Congress Classification." Annual Review of OCLC Research 1994. 1 Jul. 2005. http://digitalarchive.oclc.org/da/ViewObject.jsp?fileid=0000002650:000000058648&reqid=701. Zhao, Lisa. "Save Space for 'Newcomers' – Analyzing Problems in Book Number Assignment under the LCC System." Cataloging & Classification Quarterly 38.1 (2004): 105-19.
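For readers curious about the mechanics behind codes like K29 and L111: a Cutter-Sanborn lookup chooses the table entry that alphabetically precedes or matches the surname and appends its digits to the initial. The sketch below implements that rule against a tiny invented excerpt of the table; apart from the Kelly/K29 and Laâbi/L111 pairings the essay itself cites, every entry and value here is an illustrative assumption, not the Swanson-Swift revision.

```python
# Toy Cutter-Sanborn lookup: take the rightmost table entry that does not
# sort after the surname, then combine the surname's initial with that
# entry's digits. Table excerpt is invented for illustration, except for
# the Kelly -> K29 and Laabi -> L111 pairings quoted in the essay.
import bisect

TABLE = [  # (name prefix, digits), kept in alphabetical order
    ("Kell", "29"),
    ("Kelm", "31"),
    ("Kems", "32"),
    ("Laab", "111"),
    ("Lab",  "112"),
]

def cutter_number(surname: str) -> str:
    key = surname.capitalize()  # ASCII form assumed, e.g. "Laabi"
    prefixes = [p for p, _ in TABLE]
    # Rightmost entry that alphabetically precedes or equals the surname.
    i = bisect.bisect_right(prefixes, key) - 1
    if i < 0:
        raise ValueError(f"{surname!r} sorts before the table excerpt")
    return key[0] + TABLE[i][1]

print(cutter_number("Kelly"))  # K29
print(cutter_number("Laabi"))  # L111
```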
APA, Harvard, Vancouver, ISO, and other styles
46

Thomas, Peter. "Anywhere But the Home: The Promiscuous Afterlife of Super 8." M/C Journal 12, no. 3 (July 15, 2009). http://dx.doi.org/10.5204/mcj.164.

Full text
Abstract:
Consumer or home use (previously 'amateur') moving image formats are distinguished from professional (still known as 'professional') ones by relative affordability, ubiquity and simplicity of use. Since Pathé Frères released its Pathé Baby camera, projector and 9.5mm film gauge in 1922, a distinct line of viewing and making equipment has been successfully marketed at nonprofessional use, especially in the home. 'Amateur film' is a simple term for a complex, variegated and longstanding set of activities. Conceptually it is bounded only by the negative definition of nonprofessional (usually intended as sub-professional), and the positive definition of being for the love of the activity and motivated by personal passion alone. This defines a field broad enough that two major historians of US amateur film, Patricia R. Zimmermann and Alan D. Kattelle, write about different subjects. Zimmermann focuses chiefly on domestic use and 'how-to' literature, while Kattelle unearths the collective practices and institutional structure of the Amateur Ciné Clubs and the Amateur Ciné League (Zimmermann, Reel Families, Professional; Kattelle, Home Movies, Amateur Ciné). Marion Norris Gleason, a test subject in Eastman Kodak's development of 16mm and advocate of amateur film, defined it as having three parts: the home movie, "the photoplay produced by organised groups", and the experimental film (Swanson 132). This view was current at least until the 1960s, when domestic documentation, Amateur Ciné clubs and experimental filmmakers shared the same film gauges and space in the same amateur film magazines, but paths have diverged somewhat since then. Domestic documentation remains committed to the moving image technology du jour, the Amateur Ciné movement is much reduced, and experimental film has developed a separate identity, its own institutional structure, and won some legitimacy in the art world. The trajectory of Super 8, a late-coming gauge to amateur film, has been defined precisely by this disintegration. Obsolescence was manufactured far more slowly during the long reign of amateur film gauges, allowing 9.5mm (1922-66), 16mm (1923-), 8mm (1932-), and Super 8 (1965-) to engage in protracted format wars significantly longer than the life spans of their analogue and digital video successors. The range of options available to nonprofessional makers – the quality but relative expense of 16mm, the near 16mm frame size of 9.5mm, the superior stability of 8mm compared to 9.5mm and Super 8, the size of Super 8's picture relative to 8mm's – is not surprising in the context of general competition for a diverse popular market on the usual basis of price, quality, and novelty. However, since analogue video's ascent the amateur film gauges have all comprehensively lost the battle for the home use market. This was by far the largest section of amateur film and the manufacturers' overt target segment, so the amateur film gauges' contemporary survival and significance is as something else. Though all the gauges from 8mm to 16mm remain available today to the curious and enthusiastic, Super 8's afterlife is distinguished by the peculiar combination of having been a tremendously popular substandard to the substandard (i.e., to 16mm, the standardised film gauge directly below 35mm in both price and quality), and now being prized for its technological excellence.
When the large scale consumption that had supported Super 8's manufacture dropped away, it revealed the set of much smaller, apparently non-transferable uses that would determine whether and as what Super 8 survived. Consequently, though Super 8 has been superseded many times over as a home movie format, it is not obsolete today as an art medium, a professional format used in the commercial industry, or as an alternative to digital video and 16mm for low budget independent production. In other words, everything it was never intended to be. I lately witnessed an occasion of the kind of high-fetishism for film-versus-video and analogue-versus-digital that the experimental moving image world is justifiably famed for. Discussion around the screening of Peter Tscherkassky's films at the Xperimenta '09 festival raised the specifics and availability of the technology he relies on, both because of the peculiarity of his production method – found-footage collaging onto black and white 35mm stock via handheld light pen – and the issue of projection. Has digital technology supplied an alternative workflow? Would 35mm stock to work on (and prints to pillage) continue to be available? Is the availability of 35mm projectors in major venues holding up? Although this insider view of 35mm's waning market share was more a performance of technological cultural politics than an analysis of it, it raised a series of issues central to any such analysis. Each film format is a gestalt item, consisting of four parts (that an individual might own): film stock, camera, projector and editor. Along with the availability of processing services, these items comprise a gauge's viability (notwithstanding the existence of camera-less and unedited workflows, and numerous folk developing methods). All these are needed to conjure the geist of the machine at full strength. More importantly, the discussion highlights what happens when such a technology collides with idiosyncratic and unintended use, which happens only because it is manufactured on a much wider scale than eccentric use alone can support. Although nostalgia often plays a role in the advocacy of obsolete technology, its role here should be carefully qualified and not overstated. If it plays a role in the three main economies that support contemporary Super 8, it need not be the same role. Further, even though it is now chiefly the same specialist shops and technicians that supply and service 9.5mm, 8mm, Super 8, and 16mm, they are not sold on the same scale nor to the same purpose. There have been no reported renaissances of 9.5mm or 8mm, though, as long term home movie formats, they must loom large in the memories of many, and their particular look evokes pastness as surely as any two-colour process. There are some specifics to the trajectory of Super 8 as a non-amateur format that cannot simply be subsumed to general nostalgia or dead technology fetishism.

Super 8 as an Art Medium

Super 8 has a longer history as an art medium than as a pro-tool or low budget substandard. One key aspect in the invention and supply of amateur film was that it not be an adequate substitute for the professional technology used to populate the media sphere proper.
Thus the price of access to motion picture making through amateur gauges has been a marginalisation of the outcome for format reasons alone (Zimmermann, Professional 24; Reekie 110). Eastman Kodak established their 16mm as the acceptable substandard for many non-theatrical uses of film in the 1920s, Pathé's earlier 28mm having already had some success in this area (Mebold and Tepperman 137, 148-9). But 16mm was still relatively expensive for the home market, and when Kiyooka Eiichi filmed his drive across the US in 1927, his 16mm camera alone cost more than his car (Ruoff 240, 243). Against this, 9.5mm, 8mm and eventually Super 8 were the increasingly affordable substandards to the substandard, marginalised twice over in the commercial world, but far more popular in the consumer market. The 1960s underground film, and the modern artists' film that was partly recuperated from it, was overwhelmingly based on 16mm, as the collections of its chief distributors, the New York Film-Makers' Co-op, Canyon Cinema and the Lux clearly show. In the context of experimental film's longstanding commitment to 16mm, an artist filmmaker's choice to work with Super 8 had important resonances. Experimental work on 8mm and Super 8 is not hard to come by, even from the 1960s, but consider the cultural stakes of Jonas Mekas's description of 8mm films as "beautiful folk art, like song and lyric poetry, that was created by the people" (Mekas 83). The evocation of 'folk art' signals a yawning gap between 8mm, whose richness has been produced collectively by a large and anonymous group, and the work produced by individual artists such as those (like Mekas himself) who founded the New American Cinema Group. The resonance for artists of the 1960s and 1970s who worked with 8mm and Super 8 was from their status as the premier vulgar film gauge, compounding-through-repetition their choice to work with film at all. By the time Super 8 was declared 'dead' in 1980, numerous works by canonical artists had been made in the format (Stan Brakhage, Derek Jarman, Carolee Schneemann, Anthony McCall), and various practices had evolved around the specific possibilities of this emulsion and that camera. The camcorder not only displaced Super 8 as the simplest to use, most ubiquitous and cheapest moving image format, at the same time it changed the hierarchy of moving image formats because Super 8 was now incontestably better than something. Further, beyond the ubiquity, simplicity and size, camcorder video and Super 8 film had little in common. Camcorder replay took advantage of the ubiquity of television, but to this day video projection remains a relatively expensive business and for some time after 1980 the projectors were rare and of undistinguished quality. Until the more recent emergence of large format television (also relatively expensive), projection was necessary to screen to anything beyond a very small audience. So, considering the gestalt aspect of these technologies and their functions, camcorders could replace Super 8 only for the capture of home movies and small-scale domestic replay. Super 8 maintained its position as the cheapest way into filmmaking for at least 20 years after its 'death', but lost its position as the premier 'folk' moving image format. It remained a key format for experimental film through the 1990s, but with constant competition from evolving analogue and digital video, and improved and more affordable video projection, its market share diminished.
Kodak has continued to assert the viability of its film stocks and gauges, but across 2005-06 it deleted its Kodachrome Super 8, 16mm and slide range (Kodak, Kodachrome). This became a newsworthy Super 8 story (see Morgan; NYT; Hodgkinson; Radio 4) because Super 8 was the first deletion announced, this was very close to 8 May 2005, which was Global Super 8 Day, Kodachrome 40 (K40) was Super 8's most famous and still used stock, and because 2005 was Super 8's 40th birthday. Kodachrome was then the most long-lived colour process still available, but there were only two labs left in the world which could supply processing: Kodak's Lausanne Kodachrome lab in Switzerland, using the authentic company method, and Dwayne's Photo in the US, using a tolerable but substandard process (Hodgkinson). Kodak launched a replacement stock simultaneously, and indeed the variety of Super 8 stocks is increasing year to year, partly because of new Kodak releases and partly because other companies split Kodak's 16mm and 35mm stock for use as Super 8 (Allen; Muldowney; Pro8mm; Dager). Nonetheless, the cancelling of K40 convulsed the artists' film community, and a spirited defence of its unique and excellent properties was led by artist and activist Pip Chodorov. Chodorov met with a Kodak executive at the Cannes Film Festival, appealed to the French Government and started an online petition. His campaign circular read: EXPLAIN THE ADVANTAGES OF K40 We have to show why we care specifically about Kodachrome and why Ektachrome is not a replacement. Kodachrome […] whose fine grain and warm colors […] are often used as a benchmark of quality for other stocks. The unique qualities of the Kodachrome image should be pointed out, and especially the differences between Kodachrome and Ektachrome […]. What great films were shot in Kodachrome, and why? […] What are the advantages to the K-14 process and the Lausanne laboratory? Is K40 a more stable stock, is it more preservable, do the colors fade resistant? Point out differences in the sensitometry curves, the grain structure... There was a rash of protest screenings, including a special all-day programme at Le Festival des Cinemas Différents de Paris, about which Raphaël Bassan wrote: This initiative was justified, Kodak having announced in 2005 that it was going to stop the manufacturing of the ultra-sensitive film Kodachrome 40, which allowed such recognized artists as Gérard Courant, Joseph Morder, Stéphane Marti and a whole new generation of filmmakers to express themselves through this supple and inexpensive format with such a particular texture. (Bassan) The distance Super 8 has travelled culturally since analogue video can be seen in the distance between these statements of excellence and the attributes of Super 8 and 8mm that appealed to earlier artists: "The great thing about Super 8 is that you can switch it onto automatic and get beyond all those technicalities" (Jarman). "An 8mm camera is the ballpoint of the visual world. Soon […] people will use camera-pens as casually as they jot memos today […] and the narrow gauge can make finished works of art." (Durgnat 30) Far from the traits that defined it as an amateur gauge, Super 8 is now lionised in terms more resembling a chemistry historian's eulogy to the pigments used in Dark Ages illuminated manuscripts. From bic to lapis lazuli.

Indie and Pro Super 8

Historian of the US amateur film Patricia R.
Zimmermann has charted the long collision between small gauge film, domesticity and the various 'how-to' publications designed to bridge the gap. In this she pays particular attention to the 'how-to' publications' drive to assert the commercial feature film as the only model worthy of emulation (Professional 267; Reel xii). This drive continues today in numerous magazines and books addressing the consumer and pro-sumer levels. Alan D. Kattelle has charted a different history of the US amateur film, concentrating on the cine clubs and their national organisation, the Amateur Cine League (ACL), competitive events and distribution, a somewhat less domestic part of the movement which aimed less at family documentation and more toward 'photo-plays', travelogues and instructionals. Just as interested in achieving professional results with amateur means, the ACL encouraged excellence and some of their filmmakers received commissions to make more widely seen films (Kattelle, Amateur 242). The ACL's Ten Best competition still exists as The American International Film and Video Festival (Kattelle, Amateur 242), but its remit has changed from being "a showcase for amateur films" to being open "to all non-commercial films regardless of the status of the film makers" (AMPS). This points to both the relative marginalisation of the mid-century notion of the amateur, and that successful professionals and others working in the penumbra of independent production surrounding the industry proper are now important contributors to the festival. Both these groups are the economically important contemporary users of Super 8, but they use it in different ways. Low budget productions use it as a cheap alternative to larger gauges or HD digital video and a better capture format than dv, while professional productions use it as a lo-fi format precisely for its degradation and archaic home movie look (Allen; Polisin). Pro8mm is a key innovator, service provider and advocate of Super 8 as an industry standard tool, and is an important and long serving agent in what should be seen as the normalisation of Super 8 – a process of redressing its pariah status as a cheap substandard to the substandard, while progressively erasing the special qualities of Super 8 that underlay this. The company started as Super8 Sound, innovating a sync-sound system in 1971, prior to the release of Kodak's magnetic stripe sound Super 8 in 1973. Kodak's Super 8 sound film was discontinued in 1997, and in 2005 Pro8mm produced the Max8 format by altering camera front ends to shoot onto the unused stripe space, producing a better quality image for widescreen. In between they started cutting professional 35mm stocks for Super 8 cameras and are currently investing in ever more high-quality HD film scanners (Allen; Pro8mm). Simultaneous to this, Kodak has brought out a series of stocks for Super 8, and more have been cut down for Super 8 by third parties, that offer a wider range of light responses or ever finer grain structure, thus progressively removing the limitations and visible artefacts associated with the format (Allen; Muldowney; Perkins; Kodak, Motion). These film stocks are designed to be captured to digital video as a normal part of their processing, and then entered into the contemporary digital work flow, leaving little or no indication of their origins on a format designed to be the 1960s equivalent of the Box Brownie.
However, while Super 8 has been used by financially robust companies to produce full-length programmes, its role at the top end of production is more usually as home movie footage and/or to evoke pastness. When service provider and advocate OnSuper8 interviewed professional cinematographer James Chressanthis, he asserted that "if there is a problem with Super 8 it is that it can look too good!" and spent much of the interview explaining how a particular combination of stocks, low shutter speeds and digital conversion could reproduce the traditional degraded look and avoid "looking like a completely transparent professional medium" (Perkins). In his history of the British amateur movement, Duncan Reekie deals with this distinction between the professional and amateur moving image, defining the professional as having a drive towards clarity [that] eventually produced [what] we could term 'hyper-lucidity', a form of cinematography which idealises the perception of the human eye: deep focus, increased colour saturation, digital effects and so on. (108) Against this, the amateur is distinguished by a visible cinematic surface, where the screen image does not seem natural or fluent but is composed of photographic grain which in 8mm appears to vibrate and weave. Since the amateur often worked with only one reversal print the final film would also often become scratched and dirty. (108-9) As Super 8's function has moved away from the home movie, so its look has adjusted to the new role. Kodak's replacement for K40 was finer grained (Kodak, Kodak), designed for a life as good-to-high-quality digital video rather than a film strip, and so for video replay rather than a small gauge projector. In the economy that supports Super 8's survival, its cameras and film stock have become part of a different gestalt. Continued use is still justified by appeals to geist, but the geist of film in a general and abstract way, not specific to Super 8 and more closely resembling the industry-centric view of film propounded by decades of 'how-to' guides. Activity that originally supported Super 8 continues, and currently has embraced the ubiquitous and extremely substandard cameras embedded in mobile phones and still cameras for home movies and social documentation. As Super 8 has moved to a new cultural position it has shed its most recognisable trait, the visible surface of grain and scratches, and it is that which has become obsolete, discontinued and the focus of nostalgia, along with the sound of a film projector (which you can get to go with films transferred to dvd). So it will be left to artist filmmaker Peter Tscherkassky, talking in 1995 about what Super 8 was to him in the 1980s, to evoke what there is to miss about Super 8 today. Unlike any other format, Super-8 was a microscope, making visible the inner life of images by entering beneath the skin of reality. […] Most remarkable of all was the grain. While 'resolution' is the technical term for the sharpness of a film image, Super-8 was really never too concerned with this. Here, quite a different kind of resolution could be witnessed: the crystal-clear and bright light of a Xenon-projection gave us shapes dissolving into the grain; amorphous bodies and forms surreptitiously transformed into new shapes and disappeared again into a sea of colour. Super-8 was the pointillism, impressionism and the abstract expressionism of cinematography. (Horwath)

Bibliography

Allen, Tom. "'Making It' in Super 8." MovieMaker Magazine 8 Feb. 1994.
1 May 2009 ‹http://www.moviemaker.com/directing/article/making_it_in_super_8_3044/›. AMPS. “About the American Motion Picture Society.” American Motion Picture Society site. 2009. 25 Apr. 2009 ‹http://www.ampsvideo.com›. Bassan, Raphaël. “Identity of Cinema: Experimental and Different (review of Festival des Cinémas Différents de Paris, 2005).” Senses of Cinema 44 (July-Sep. 2007). 25 Apr. 2009 ‹http://archive.sensesofcinema.com/contents/07/44/experimental-cinema-bassan.html›. Chodorov, Pip. “To Save Kodochrome.” Frameworks list, 14 May 2005. 28 Apr. 2009 ‹http://www.hi-beam.net/fw/fw29/0216.html›. Dager, Nick. “Kodak Unveils Latest Film Stock in Vision3 Family.” Digital Cinema Report 5 Jan. 2009. 27 Apr. 2009 ‹http://www.digitalcinemareport.com/Kodak-Vision3-film›. Durgnat, Raymond. “Flyweight Flicks.” GAZWRX: The Films of Jeff Keen booklet. Originally published in Films and Filming (Feb. 1965). London: BFI, 2009. 30-31. Frye, Brian L. “‘Me, I Just Film My Life’: An Interview with Jonas Mekas.” Senses of Cinema 44 (July-Sep. 2007). 15 Apr. 2009 ‹http://archive.sensesofcinema.com/contents/07/44/jonas-mekas-interview.html›. Hodgkinson, Will. “End of the Reel for Super 8.” Guardian 28 Sep. 2006. 20 Mar. 2009 ‹http://www.guardian.co.uk/film/2006/sep/28/1›. Horwath, Alexander. “Singing in the Rain - Supercinematography by Peter Tscherkassky.” Senses of Cinema 28 (Sep.-Oct. 2003). 5 May 2009 ‹http://archive.sensesofcinema.com/contents/03/28/tscherkassky.html›. Jarman, Derek. In Institute of Contemporary Arts Video Library Guide. London: ICA, 1987. Kattelle, Alan D. Home Movies: A History of the American Industry, 1897-1979. Hudson, Mass.: self-published, 2000. ———. “The Amateur Cinema League and its films.” Film History 15.2 (2003): 238-51. Kodak. “Kodak Celebrates 40th Anniversary of Super 8 Film Announces New Color Reversal Product to Portfolio.“ Frameworks list, 9 May 2005. 23 Mar. 2009 ‹http://www.hi-beam.net/fw/fw29/0150.html›. ———. “Kodachrome Update.” 30 Jun. 2006. 24 Mar. 2009 ‹http://www.hi-beam.net/fw/fw32/0756.html›. ———. “Motion Picture Film, Digital Cinema, Digital Intermediate.” 2009. 2 Apr. 2009 ‹http://motion.kodak.com/US/en/motion/index.htm?CID=go&idhbx=motion›. Mekas, Jonas. “8mm as Folk Art.” Movie Journal: The Rise of the New American Cinema, 1959-1971. Ed. Jonas Mekas. Originally Published in Village Voice 1963. New York: Macmillan, 1972. Morgan, Spencer. “Kodak, Don't Take My Kodachrome.” New York Times 31 May 2005. 4 Apr. 2009 ‹http://query.nytimes.com/gst/fullpage.html?res=9F05E1DF1F39F932A05756C0A9639C8B63&sec=&spon=&pagewanted=2›. ———. “Fans Beg: Don't Take Kodachrome Away.” New York Times 1 Jun. 2005. 4 Apr. 2009 ‹http://www.nytimes.com/2005/05/31/technology/31iht-kodak.html›. Muldowney, Lisa. “Kodak Ups the Ante with New Motion Picture Film.” MovieMaker Magazine 30 Nov. 2007. 6 Apr. 2009 ‹http://www.moviemaker.com/cinematography/article/kodak_ups_the_ante_with_new_motion_picture_film/›. New York Times. “Super 8 Blues.” 31 May 2005: E1. Perkins, Giles. “A Pro's Approach to Super 8.” OnSuper8 Blogspot 16 July 2007. 13 Apr. 2009 ‹http://onsuper8.blogspot.com/2007/07/pros-approach-to-super-8.html›. Polisin, Douglas. “Pro8mm Asks You to Think Big, Shoot Small.” MovieMaker Magazine 4 Feb. 2009. 1 May 2009 ‹http://www.moviemaker.com/cinematography/article/think_big_shoot_small_rhonda_vigeant_pro8mm_20090127/›. Pro8mm. “Pro8mm Company History.” Super 8 /16mm Cameras, Film, Processing & Scanning (Pro8mm blog) 12 Mar. 2008. 
3 May 2009 ‹http://pro8mm-burbank.blogspot.com/2008/03/pro8mm-company-history.html›. Radio 4. No More Yellow Envelopes 24 Dec. 2006. 4 May 2009 ‹http://www.bbc.co.uk/radio4/factual/pip/m6yx0/›. Reekie, Duncan. Subversion: The Definitive History of the Underground Cinema. London: Wallflower Press, 2007. Hutsul, Christopher. "Kodachrome: Not Digital, But Still Delightful." Toronto Star 26 Sep. 2005. Swanson, Dwight. "Inventing Amateur Film: Marion Norris Gleason, Eastman Kodak and the Rochester Scene, 1921-1932." Film History 15.2 (2003): 126-36. Zimmermann, Patricia R. "Professional Results with Amateur Ease: The Formation of Amateur Filmmaking Aesthetics 1923-1940." Film History 2.3 (1988): 267-81. ———. Reel Families: A Social History of Amateur Film. Bloomington: Indiana UP, 1995.
APA, Harvard, Vancouver, ISO, and other styles
47

Williams, Deborah Kay. "Hostile Hashtag Takeover: An Analysis of the Battle for Februdairy." M/C Journal 22, no. 2 (April 24, 2019). http://dx.doi.org/10.5204/mcj.1503.

Full text
Abstract:
We need a clear, unified, and consistent voice to effect the complete dismantling, the abolition, of the mechanisms of animal exploitation.And that will only come from what we say and do, no matter who we are.— Gary L. Francione, animal rights theoristThe history of hashtags is relatively short but littered with the remnants of corporate hashtags which may have seemed a good idea at the time within the confines of the boardroom. It is difficult to understand the rationale behind the use of hashtags as an effective communications tactic in 2019 by corporations when a quick stroll through their recent past leaves behind the much-derided #qantasluxury (Glance), #McDstories (Hill), and #myNYPD (Tran).While hashtags have an obvious purpose in bringing together like-minded publics and facilitating conversation (Kwye et al. 1), they have also regularly been the subject of “hashtag takeovers” by activists and other interested parties, and even by trolls, as the Ecological Society of Australia found in 2015 when their seemingly innocuous #ESA15 hashtag was taken over with pornographic images (news.com.au). Hashtag takeovers have also been used as a dubious marketing tactic, where smaller and less well-known brands tag their products with trending hashtags such as #iphone in order to boost their audience (Social Garden). Hashtags are increasingly used as a way for activists or other interested parties to disrupt a message. It is, I argue, predictable that any hashtag related to an even slightly controversial topic will be subject to some form of activist hashtag takeover, with varying degrees of success.That veganism and the dairy industry should attract such conflict is unsurprising given that the two are natural enemies, with vegans in particular seeming to anticipate and actively engage in the battle for the opposing hashtag.Using a comparative analysis of the #Veganuary and #Februdairy hashtags and how they have been used by both pro-vegan and pro-dairy social media users, this article illustrates that the enthusiastic and well-meaning social media efforts of farmers and dairy supporters have so far been unable to counteract those of well-organised and equally passionate vegan activists. This analysis compares tweets in the first week of the respective campaigns, concluding that organisations, industries and their representatives should be extremely wary of engaging said activists who are not only highly-skilled but are also highly-motivated. Grassroots, ideology-driven activism is a formidable opponent in any public space, let alone when it takes place on the outspoken and unstructured landscape of social media which is sometimes described as the “wild West” (Fitch 5) where anything goes and authenticity and plain-speaking is key (Macnamara 12).I Say Hashtag, You Say Bashtag#Februdairy was launched in 2018 to promote the benefits of dairy. The idea was first mooted on Twitter in 2018 by academic Dr Jude Capper, a livestock sustainability consultant, who called for “28 days, 28 positive dairy posts” (@Bovidiva; Howell). It was a response to the popular Veganuary campaign which aimed to “inspire people to try vegan for January and throughout the rest of the year”, a campaign which had gained significant traction both online and in the traditional media since its inception in 2014 (Veganuary). Hopes were high: “#Februdairy will be one month of dairy people posting, liking and retweeting examples of what we do and why we do it” (Yates). 
However, the #Februdairy hashtag has been effectively disrupted and has now entered the realm of a bashtag, a hashtag appropriated by activists for their own purpose (Austin and Jin 341).The Dairy Industry (Look Out the Vegans Are Coming)It would appear that the dairy industry is experiencing difficulties in public perception. While milk consumption is declining, sales of plant-based milks are increasing (Kaiserman) and a growing body of health research has questioned whether dairy products and milk in particular do in fact “do a body good” (Saccaro; Harvard Milk Study). In the 2019 review of Canada’s food guide, its first revision since 2007, for instance, the focus is now on eating plant-based foods with dairy’s former place significantly downgraded. Dairy products no longer have their own distinct section and are instead placed alongside other proteins including lentils (Pippus).Nevertheless, the industry has persevered with its traditional marketing and public relations activities, choosing to largely avoid addressing animal welfare concerns brought to light by activists. They have instead focused their message towards countering concerns about the health benefits of milk. In the US, the Milk Processing Education Program’s long-running celebrity-driven Got Milk campaign has been updated with Milk Life, a health focused campaign, featuring images of children and young people living an active lifestyle and taking part in activities such as skateboarding, running, and playing basketball (Milk Life). Interestingly, and somewhat inexplicably, Milk Life’s home page features the prominent headline, “How Milk Can Bring You Closer to Your Loved Ones”.It is somewhat reflective of the current trend towards veganism that tennis aces Serena and Venus Williams, both former Got Milk ambassadors, are now proponents for the plant-based lifestyle, with Venus crediting her newly-adopted vegan diet as instrumental in her recovery from an auto-immune disease (Mango).The dairy industry’s health focus continues in Australia, as well as the use of the word love, with former AFL footballer Shane Crawford—the face of the 2017 campaign Milk Loves You Back, from Lion Dairy and Drinks—focusing on reminding Australians of the reputed nutritional benefits of milk (Dawson).Dairy Australia meanwhile launched their Legendairy campaign with a somewhat different focus, promoting and lauding Australia’s dairy families, and with a message that stated, in a nod to the current issues, that “Australia’s dairy farmers and farming communities are proud, resilient and innovative” (Dairy Australia). This campaign could be perceived as a morale-boosting exercise, featuring a nation-wide search to find Australia’s most legendairy farming community (Dairy Australia). That this was also an attempt to humanise the industry seems obvious, drawing on established goodwill felt towards farmers (University of Cambridge). Again, however, this strategy did not address activists’ messages of suffering animals, factory farms, and newborn calves being isolated from their grieving mothers, and it can be argued that consumers are being forced to make the choice between who (or what) they care about more: animals or the people making their livelihoods from them.Large-scale campaigns like Legendairy which use traditional channels are of course still vitally important in shaping public opinion, with statistics from 2016 showing 85.1% of Australians continue to watch free-to-air television (Roy Morgan, “1 in 7”). 
However, a focus on, and arguably an over-reliance on, traditional platforms means vegans and animal activists are often unchallenged when spreading their message via social media. Indeed, when we consider the breakdown in age groups inherent in these statistics, with 18.8% of 14-24 year-olds not watching any commercial television at all, an increase from 7% in 2008 (Roy Morgan, "1 in 7"), it is a brave and arguably short-sighted organisation or industry that relies primarily on traditional channels to spread its message in 2019. That these large-scale campaigns do little to address the issues raised by vegans concerning animal welfare leaves those claims largely unanswered and allows momentum to grow.

This growth in momentum is fuelled by activist groups such as People for the Ethical Treatment of Animals (PETA), who are well known in this space, with 5,494,545 Facebook followers, 1.06 million Twitter followers, 973,000 Instagram followers, and 453,729 YouTube subscribers (People for the Ethical Treatment of Animals). They are also active on Pinterest, a visual-based platform suited to the kinds of images and memes particularly detrimental to the dairy industry. Although widely derided, PETA's reach is large. A graphic video posted to Facebook on 13 February 2019, showing a suffering cow and captioned "your cheese is not worth this", was shared 1,244 times and had 4.6 million views in just over 24 hours (People for the Ethical Treatment of Animals). With 95% of 12-24 year olds in Australia now using social networking sites (Statista), it is little wonder veganism is rapidly growing within this demographic (Bradbury), with The Guardian labelling the rise of veganism unstoppable (Hancox).

Activist organisations are joined by prominent and charismatic vegan activists such as James Aspey (182,000 Facebook followers) and Earthling Ed (205,000 Facebook followers) in distributing information and images that are influential and often highly graphic or disturbing. Meanwhile, Instagram influencers and YouTube lifestyle vloggers such as Ellen Fisher and FreeLee share information promoting vegan food and the vegan lifestyle (with 650,320 and 785,903 subscribers respectively). The YouTube video Dairy Is Scary has over 5 million views (Janus), and What the Health, a follow-up documentary to Cowspiracy: The Sustainability Secret promoting veganism, is now available on Netflix, which itself has 9.8 million Australian subscribers (Roy Morgan, "Netflix"). BOSH's plant-based vegan cookbook was the fastest-selling cookbook of 2018 (Chiorando).

Additionally, the considerable influence of celebrities such as Miley Cyrus, Beyonce, Alicia Silverstone, Zac Efron, and Jessica Chastain, to name just a few, speaking publicly about their vegan lifestyle helps veganism become mainstream and increases its widespread acceptance.

However, not all the dairy industry's ills can be blamed on vegans. Rising costs, cheap imports, and other market forces (Lockhart, Donaghy and Gow) have also placed pressure on the industry. Nonetheless, in the battle for hearts and minds on social media, the vegans are leading the way.

Qualitative research interviewing new vegans found converting to veganism was relatively easy, yet some respondents reported having to consult multiple resources and requiring additional support and education on how to be vegan (McDonald 17).

Enter Veganuary

Using a month, week or day to promote an idea or campaign is a common public relations and marketing strategy, particularly in health communications.
Dry July and Ocsober both promote alcohol abstinence, Frocktober raises funds for ovarian cancer, and Movember is an annual campaign raising awareness and funds for men's health (Parnell). Vegans Matthew Glover and Jane Land were discussing the success of Movember when they raised the idea of creating a vegan version. Their initiative, Veganuary, urging people to try vegan for the month of January, launched in 2014, and since then 500,000 people have taken the Veganuary pledge (Veganuary).

The Veganuary website is the largest of its kind on the internet. With vegan recipes, expert advice and information, it provides all the answers to Why go vegan, but it is the support offered to answer How to go vegan that truly sets Veganuary apart. (Veganuary)

That Veganuary participants would use social media to discuss and share their experiences was a foregone conclusion. Twitter, Facebook, and Instagram are all utilised by participants, with the official Veganuary pages currently followed by 159,000 on Instagram, liked by 242,038 on Facebook, and followed by 45,600 on Twitter (Veganuary). Both the Twitter and Instagram sites make effective use of hashtags to spread their reach, not only using #Veganuary but also other relevant hashtags such as #TryVegan, #VeganRecipes, and the more common #Vegan, #Farm, and #SaveAnimals.

Februdairy Follows Veganuary, But Only on the Calendar

Calling on farmers and dairy producers to create counter-content and their own hashtag may have seemed like an idea that would achieve an overall positive response. Agricultural news sites and bloggers spread the word, and even the BBC reported on the industry's "fight back" against Veganuary (BBC). However, the hashtag was quickly overwhelmed by anti-dairy activists mobilising online. Vegans issued a call to arms across social media. The Vegans in Australia Facebook group featured a number of posts urging its 58,949 members to "thunderclap" the Februdairy hashtag, while the Project Calf anti-dairy campaign declared that Februdairy offered an "easy" way to spread their information (Sandhu).

During Februdairy, farmers and dairy supporters were encouraged to tell their stories, sharing positive photographs and videos, and they did. However, this content was limited in its range. In the tweet shown in fig. 1, the lack of diverse content was succinctly addressed by an anti-Februdairy activist.

Fig. 1: Content challenges. (#Februdairy, 2 Feb. 2019)

Method

Utilising Twitter's advanced search capability, I was able to search for #Veganuary tweets from 1 to 7 January 2019 and #Februdairy tweets from 1 to 7 February 2019. I analysed the top tweets provided by Twitter, assessing whether each tweet was pro or anti Veganuary and Februdairy, and categorising its content by subject matter.

Tweets were analysed to assess whether they were on message and aligned with the values of their associated hashtag. Veganuary tweets were considered to be on message if they promoted veganism or possessed an anti-dairy, anti-meat, or pro-animal sentiment. Februdairy tweets were assessed as on message if they promoted the consumption of dairy products, expressed sympathy or empathy towards the dairy industry, or possessed an anti-vegan sentiment. Tweets were also evaluated according to their clarity, emotional impact and coherence. The overall effectiveness of each hashtag was then evaluated based on the above criteria, as well as whether it had been hijacked. (A toy sketch of this kind of coding logic appears after the reference list below.)

Results and Findings

Overwhelmingly, the 213 #Veganuary tweets were on message.
That is, they were pro-Veganuary, supportive of veganism, and positive. The topics were varied and included humorous memes, environmental facts, and information about the health benefits of veganism, as well as a strong focus on animals. The number of non-graphic tweets concerning animals (12) was double that of tweets featuring graphic or shocking imagery (6). Predominantly the tweets were focused on food and the sharing of recipes, with 44% of all pro #Veganuary tweets featuring recipes or images of food. Interestingly, a number of well-known corporations tweeted to promote their vegan food products, including Tesco, Aldi, Iceland, and M&S. The diversity of veganism is reflected in the tweets. Organisations used the hashtag to promote their products, including beauty and shoe products, social media influencers promoted their vegan podcasts and blogs, and, interestingly, the Ethiopian Embassy of the United Kingdom tweeted their support.

There were 23 (11%) anti-Veganuary tweets. Of these, one was from Dr Jude Capper, the founder of Februdairy. The others expressed support for farming and farmers, and a number were photographs of meat products, including sausages and fry-ups. One Australian journalist tweeted in favour of meat, stating it was "yummy murder". These tweets could be described as entertaining and may perhaps serve as a means of preaching to the converted, but their ability to influence and persuade is negligible.

Twitter's search tool provided access to 141 top #Februdairy tweets. Of these, 82 (52%) were a hijack of the hashtag and overtly anti-Februdairy. Vegan activists used the #Februdairy hashtag to their advantage, with the largest share of their tweets (33%) featuring non-graphic images of animals. They also tweeted about other subject matters, including environmental concerns, vegan food and products, and health issues related to dairy consumption.

As noted by the activists (see fig. 1 above), most of the pro-Februdairy tweets were images of milk or dairy products (41%). Images of farms and farmers were the next most used (26%), followed by images of cows (17%) (see fig. 2).

Fig. 2: An activist makes their anti-Februdairy point with a clear, engaging image and effective use of hashtags. (#Februdairy, 6 Feb. 2019)

The juxtaposition of many of the tweets was also often glaring, with one contrasting message following another (see fig. 3).

Fig. 3: An example of contrasting #Februdairy tweets, with an image used by the activists to good effect, making their point known. (#Februdairy, 2 Feb. 2019)

Storytelling is a powerful tool in public relations and marketing efforts. Yet, to be effective, high-quality content is required. That many of the Februdairy proponents had limited social media training was evident; images were blurred, film quality was poor, or they failed to make their meaning clear (see fig. 4).

Fig. 4: A blurred photograph, reflective of some of the low-quality content provided by Februdairy supporters. (#Februdairy, 3 Feb. 2019)

This image was tweeted in support of Februdairy. However, the image and phrasing could also be used to argue against Februdairy. We can surmise that the tweeter was suggesting the cow was well looked after and seemingly content, but overall the message is as unclear as the image.

While some pro-Februdairy supporters recognised the need for relevant hashtags, their images were often of low quality and not particularly engaging, even though engaging content is a requirement for social media success.
This requirement seems to be better understood by the anti-Februdairy activists, who used high-quality images and memes to create interest and gain the audience's attention (see figs. 5 and 6).

Fig. 5: An uninspiring image used to promote Februdairy. (#Februdairy, 6 Feb. 2019)

Fig. 6: Anti-Februdairy activists made good use of memes, recognising the need for diverse content. (#Februdairy, 3 Feb. 2019)

Discussion

What the #Februdairy case makes clear, then, is that in continuing its focus on traditional media, the dairy industry has left the online battle to supporters who are largely untrained and not social media savvy.

From a purely public relations perspective, one of the first things we ask our students to do in issues and crisis communication is to assess the risk. "What can hurt your organisation?" we ask. "What potential issues are on the horizon and what can you do to prevent them?" This is PR101, and it is difficult to understand why environmental scanning and resulting action have not been on the radar of the dairy industry long before now. It seems the industry either did not anticipate, or significantly underestimated, the impact that public perception, animal cruelty concerns, health concerns and, ultimately, veganism would have on it, to its detriment. In Australia in 2015–16 the dairy industry was responsible for 8 per cent (A$4.3 billion) of the gross value of agricultural production and 7 per cent (A$3 billion) of agricultural export income (Department of Agriculture and Water Resources). When such large figures are involved and with so much at stake, it is hard to rationalise the decision not to engage in a more proactive online strategy, seeking to engage their publics, including, whether they like it or not, activists.

Instead there are current attempts to address these issues with a legislative approach, lobbying for the introduction of ag-gag laws (Potter) and for restricting the use of terms such as milk and cheese to animal products (Worthington). However, these measures are undertaken with little attempt to engage with activists or to effectively counter their claims with a widespread, authentic public relations campaign, reflecting a failure to understand the nature of the current online environment, momentum, and mood.

That is not to say that the dairy industry is not operating in the online environment, but it does not appear to be a priority, and this is reflected in its low engagement and follower numbers. For instance, Dairy Australia, the industry's national service body, has a following of only 8,281 on Facebook, 6,981 on Twitter, and, crucially, is not on Instagram. Its Twitter posts do not include hashtags, and unsurprisingly it has little engagement on this platform, with most tweets attracting no more than two likes. Surprisingly, it has 21,013 subscribers on YouTube, which features professional and well-presented videos. This demonstrates some understanding of the importance of effective storytelling but not, as yet, of trans-media storytelling.

Conclusion

Social media activism is becoming more important and is recognised as a legitimate voice in the public sphere. Many organisations, perhaps in recognition of this as well as a growing focus on responsible corporate behaviour, particularly in the treatment of animals, have adjusted their behaviour.
From Unilever abandoning animal testing practices to ensure Dove products are certified cruelty-free (Nussbaum), to Domino's introducing vegan options, companies that are aware of emerging trends and values are changing the way they do business and are reaping the benefits of engaging with, and catering to, vegans. Domino's sold out of vegan cheese within the first week, and so great was the demand that vegans were asked to phone ahead to their local store. From their website:

We knew the response was going to be big after the demand we saw for the product on social media but we had no idea it was going to be this big. (Domino's Newsroom)

As a public relations professional, I am baffled by the dairy industry's failure to adopt a crisis-based strategy in place of the traditional one-way communication that served it well in the previous (golden?) pre-social-media age. However, as a vegan, persuaded by the unravelling of the happy cow argument, I cannot help but hope this realisation continues to elude them.

References

@bovidiva. "Let's Make #Februdairy Happen This Year. 28 Days, 28 Positive #dairy Posts. From Cute Calves and #cheese on Crumpets, to Belligerent Bulls and Juicy #beef #burgers – Who's In?" Twitter post. 15 Jan. 2018. 1 Feb. 2019 <https://twitter.com/bovidiva/status/952910641840447488?lang=en>.
Austin, Lucinda L., and Yan Jin. Social Media and Crisis Communication. New York: Routledge, 2018.
BBC. "Februdairy: The Dairy Industry Fights Back against Veganuary." BBC.com 8 Feb. 2018. 1 Feb. 2019 <https://www.bbc.com/news/newsbeat-42990941>.
Bradbury, Tod. "Data Shows Major Rise in Veganism among Young People." Plant Based News 12 Oct. 2018. 10 Feb. 2019 <https://www.plantbasednews.org>.
Campaign Brief. "Shane Crawford Stars in 'Milk Loves You Back' Work for Lion Dairy & Drinks via AJF Partnership." Campaign Brief Australia 1 Jun. 2017. 12 Feb. 2019 <http://www.campaignbrief.com/2017/06/shane-crawford-stars-in-milk-l.html>.
Chiorando, Maria. "BOSH!'s Vegan Cookbook Is Fastest Selling Cookery Title of 2018." Plant Based News 26 Apr. 2018. 18 Feb. 2019 <https://www.plantbasednews.org/post/bosh-s-vegan-cookbook-is-fastest-selling-cookery-title-of-2018>.
Cowspiracy: The Sustainability Secret. Dir. Kip Anderson and Keegan Kuhn. Appian Way, A.U.M. Films, First Spark Media, 2014.
Dairy Australia. "About Legendairy Capital." Legendairy.com.au, 2019. 12 Feb. 2019 <http://www.legendairy.com.au/dairy-talk/capital-2017/about-us>.
Dawson, Abigail. "Lion Dairy & Drinks Launches Campaign to Make Milk Matter Again." Mumbrella 1 Jun. 2017. 10 Feb. 2019 <https://mumbrella.com.au/lion-dairy-drinks-launches-campaign-make-milk-matter-448581>.
Department of Agriculture and Water Resources. "Dairy Industry." Australian Government. 21 Sep. 2018. 20 Feb. 2019 <http://www.agriculture.gov.au/abares/research-topics/surveys/dairy>.
Domino's Newsroom. "Meltdown! Domino's Set to Run Out of Vegan Cheese!" Domino's Australia 18 Jan. 2018. 10 Feb. 2019 <https://newsroom.dominos.com.au/home/2018/1/17/meltdown-dominos-set-to-run-out-of-vegan-cheese>.
Fitch, Kate. "Making Friends in the Wild West: Singaporean Public Relations Practitioners' Perceptions of Working in Social Media." PRism 6.2 (2009). 10 Feb. 2019 <http://www.prismjournal.org/fileadmin/Praxis/Files/globalPR/FITCH.pdf>.
Francione, Gary L. "Animal Rights: The Abolitionist Approach." Animal Rights: The Abolitionist Approach 10 Feb. 2019 <https://www.abolitionistapproach.com/quotes/>.
Glance, David. "#QantasLuxury: A Qantas Social Media Disaster in Pyjamas." The Conversation 23 Nov. 2011. 10 Feb. 2019 <http://theconversation.com/qantasluxury-a-qantas-social-media-disaster-in-pyjamas-4421>.
Hancox, Dan. "The Unstoppable Rise of Veganism: How a Fringe Movement Went Mainstream." The Guardian 1 Apr. 2018. 10 Feb. 2019 <https://www.theguardian.com/lifeandstyle/2018/apr/01/vegans-are-coming-millennials-health-climate-change-animal-welfare>.
"Harvard Milk Study: It Doesn't Do a Body Good." HuffPost Canada 25 Jul. 2013. 12 Feb. 2019 <https://www.huffingtonpost.ca/2013/07/05/harvard-milk-study_n_3550063.html>.
Hill, Kashmir. "#McDStories: When a Hashtag Becomes a Bashtag." Forbes.com 24 Jan. 2012. 10 Feb. 2019 <https://www.forbes.com/sites/kashmirhill/2012/01/24/mcdstories-when-a-hashtag-becomes-a-bashtag/#1541ef39ed25>.
Howell, Madeleine. "Goodbye Veganuary, Hello Februdairy: How the Dairy Industry Is Taking the Fight to Its Vegan Critics." The Telegraph 9 Feb. 2018. 10 Feb. 2019 <https://www.telegraph.co.uk/food-and-drink/features/goodbye-veganuary-hello-februdairy-dairy-industry-taking-fight/>.
Janus, Erin. "DAIRY IS SCARY! The Industry Explained in 5 Minutes." Video. 27 Dec. 2015. 12 Feb. 2019 <https://www.youtube.com/watch?v=UcN7SGGoCNI&t=192s>.
Kaiserman, Beth. "Dairy Industry Struggles in a Sea of Plant-Based Milks." Forbes.com 31 Jan. 2019. 20 Feb. 2019 <https://www.forbes.com/sites/bethkaiserman/2019/01/31/dairy-industry-plant-based-milks/#7cde005d1c9e>.
Kwye, Su Mon, et al. "On Recommending Hashtags in Twitter Networks." Proceedings of the Social Informatics: 4th International Conference, SocInfo. 5-7 Dec. 2012. Lausanne: Research Collection School of Information Systems. 337-50. 12 Feb. 2019 <https://ink.library.smu.edu.sg/cgi/viewcontent.cgi?article=2696&context=sis_research>.
Lockhart, James, Danny Donaghy, and Hamish Gow. "Milk Price Cuts Reflect the Reality of Sweeping Changes in Global Dairy Market." The Conversation 12 May 2016. 12 Feb. 2019 <https://theconversation.com/milk-price-cuts-reflect-the-reality-of-sweeping-changes-in-global-dairy-market-59251>.
Macnamara, Jim. "'Emergent' Media and Public Communication: Understanding the Changing Mediascape." Public Communication Review 1.2 (2010): 3–17.
Mango, Alison. "This Drastic Diet Change Helped Venus Williams Fight Her Autoimmune Condition." Health.com 12 Jan. 2017. 10 Feb. 2019 <https://www.health.com/nutrition/venus-williams-raw-vegan-diet>.
McDonald, Barbara. "Once You Know Something, You Can't Not Know It. An Empirical Look at Becoming Vegan." Foodethics.univie.ac.at, 2000. 12 Feb. 2019 <https://foodethics.univie.ac.at/fileadmin/user_upload/inst_ethik_wiss_dialog/McDonald__B._2000._Vegan_...__An_Empirical_Look_at_Becoming_Vegan..pdf>.
Milk Life. "What Is Milk Life?" 20 Feb. 2019 <https://milklife.com/what-is-milk-life>.
News.com.au. "Twitter Trolls Take over Conference Hashtag with Porn." News.com.au 30 Nov. 2015. 12 Feb. 2019 <https://www.news.com.au/national/twitter-trolls-take-over-ecology-conference-hashtag-with-porn/news-story/06a76d7ab53ec181776bdb11d735e422>.
Nussbaum, Rachel. "Tons of Your Favorite Drugstore Products Are Officially Cruelty-Free Now." Glamour.com 9 Oct. 2018. 21 Feb. 2019 <https://www.glamour.com/story/dove-cruelty-free-peta>.
Parnell, Kerry. "Charity Theme Months Have Taken over the Calendar." Daily Telegraph.com 26 Sep. 2015. 18 Feb. 2019 <https://www.dailytelegraph.com.au/rendezview/charity-theme-months-have-taken-over-the-calendar/news-story/1f444a360ee04b5ec01154ddf4763932>.
People for the Ethical Treatment of Animals. "This Cow Was Suffering on Dairy Farm and the Owner Refused to Help Her." Facebook post. 13 Feb. 2019. 15 Feb. 2019 <https://www.facebook.com/official.peta>.
Pippus, Anna. "Progress! Canada's New Draft Food Guide Favors Plant-Based Protein and Eliminates Dairy as a Food Group." Huffington Post 7 Dec. 2017. 10 Feb. 2019 <https://www.huffingtonpost.com/entry/progress-canadas-new-food-guide-will-favor-plant_us_5966eb4ce4b07b5e1d96ed5e>.
Potter, Will. "Ag-Gag Laws: Corporate Attempts to Keep Consumers in the Dark." Griffith Journal of Law and Human Dignity (2017): 1–32.
Roy Morgan. "Netflix Set to Surge beyond 10 Million Users." Roy Morgan 3 Aug. 2018. 20 Feb. 2019 <http://www.roymorgan.com/findings/7681-netflix-stan-foxtel-fetch-youtube-amazon-pay-tv-june-2018-201808020452>.
———. "1 in 7 Australians Now Watch No Commercial TV, Nearly Half of All Broadcasting Reaches People 50+, and Those with SVOD Watch 30 Minutes Less a Day." Roy Morgan 1 Feb. 2016. 10 Feb. 2019 <http://www.roymorgan.com/findings/6646-decline-and-change-commercial-television-viewing-audiences-december-2015-201601290251>.
Saccaro, Matt. "Milk Does Not Do a Body Good, Says New Study." Mic.com 29 Oct. 2014. 12 Feb. 2019 <https://mic.com/articles/102698/milk-does-not-do-a-body-good#.o7MuLnZgV>.
Sandhu, Serina. "A Group of Vegan Activists Is Trying to Hijack the 'Februdairy' Month by Encouraging People to Protest at Dairy Farms." inews.co.uk 5 Feb. 2019. 18 Feb. 2019 <https://inews.co.uk/news/uk/vegan-activists-hijack-februdairy-protest-dairy-farms-farmers/>.
Social Garden. "Hashtag Blunders That Hurt Your Social Media Marketing Efforts." Socialgarden.com.au 30 May 2014. 10 Feb. 2019 <https://socialgarden.com.au/social-media-marketing/hashtag-blunders-that-hurt-your-social-media-marketing-efforts/>.
Statista: The Statista Portal. Use of Social Networking Sites in Australia as of March 2017 by Age. 2019. 10 Feb. 2019 <https://www.statista.com/statistics/729928/australia-social-media-usage-by-age/>.
Tran, Mark. "#myNYPD Twitter Callout Backfires for New York Police Department." The Guardian 23 Apr. 2014. 10 Feb. 2019 <https://www.theguardian.com/world/2014/apr/23/mynypd-twitter-call-out-new-york-police-backfires>.
University of Cambridge. "Farming Loved But Misunderstood, Survey Shows." Cam.ac.uk 23 Aug. 2012. 10 Feb. 2019 <https://www.cam.ac.uk/research/news/farming-loved-but-misunderstood-survey-shows>.
Veganuary. "About Veganuary." 2019. 21 Feb. 2019 <https://veganuary.com/about/>.
———. "Veganuary: Inspiring People to Try Vegan!" 2019. 10 Feb. 2019 <https://veganuary.com/>.
What the Health. Dir. Kip Anderson and Keegan Kuhn. A.U.M. Films, 2017.
Worthington, Brett. "Federal Government Pushes to Stop Plant-Based Products Labelled as 'Meat' or 'Milk'." ABC News 11 Oct. 2018. 20 Feb. 2019 <https://www.abc.net.au/news/2018-10-11/federal-government-wants-food-standards-reviewed/10360200>.
Yates, Jack. "Farmers Plan to Make #Februdairy Month of Dairy Celebration." Farmers Weekly 20 Jan. 2018. 10 Feb. 2019 <https://www.fwi.co.uk/business/farmers-plan-make-februdairy-month-dairy-celebration>.
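As flagged in the Method section above, the pro/anti coding described there can be sketched in a few lines of Python. This is a minimal illustration only: the keyword lists and the code_tweet helper are hypothetical, not the author's instrument, and a real coding pass would also weigh clarity, emotional impact and coherence, which resist simple keyword matching and hence the author's manual approach.

# Minimal sketch of keyword-based pro/anti coding for #Februdairy tweets.
# All keyword lists here are illustrative inventions, not the study's codebook.

PRO_DAIRY = ("farmer", "milk", "cheese", "dairy farm", "nutritious")
ANTI_DAIRY = ("vegan", "cruelty", "calves", "dairy is scary", "plant-based")

def code_tweet(text: str) -> str:
    """Return 'pro', 'anti' or 'unclear' for a single #Februdairy tweet."""
    t = text.lower()
    pro = sum(kw in t for kw in PRO_DAIRY)
    anti = sum(kw in t for kw in ANTI_DAIRY)
    if pro > anti:
        return "pro"
    if anti > pro:
        return "anti"
    return "unclear"  # ties and non-matches would go to manual review

sample = [
    "Proud of our farmers! Enjoy your milk this #Februdairy",
    "Calves are taken from their mothers. Go vegan. #Februdairy",
]
for tweet in sample:
    print(code_tweet(tweet), "->", tweet)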
APA, Harvard, Vancouver, ISO, and other styles
48

Nunes, Mark. "Failure Notice." M/C Journal 10, no. 5 (October 1, 2007). http://dx.doi.org/10.5204/mcj.2702.

Full text
Abstract:
Amongst the hundreds of emails that made their way to error@media-culture.org.au over the last ten months, I received the following correspondence:

Failure notice
Hi. This is the qmail-send program at sv01.wadax.ne.jp.
I'm afraid I wasn't able to deliver your message to the following addresses.
This is a permanent error; I've given up. Sorry it didn't work out.
<namewithheld@s.vodafone.ne.jp>:
210.169.171.135 does not like recipient.
Remote host said: 550 Invalid recipient: <namewithheld@s.vodafone.ne.jp>
Giving up on 210.169.171.135.

Email of this sort marks a moment that is paradoxically odd and all too familiar in the digital exchanges of everyday life. The failure message arrives to tell me something "didn't work out." This message shows up in my email account looking no different from any other correspondence—only this one hails from the system itself, signalling a failure to communicate. Email from the "mailer-daemon" calls attention to both the logic of the post that governs email (a "letter" sent to an intended address at the intention of some source) and the otherwise invisible logic of informatic protocols, made visible in the system failure of a "permanent error." In this particular instance, however, the failure notice is itself a kind of error. I never emailed namewithheld@s.vodafone.ne.jp—and by the mailer-daemon's account, such a person does not exist. It seems that a spammer has exploited an email protocol as a way of covering his tracks: when a deliver-to path fails, the failure notice bounces to a third site. The failure notice marks the successful execution of a qmail protocol, but its arrival at our account is still a species of error. In most circumstances, error yields an invalid result. In calculation, error marks a kind of misstep that not only corrupts the end result, but all steps following the error. One error begets others. But as with the failure notice, error often marks not only the misdirections of a system, but also the system's internal logic. The failure notice corresponds to a specific category of error—a potential error that the system must predict before it has actually occurred. While the notice signals failure (permanent error), it does so within the successful, efficient operation of a communicative system. What is at issue, then, is less a matter of whether or not error occurs than a system's ability to handle error as it arises. Control systems attempt to close themselves off to error's misdirections. If error signals a system failure, the "failure notice" of error foregrounds the degree to which in "societies of control" every error is a fatal error in that Baudrillardian sense—a failure that is subsumed in the operational logic of the system itself (40). Increasingly, the networks of a global marketplace require a rationalisation of processes and an introduction of informatic control systems to minimise wastage and optimise output. An informatic monoculture expresses itself through operational parameters that define communication according to principles of maximum transmission. In effect, in the growing dominance of a network society, we are witnessing the transcendence of a social and cultural system that must suppress at all costs the failure to communicate. This global communication system straddles a paradoxical moment of maximum exchange and maximum control. With growing frequency, social and commercial processes are governed by principles of quality assurance, what Lyotard defined nearly thirty years ago as a "logic of maximum performance" (xxiv).
As Six Sigma standards migrate from the world of manufacturing to a wide range of institutions, we find a standard of maximum predictability and minimum error as the latest coin of the realm. Utopia is now an error-free world of 100% efficiency, accuracy, and predictability. This lure of an informatic "monoculture" reduces communication to a Maxwell's demon for capturing transmission and excluding failure. Such a communicative system establishes a regime of signs that thrives upon the drift and flow of a network of signifiers, but that affirms its power as a system in its voracious incorporation of signs within a chain of signification (Deleuze and Guattari 111-117). Error is cast out as abject, the scapegoat "condemned as that which exceeds the signifying regime's power of deterritorialization" (Deleuze and Guattari 117). Deleuze and Guattari describe this self-cycling apparatus of capture as "a funeral world of terror," the terror of a black-hole regime that ultimately depends upon a return of the same and ensures that everything that circulates communicates…or is cast off as abject (113). This terror marks a relation of control, one that depends upon a circulation of signs but that also insists all flows fall within its signifying regime. To speak of the "terror of information" is more than metaphorical to the extent that this forced binary (terror of signal/error of noise) imposes a kind of violence that demands a rationalisation of all singularities of expression into the functionalities of a quantifiable system. To the extent that systems of information imply systems of control, the violence of information is less metaphor than metonym, as it calls into high relief the scapegoat error—the abject remainder whose silenced line of flight marks the trajectory of the unclean. This cybernetic logic of maximum performance demands that error is either contained within the predictable deviations of a system's performance, or nullified as outlying and asignifying. Statistics tells us that we are best off ignoring the outlier. This logic of the normal suggests that something very risky occurs when an event or an instance falls outside the scope of predictable variance. In the ascendancy of information, error, deviance, and outlying results cast a long shadow. In Norbert Wiener's account of informatic entropy, this drift from systematic control marked a form of evil—not a Manichean evil of bad actors, but rather an Augustinian evil: a falling away from the perfection of order (34-36). Information utopia banishes error as a kind of evil—an aberration that is decidedly off the path of order and control. This cybernetic logic functions at all levels, from social systems theory to molecular biology. Our diseases are now described as errors in coding, transcription, or transmission—genetic anomalies, cancerous loop scripts, and neurochemical noise. Mutation figures as an error in reproduction—a straying from faithful replication and a falling away from the Good of order and control. But we should keep in mind that when we speak of "evil" in the context of this cybernetic logic, that evil takes on a specific form. It is the evil of the errant. Or to put it another way: it is the evil of the Sesame Street Muppet, Bert. In 2001, a U.S.
high school student named Dino Ignacio created a graphic of the Muppet, Bert, with Osama bin Laden—part of his humorous Website project, “Bert is Evil.” A Pakistani-based publisher scanning the Web for images of bin Laden came across Ignacio’s image and, apparently not recognising the Sesame Street character, incorporated it into a series of anti-American posters. According to Henry Jenkins’s account of the events, in the weeks that followed, “CNN reporters recorded the unlikely sight of a mob of angry protestors marching through the streets chanting anti-American slogans and waving signs depicting Bert and bin Laden” (1-2). As the story of the Bert-sighting spread, new “Bert is evil” Websites sprang up, and Ignacio found himself the unwitting centre of a full-blown Internet phenomenon. Jenkins finds in this story a fascinating example of what he calls convergence culture, the blurring of the line between consumer and producer (3). From a somewhat different critical perspective, Mark Poster reads this moment of misappropriation and misreading as emblematic of global networked culture, in which “as never before, we must begin to interpret culture as multiple cacophonies of inscribed meanings as each cultural object moves across cultural differences” (11). But there is another moral to this story as well, to the extent that the convergence and cacophony described here occur in a moment of error, an errant slippage in which signification escapes its own regime of signs. The informatic (Augustinian) evil of Bert the Muppet showing up at an anti-American rally in Pakistan marks an event-scene in which an “error” not only signifies, but in its cognitive resonance, begins to amplify and replicate. At such moments, the “failure notice” of error signals a creative potential in its own right—a communicative context that escapes systemic control. The error of “evil Bert” introduces noise into this communicative system. It is abject information that marks an aberration within an otherwise orderly system of communication, an error of sorts marking an errant line of flight. But in contrast to the trance-like lure of 100% efficiency and maximum performance, is there not something seductive in these instances of error, as it draws us off our path of intention, leading us astray, pulling us toward the unintended and unforeseen? In its breach of predictable variance, error gives expression to the erratic. As such, “noise” marks a species of error (abject information) that, by failing to signify within a system, simultaneously marks an opening, a poiesis. This asignifying poetics of “noise,” marked by these moments of errant information, simultaneously refuses and exceeds the cybernetic imperative to communicate. This poetics of noise is somewhat reminiscent of Umberto Eco’s discussion of Claude Shannon’s information theory in The Open Work. For Shannon, the gap between signal and selection marks a space of “equivocation,” what Warren Weaver calls “an undesirable … uncertainty about what the message was” (Shannon and Weaver 21). Eco is intrigued by Shannon’s insight that communication is always haunted by equivocation, the uncertainty that the message received was the signal sent (57-58). Roland Barthes also picks up on this idea in S/Z, as N. Katherine Hayles notes in her discussion of information theory and post-structuralism (46). For these writers, equivocation suggests a creative potential in entropy, in that noise is, in Weaver’s words, “spurious information” (Shannon and Weaver 19). 
Eco elaborates on Shannon and Weaver’s information theory by distinguishing between actual communication (the message sent) and its virtuality (the possible messages received). Eco argues, in effect, that communication reduces information in its desire to actualise signal at the expense of noise. In contrast, poetics generates information by sustaining the equivocation of the text (66-68). It is in this tension between capture and escape marked by the scapegoats of error and noise that I find a potential for a contemporary poetics within a global network society. Error reveals the degree to which everyday life plays itself out within this space of equivocation. As Stuart Moulthrop addressed nearly ten years ago, our frequent encounters with “Error 404” on the Web calls attention to “the importance of not-finding”: that error marks a path in its own right, and not merely a misstep. Without question, this poetics of noise runs contrary to a dominant, cybernetic ideology of efficiency and control. By paying attention to drift and lines of flight, such erratic behaviour finds little favour in a world increasingly defined by protocol and predictable results. But note how in its attempt to capture error within its regime of signs, the logic of maximum performance is not above recuperating the Augustinian evil of error as a form of “fortunate fall.” Even in the Six Sigma world of 100% efficiency, does not corporate R & D mythologise the creative moment that allows error to turn a profit? Post-It Notes® and Silly Putty® present two classic instances in which happenstance, mistake, and error mark a moment in which “thinking outside of the box” saves the day. Error marks a kind of deviation from—and within—this system: a “failure” that at the same time marks a potential, a virtuality. Error calls attention to its etymological roots, a going astray, a wandering from intended destinations. Error, as errant heading, suggests ways in which failure, mutation, spurious information, and unintended results provide creative openings and lines of flight that allow for a reconceptualisation of what can (or cannot) be realised within social and cultural forms. While noise marks a rupture of signification, it also operates within the framework of a cybernetic imperative that constantly attempts to capture the flows that threaten to escape its operational parameters. As networks become increasingly social, this logic of rationalisation and abstraction serves as a dialectical enclosure for an information-based culture industry. But error also suggests a strategy of misdirection, getting a result back other than what one expected, and in doing so turns the cybernetic imperative against itself. “Google-bombing,” for example, creates an informatic structure that plays off of the creative potential of equivocation. Here, error of a Manichean sort introduces noise into an information system to produce unexpected results. Until recently, typing the word “failure” into the search engine Google produced as a top response George Bush’s Webpage at www.whitehouse.gov. By building Webpages in which the text “failure” links to the U.S. President’s page, users “hack” Google’s search algorithm to produce an errant heading. The cybernetic imperative is turned against itself; this strategy of misdirection enacts a “fatal error” that evokes the logic of a system to create an opening for poeisis, play, and the unintended. 
Information networks, no longer secondary to higher order social and cultural formations, now define the function and logic of social space itself. This culture of circulation creates equivalences by way of a common currency of "information," such that "viral" distribution defines a social event in its own right, regardless of the content of transmission. While a decade earlier theorists speculated on the emergence of a collective intelligence via global networks, the culture of circulation that has developed online would seem to indicate that "emergence" and circulation are self-justifying events. In the moment of equivocation—not so much beyond good and evil, but rather in the spaces between signal and noise—slippage, error, and misdirection suggest a moment of opening in contrast to the black hole closures of the cybernetic imperative. The violence of an informatic monoculture expresses itself in this moment of insistence that whatever circulates signifies, and that which cannot communicate must be silenced. In such an environment, we would do well to examine these failures to communicate, as well as the ways in which error and noise seduce us off course. In contrast to the terror of an eternal return of the actual, a poetics of noise suggests a virtuality of the network, an opening of the possible in an increasingly networked society. The articles in this issue of M/C Journal approach error from a range of critical and social perspectives. Essays address the ways in which error marks both a misstep and an opening. Throughout this issue, the authors address error as both abject and privileged instance in a society increasingly defined by information networks and systems of control. In our feature article, "Revealing Errors," Benjamin Mako Hill explores how media theorists would benefit from closer attention to errors as "under-appreciated and under-utilised in their ability to reveal technology around us." By allowing errors to communicate, he argues, we gain a perspective that makes invisible technologies all the more visible. As such, error provides a productive moment for both interpretive and critical interventions. Two essays in this issue look at the place of error and noise within the work of art. Rather than foregrounding a concept of "medium" that emphasises clear, unimpeded transmission, these authors explore the ways in which the errant and unintended provide for a productive aesthetic in its own right. Using Shannon's information theory, and in particular his concept of equivocation, Su Ballard's essay, "Information, Noise, and et al.'s 'maintenance of social solidarity-instance 5'," explores the productive error of noise in the digital installation art of a New Zealand artists' collective. Rather than carefully controlling the viewer's experience, et al.'s installation places the viewer within a field of equivocation, in effect encouraging misreadings and unintended insertions. In a similar vein, Tim Barker's essay, "Error, the Unforeseen, and the Emergent: The Error of Interactive Media Art," examines the productive error of digital art, both as an expression of artistic intent and as an emergent expression within the digital medium. This "glitch aesthetic" foregrounds the errant and uncontrollable in any work of art.
In doing so, Barker argues, error also serves as a measure of the virtual—a field of potential that gestures toward the "unforeseen." The virtuality of error provides a framework of sorts for two additional essays that, while separated considerably in subject matter, share similar theoretical concerns. Taking up the concept of an asignifying poetics of noise, Christopher Grant Ward's essay, "Stock Images, Filler Content, and the Ambiguous Corporate Message," explores how the stock image industry presents a kind of culture of noise in its attempt to encourage equivocation rather than control semiotic signal. By producing images that are more virtual than actual, visual filler provides an all-too-familiar instance of equivocation as a field of potential and a Derridean citation of undecidability. Adi Kuntsman takes a similar theoretic tack in "'Error: No Such Entry': Haunted Ethnographies of Online Archives." Using a database retrieval error message, "no such entry," Kuntsman reflects upon her ethnographic study of an online community of Russian-Israeli queer immigrants. Error messages, she argues, serve as informatic "hauntings"—erasures that speak of an online community's complex relation to the construction and archiving of a collective history. In the case of a database retrieval error—as in the mailer-daemon's notice of the "550" error—the failure of an address to respond to its hailing calls attention to a gap between query and expected response. This slippage in control is, as discussed above, an instance of an Augustinian error. But what of the Manichean—the intentional engagement in strategies of misdirection? In "Bad Avatar! Griefing in Virtual Worlds," Kimberly Gregson provides a taxonomy of aberrant behaviour in online gaming, in which players distort or subvert orderly play through acts that violate protocol. From the perspective of many a gamer, griefing serves no purpose other than annoyance, since it exploits the rules of play to disrupt play itself. Yet in "Amazon Noir: Piracy, Distribution, Control," Michael Dieter calls attention to "how the forces confined as exterior to control (virality, piracy, noncommunication) regularly operate as points of distinction to generate change and innovation." The Amazon Noir project exploited vulnerabilities in Amazon.com's Search Inside!™ feature to redistribute thousands of electronic texts for free through peer-to-peer networks. Dieter demonstrates how this "tactical media performance" challenged a cybernetic system of control by opening it up to new and ambiguous creative processes. Two of this issue's pieces explore a specific error at the nexus of media and culture, and in keeping with Hill's concept of "revealing errors," use this "glitch" to lay bare dominant ideologies of media use. In her essay, "Artificial Intelligence: Media Illiteracy and the SonicJihad Debacle in Congress," Elizabeth Losh focuses on a highly public misreading of a Battlefield 2 fan video by experts from the Science Applications International Corporation in their testimony before Congress on digital terrorism. Losh argues that Congress's willingness to give audience to this misreading is a revealing error in its own right, as it calls attention to the anxiety of experts and power brokers over the control and distribution of information. In a similar vein, Yasmin Ibrahim's essay, "The Emergence of Audience as Victims: The Issue of Trust in an Era of Phone Scandals," explores the revealing error of interactive television gone wrong.
Through an examination of recent BBC phone-in scandals, Ibrahim explores how failures—both technical and ethical—challenge an increasingly interactive audience's sense of trust in the "reality" of mass media. Our final essay takes up the theme of mutation as genetic error. Martin Mantle's essay, "'Have You Tried Not Being a Mutant?': Genetic Mutation and the Acquisition of Extra-ordinary Ability," explores "normal" and "deviant" bodies as depicted in recent Hollywood retellings of comic book superhero tales. Error, he argues, while signalling the birth of superheroic abilities, marks a site of genetic anxiety in an informatic culture. Mutation as "error" marks the body as scapegoat, signalling all that exceeds normative control. In each of these essays, error, noise, deviation, and failure provide a context for analysis. In suggesting the potential for alternate, unintended outcomes, error marks a systematic misgiving of sorts—a creative potential with unpredictable consequences. As such, error—when given its space—provides an opening for artistic and critical interventions.

References

"Art Fry, Inventor of Post-It® Notes: 'One Man's Mistake is Another's Inspiration.'" InventHelp. 2004. 14 Oct. 2007 <http://www.inventhelp.com/articles-for-inventors-art-fry.asp>.
Barthes, Roland. S/Z. Trans. Richard Miller. New York: Hill and Wang, 1974.
Baudrillard, Jean. The Transparency of Evil. Trans. James Benedict. New York: Verso, 1993.
Deleuze, Gilles. "Postscript on the Societies of Control." October 59 (Winter 1992): 3-7.
Deleuze, Gilles, and Felix Guattari. A Thousand Plateaus. Trans. Brian Massumi. Minneapolis: U Minnesota P, 1987.
Eco, Umberto. The Open Work. Cambridge: Harvard UP, 1989.
"Googlebombing 'Failure.'" Official Google Blog. 16 Sep. 2005. 14 Oct. 2007 <http://googleblog.blogspot.com/2005/09/googlebombing-failure.html>.
Hayles, N. Katherine. How We Became Posthuman. Chicago: U Chicago P, 1999.
Jenkins, Henry. Convergence Culture. New York: NYU Press, 2006.
Lyotard, Jean-Francois. The Postmodern Condition. Trans. Geoffrey Bennington and Brian Massumi. Minneapolis: Minnesota UP, 1984.
Moulthrop, Stuart. "Error 404: Doubting the Web." 2000. 14 Oct. 2007 <http://iat.ubalt.edu/moulthrop/essays/404.html>.
Poster, Mark. Information Please. Durham, NC: Duke UP, 2006.
Shannon, Claude, and Warren Weaver. The Mathematical Theory of Communication. Urbana: U Illinois P, 1949.
"Silly Putty®." Inventor of the Week. 3 Mar. 2003. 14 Oct. 2007 <http://web.mit.edu/Invent/iow/sillyputty.html>.
Wiener, Norbert. The Human Use of Human Beings. Cambridge, MA: Da Capo, 1988.
APA, Harvard, Vancouver, ISO, and other styles
49

White, Peter B., and Naomi White. "Staying Safe and Guilty Pleasures." M/C Journal 10, no. 1 (March 1, 2007). http://dx.doi.org/10.5204/mcj.2614.

Full text
Abstract:
Introduction

In a period marked by the pervasiveness of new mobile technologies saturating urban areas of the Asia-Pacific region, it can be easy to forget the realities of life in the rural areas. In a location such as Australia, in which 80% of the population lives in urban areas, one must be reminded of the sociotechnological realities of rural existence, where newer mobile communication devices often cease to function. This paper focuses on these black spots – often forgotten areas – where older, mediated technologies such as UHF Citizens Band (CB) radios remain integral to practices of everyday rural life. As Anderson notes, constructs of the nation are formed through contested notions of what individuals and communities imagine and project as a sense of place. In Australia, one of the dominant contested imageries can be found in the urban and rural divide, a divide that is not just social and cultural but technological; it is marked by a digital divide. This divide neatly corresponds to the images of Australia experienced by Australians (predominantly living in urban areas) and exported tourist images of the rugged, vast rural landscapes. The remote Australian Outback is a popular destination for domestic tourists. Its sparse population and rough terrain attract tourists seeking a quintessentially Australian experience. Roads are often unmade and in poor condition. Fuel and food supplies and health services are widely separated and there is almost no permanent accommodation. Apart from a small number of regional centres there is no access to mobile phones or radio broadcasts. As a consequence tourists must be largely self-sufficient. While the primary roads carry significant road traffic it is possible to drive all day on secondary roads without seeing another person. Isolation and self-sufficiency are both an attraction and a challenge. Travelling in campervans, towing caravans or camper trailers and staying in caravan parks, national parks, roadside stops or alone in the bush, tourists spend extended times in areas where there are few other tourists. Many tourists deal with this isolation by equipping their vehicles with CB radios. Depending on the terrain, they are able to listen to, and participate in, conversations with other CB users within a 10-20 kilometre range. In some areas where there are repeater stations, the range of radio transmissions can be extended. This paper examines the role of these CB radios in the daily life of tourists in the Australian Outback.

Theoretical Issues

The links between travel, the new communications technologies and the diminished spatial-time divide have been explored by John Urry. According to Urry, mobile electronic devices make it possible for people "to leave traces of their selves in informational space" (266). Using these informational traces, mobile communication technologies 'track' the movements of travellers, enabling them to communicate synchronously. People become 'nodes in multiple networks of communication and mobility' (266). Another consequence of readily available communication independent of location is for the meaning of social connections. Social encounters provide tourists with the opportunity to develop and affirm understandings of their shared common occupation of unfamiliar social and cultural landscapes (Harrison). Both transitory and enduring relationships provide information, companionship and resources that allow tourists to create, share and give meaning to their experiences (Stokowski).
Communication technology also enables individuals to enter and remain part of social networks while physically absent and distant from them (Johnsen; Makimoto and Manners; Urry). The result is a "nomadic intimacy" in an everyday social and physical environment characterised by extended spaces and individual freedom to move around in these spaces (Fortunati). For travellers in the Australian Outback, this "nomadic intimacy" is both literal and metaphorical. Research has shown that travellers use mobile communications services and a range of other communication strategies to maintain a "symbolic proximity" with family, friends and colleagues (Wurtzel and Turner) and to promote a sense of "presence while absent", or "co-presence" (Gergen; Lury; Short, Williams and Christie; White and White, "Keeping Connected"; White and White, "Home and Away"). Central to the original notion of co-presence was that it was contingent on those involved in a given communication both being and feeling close enough to perceive each other and to be perceived in the course of their activities (Goffman). That is, the notion of co-presence initially referred to physical presence in face-to-face contact and interactions. However, increasing use of mobile phones in particular has meant that this sense of connection can be affirmed at a distance. But what happens when travellers do not have access to mobile phones and the Internet, and as a consequence, do not have access to their networks of family, friends and colleagues? How do they deal with travel and isolation in a harsh environment? These issues are the starting point for the present paper, which examines travellers' experience of CB radio in the remote Australian Outback. This exploration of how the CB radio has been incorporated into the daily lives of these travellers can be seen as a contribution to an understanding of the domestication of mobile communications (Haddon).

Methodology

People were included in the study if they used CB radios while travelling in remote parts of Western Australia and the Northern Territory. The participants were approached in caravan parks, camping grounds and at roadside stops. Most were travelling in caravans while others were using camper trailers and campervans. Twenty-four travellers were interviewed, twelve men and twelve women. All were travelling with partners or spouses, and one group of two couples was travelling together. They ranged in age from twenty-five to seventy years, and all were Australian residents. The duration of their travels varied from six weeks to eleven months. Participants were interviewed using a semi-structured interview schedule. The interviews were transcribed and then thematically coded with respect to regularly articulated points of view. Where points of view were distinctive, they were noted during the coding process as contrasting instances. While the relatively small sample size limits generalisability, the issues raised by the respondents provide insights into the meaning of CB radio use in the daily life of travellers in the Australian Outback.

Findings

Staying Safe

The primary reason given for travelling with a CB radio was personal safety. The tourists interviewed were aware of the risks associated with travelling in the Outback. Health emergencies, car accidents and problems with tyres in a harsh and hot environment without ready access to water were often mentioned. "If you call a May Day someone will come out and answer…" (Female, 55).
Another interviewee reported that:

Last year we helped some folk who were bogged in the sand right at the end of the road in the middle of nowhere. The wife just started calling the various channels explaining that they were bogged and asking whether there was anyone out there…. We went and towed them out…. It would have been a long walk for them to get help. (Female, 55)

Even though most interviewees had not themselves experienced a personal emergency, many recounted stories about how CB radio had been used to come to the aid of someone in distress. Road conditions were another concern, and travellers were often rightly worried about hazards ahead. One traveller noted:

You are always going to hear someone who gives you an insight as to what is happening up ahead on the road. If there's an accident up ahead someone's going to get on the radio and let people know. Or there could be road works or the road could be shitty. (Male, 50)

Safety arose in another context. Tourists share the rough and often dusty roads with road trains towing up to three trailers. These vehicles can be 50 metres long. A road train creates wind turbulence when it passes a car and trailer or caravan, and the dust it raises reduces visibility. Because of this, car drivers and caravanners need to be extremely careful when they pass or are passed by one. Passing a road train at 100 km/h can take 2.5 km. Interviewees reported that they communicated with road train drivers to negotiate a safe time and place to pass. One caravanner noted:

Sometimes you see a road train coming up behind you. You call him up and say "I'll pull over for you mate and slow down and you go". You use it a lot because it's safer. We are not in a hurry. Road trains are working and they are in a hurry and he (sic.) is bigger, so he has the right of way. (Male, 50)

As with the dominant rationale for installing and using a CB radio, Rice and Katz showed that concern about safety is the primary motive for women acquiring a mobile phone, and safety was also important for men. The social contact enabled by CB radio provided a means of tracking the movements of other travellers who were nearby. This tracking ability engendered a sense of comfort and enabled them to communicate and exchange information synchronously in a potentially dangerous environment. As a consequence, a 'metaworld' (Suvantola) of 'informational traces' (Urry) was created.

Making Oneself Known

All interactions entail conventions and signals that enable a conversation to commence. These conventions were also seen to apply to CB conversations. Driving in a car or truck involves being physically enclosed, with the drivers and passengers being either invisible or only partially visible to other travellers. Caravanners deal with this lack of visibility in a number of ways. Many have their first names, the name of their caravan and the channel they use displayed on the rear of their van. A typical sign was "Bill and Rose, Travelling Everywhere, Channel 18" or "Harry and Mary, Bugger Work, Gone Fishing, Channel 18", clearly visible to anyone coming from behind. (The male partner's name was invariably first.) A sign that identified the occupants was seen as an invitation to chat by other travellers. One traveller said that if he saw such a sign he would call up by saying:

"Hello Harry and Mary". From then on who knows where it goes. It depends on the people. If someone comes back really cheery and a bit cheeky I can be cheery and cheeky back.
(Male, 50) The names of caravans were used in other, more personal ways. One couple from South Africa had given their van a Zulu name, which was seen as a way of identifying their origins and encouraging a specific kind of conversation while they were on the road. This couple reported: People call us up and ask us what it means. We have lots of calls about that. We’ve had more conversations about that than anything else. (Male, 67) Another caravanner reported that he had seen a van with “Nanna and Poppa” on the back. They used that as a cue to start a conversation about their grandchildren. But caravan names linked to a CB radio channel can have a deeper personal meaning. One couple had their first names and the number 58 on the rear of their van. (The number 58 is beyond the range of CB channels.) On further questioning, the number 58 was revealed to be the football club number of a daughter who had died. The sign was an attempt to deal with their grief, and its public display a way of entering into a conversation about grief and loss. It has probably backfired because it puts people back into their shell because they think “We don’t want to talk about death”. But because of the sign we’ve met people who’ve lost a child too. (Male, 50) As Featherstone notes, drivers develop competence in switching between a range of communicative modes while they are travelling. These range from body gestures to formal signalling devices on other cars. Signage on caravans designed to invite conversation was a specialised signalling device specific to the CB user.
Talking
Loneliness was another theme emerging from the interviews. One of the attractions of the Outback is its sparse population. As one interviewee noted, ‘You can travel all day and not see another soul’ (Female, 35). But this loneliness can be a challenge. Some of these roads are pretty lonely, the radio lets you know that there’s somebody else out there. (Male, 54) Hearing other travellers talk was comforting. As with previous research showing that travellers use mobile communications services to maintain a “symbolic proximity” (Gergen; Lury; Short, Williams and Christie; White and White, “Keeping Connected”), the CB conversations enabled the travellers to feel this sense of connection. These interactions also offered them the possibility of converting mediated relationships into face-to-face encounters along the road. That is, some travellers reported that CB-based chats with people while they were driving would lead to a decision to stop along the road for a shared morning tea or lunch. Conventions governed the use of specific channels. Some of these are government regulated, while others are user generated. For instance, Channels 18 and 40 were seen as ‘working channels’. Some interviewees felt very strongly about people who ‘cluttered up’ these channels and moved to another unused channel when they wanted to have an extended conversation. One couple was unaware of the local convention and could not understand why no one was calling them up. They later discovered that they were on the ‘wrong channel’. Interviewees travelling in a convoy would use the standard channel for travellers and then agree to move to another channel of their choice. When we travelling in a convoy we go off Channel 18 and use another channel to talk. The girls love it to talk about their knitting and work out what they’ve done wrong. We sometimes tell jokes. Also we work out what we are going to do in the next town.
(Male, 67) These extended conversations parallel the lengthy conversations between drivers equipped with CB radio in the United States during the 1970s, which Dannefer described as ‘as diverse as those found at a cocktail party’. They also provided a sense of the “nomadic intimacy” described by Fortunati.
Eavesdropping
While travellers used Channel 18 for conversations, they set their radios to automatically scan all forty channels. When a conversation was located, the radio would stop scanning and they could listen to what was being said. This meant that travellers would overhear conversations between strangers. We scan all the channels so you can hear anyone coming up behind, especially trucks and you can hear them say “that damn caravan” and you can say “that damn caravan will pull over at the first opportunity”. (Female, 44) But the act of listening in to other people’s conversations created moral dilemmas for some travellers. One interviewee described it as “voyeurism for the ears”. While she described listening to farm conversations as giving her an insight into daily life on a huge cattle station, she was tempted to butt into one conversation that she was listening to. On reflection, she decided against entering the conversation. She said: I didn’t want them to know that we were eavesdropping on their conversation. I’d be embarrassed if a third party knew that we were listening in. I guess that I’ve been taught that you shouldn’t listen in to other people’s conversations. It’s not good manners… (Female, 35) When travellers overheard conversations between road train or truck drivers they had mixed responses. These conversations were often sexually loaded and seen as coarse by the middle-class travellers. Some were forgiving of the conversational excesses, distinguishing themselves from the rough-and-tumble world of the ‘truckies’. One traveller noted that the truck drivers use a lot of bad language, but you’ve got to go with that, because that’s the type of people they are. But you have to go with the flow. We know that we are ‘playing’ and the truckies are ‘working’ so you have to be considerate to them. (Female, 50) While the language of the truck drivers was often threatening to middle-class travellers, overhearing their conversations was also seen as a comfort. One traveller remarked that sometimes you hear truckies talking about their families and they obviously know each other. It’s kind of nice to see how they think. (Female, 50) Travellers had similar feelings when they overheard conversations from cattle stations. Local cattle station workers and their families would use CB radios for their social and working communications, and travellers would often overhear these conversations. One traveller noted that when we are driving through a cattle station we work out which channel they are using, and we lock it on that one. And then we listen until they are out of range. We are city people and listening to the station chatter gives us a bit of an insight into what it must be like as a farmer working land out here. And then we talk about the farmers’ conversations. (Female, 35) Another traveller noted: If you are travelling and there’s nothing you can see you can listen to the farmer talking to his wife or the kids. It’s absolutely awesome to hear conversations on radio. (Female, 67) This empathic listening allows the travellers to imagine the lives of others in settings quite different from those with which they are familiar.
Furthermore, hearing farmers talking about fixing the fence in the left paddock or rounding up strays makes ‘you feel that you’re not alone’. The networking of the travellers’ social life that arose from listening in to others meant that they were able to learn about the environment in which they found themselves and to feel that they remained embedded, or ‘co-present’, in social relationships despite considerable physical isolation.
Conclusions
The accounts provided by tourists illustrated the way communications technologies – in this case, CB radio – enabled people to become the ‘nodes in multiple networks of communication and mobility’ described by Urry and to maintain ‘co-presence’. The CB radio allowed tourists to remain part of social networks while being physically absent from them (Gergen). Their responses also demonstrated the significance of CB radio in giving meaning to the experience of travel; the CB radio was shown to be an important part of the travel experience in the remote Australian Outback. The use of CB made it possible for travellers in the Australian Outback to obtain information vital for the safe traverse of the huge distances and isolated roads. The technology enabled them to break down the atomism and frontier-like isolation of the highway. Drivers and their passengers could reach out to other travellers and avoid remaining unconnected strangers. Long hours on the road could be dealt with by listening in on others’ conversations, even though some ambivalence was expressed about this activity. Despite an awareness that they could be violating the personal boundaries of others and that their own conversations could be overheard, the use of CB radio meant staying safe and enjoying guilty pleasures, imagined or not.
References
Anderson, Benedict. Imagined Communities. London: Verso, 1983. Dannefer, W. Dale. “The C.B. Phenomenon: A Sociological Appraisal.” Journal of Popular Culture 12 (1979): 611-19. Featherstone, Mike. “Automobilities: An Introduction.” Theory, Culture and Society 21.4/5 (2004): 1-24. Fortunati, Leopoldina. “The Mobile Phone: Towards New Categories and Social Relations.” Information, Communication and Society 5.2 (2002): 513-28. Gergen, Kenneth. “The Challenge of Absent Presence.” Perpetual Contact: Mobile Communications, Private Talk, Public Performance. Ed. James Katz. Cambridge: Cambridge UP, 2002. 227-54. Goffman, Erving. Behavior in Public Places: Notes on the Social Organization of Gatherings. New York: Free Press of Glencoe, 1963. Haddon, Leslie. “Domestication and Mobile Telephony.” Machines That Become Us: The Social Context of Personal Communication Technology. Ed. James E. Katz. New Brunswick, N.J.: Transaction Publishers, 2003. 43-55. Harrison, Julia. Being a Tourist: Finding Meaning in Pleasure Travel. Vancouver: U of British Columbia P, 2003. Johnsen, Truls Erik. “The Social Context of Mobile Use of Norwegian Teens.” Machines That Become Us: The Social Context of Personal Communication Technology. Ed. James Katz. London: Transaction Publishers, 2003. 161-69. Ling, Richard. “One Can Talk about Common Manners! The Use of Mobile Telephones in Inappropriate Situations.” Communications on the Move: The Experience of Mobile Telephony in the 1990s (Report of Cost 248: The Future European Telecommunications User Mobile Workgroup). Ed. Leslie Haddon. Farsta, Sweden: Telia AB, 1997. 97-120. Lury, Celia. “The Objects of Travel.” Touring Cultures: Transformations of Travel and Theory. Eds. Chris Rojek and John Urry.
London: Routledge, 1997. 75-95. Rice, Ronald E., and James E. Katz. “Comparing Internet and Mobile Phone Usage: Digital Divides of Usage, Adoption and Dropouts.” Telecommunications Policy 27 (2003): 597-623. Short, J., E. Williams, and B. Christie. The Social Psychology of Telecommunications. New York: Wiley, 1976. Stokowski, Patricia. “Social Networks and Tourist Behavior.” American Behavioral Scientist 36.2 (1992): 212-21. Suvantola, Jaakko. Tourist’s Experience of Place. Aldershot: Ashgate, 2002. Urry, John. “Mobility and Proximity.” Sociology 36.2 (2002): 255-74. ———. “Social Networks, Travel and Talk.” British Journal of Sociology 54.2 (2003): 155-75. White, Naomi Rosh, and Peter B. White. “Home and Away: Tourists in a Connected World.” Annals of Tourism Research 34.1 (2007): 88-104. White, Peter B., and Naomi Rosh White. “Keeping Connected: Travelling with the Telephone.” Convergence: The International Journal of Research into New Media Technologies 11.2 (2005): 102-18. Williams, Stephen, and Lynda Williams. “Space Invaders: The Negotiation of Teenage Boundaries through the Mobile Phone.” The Sociological Review 53.2 (2005): 314-31. Wurtzel, Alan H., and Colin Turner. “Latent Functions of the Telephone: What Missing the Extension Means.” The Social Impact of the Telephone. Ed. Ithiel de Sola Pool. Cambridge: MIT Press, 1977. 246-61.
APA, Harvard, Vancouver, ISO, and other styles
50

Meleo-Erwin, Zoe C. "“Shape Carries Story”: Navigating the World as Fat." M/C Journal 18, no. 3 (June 10, 2015). http://dx.doi.org/10.5204/mcj.978.

Full text
Abstract:
Story spreads out through time the behaviors or bodies – the shapes – a self has been or will be, each replacing the one before. Hence a story has before and after, gain and loss. It goes somewhere…Moreover, shape or body is crucial, not incidental, to story. It carries story; it makes story visible; in a sense it is story. Shape (or visible body) is in space what story is in time. (Bynum, quoted in Garland Thomson, 113-114) Drawing on Goffman’s classic work on stigma, research documenting the existence of discrimination and bias against individuals classified as obese goes back five decades. Since Cahnman published “The Stigma of Obesity” in 1968, other researchers have thoroughly documented systematic and growing discrimination against fat people (cf. Puhl and Brownell; Puhl and Heuer; Fikkan and Rothblum). While weight-based stereotyping has a long history (Chang and Christakis; McPhail; Schwartz), contemporary forms of anti-fat stigma and discrimination must be understood within a social and economic context of neoliberal healthism. By neoliberal healthism (see Crawford; Crawford; Metzl and Kirkland), I refer to the set of discourses that suggest that humans are rational, self-determining actors who independently make their own best choices and are thus responsible for their life chances and health outcomes. In such a context, good health becomes associated with proper selfhood, and there are material and social consequences for those who are either unwell or perceived to be unwell. While the greatest impacts of size-based discrimination are structural in nature, the interpersonal impacts are also significant. Because obesity is commonly represented (at least partially) as a matter of behavioral choices in public health, medicine, and media, to “remain fat” is to invite commentary from others that one is lacking in personal responsibility. Guthman suggests that this lack of empathy “also stems from the growing perception that obesity presents a social cost, made all the more tenable when the perception of health responsibility has been reversed from a welfare model” (1126). Because weight loss is commonly held to be a reasonable and feasible goal and yet is nearly impossible to maintain in practice (Kassirer and Angell; Mann et al.; Puhl and Heuer), fat people are “in effect, asked to do the impossible and then socially punished for failing” (Greenhalgh, 474). In this article, I explore how weight-based stigma shaped the decisions of bariatric patients to undergo weight loss surgery. In doing so, I underline the work that emotion does in circulating anti-fat stigma and in creating categories of subjects along lines of health and responsibility. As well, I highlight how fat bodies are lived and negotiated in space and place. I then explore ways in which participants take up notions of time, specifically in regard to risk, in discussing what brought them to the decision to have bariatric surgery. I conclude by arguing that it is a dynamic interaction between the material, the social, the emotional, the discursive, and the temporal that produces not only fat embodiment, but fat subjectivity as “failed”, and that serves as an impetus for seeking bariatric surgery.
Methods
This article is based on 30 semi-structured interviews with American bariatric patients. At the time of the interview, individuals were between six months and 12 years out from surgery. After obtaining Institutional Review Board approval, recruitment occurred through a snowball sample.
All interviews were audio-taped with permission, and verbatim interview transcripts were analyzed by means of a thematic analysis using Dedoose (www.dedoose.com). All names given in this article are pseudonyms. This work is part of a larger project that includes two additional interviews with bariatric surgeons as well as participant-observation research.
Findings
Navigating Anti-Fat Stigma
In discussing what it was like to be fat, all but one of the individuals I interviewed discussed experiencing substantive size-based stigma and discrimination. Whether through overt comments, indirect remarks, dirty looks, open gawking, or being ignored and unrecognized, participants felt hurt, angry, and shamed by friends, family, coworkers, medical providers, and strangers on the street because of the size of their bodies. Several recalled being bullied and even physically assaulted by peers as children. Many described the experience of being fat or very fat as one of simultaneous hypervisibility and invisibility. One young woman, Kaia, said: “I absolutely was not treated like a person … I was just like this object to people. Just this big, you know, thing. That’s how people treated me.” Nearly all of my participants described being told repeatedly by others, including medical professionals, that their inability to lose weight was effectively a failure of the will. They found these comments to be particularly hurtful because, in fact, they had spent years, even decades, trying to lose weight only to gain the weight back plus more. Some providers and family members seemed to take up the idea that shame could be a motivating force in weight loss. However, as research by Lewis et al., Puhl and Heuer, and Schafer and Ferraro has demonstrated, the effect this had was the opposite of what was intended. Specifically, a number of the individuals I spoke with delayed care and avoided health-facilitating behaviors, like exercising, because of the discrimination they had experienced. Instead, they turned to health-harming practices, like crash dieting. Moreover, the internalization of shame and blame served to lower a sense of self-worth for many participants. And despite having a strong sense that something outside of personal behavior explained their escalating body weights, they deeply internalized messages about responsibility and self-control. Danielle, for instance, remarked: “Why could the one thing I want the most be so impossible for me to maintain?” It is important to highlight the work that emotion does in circulating such experiences of anti-fat stigma and discrimination. As Fraser et al. have argued in their discussion of fat and emotion, the social, the emotional, and the corporeal cannot be separated. Drawing on Ahmed, they argue that strong emotions are neither interior psychological states that work between individuals nor societal states that impact individuals. Rather, emotions are constitutive of subjects and collectivities (Ahmed; Fraser et al.). Negative emotions in particular, such as hate and fear, produce categories of people by defining them as a common threat; in the process, they also create categories of people who are deemed legitimate and those who are not. Thus, following Fraser et al., it is possible to see that anti-fat hatred did more than just negatively impact the individuals I spoke with. Rather, it worked to produce, differentiate, and drive home categories of people along lines of health, weight, risk, responsibility, and worth.
In this next section, I examine the ways in which anti-fat discrimination works at the interface of not only the discursive and the emotive, but the material as well.
Big Bodies, Small Spaces
When they discussed their previous lives as very fat people, all of the participants made reference to a social and built environment mismatch or, in Garland Thomson’s terms, a “misfit”. A misfit occurs “when the environment does not sustain the shape and function of the body that enters it” (594). Whereas the built environment offers a fit for the majority of bodies, Garland Thomson continues, it also creates misfits for minority forms of embodiment. While Garland Thomson’s analysis is particular to disability, I argue that it extends to fat embodiment as well. In discussing what it was like to navigate the world as fat, participants described both the physical and emotional pain entailed in living in bodies that did not fit, and frequently discussed the ways in which leaving the house was always a potential, anxiety-filled problem. Whereas all of the participants I interviewed discussed such misfitting, it was notable that participants in the Greater New York City area (70% of the sample) spoke about this topic at length. Specifically, they made frequent and explicit mention of the particular interface between their fat bodies and the Metropolitan Transportation Authority (MTA), and the tightly packed spaces of the city itself. Greater New York City area participants frequently spoke of the shame and physical discomfort of having to stand on public transportation for fear that they would be openly disparaged for “taking up too much room.” Some mentioned that transit seats were made of molded plastic, indicating by design the amount of space a body should occupy. Because they knew they would require more space than what was allotted, these participants only took seats after calculating how crowded the subway or train car was and how crowded it would likely become. Notably, for some of the larger individuals who experienced joint pain, the decision not to take a seat came at a cost. Many participants stated that the densely populated nature of New York City made navigating daily life very challenging. In Talia’s words, “More people, more obstacles, less space.” Participants described always having to be on guard, looking for the next obstacle. As Candice put it: “I would walk in some place and say, ‘Will I be able to fit? Will I be able to manoeuvre around these people and not bump into them?’ I was always self-conscious.” Although participants often found creative solutions to navigating the hostile environment of both the MTA and the city at large, they also identified an increasing sense of isolation that resulted from the physical discomfort and embarrassment of not fitting in. For instance, Talia rarely joined her partner and their friends on outings to movies or the theater because the seats were too tight. Similarly, Decenia would make excuses to her husband in order to avoid social situations outside of the home: “I’d say to my husband, ‘I don’t feel well, you go.’ But you know what? It was because I was afraid not to fit, you know?” The anticipatory scrutinizing described by these participants, and the anxieties it produced, echoes Kirkland’s contention that fat individuals use the technique of ‘scanning’ in order to navigate and manage hostile social and built environments.
Scanning, she states, involves both a literal, rapid looking-over of situations and places to determine accessibility and a learned assessment and observation technique that allows fat people to anticipate how they will be received in new situations and new places. For my participants, worries about not fitting were more than just internal calculation. Rather, others made all too clear that fat bodies are not welcome. Nina recalled nasty looks she received from other subway riders when she attempted to sit down. Decenia described an experience on a crowded commuter train in which the woman next to her openly expressed annoyance and disgust that their thighs were touching. Talia recalled being aggressively handed a weight loss brochure by a fellow passenger. When asked to contrast their experiences living in New York City with having travelled or lived elsewhere, participants almost universally described New York as a more difficult place for fat people to live. However, the experiences of three of the Latinas that I interviewed troubled this narrative. Katrina felt that the harassment she received in her country of origin, the Dominican Republic, was far worse than what she now experienced in the New York Metropolitan Area. Although Decenia detailed painful experiences of anti-fat stigma in New York City, she nevertheless described her life as relatively “easy” compared to what it was like in her home country of Brazil. And Denisa contrasted her neighbourhood of East Harlem with other parts of Manhattan: “In Harlem it’s different. Everybody is really fat or plump – so you feel a bit more comfortable. Not everybody, but there’s a mix. Downtown – there’s no mix.” Collectively, their stories serve as a reminder (see Franko et al.; Grabe and Hyde) to be suspicious of overdetermined accounts that “Latino culture” is (or people of colour communities in general are) more accepting of larger bodies and more resistant to weight-based stigma and discrimination. Their comments also reflect arguments made by Colls, Grosz, and Garland Thomson, who have all pointed to the contingent nature of the relationship between space and bodies. Colls argues that sizing is both a material and an emotional process – what size we take ourselves to be shifts in different physical and emotional contexts. Grosz suggests that there is a “mutually constitutive relationship between bodies and cities” – one that, I would add, is raced, classed, and gendered. Garland Thomson has described the relationship between bodies and space/place as “a dynamic encounter between world and flesh.” These encounters, she states, are always contingent and situated: “When the spatial and temporal context shifts, so does the fit, and with it meanings and consequences” (592). In this sense, fat is materialized differently in different contexts and at different scales – nation, state, city, neighbourhood – and the materialization of fatness is always entangled with raced, classed, and gendered social and political-economic relations. Nevertheless, it is possible to draw some structural commonalities between divergent parts of the Greater New York City Metropolitan Area. Specifically, a dense population, cramped physical spaces, inaccessible transportation and transportation funding cuts, social norms of fast-paced life, and elite, raced, classed, and gendered norms of status and beauty work to materialize fatness in such a way that a ‘misfit’ is often the result for fat people who live and/or work in this area.
And importantly, misfitting, as Garland Thomson argues, has consequences: it literally “casts out” when the “shape and function of … bodies comes into conflict with the shape and stuff of the built world” (594). This casting out produces some bodies as irrelevant to social and economic life, resulting in segregation and isolation. To misfit, she argues, is to be denied full citizenship.
Responsibilising the Present
Garland Thomson, discussing Bynum’s statement that “shape carries story”, argues the following: “the idea that shape carries story suggests … that material bodies are not only in the spaces of the world but that they are entwined with temporality as well” (596). In this section, I discuss how participants described their decisions to get weight loss surgery by making references to the need to take responsibility for health now, in the present, in order to avoid further and future morbidity and mortality. Following Adams et al., I look at how the fat body is lived in a state of constant anticipation – “thinking and living toward the future” (246). All of the participants I spoke with described long histories of weight cycling. While many managed to lose weight, none were able to maintain this weight loss in the long term – a reality consistent with the medical fact that dieting does not produce durable results (Kassirer and Angell; Mann et al.; Puhl and Heuer). They experienced this inability as not only distressing, but terrifying, as they repeatedly regained the lost weight plus more. When participants discussed their decisions to have surgery, they highlighted concerns about weight-related comorbidities and mobility limitations in their explanations. Consistent, then, with Boero, López, and Wadden et al., the participants I spoke with did not seek out surgery in hopes of finding a permanent way to become thin, but rather a permanent way to become healthy and normal. Concerns about what is considered to be normative health, more than simply concerns about what is held to be an appropriate appearance, motivated their decisions. Significantly, for these participants the decision to have bariatric surgery was based on concerns about future morbidity (and mortality) at least as much as, if not more than, on concerns about a current state of ill health and impairment. Some individuals I spoke with were unquestionably suffering from multiple chronic and even life-threatening illnesses and feared they would prematurely die from these conditions. Other participants, however, made the decision to have bariatric surgery despite the fact that they had no comorbidities whatsoever. Motivating their decisions was the fear that they would eventually develop them. Importantly, medical providers explicitly and repeatedly told all of these participants that unless they took drastic and immediate action, they would die. For example, Faith’s reproductive endocrinologist said: “You’re going to have diabetes by the time you’re 30; you’re going to have a stroke by the time you’re 40. And I can only hope that you can recover enough from your stroke that you’ll be able to take care of your family.” Several female participants were warned that without losing weight, they would either never become pregnant or they would die in childbirth. By contrast, participants stated that their bariatric surgeons were the first providers they had encountered to both assert that obesity was a medical condition outside of their control and to offer them a solution.
Within an atmosphere in which obesity is held to be largely or entirely the result of behavioural choices, the bariatric profession thus positions itself as unique by offering both understanding and what it claims to be a durable treatment. Importantly, it would be a mistake to conclude that some bariatric patients needed surgery while others chose it for the wrong reasons. Regardless of their states of health at the time they made the decision to have surgery, the concerns that drove these patients to seek out these procedures were experienced as very real. Whether or not these concerns would have materialized as actual health conditions is unknown. Furthermore, bariatric patients should not be seen as having been duped or as suffering from ‘false consciousness.’ Rather, they operate within a particular set of social, cultural, and political-economic conditions that suggest that good citizenship requires risk avoidance and personal health management. As these individuals experienced, there are material and social consequences for ‘failing’ to attain normative conceptualizations of health. This set of conditions helps to produce a bariatric patient population that includes both those who were contending with serious health concerns and those who feared they would develop them. All bariatric patients operate within this set of conditions (as do medical providers) and make decisions regarding health (current, future, or both) by using the resources available to them. In her work on the temporalities of dieting, Coleman argues that rather than seeing dieting as a linear and progressive event, we might think of it instead as a process that brings the future into the present as potential. Adams et al. suggest that concerns about potential futures, particularly in regard to health, are a defining characteristic of our time. They state: “The present is governed, at almost every scale, as if the future is what matters most. Anticipatory modes enable the production of possible futures that are lived and felt as inevitable in the present, rendering hope and fear as important political vectors” (249). The ability to act in the present based on potential future risks, they argue, has become a moral imperative and a marker of proper citizenship. Importantly, however, our work to secure the ‘best possible future’ is never fully assured, as risks are constantly changing. The future is thus always uncertain. Acting responsibly in the present therefore requires “alertness and vigilance as normative affective states” (254). Importantly, these anticipations are not diagnostic, but productive. As Adams et al. state, “the future arrives already formed in the present, as if the emergency has already happened…a ‘sense’ of the simultaneous uncertainty and inevitability of the future, usually manifest in entanglements of fear and hope” (250). It is in this light, then, that we might see the decision to have bariatric surgery. For these participants, their future weight-related morbidity and mortality had already arrived in the present, and thus they felt they needed to act responsibly now, by undergoing what they had been told was the only durable medical intervention for obesity. The emotions of hope, fear, anxiety, and, I would suggest, hatred were key in making these decisions.
Conclusion
Medical, public health, and media discourses frame obesity as an epidemic that threatens to bring untold financial disaster and escalating rates of morbidity and mortality upon the nation state and the world at large.
As Fraser et al. argue, strong emotions (such as hatred, fear, anxiety, and hope) are at the centre of these discourses; they construct, circulate, and proliferate them. Moreover, they create categories of people who are deemed legitimate and categories of others who are not. In this context, the participants I spoke with were caught between a desire to have fatness understood as a medical condition needing intervention; the anti-fat attitudes of others, including providers, which held that obesity was a failure of the will and nothing more; their own internalization of these messages of personal responsibility for proper behavioural choices; and the biologically intractable nature of fatness, wherein dieting not only fails to reduce weight in the vast majority of cases but results, in the long term, in increased weight gain (Kassirer and Angell; Mann et al.; Puhl and Heuer). Widespread anxiety and embarrassment over, and fear and hatred of, fatness was something that the individuals I interviewed experienced directly, and it signalled to them that they were less than human. Their desire for weight loss, therefore, was partially a desire to become ‘normal.’ In Butler’s terms, it was the desire for a ‘liveable life.’ A liveable life, for these participants, included a desire for a seamless fit with the built environment. The individuals I spoke with were never more ashamed of their fatness than when they experienced a ‘misfit’, in Garland Thomson’s terms, between their bodies and the material world. Moreover, feelings of shame over this disjuncture worked in tandem with a deeply felt, pressing sense that something must be done in the present to secure a better health future. The belief that bariatric surgery might finally provide a durable answer to obesity served as a strong motivating factor in their decisions to undergo bariatric surgery. By taking drastic action to lose weight, participants hoped to contest stigmatizing beliefs that their fat bodies reflected pathological interiors. Moreover, they sought to demonstrate responsibility and thus secure proper subjectivities and citizenship. In this sense, concerns, anxieties, and fears about health cannot be disentangled from the experience of anti-fat stigma and discrimination. Again, anti-fat bias, for these participants, was more than discursive: it operated through the circulation of emotion and was experienced in a very material sense. The decision to have weight loss surgery can thus be seen as occurring at the interface of emotion, flesh, space, place, and time, and in ways that are fundamentally shaped by the broader social context of neoliberal healthism.
Acknowledgment
I am grateful to the anonymous reviewers of this article for their helpful feedback on an earlier version.
References
Adams, Vincanne, Michelle Murphy, and Adele E. Clarke. “Anticipation: Technoscience, Life, Affect, Temporality.” Subjectivity 28.1 (2009): 246-265. Ahmed, Sara. “Affective Economies.” Social Text 22.2 (2004): 117-139. Boero, Natalie. Killer Fat: Media, Medicine, and Morals in the American “Obesity Epidemic”. New Brunswick: Rutgers University Press, 2012. Butler, Judith. Undoing Gender. New York: Routledge, 2004. Bynum, Caroline Walker. Jefferson Lecture in the Humanities. National Endowment for the Humanities, Washington, DC, 1999. Cahnman, Werner J. “The Stigma of Obesity.” The Sociological Quarterly 9.3 (1968): 283-299. Chang, Virginia W., and Nicholas A. Christakis.
“Medical Modeling of Obesity: A Transition from Action to Experience in a 20th Century American Medical Textbook.” Sociology of Health & Illness 24.2 (2002): 151-177. Coleman, Rebecca. “Dieting Temporalities: Interaction, Agency and the Measure of Online Weight Watching.” Time & Society 19.2 (2010): 265-285. Colls, Rachel. “‘Looking Alright, Feeling Alright’: Emotions, Sizing, and the Geographies of Women’s Experience of Clothing Consumption.” Social & Cultural Geography 5.4 (2004): 583-596. Crawford, Robert. “You Are Dangerous to Your Health: The Ideology and Politics of Victim Blaming.” International Journal of Health Services 7.4 (1977): 663-680. ———. “Health as a Meaningful Social Practice.” Health 10.4 (2006): 401-20. Dedoose. Computer Software. n.d. Fikkan, Janna J., and Esther D. Rothblum. “Is Fat a Feminist Issue? Exploring the Gendered Nature of Weight Bias.” Sex Roles 66.9-10 (2012): 575-592. Franko, Debra L., Emilie J. Coen, James P. Roehrig, Rachel Rodgers, Amy Jenkins, Meghan E. Lovering, and Stephanie Dela Cruz. “Considering J. Lo and Ugly Betty: A Qualitative Examination of Risk Factors and Prevention Targets for Body Dissatisfaction, Eating Disorders, and Obesity in Young Latina Women.” Body Image 9.3 (2012): 381-387. Fraser, Suzanne, JaneMaree Maher, and Jan Wright. “Between Bodies and Collectivities: Articulating the Action of Emotion in Obesity Epidemic Discourse.” Social Theory & Health 8.2 (2010): 192-209. Garland Thomson, Rosemarie. “Misfits: A Feminist Materialist Disability Concept.” Hypatia 26.3 (2011): 591-609. Goffman, Erving. Stigma: Notes on the Management of Spoiled Identity. New York: Simon & Schuster, 1963. Grabe, Shelly, and Janet S. Hyde. “Ethnicity and Body Dissatisfaction among Women in the United States: A Meta-Analysis.” Psychological Bulletin 132.2 (2006): 622. Greenhalgh, Susan. “Weighty Subjects: The Biopolitics of the U.S. War on Fat.” American Ethnologist 39.3 (2012): 471-487. Grosz, Elizabeth A. “Bodies-Cities.” Feminist Theory and the Body: A Reader, eds. Janet Price and Margrit Shildrick. New York: Routledge, 1999. 381-387. Guthman, Julie. “Teaching the Politics of Obesity: Insights into Neoliberal Embodiment and Contemporary Biopolitics.” Antipode 41.5 (2009): 1110-1133. Kassirer, Jerome P., and Marcia Angell. “Losing Weight: An Ill-Fated New Year’s Resolution.” The New England Journal of Medicine 338.1 (1998): 52. Kirkland, Anna. “Think of the Hippopotamus: Rights Consciousness in the Fat Acceptance Movement.” Law & Society Review 42.2 (2008): 397-432. Lewis, Sophie, Samantha L. Thomas, R. Warwick Blood, David Castle, Jim Hyde, and Paul A. Komesaroff. “How Do Obese Individuals Perceive and Respond to the Different Types of Obesity Stigma That They Encounter in Their Daily Lives? A Qualitative Study.” Social Science & Medicine 73.9 (2011): 1349-56. López, Julia Navas. “Socio-Anthropological Analysis of Bariatric Surgery Patients: A Preliminary Study.” Social Medicine 4.4 (2009): 209-217. McPhail, Deborah. “What to Do with the ‘Tubby Hubby’? ‘Obesity,’ the Crisis of Masculinity, and the Nuclear Family in Early Cold War Canada.” Antipode 41.5 (2009): 1021-1050. Mann, Traci, A. Janet Tomiyama, Erika Westling, Ann-Marie Lew, Barbara Samuels, and Jason Chatman. “Medicare’s Search for Effective Obesity Treatments.” American Psychologist 62.3 (2007): 220-233. Metzl, Jonathan. “Introduction: Why ‘Against Health?’” Against Health: How Health Became the New Morality, eds. Jonathan Metzl and Anna Kirkland. New York: NYU Press, 2010. 1-14. Puhl, Rebecca M.
“Obesity Stigma: Important Considerations for Public Health.” American Journal of Public Health 100.6 (2010): 1019-1028. ———, and Kelly D. Brownell. “Psychosocial Origins of Obesity Stigma: Toward Changing a Powerful and Pervasive Bias.” Obesity Reviews 4.4 (2003): 213-227. ———, and Chelsea A. Heuer. “The Stigma of Obesity: A Review and Update.” Obesity 17.5 (2009): 941-964. Schafer, Markus H., and Kenneth F. Ferraro. “The Stigma of Obesity: Does Perceived Weight Discrimination Affect Identity and Physical Health?” Social Psychology Quarterly 74.1 (2011): 76-97. Schwartz, Hillel. Never Satisfied: A Cultural History of Diets, Fantasies, and Fat. New York: Anchor Books, 1986. Wadden, Thomas A., David B. Sarwer, Anthony N. Fabricatore, LaShanda R. Jones, Rebecca Stack, and Noel Williams. “Psychosocial and Behavioral Status of Patients Undergoing Bariatric Surgery: What to Expect before and after Surgery.” The Medical Clinics of North America 91.3 (2007): 451-69. Wilson, Bianca. “Fat, the First Lady, and Fighting the Politics of Health Science.” Lecture. The Graduate Center of the City University of New York. 14 Feb. 2011.
APA, Harvard, Vancouver, ISO, and other styles
