Journal articles on the topic "Beyond worst-case analysis"

Consult the top 50 scholarly journal articles on the topic "Beyond worst-case analysis".

1

Roughgarden, Tim. "Beyond worst-case analysis". Communications of the ACM 62, no. 3 (February 21, 2019): 88–96. http://dx.doi.org/10.1145/3232535.

2

Manthey, Bodo, and Heiko Röglin. "Smoothed Analysis: Analysis of Algorithms Beyond Worst Case". it - Information Technology 53, no. 6 (December 2011): 280–86. http://dx.doi.org/10.1524/itit.2011.0654.

3

Kirner, Raimund, Jens Knoop, Adrian Prantl, Markus Schordan and Albrecht Kadlec. "Beyond loop bounds: comparing annotation languages for worst-case execution time analysis". Software & Systems Modeling 10, no. 3 (April 9, 2010): 411–37. http://dx.doi.org/10.1007/s10270-010-0161-0.

4

Arnestad, Håvard Kjellmo, Gábor Geréb, Tor Inge Birkenes Lønmo, Jan Egil Kirkebø, Andreas Austeng and Sven Peter Näsholm. "Worst-case analysis of array beampatterns using interval arithmetic". Journal of the Acoustical Society of America 153, no. 6 (June 1, 2023): 3312. http://dx.doi.org/10.1121/10.0019715.

Abstract:
Over the past decade, interval arithmetic (IA) has been used to determine tolerance bounds of phased-array beampatterns. IA only requires that the errors of the array elements are bounded and can provide reliable beampattern bounds even when a statistical model is missing. However, previous research has not explored the use of IA to find the error realizations responsible for achieving specific bounds. In this study, the capabilities of IA are extended by introducing the concept of “backtracking,” which provides a direct way of addressing how specific bounds can be attained. Backtracking allows for the recovery of the specific error realization and corresponding beampattern, enabling the study and verification of which errors result in the worst-case array performance in terms of the peak sidelobe level (PSLL). Moreover, IA is made applicable to a wider range of arrays by adding support for arbitrary array geometries with directive elements and mutual coupling in addition to element amplitude, phase, and positioning errors. Last, a simple formula for approximate bounds of uniformly bounded errors is derived and numerically verified. This formula gives insights into how array size and apodization cannot reduce the worst-case PSLL beyond a certain limit.
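As an editorial aside, the bounding idea described in this abstract can be illustrated with a much-simplified sketch: amplitude-only errors on a uniform linear array, propagated through the beampattern sum with elementary interval arithmetic. All values below (element count, spacing, error bound, apodization) are invented for illustration, and the paper's backtracking procedure, phase and position errors, directive elements and mutual coupling are not reproduced here.

```python
# Minimal interval-arithmetic (IA) beampattern bounding sketch: amplitude errors only.
import numpy as np

N, d, delta = 16, 0.5, 0.05          # elements, spacing (wavelengths), amplitude error bound
w = np.hamming(N)                    # nominal apodization weights
x = np.arange(N) * d                 # element positions in wavelengths

def mig(lo, hi):
    """Smallest absolute value contained in the interval [lo, hi]."""
    return 0.0 if lo <= 0.0 <= hi else min(abs(lo), abs(hi))

def mag(lo, hi):
    """Largest absolute value contained in the interval [lo, hi]."""
    return max(abs(lo), abs(hi))

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
lower = np.empty_like(theta)
upper = np.empty_like(theta)

for i, th in enumerate(theta):
    phase = 2.0 * np.pi * x * np.sin(th)
    cr, ci = np.cos(phase), np.sin(phase)
    a_lo, a_hi = w - delta, w + delta
    # Interval multiplication by a known scalar, then interval addition over elements.
    RE = (np.minimum(a_lo * cr, a_hi * cr).sum(), np.maximum(a_lo * cr, a_hi * cr).sum())
    IM = (np.minimum(a_lo * ci, a_hi * ci).sum(), np.maximum(a_lo * ci, a_hi * ci).sum())
    # |AF|^2 = RE^2 + IM^2, bounded via the mignitude/magnitude of each interval.
    lower[i] = np.hypot(mig(*RE), mig(*IM))
    upper[i] = np.hypot(mag(*RE), mag(*IM))

# Worst-case PSLL bound: largest possible sidelobe over the smallest possible main lobe.
sidelobes = np.abs(theta) > np.deg2rad(10.0)
broadside = np.argmin(np.abs(theta))
print(f"worst-case PSLL bound: {20.0 * np.log10(upper[sidelobes].max() / lower[broadside]):.1f} dB")
```

Every realizable beampattern with element amplitudes within ±delta of the nominal weights lies between the computed lower and upper curves, which is the guarantee IA provides without any statistical error model.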
5

Mitzenmacher, Michael, and Sergei Vassilvitskii. "Algorithms with predictions". Communications of the ACM 65, no. 7 (July 2022): 33–35. http://dx.doi.org/10.1145/3528087.

6

Pape, Marieke, Steven Kuijper, Pauline A. J. Vissers, Geert-Jan Creemers, Hanneke W. M. Van Laarhoven and Rob Verhoeven. "Beyond median overall survival: Estimating multiple survival scenarios in patients with metastatic esophagogastric cancer." Journal of Clinical Oncology 40, no. 4_suppl (February 1, 2022): 261. http://dx.doi.org/10.1200/jco.2022.40.4_suppl.261.

Abstract:
261 Background: Recent clinical trials of novel systemic therapies showed improved survival of patients with metastatic esophageal cancer (EC) and gastric cancer (GC). Survival improvements observed in clinical trials might be unrepresentative of the total population as the percentage of patients who participate in clinical trials is limited and more than half of all patients receive best supportive care (BSC). The aim of our study is to assess the best-case, typical and worst-case survival scenarios in patients with metastatic esophagogastric cancer. Methods: We selected patients with metastatic EC (including junction) or GC diagnosed in 2006-2019 from the Netherlands Cancer Registry. Survival scenarios were calculated based on percentiles of the survival curve stratified by tumor location and treatment (tumor-directed therapy or BSC). Survival scenarios were calculated for the 10th (best-case), 25th (upper-typical), 75th (lower-typical) and 90th (worst-case) percentiles. Linear trend analysis was performed to test if changes in survival over the diagnosis years were significant. Results: We identified 12739 patients with EC and 6833 patients with GC. The percentage of patients receiving tumor-directed therapy increased from 34% to 47% and 30% to 45% for patients with EC and GC, respectively. The median survival remained unchanged for patients with EC (5.0 months) and improved slightly for patients with GC (3.1 to 3.7 months; P=0.006). For patients with EC survival of the best-case scenario improved (17.4 to 22.8 months; P=0.001), whereas the other scenarios remained unchanged: upper-typical 11.2 to 11.7 (P=0.11), lower-typical 2.1 to 2.0 (P=0.10) and worst-case 0.9 to 0.8 months (P=0.22). For patients with GC survival improved for the best-case (13.1 to 19.5; P=0.005) and upper-typical scenario (6.7 to 10.6 months; P=0.002), whereas the lower-typical (1.2 to 1.4 months; P=0.87) and worst-case (0.6 to 0.6 months; P=0.60) remained unchanged. For patients with EC receiving tumor-directed therapy survival in all scenarios remained unchanged while for patients receiving BSC survival decreased: best-case 11.8 to 9.8 (P=0.005), upper-typical 6.0 to 5.0 (P=0.002), lower-typical 1.4 to 1.0 (P=0.003) and worst-case 0.7 to 0.5 months (P=0.03). For patients with GC receiving tumor-directed therapy survival improved for all scenarios: best-case 19.8 to 30.4 (P=0.005), upper-typical 6.4 to 10.3 (P=0.002), lower-typical 3.6 to 5.4 (P<0.001) and worst-case 1.4 to 2.6 months (P<0.001), and for patients receiving BSC survival for all scenarios remained unchanged. Conclusions: The proportion of patients with EC and GC receiving tumor-directed therapy increased over time. Although survival improvements were not observed across all scenarios, an increase in survival was observed in certain subgroups of patients.
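As an aside, the "survival scenario" construction described here is easy to illustrate: the best-case scenario is the 10th percentile of the survival curve, i.e. the survival time exceeded by only 10% of patients. The sketch below uses made-up, uncensored survival times; a real analysis, as in this study, would use Kaplan-Meier estimates stratified by tumor location and treatment.

```python
# Toy scenario calculation on synthetic, uncensored survival times (months).
import numpy as np

rng = np.random.default_rng(0)
months = rng.weibull(1.2, size=500) * 8.0   # fake survival times, not study data

scenarios = {
    "best-case (10th percentile of the survival curve)": np.percentile(months, 90),
    "upper-typical (25th percentile)":                    np.percentile(months, 75),
    "median":                                             np.percentile(months, 50),
    "lower-typical (75th percentile)":                    np.percentile(months, 25),
    "worst-case (90th percentile)":                       np.percentile(months, 10),
}
for name, t in scenarios.items():
    print(f"{name}: {t:.1f} months")
```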
7

Liu, S. C., S. J. Hu and T. C. Woo. "Tolerance Analysis for Sheet Metal Assemblies". Journal of Mechanical Design 118, no. 1 (March 1, 1996): 62–67. http://dx.doi.org/10.1115/1.2826857.

Abstract:
Traditional tolerance analyses such as the worst case methods and the statistical methods are applicable to rigid body assemblies. However, for flexible sheet metal assemblies, the traditional methods are not adequate: the components can deform, changing the dimensions during assembly. This paper evaluates the effects of deformation on component tolerances using linear mechanics. Two basic configurations, assembly in series and assembly in parallel, are investigated using analytical methods. Assembly sequences and multiple joints beyond the basic configurations are further examined using numerical methods (with finite element analysis). These findings constitute a new methodology for the tolerancing of deformable parts.
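For context, the two "traditional" rigid-body analyses this paper contrasts with its compliant-assembly method are the worst-case and statistical (root-sum-square) stack-ups for a linear dimension chain. A minimal sketch with invented sensitivities and tolerances:

```python
# Rigid-body tolerance stack-up for a linear chain y = sum(a_i * x_i).
import math

sensitivities = [1.0, -1.0, 1.0, 1.0]      # a_i: how each dimension enters the chain
tolerances    = [0.10, 0.05, 0.20, 0.08]   # +/- t_i on each dimension, in mm

worst_case = sum(abs(a) * t for a, t in zip(sensitivities, tolerances))
rss        = math.sqrt(sum((a * t) ** 2 for a, t in zip(sensitivities, tolerances)))

print(f"worst-case stack-up:        +/- {worst_case:.3f} mm")
print(f"statistical (RSS) stack-up: +/- {rss:.3f} mm")
```

The paper's point is that for compliant sheet metal parts neither formula is adequate on its own, because the components deform and the stacked dimensions change during assembly.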
8

Söderlund, Ellinor Susanne, and Natalia B. Stambulova. "In a Football Bubble and Beyond". Scandinavian Journal of Sport and Exercise Psychology 3 (June 14, 2021): 13–23. http://dx.doi.org/10.7146/sjsep.v3i.121756.

Abstract:
The objectives of this study were: (1) to explore cultural transition pathways of Swedish professional football players relocated to another European country, (2) to identify shared themes in their transition narratives. We interviewed three professional players who in their early twenties relocated to Italy, Turkey, and Switzerland, and then analyzed their stories using holistic and categorical analyses following the narrative oriented inquiry (NOI) model (Hiles & Čermák, 2008). The holistic analysis resulted in creating three core narratives (i.e., re-telling of the participants’ stories) entitled: Preparing for the worst-case scenario and saved by dedication to football; Showing interest for the host culture and carrying responsibility as a foreign player; and A step for personal development: from homesickness to being hungry for more. The categorical analysis resulted in 12 shared themes from the players’ stories arranged around three phases of the cultural transition model (Ryba et al., 2016). In the pre-transition phase, all the participants were established players searching for new professional opportunities. In the acute cultural adaptation phase, they all prioritized adjustment in football (e.g., fitting in the team, performing). In the socio-cultural adaptation phase, they broadened their perspectives and realized that finding a meaningful life outside of football was just as important to function and feel satisfied as football success.
9

Xu, Chenyang, and Benjamin Moseley. "Learning-Augmented Algorithms for Online Steiner Tree". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8744–52. http://dx.doi.org/10.1609/aaai.v36i8.20854.

Abstract:
This paper considers the recently popular beyond-worst-case algorithm analysis model which integrates machine-learned predictions with online algorithm design. We consider the online Steiner tree problem in this model for both directed and undirected graphs. Steiner tree is known to have strong lower bounds in the online setting and any algorithm’s worst-case guarantee is far from desirable. This paper considers algorithms that predict which terminal arrives online. The predictions may be incorrect and the algorithms’ performance is parameterized by the number of incorrectly predicted terminals. These guarantees ensure that algorithms break through the online lower bounds with good predictions and the competitive ratio gracefully degrades as the prediction error grows. We then observe that the theory is predictive of what will occur empirically. We show that, on graphs where terminals are drawn from a distribution, the new online algorithms have strong performance even with modestly correct predictions.
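For readers unfamiliar with the problem, the classical baseline that learning-augmented algorithms of this kind are measured against is the greedy online Steiner tree heuristic: connect each arriving terminal to the current tree by a cheapest path. The sketch below implements only this baseline on a made-up graph (networkx is assumed to be available); the paper's prediction machinery and error-parameterized guarantees are not reproduced here.

```python
# Greedy online Steiner tree baseline: attach each arriving terminal via a cheapest path.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 1), ("b", "c", 2), ("c", "d", 1),
                           ("a", "e", 4), ("e", "d", 1), ("b", "e", 2)])

def greedy_online_steiner(G, terminals):
    """Connect terminals in arrival order; return the tree edges and total cost."""
    tree_nodes = {terminals[0]}
    tree_edges, cost = set(), 0.0
    for t in terminals[1:]:
        dist, paths = nx.single_source_dijkstra(G, t, weight="weight")
        nearest = min(tree_nodes, key=lambda v: dist.get(v, float("inf")))
        for u, v in zip(paths[nearest], paths[nearest][1:]):
            if frozenset((u, v)) not in tree_edges:
                tree_edges.add(frozenset((u, v)))
                cost += G[u][v]["weight"]
        tree_nodes.update(paths[nearest])
    return tree_edges, cost

_, cost = greedy_online_steiner(G, ["a", "d", "c", "e"])
print("greedy online Steiner tree cost:", cost)
```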
10

Lucarelli, Giorgio, Benjamin Moseley, Nguyen Kim Thang, Abhinav Srivastav and Denis Trystram. "Online Non-preemptive Scheduling on Unrelated Machines with Rejections". ACM Transactions on Parallel Computing 8, no. 2 (June 30, 2021): 1–22. http://dx.doi.org/10.1145/3460880.

Abstract:
When a computer system schedules jobs there is typically a significant cost associated with preempting a job during execution. This cost can be incurred from the expensive task of saving the memory’s state or from loading data into and out of memory. Thus, it is desirable to schedule jobs non-preemptively to avoid the costs of preemption. There is a need for non-preemptive system schedulers for desktops, servers, and data centers. Despite this need, there is a gap between theory and practice. Indeed, few non-preemptive online schedulers are known to have strong theoretical guarantees. This gap is likely due to strong lower bounds on any online algorithm for popular objectives. Indeed, typical worst-case analysis approaches, and even resource-augmented approaches such as speed augmentation, result in all algorithms having poor performance guarantees. This article considers online non-preemptive scheduling problems in the worst-case rejection model where the algorithm is allowed to reject a small fraction of jobs. By rejecting only a few jobs, this article shows that the strong lower bounds can be circumvented. This approach can be used to discover algorithmic scheduling policies with desirable worst-case guarantees. Specifically, the article presents algorithms for the following three objectives: minimizing the total flow-time, minimizing the total weighted flow-time plus energy where energy is a convex function, and minimizing the total energy under the deadline constraints. The algorithms for the first two problems have a small constant competitive ratio while rejecting only a constant fraction of jobs. For the last problem, we present a constant competitive ratio without rejection. Beyond specific results, the article asserts that alternative models beyond speed augmentation should be explored to aid in the discovery of good schedulers in the face of the requirement of being online and non-preemptive.
11

Sugumar, D., and Kek-Kiong Tio. "Thermal Analysis of Inclined Micro Heat Pipes". Journal of Heat Transfer 128, no. 2 (August 9, 2005): 198–202. http://dx.doi.org/10.1115/1.2137763.

Abstract:
The effect of gravity is investigated for the case of inclined-triangular- and trapezoidal-shaped micro heat pipes (MHPs). The study is limited to the case of positive inclination, whereby the condenser section is elevated from the horizontal position. The results show that the axial distribution of the liquid phase is changed qualitatively. While the liquid distribution still increases monotonically starting from the evaporator end, it reaches its maximum value not at the condenser end but at a certain point in the condenser section, beyond which the liquid distribution decreases monotonically. This maximum point, where potentially flooding will first take place, results from the balance between the effects of gravity and the heat load on the MHPs. As the liquid distribution assumes its greatest value at the maximum point, a throat-like formation appears there. This formation is detrimental to the performance of MHPs, because it hinders, and at worst may block, the axial flow of the vapor phase. The results also show that the maximum point occurs further away from the condenser end for a triangular-shaped MHP compared to a trapezoidal-shaped MHP.
12

Slomka, Frank, and Mohammadreza Sadeghi. "Beyond the limitations of real-time scheduling theory: a unified scheduling theory for the analysis of real-time systems". SICS Software-Intensive Cyber-Physical Systems 35, no. 3-4 (November 29, 2021): 201–36. http://dx.doi.org/10.1007/s00450-021-00429-1.

Abstract:
We investigate the mathematical properties of event bound functions as they are used in worst-case response time analysis and utilization tests. We work out the differences and similarities between the two approaches. Based on this analysis, we derive a more general form to describe events and event bounds. This new unified approach gives clear new insights into the investigation of real-time systems, simplifies the models and will support algebraic proofs in future work. In the end, we present a unified analysis which allows the algebraic definition of any scheduler. Introducing such functions to real-time scheduling theory will lead to a more systematic way to integrate new concepts and applications into the theory. Last but not least, we show how the response time analysis in dynamic scheduling can be improved.
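For background, the classical fixed-priority worst-case response-time analysis that such event-bound formulations generalize solves the recurrence R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j by fixed-point iteration. A minimal sketch with an invented task set follows; this is the textbook analysis, not the paper's unified theory.

```python
# Classical worst-case response-time analysis by fixed-point iteration.
from math import ceil

def response_time(C_i, D_i, higher_prio):
    """Solve R = C_i + sum_j ceil(R / T_j) * C_j; higher_prio is a list of (C_j, T_j)."""
    R = C_i
    while R <= D_i:
        R_next = C_i + sum(ceil(R / T_j) * C_j for C_j, T_j in higher_prio)
        if R_next == R:
            return R          # converged: worst-case response time
        R = R_next
    return None               # the busy period exceeds the deadline: unschedulable

# Illustrative task set (C, T), deadlines equal to periods, highest priority first.
tasks = [(1, 4), (2, 6), (3, 12)]
for i, (C, T) in enumerate(tasks):
    print(f"task {i}: C={C}, T={T}, worst-case response time = {response_time(C, T, tasks[:i])}")
```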
13

Walton, Daniel, and Gabriel Carroll. "A General Framework for Robust Contracting Models". Econometrica 90, no. 5 (2022): 2129–59. http://dx.doi.org/10.3982/ecta17386.

Abstract:
We study a class of models of moral hazard in which a principal contracts with a counterparty, which may have its own internal organizational structure. The principal has non‐Bayesian uncertainty as to what actions might be taken in response to the contract, and wishes to maximize her worst‐case payoff. We identify conditions on the counterparty's possible responses to any given contract that imply that a linear contract solves this maxmin problem. In conjunction with a Richness property motivated by much previous literature, we identify a Responsiveness property that is sufficient—and, in an appropriate sense, also necessary—to ensure that linear contracts are optimal. We illustrate by contrasting several possible models of contracting in hierarchies. The analysis demonstrates how one can distill key features of contracting models that allow their findings to be carried beyond the bilateral setting.
14

Almeida, Dilini, Jagadeesh Pasupuleti, Shangari K. Raveendran and M. Reyasudin Basir Khan. "Monte Carlo analysis for solar PV impact assessment in MV distribution networks". Indonesian Journal of Electrical Engineering and Computer Science 23, no. 1 (July 1, 2021): 23. http://dx.doi.org/10.11591/ijeecs.v23.i1.pp23-31.

Abstract:
The rapid penetration of solar photovoltaic (PV) systems in distribution networks has imposed various implications on network operations. Therefore, it is imperative to consider the stochastic nature of PV generation and load demand to address the operational challenges in future PV-rich distribution networks. This paper proposes a Monte Carlo based probabilistic framework for assessing the impact of PV penetration on medium voltage (MV) distribution networks. The uncertainties associated with PV installation capacity and its location, as well as the time-varying nature of PV generation and load demand were considered for the implementation of the probabilistic framework. A case study was performed for a typical MV distribution network in Malaysia, demonstrating the effectiveness of Monte Carlo analysis in evaluating the potential PV impacts in the future. A total of 1000 Monte Carlo simulations were conducted to accurately identify the influence of PV penetration on voltage profiles and network losses. Besides, several key metrics were used to quantify the technical performance of the distribution network. The results revealed that the worst repercussion of high solar PV penetration on typical Malaysian MV distribution networks is the violation of the upper voltage statutory limit, which is likely to occur beyond 70% penetration level.
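As an illustration of the probabilistic framework described here, the sketch below runs a toy Monte Carlo study: each trial draws a random PV connection point, PV injection and coincident load, and checks an approximate voltage rise against the statutory upper limit. The two-bus voltage-rise approximation and every parameter are illustrative stand-ins, not the paper's network model or data.

```python
# Toy Monte Carlo assessment of over-voltage risk from random PV placement and output.
import numpy as np

rng = np.random.default_rng(42)
V_NOM, V_MAX = 1.00, 1.05                  # per-unit nominal voltage and statutory upper limit
N_TRIALS = 1000

# Cumulative feeder impedance (R, X) in per-unit seen from each candidate PV bus.
feeder = np.array([[0.01, 0.008],
                   [0.02, 0.015],
                   [0.03, 0.022],
                   [0.04, 0.030]])

violations = 0
for _ in range(N_TRIALS):
    bus = rng.integers(len(feeder))        # random PV connection point
    p_pv = rng.uniform(0.0, 2.0)           # random PV injection (p.u. of feeder base)
    p_load = rng.uniform(0.1, 0.6)         # coincident local demand (p.u.)
    R, _X = feeder[bus]
    dv = (p_pv - p_load) * R / V_NOM       # crude voltage-rise approximation, Q = 0
    if V_NOM + dv > V_MAX:
        violations += 1

print(f"estimated probability of an upper-voltage-limit violation: {violations / N_TRIALS:.1%}")
```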
15

Theis, Dirk Oliver. ""Proper" Shift Rules for Derivatives of Perturbed-Parametric Quantum Evolutions". Quantum 7 (July 11, 2023): 1052. http://dx.doi.org/10.22331/q-2023-07-11-1052.

Abstract:
Banchi & Crooks (Quantum, 2021) have given methods to estimate derivatives of expectation values depending on a parameter that enters via what we call a "perturbed" quantum evolution x&#x21A6;ei(xA+B)/&#x210F;. Their methods require modifications, beyond merely changing parameters, to the unitaries that appear. Moreover, in the case when the B-term is unavoidable, no exact method (unbiased estimator) for the derivative seems to be known: Banchi & Crooks's method gives an approximation.In this paper, for estimating the derivatives of parameterized expectation values of this type, we present a method that only requires shifting parameters, no other modifications of the quantum evolutions (a "proper" shift rule). Our method is exact (i.e., it gives analytic derivatives, unbiased estimators), and it has the same worst-case variance as Banchi-Crooks's.Moreover, we discuss the theory surrounding proper shift rules, based on Fourier analysis of perturbed-parametric quantum evolutions, resulting in a characterization of the proper shift rules in terms of their Fourier transforms, which in turn leads us to non-existence results of proper shift rules with exponential concentration of the shifts. We derive truncated methods that exhibit approximation errors, and compare to Banchi-Crooks's based on preliminary numerical simulations.
16

Holzäpfel, Frank. "Analysis of potential wake vortex encounters at a major European airport". Aircraft Engineering and Aerospace Technology 89, no. 5 (September 4, 2017): 634–43. http://dx.doi.org/10.1108/aeat-01-2017-0043.

Abstract:
Purpose In this study, 12 potential wake vortex encounters that were reported at a major European airport have been investigated. Because almost all encounters occurred in ground proximity, most pilots conducted a go-around. The primary purpose of this study is to discriminate between incidents caused by wake vortices or rather by effects like wind shear or turbulence. Detailed knowledge of real-world encounter scenarios and identification of worst-case conditions during the final approach constitute highly relevant background information to assess the standard scenario used for the definition of revised wake turbulence separations. Design/methodology/approach Wake vortex predictions using the probabilistic two-phase wake vortex model (P2P) are used to investigate the incidents in detail by using data from the flight data recorder, meteorological instrumentation at the airport and numerical weather prediction. Findings In the best documented cases, the flight tracks through the vortices could be reconstructed in good agreement with wake vortex predictions and recorded aircraft reactions. Out of the eight plausible wake vortex encounters, five were characterized by weak crosswinds below 1.5 m/s combined with tailwinds. This meteorological situation appears favourable for encounters because, on the one hand, weak crosswinds may compensate the self-induced lateral propagation of the upwind vortex, such that it may hover over the runway directly in the flight path of the following aircraft. On the other hand, tailwinds limit the propagation of the so-called end effects caused by the breakdown of lift during touchdown. Practical implications The installation of plate lines beyond the runway tails may improve safety by reducing the number of wake vortex encounters. Originality/value The conducted investigations provide high originality and value for both science and operational application.
17

Rao, Dantam, and Madhan Bagianathan. "Selection of Optimal Magnets for Traction Motors to Prevent Demagnetization". Machines 9, no. 6 (June 20, 2021): 124. http://dx.doi.org/10.3390/machines9060124.

Abstract:
Currently, permanent-magnet-type traction motors drive most electric vehicles. However, the potential demagnetization of magnets in these motors limits the performance of an electric vehicle. It is well known that during severe duty, the magnets are demagnetized if they operate beyond a ‘knee point’ in the B(H) curve. We show herein that the classic knee point definition can degrade a magnet by up to 4 grades. To prevent consequent excessive loss in performance, this paper defines the knee point k as the point of intersection of the B(H) curve and a parallel line that limits the reduction in its residual flux density to 1%. We show that operating above such a knee point will not be demagnetizing the magnets. It will also prevent a magnet from degenerating to a lower grade. The flux density at such a knee point, termed demag flux density, characterizes the onset of demagnetization. It rightly reflects the value of a magnet, so can be used as a basis to price the magnets. Including such knee points in the purchase specifications also helps avoid the penalty of getting the performance of a low-grade magnet out of a high-grade magnet. It also facilitates an accurate demagnetization analysis of traction motors in the worst-case conditions.
18

Palmer, Stephen, Casey Quinn, Crystal Watson and Arie Barlev. "Estimating Long-Term Survival in a Cohort of Allogeneic Hematopoietic Stem Cell Transplant Patients". Blood 132, Supplement 1 (November 29, 2018): 3556. http://dx.doi.org/10.1182/blood-2018-99-113616.

Abstract:
Abstract Introduction: Allogeneic hematopoietic cell transplantation (HCT) is a common treatment for many hematologic diseases. Most deaths occur in the first 2 years after HCT due to relapse, graft-versus-host disease, infections, malignancies, or other toxicities. Among patients who are alive and recurrence free at 2 years after HCT, survival at 10 years is between 80% and 92%. Advances in transplantation practices have led to improved outcomes and more long-term HCT survivors. As survival outcomes continue to improve and new treatments emerge, understanding and quantifying the full lifetime benefit of HCT in terms of mean overall survival (OS) is clinically relevant. In this analysis, we estimate the mean OS of a cohort of HCT patients. Methods: A systematic literature review of all studies reporting OS post-HCT was conducted. Extracted data were incorporated into a long-term survival model using a step-wise approach: short-term survival (up to 2 years post-HCT) using data reported by Uhlin et al (Haematologica. 2014;99:346-52), and longer-term survival (more than 2 years post-HCT) using data reported by Wingard et al (J Clin Oncol. 2011;29:2230-9) and the age-adjusted life tables for the general UK population. Available published data provided OS estimates up to 15 years post-HCT, and beyond this time, OS estimates are uncertain. To estimate mean OS and address this uncertainty, three different survival scenarios were modeled: best-case, worst-case, and base-case. For the best-case, it was assumed that HCT patients were cured and had the same OS as the age-adjusted general population. For the worst-case, it was assumed that HCT patients carried excess mortality for the rest of their lives. The excess mortality was calculated from Wingard et al 2011, which showed 20% mortality from year 2-15 post-HCT; this is an order of magnitude greater than the general population (2%). For the base-case, a Weibull parametric function was fit to the data published by Wingard et al 2011 to estimate the survival curve from year 2 post-HCT until death. To incorporate the excess mortality, the lowest survival was chosen cycle over cycle between the parametric estimate and the age-adjusted life tables. Sensitivity analysis for the mean survival estimate was performed by fitting several different parametric functions using exponential, Gompertz, log-logistic, and log-normal. A lifetime analysis was undertaken for a cohort of patients starting at the weighted mean age at the time of HCT, calculated from Wingard et al 2011. Results: Only two published articles were found that provided OS in patients without complications after HCT (Uhlin et al 2014 and Wingard et al 2011). Data from Uhlin et al 2014 was used to estimate the percent alive at 2 years post-HCT, which was 65%. For a cohort of HCT patients that received their transplant at age 23.5, the estimated mean OS for the base-case was 25.9 years post-HCT, with parametric models ranging between 25.9 and 29.5 years. The best-case and worst-case estimates were 31.7 and 23.9 years, respectively. Conclusions: The mean OS for a cohort of HCT patients was estimated to be 25.9 years. This estimate helps to understand and quantify the full lifetime survival benefit to HCT patients, including the tail end of the survival curve, and the potential added benefits of future treatments post-HCT. The sensitivity analysis revealed a narrow range for the estimated mean OS, minimizing the uncertainty of the results. 
Our estimates are based on data published by Wingard et al 2011, which incorporates excess mortality after 2 years post-HCT. This is consistent with the clinical assumption that HCT patients continue to have excess mortality throughout their lifetime. Since the first 2 years post-HCT has the highest mortality rate, new treatments that can improve survival during this time may change the impact on the lifetime benefit of the therapy. Disclosures Palmer: PRMA Consulting, Ltd: Consultancy. Watson:Atara Biotherapeutics, Inc: Employment, Equity Ownership. Barlev:Atara Biotherapeutics, Inc: Employment, Equity Ownership.
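A toy version of the base-case calculation sketched in this abstract: fit a Weibull distribution to (synthetic) post-2-year survival times and take the mean overall survival as the first two years plus the fitted Weibull mean. The data are invented, censoring and the life-table adjustment are omitted, scipy is assumed to be available, and nothing here reproduces the Wingard et al. or Uhlin et al. numbers.

```python
# Toy Weibull fit for long-term survival beyond 2 years post-HCT.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
years_beyond_2 = rng.weibull(1.3, size=300) * 25.0   # fake survival times beyond year 2

shape, loc, scale = stats.weibull_min.fit(years_beyond_2, floc=0)
mean_beyond_2 = stats.weibull_min.mean(shape, loc=loc, scale=scale)
mean_os = 2.0 + mean_beyond_2                        # mean OS for a patient alive at 2 years

print(f"fitted Weibull: shape={shape:.2f}, scale={scale:.1f}")
print(f"estimated mean overall survival (given alive at 2 years): {mean_os:.1f} years post-HCT")
```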
19

Borisy, G. "Beyond Cell Toons". Journal of Cell Science 113, no. 5 (March 1, 2000): 749–50. http://dx.doi.org/10.1242/jcs.113.5.749.

Abstract:
In the roadrunner cartoons, the unlucky coyote, in hot pursuit of the roadrunner, frequently finds himself running off the edge of a precipice. In sympathy with the coyote's plight, the laws of physics suspend their action. Gravity waits to exert its force until the coyote realizes his situation and resigns himself to the inevitable. Only then does the coyote fall, miraculously surviving the near-disaster without serious damage. What does this have to do with cell biology at the turn of the millennium? Blame it on JCS's Caveman or at least the infectiousness of the troglodyte's point of view. But it strikes this Editor that for much of cell biology, no less than for the roadrunner, the laws of physics are seemingly suspended. Pick up any contemporary text book or review article and look at the cartoons (diagrams) that grace the pages. You will find diagrams replete with circles, squares, ellipsoids and iconic representations of molecular components, supramolecular assemblies or membrane compartments. Arrows define signal cascades, pathways of transport and patterns of interaction. Even better, check out any of the supplementary instructional CDs that accompany text books and view the animations. You will see cell toons - molecules moving on smooth trajectories to interact with their partners, assembling into cellular machinery or arriving at cellular destinations. They all seem to know where to go and what to do in their cell toon life. It doesn't matter whether we are talking about DNA replication, protein synthesis, mitochondrial respiration, membrane trafficking, nuclear import, chromatin condensation or assembly of the mitotic spindle to mention just a few examples. In each case, the process unfolds before us as a molecular ballet choreographed by a hidden director. Or should I say anonymous animator. Please don't get me wrong. Cartoon diagrams are a necessary part of science. They help us to form and communicate concepts. Adages such as ‘a picture is worth a thousand words’ do not come into existence for nothing. Further, simplification is necessary to sharpen Occam's razor. Science progresses faster if a hypothesis is honed to the point where it can be readily refuted. Of course, it is best to be right. Next best is to be wrong. But the worst thing that can be said about a concept is that it is so hedged or ambiguous that it cannot even be wrong. Cartoons are invaluable in presenting clear alternatives. And cartoons, by definition, do not attempt to portray reality. We understand and accept that they deliberately omit details which may be important in some other context but which are extraneous to the story line. We do not have to know how the coyote recovers from his disastrous fall. It is sufficient that he resumes the chase. Likewise, much of Cell Biology can satisfactorily be ‘explained’ in terms of the behavior of toons. My thesis for this essay is that cell biology at the turn of the millennium has, for the first time, the real opportunity to burst the frames of the cartoons. The field has progressed to the point where the maxim that cells obey the laws of physics and chemistry can be made more than a creed. The time is approaching for the mystery of the hidden director/anonymous animator to be dispelled. What is driving this new orientation and what is required to bring it to fruition? Advances in structural biology provide part of the explanation. Atomic structures have been determined for a large variety of proteins, with the number increasing on a daily basis.
Structural genomics will succeed genomics. It is possible to foresee that in the not too distant future atomic structures will be known for most if not all the major proteins in a cell. Not only individual proteins but supramolecular assemblies as complex as the ribosome have yielded to structural analysis. Of course, structures per se are static entities, but biology has taught that function is inherent in structure. Knowledge of molecular structures has provided atomic explanations for ligand binding, allosteric interaction, enzymatic catalysis, ion pumps, immune recognition, sensory detection and mechano-chemical transduction. When combined with kinetics, structural biology provides the chemical bedrock of cell biology. But the bedrock of structural biology, while necessary for the new cell biology, is almost certainly not sufficient. A major gap is in understanding the complex properties of self-organizing systems. Cells are ensembles of molecules interacting within boundaries. Some of the molecules are organized into supramolecular assemblies that have been likened to molecular machines. Examples include multi-enzyme complexes, DNA replication complexes, the ribosome and the proteasome. Understanding the operation of these molecular machines in chemical and physical terms is a major challenge in that they display exotic behavior such as solid-state channeling of substrates, error-checking, proof-reading, regulation and adaptiveness. Nevertheless, the conceptual basis for their formation is thought to rest on well-established principles: namely, the equilibrium self-assembly of molecular components whose specific affinities are inherent in their 3-D structure. However, other aspects of cellular organization manifest properties beyond self-assembly. The cytoskeleton, for example, is a steady-state system which requires the continuous input of energy to maintain its organization. It displays emergent properties of self-organization, self-centering, self-polarization and self-propagating motility. Membrane compartments such as the endoplasmic reticulum, the Golgi apparatus and transport vesicles provide additional examples of cellular organization dependent upon dynamic processes far from equilibrium. A further level of complexity is introduced by the fact that the self-organization of one system, such as membrane compartments, may be dependent upon another, such as the cytoskeleton. A challenge for the new cell biology is to go beyond ‘toon’ explanations, to understand the emergent, self-organizing properties of interdependent systems. It is likely that an adequate response to this challenge will be multidisciplinary, involving approaches not normally associated with mainstream cell biology. We are likely to be in for a heavy dose of biophysics, computer modeling and systems analysis. A serious problem will be to identify functional levels of decomposition and reconstitution. Because of the microscopic scale, thermal energy, randomness and stochastic processes will be an intrinsic part of the landscape. Brownian motions may present a Damoclean double edge. They are commonly thought to be responsible for the degradation of order into disorder. But, counterintuitively, random thermal processes may also provide the raw energy which, if biased by energy-dependent molecular switches and motors, generates order from disorder. Non-deterministic processes and selection from among alternative pathways may be a common strategy. 
Fluctuation theory, probabilistic formulations and rare events may underpin the capacity of molecular ensembles to ‘evolve’ into ordered configurations. Further, biological properties such as error-checking and adaptiveness imply an ‘intelligence’, which suggests that the systems analysis may have ‘software’ as well as ‘hardware’ dimensions. Molecular logic may be non-deterministic, ‘fuzzy’ and able to ‘learn’. The evolvability of the system may itself be an important consideration in understanding the design principles. The belief that cells obey the laws of physics and chemistry means that, in terms of the molecular ballet, the director is not only hidden - he doesn't exist. One is tempted to say that the challenge is to understand how the ballet came to be self- choreographed. But even this formulation misses the point that the individual dancers have no definite positions on the stage. Organization in the cell is a continuity of form, not individual molecules. The challenge is to understand how the ensemble is able to perform the dance with chaotic free substitution.
20

Lawson, K. "Pipeline corrosion risk analysis – an assessment of deterministic and probabilistic methods". Anti-Corrosion Methods and Materials 52, no. 1 (February 1, 2005): 3–10. http://dx.doi.org/10.1108/00035590510574862.

Abstract:
PurposeThis paper compares and contrasts two approaches to the treatment of pipeline corrosion “risk” – the probabilistic approach and the more traditional, deterministic approach. The paper aims to discuss the merits and potential pitfalls of each approach.Design/methodology/approachProvides an outline of each approach. The probabilistic approach to the assessment of pipeline corrosion risks deals with many of the uncertainties that are common to the data employed and those with regard to the predictive models that are used also. Rather than considering each input parameter as an average value the approach considers the inputs as a series of probability density functions, the collective use during the assessment of risk yields a risk profile that is quantified on the basis of uncertain data. This approach differs from the traditional deterministic assessment in that the output yields a curve that shows how the “risk” of failure increases with time. The pipeline operator simply chooses the level of risk that is acceptable and then devises a strategy to deal with those risks. The traditional (deterministic) approach merely segments the output risks as either “high”, “medium” or “low”; a strategy for managing is devised based on the selection of an appropriate time interval to allow a reasonable prospect of detecting deterioration before the pipeline corrosion allowance is exceeded, or no longer complies with code. Applies both approaches to the case of a 16.1 km long, 14 in. main export line in the North Sea.FindingsThe deterministic assessment yielded a worst‐case failure probability of “medium” with a corresponding consequence of “high”; classifications that are clearly subjective. The probabilistic assessments quantified pipeline failure probabilities, although it is important to note that more effort was required when performing such an assessment. Using target probabilities for “high” and “normal” consequence pipeline segments, indications were that between 8.5 and 13 years was the time period for which the target (predicted) failure probabilities would be reached, again depending on how effective corrosion mitigation activities are in practice. Basing pipeline inspections in particular on the outputs from the deterministic assessment would therefore be conservative in this instance; but this may not necessarily always be so. That the probabilistic assessment indicates that inspections justifiably may be extended beyond that suggested by the deterministic assessment is a clear benefit, in that it affords the opportunity to defer expenditure on pipeline inspections to a later date, but it may be the case that the converse may be required. It may be argued therefore, that probabilistic assessment provides a superior basis for driving pipeline corrosion management activities given that the approach deals with the uncertainties in the basic input data.Originality/valueA probabilistic assessment approach that effectively mirrors pipeline operations, provides a superior basis upon which to manage risk and would therefore likely maximize both safety and business performance.
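The probabilistic approach contrasted with the deterministic one here can be illustrated schematically: treat the corrosion rate and initial wall thickness as distributions and estimate, year by year, the probability that the remaining wall falls below a minimum allowable thickness. All distributions and limits below are invented for illustration and are unrelated to the North Sea case in the paper.

```python
# Schematic Monte Carlo corrosion risk profile (failure probability vs. time).
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

wall0 = rng.normal(12.7, 0.4, N)                 # initial wall thickness, mm
rate  = rng.lognormal(np.log(0.15), 0.5, N)      # corrosion rate, mm/year
t_min = 8.0                                      # minimum allowable wall thickness, mm

for years in (5, 10, 15, 20):
    p_fail = np.mean(wall0 - rate * years < t_min)
    print(f"after {years:2d} years: estimated failure probability = {p_fail:.3%}")
```

An operator would then pick the year at which the estimated probability crosses an acceptable target, rather than binning the risk as "high", "medium" or "low".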
21

Ovchinnikov, Andrei, Alina Veresova and Anna Fominykh. "Decoding of linear codes for single error bursts correction based on the determination of certain events". Information and Control Systems, no. 6 (December 27, 2022): 41–52. http://dx.doi.org/10.31799/1684-8853-2022-6-41-52.

Abstract:
Introduction: In modern communication, data storage and processing systems, the error-correction capability of codes is estimated for memoryless channels. In real channels the noise is correlated, which causes errors to group into bursts. A traditional method to fight this phenomenon is channel decorrelation, which prevents the development of coding schemes that make full use of the channel capacity. The development of burst decoding algorithms for arbitrary linear codes is therefore a relevant task. Purpose: To develop a single error burst decoding algorithm for linear codes and to estimate its decoding error probability and computational complexity. Results: Two approaches to burst error correction are proposed. The first is based on combining a sliding-window modification of the well-known bit-flipping algorithm with a preliminary analysis of the structure of the parity-check matrix. The second is based on a recursive procedure of constructing a sequence of certain events which, in the worst case, performs an exhaustive search over error bursts, but in many cases the search may be significantly reduced by the proposed heuristics. The proposed recursive decoding algorithm guarantees correction of any single error burst within the burst-correction capability of the code, and in many cases beyond it. The complexity of this algorithm is significantly lower than that of the bit-flipping algorithm if the parity-check matrix of the code is sparse enough. An alternative hybrid decoding algorithm is proposed that utilizes the bit-flipping approach and shows error probability and completion time comparable to the recursive algorithm; however, in this case a guaranteed burst correction can hardly be proved. Practical relevance: The proposed decoding methods may be used in modern and prospective communication systems, saving energy and increasing the reliability of data transmission through better error performance and computational complexity.
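For context, the sketch below shows the plain (Gallager-style) bit-flipping decoder that the first approach modifies; the sliding-window, recursive and hybrid burst-oriented variants described in the abstract are not reproduced here. The (7,4) Hamming parity-check matrix and the single-bit error are illustrative.

```python
# Plain hard-decision bit-flipping decoding over GF(2).
import numpy as np

def bit_flip_decode(H, y, max_iters=20):
    """Repeatedly flip the bits involved in the most unsatisfied parity checks
    until the syndrome is zero or the iteration limit is reached."""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H @ x % 2
        if not syndrome.any():
            return x, True                 # all parity checks satisfied
        unsat_counts = syndrome @ H        # per-bit count of unsatisfied checks
        x = np.where(unsat_counts == unsat_counts.max(), x ^ 1, x)
    return x, False

# Parity-check matrix of the (7,4) Hamming code.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
received = np.zeros(7, dtype=int)          # the all-zero codeword...
received[4] ^= 1                           # ...with a single flipped bit

decoded, ok = bit_flip_decode(H, received)
print("corrected:", ok, "->", decoded)
```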
22

Tripathi, Hemant G., Emily S. Woollen, Mariana Carvalho, Catherine L. Parr and Casey M. Ryan. "Agricultural expansion in African savannas: effects on diversity and composition of trees and mammals". Biodiversity and Conservation 30, no. 11 (July 14, 2021): 3279–97. http://dx.doi.org/10.1007/s10531-021-02249-w.

Abstract:
AbstractLand use change (LUC) is the leading cause of biodiversity loss worldwide. However, the global understanding of LUC's impact on biodiversity is mainly based on comparisons of land use endpoints (habitat vs non-habitat) in forest ecosystems. Hence, it may not generalise to savannas, which are ecologically distinct from forests, as they are inherently patchy, and disturbance adapted. Endpoint comparisons also cannot inform the management of intermediate mosaic landscapes. We aim to address these gaps by investigating species- and community-level responses of mammals and trees along a gradient of small scale agricultural expansion in the miombo woodlands of northern Mozambique. Thus, the case study represents the most common pathway of LUC and biodiversity change in the world's largest savanna. Tree abundance, mammal occupancy, and tree- and mammal-species richness showed a non-linear relationship with agricultural expansion (characterised by the Land Division Index, LDI). These occurrence and diversity metrics increased at intermediate LDI (0.3 to 0.7), started decreasing beyond LDI > 0.7, and underwent high levels of decline at extreme levels of agricultural expansion (LDI > 0.9). Despite similarities in species richness responses, the two taxonomic groups showed contrasting β-diversity patterns in response to increasing LDI: increased dissimilarity among tree communities (heterogenisation) and high similarity among mammals (homogenisation). Our analysis along a gradient of landscape-scale land use intensification allows a novel understanding of the impacts of different levels of land conversion, which can help guide land use and restoration policy. Biodiversity loss in this miombo landscape was lower than would be inferred from existing global syntheses of biodiversity-land use relations for Africa or the tropics, probably because such syntheses take a fully converted landscape as the endpoint. As, currently, most African savanna landscapes are a mosaic of savanna habitats and small scale agriculture, biodiversity loss is probably lower than in current global estimates, albeit with a trend towards further conversion. However, at extreme levels of land use change (LDI > 0.9 or < 15% habitat cover) miombo biodiversity appears to be more sensitive to LUC than inferred from the meta-analyses. To mitigate the worst effects of land use on biodiversity, our results suggest that miombo landscapes should retain > 25% habitat cover and avoid LDI > 0.75—after which species richness of both groups begin to decline. Our findings indicate that tree diversity may be easier to restore from natural restoration than mammal diversity, which became spatially homogeneous.
23

Shita, Abel, Alemayehu Worku Yalew, Edom Seife, Tsion Afework, Aragaw Tesfaw, Zenawi Hagos Gufue, Friedemann Rabe, Lesley Taylor, Eva Johanna Kantelhardt and Sefonias Getachew. "Survival and predictors of breast cancer mortality in South Ethiopia: A retrospective cohort study". PLOS ONE 18, no. 3 (March 6, 2023): e0282746. http://dx.doi.org/10.1371/journal.pone.0282746.

Abstract:
Background Breast cancer is the most frequently diagnosed cancer and the leading cause of cancer death in over 100 countries. In March 2021, the World Health Organization called on the global community to decrease mortality by 2.5% per year. Despite the high burden of the disease, the survival status and the predictors for mortality are not yet fully determined in many countries in Sub-Saharan Africa, including Ethiopia. Here, we report the survival status and predictors of mortality among breast cancer patients in South Ethiopia as crucial baseline data to be used for the design and monitoring of interventions to improve early detection, diagnosis, and treatment capacity. Methods A hospital-based retrospective cohort study was conducted among 302 female breast cancer patients diagnosed from 2013 to 2018 by reviewing their medical records and telephone interviews. The median survival time was estimated using the Kaplan-Meier survival analysis method. A log-rank test was used to compare the observed differences in survival time among different groups. The Cox proportional hazards regression model was used to identify predictors of mortality. Results are presented as crude and adjusted hazard ratios along with their corresponding 95% confidence intervals. Sensitivity analysis was performed with the assumption that loss to follow-up patients might die 3 months after the last hospital visit. Results The study participants were followed for a total of 4,685.62 person-months. The median survival time was 50.81 months, which declined to 30.57 months in the worst-case analysis. About 83.4% of patients had advanced-stage disease at presentation. The overall survival probability of patients at two and three years was 73.2% and 63.0% respectively. Independent predictors of mortality were: patients residing in rural areas (adjusted hazard ratio = 2.71, 95% CI: 1.44, 5.09), travel time to a health facility ≥7 hours (adjusted hazard ratio = 3.42, 95% CI: 1.05, 11.10), those who presented within 7–23 months after the onset of symptoms (adjusted hazard ratio = 2.63, 95% CI: 1.22, 5.64), those who presented more than 23 months after the onset of symptoms (adjusted hazard ratio = 2.37, 95% CI: 1.00, 5.59), advanced stage at presentation (adjusted hazard ratio = 3.01, 95% CI: 1.05, 8.59), and patients who never received chemotherapy (adjusted hazard ratio = 6.69, 95% CI: 2.20, 20.30). Conclusion Beyond three years after diagnosis, patients from southern Ethiopia had a survival rate of less than 60% despite treatment at a tertiary health facility. It is imperative to improve the early detection, diagnosis, and treatment capacities for breast cancer patients to prevent premature death in these women.
24

Cherkasov, P. "Academician Nodari Simonia – the Last Marxist". World Economy and International Relations 65, no. 11 (2021): 131–40. http://dx.doi.org/10.20542/0131-2227-2021-65-11-131-140.

Abstract:
The author traces and analyzes the career and activity of Academician Nodari Aleksandrovich Simonia (1932–2019), a prominent orientalist and expert in international relations who headed the Institute of World Economy and International Relations of the Russian Academy of Sciences in 2000–2006. The article reveals the formation of the general worldview and academic views of N. Simonia, assesses his contribution to the study of the East after the collapse of the colonial system and the formation of young independent states. The author acquaints the reader with the views of the Academician on the European, Asian and Russian revolutions, with his approach to understanding the processes of contemporary world development, explains his civil position, both under the Soviet regime and in post-Soviet Russia. N. Simonia combined a detailed knowledge of realities in the Eastern regions he studied – primarily Southeast Asia – with a deep theoretical approach to the study of complex processes in the East after the end of World War II. Over time, the interests of the Academician went beyond the East, to which he devoted several decades of research. At the turn of the 1990s–2000s, his attention was attracted by the problems of global world development, as well as the development of post-Soviet Russia. All the works of N. Simonia – he published 18 books and dozens of articles in Russian and foreign academic journals – were written by him, as he himself admitted, on the basis of the Marxist methodology. But Simonia’s Marxism had nothing in common with vulgar ideas in Bolsheviks’ teachings of Marx and their “theory of Marxism-Leninism”. At the same time, the Academician criticized not only Stalin and Lenin, but also Marx himself, who succeeded only in deep analysis of contemporary pre-monopoly capitalism. N. Simonia criticized the Soviet model of socialism as well, believing that there has never been any real socialism in the USSR. He was equally critical of the “liberal” turn of the Russian intellectual elite after 1991, blaming its radical faction, which influenced President Boris Yeltsin, for instilling in Russia a model of the “worst”, as he wrote, “the most parasitic version of bureaucratic capitalism”. For Simonia, the latter was associated with Indonesia under Sukarno. But even there, not to mention Japan and South Korea, the business elite has never been antipatriotic, as it happened in modern Russia. In his opinion, the Russian model of capitalism turned out to be unlike either the Western or the Eastern model, and the modernization, which Russia urgently needs, is inseparable from genuine democratization, but should not represent an imitation of democracy, as is the case.
25

Battikh, Naim G., Elrazi Awadelkarim Awadelkarim Hamid Ali and Mohamed A. Yassin. "Osteolytic Bone Lesions in Patients with Primary Myelofibrosis: A Systematic Review". Blood 138, Supplement 1 (November 5, 2021): 4624. http://dx.doi.org/10.1182/blood-2021-144449.

Abstract:
Abstract Background: Philadelphia negative Myeloproliferative neoplasms classically characterized by excess production of terminal myeloid cells in the peripheral blood. Among this group, primary myelofibrosis is the least common and usually carries the worst prognosis. Bone involvement in primary myelofibrosis has many forms and tend to manifest as osteosclerotic lesions in vast majority of cases, however osteolytic lesions are reported in exceptional occasions. In this review, we tried to shed the light on this rare association. Methods: We performed a systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. We searched the English literature (Google scholar, PubMed, and SCOPUS) for studies, reviews, case series, and case reports about patient with myelofibrosis who develop lytic bone lesion. We used the terms in combination: "Myelofibrosis'" or "Primary myelofibrosis" OR "chronic idiopathic myelofibrosis" OR "agnogenic myeloid metaplasia" and "Osteolytic bone lesion", "Osteolytic lesion", "lytic bone lesion". The review included patients with primary myelofibrosis confirmed by biopsy. The reference lists of the included studies were scanned for any additional articles. The search included all articles published up to 10th April 2021. Two independent reviewers screened the titles and abstracts of the records independently and papers unrelated to our inclusion criteria were excluded. A total of 13 articles were included in the review. Results : Total of 13 patients were included in the review. 7 patients were males, male to female ratio almost of 1:1. The mean age at time of diagnosis was 57.69 year, only two cases were diagnosed at young age, however the majority have osteolytic bone lesion at age above 50 years (12/13) of cases. The mean time between the diagnosis of primary myelofibrosis until the osteolytic bone lesion capturing was approximately 8.8 years. 9 out of 13 patients have painful bone lesion, others were incidental finding during a scan for other reasons. All patients have significant splenomegaly. All patients had the lytic lesion detected on x ray, and 2 patients had confirmed findings on magnetic resonance imaging (MRI). The most common affected bones were the vertebrae, pelvis, ribs, humerus then the scapula, femur and skull and less frequently wrist bones and calcaneus. Only one case has reported involvement of the tibia and fibula. The shape, the extension and the numbers of lesion were variable, some showed cortical sparing and others come with cortical destruction. 10 out of 13 cases have confirmed the nature of the osteolytic lesion containing hematopoietic stem cells with or without fibrosis, 2 cases were positive for JAK2 mutation. 2 patients have received ruxolitinib, one of them preceded with bone marrow transplant, others received nonspecific therapies. Discussion: The hyperdynamic ineffective bone marrow can have a negative impact on the bone structure resulting in different types of bone pathology including lytic and sclerotic lesions. The exact mechanism beyond developing lytic lesions is not fully studied, observations revealed two possible causes: systemic inflammation and direct mechanical compression from para-osseus lymph nodes. 
Lesions prevalence was equal in both genders which can be attributable to a small sample size, in addition, most of the patients were in advanced stages when the lytic lesions discovered and this observation can be explained by the needed time to generate extramedullary hematopoiesis and its subsequent effect on bone structure. The variation in time between the diagnosis of PMF and development of osteolytic bone lesions could be due to the indolent phase of the disease, in which patients can survive for decades without symptoms. Until recently the treatment of myelofibrosis was supportive, but after establishing the JAK2-stat pathway role in myeloproliferative disorders the FDA approved ruxolitinib a JAK2 inhibitor which shows not only survival benefit but also has a significant impact on the resolution of the lytic bone lesions as well. conclusion Osteolytic bone lesions in patients with primary myelofibrosis is extremely rare finding, and noticed shortly after diagnosis in elderly and after longer duration in young patients. The lytic lesion seems to have a bad prognostic value as we can notice 11 out of 13 patients died within one year of detection. Figure 1 Figure 1. Disclosures No relevant conflicts of interest to declare.
26

Kirkpatrick, Helen Beryl, Jennifer Brasch, Jacky Chan and Shaminderjot Singh Kang. "A Narrative Web-Based Study of Reasons To Go On Living after a Suicide Attempt: Positive Impacts of the Mental Health System". Journal of Mental Health and Addiction Nursing 1, no. 1 (February 15, 2017): e3-e9. http://dx.doi.org/10.22374/jmhan.v1i1.10.

Abstract:
Background and Objective: Suicide attempts are 10–20 times more common than completed suicide and are an important risk factor for death by suicide, yet most people who attempt suicide do not die by suicide. The process of recovering after a suicide attempt has not been well studied. The Reasons to go on Living (RTGOL) Project, a narrative web-based study, focuses on the experiences of people who have attempted suicide and made the decision to go on living. Narrative research is ideally suited to understanding personal experiences critical to recovery following a suicide attempt, including the transition to a state of hopefulness. Voices from people with lived experience can help us plan and conceptualize this work. This paper reports on a secondary research question of the larger study: what stories do participants tell of the positive role/impact of the mental health system. Material and Methods: A website created for The RTGOL Project (www.thereasons.ca) enabled participants to anonymously submit a story about their suicide attempt and recovery, which enabled participation by a large and diverse group of participants. The only direction given was “if you have made a suicide attempt or seriously considered suicide and now want to go on living, we want to hear from you.” The unstructured narrative format allowed participants to describe their experiences in their own words, to include and emphasize what they considered important. Data analysis occurred in several phases over the course of the 5-year study, resulting in the identification of data that were entered into an Excel file. This analysis used stories in which participants described positive involvement with the mental health system (50 stories). Results: Several participants reflected on experiences from many years previous, providing the privilege of learning how their lives unfolded and what made a difference. Over a five-year period, 50 of 226 stories identified positive experiences with mental health care with sufficient detail to allow analysis, and these are the focus of this paper. There was a range of suicidal behaviours in these 50 stories, from suicidal ideation only to medically severe suicide attempts. Most described one or more suicide attempts. Three themes were identified: 1) trust and relationship with a health care professional, 2) the role of family and friends, and 3) access to a wide range of services. Conclusion: Stories open a window into the experiences of the period after a suicide attempt. This study allowed for an understanding of how mental health professionals might help individuals who have attempted suicide write a different story, a life-affirming story. The stories that participants shared offer some understanding of “how” to provide support at a most-needed critical juncture for people as they interact with health care providers, including immediately after a suicide attempt. Results of this study reinforce that just one caring professional can make a tremendous difference to a person who has survived a suicide attempt.
Key Words: web-based; suicide; suicide attempt; mental health system; narrative research Word Count: 478 Introduction My Third (or fourth) Suicide Attempt / I laid in the back of the ambulance, the snow of too many doses of ativan dissolving on my tongue. / They hadn't even cared enough about me / to put someone in the back with me, / and so, frustrated, / I'd swallowed all the pills I had with me— / not enough to do what I wanted it to right then, / but more than enough to knock me out for a good 14 hours. / I remember very little after that; / benzodiazepines like ativan commonly cause pre- and post-amnesia, says Google helpfully / I wake up in a locked room / a woman manically drawing on the windows with crayons / the colors of light through the glass / diffused into rainbows of joy scattered about the room / as if she were coloring on us all, / all of the tattered remnants of humanity in a psych ward / made into a brittle mosaic, a quilt of many hues, a Technicolor dreamcoat / and I thought / I am so glad to be able to see this. (Story 187) The nurse opening that door will have a lasting impact on how this story unfolds and on this person’s life. Each year, almost one million people die from suicide, approximately one death every 40 seconds. Suicide attempts are much more frequent, with up to an estimated 20 attempts for every death by suicide.1 Suicide-related behaviours range from suicidal ideation and self-injury to death by suicide. We are unable to directly study those who die by suicide, but effective intervention after a suicide attempt could reduce the risk of subsequent death by suicide. Near-fatal suicide attempts have been used to explore the boundary with completed suicides. Findings indicated that violent suicide attempters and serious attempters (seriousness of the medical consequences to define near-fatal attempts) were more likely to make repeated, and higher lethality suicide attempts.2 In a case-control study, the medically severe suicide attempts group (78 participants), epidemiologically very similar to those who complete suicide, had significantly higher communication difficulties; the risk for death by suicide multiplied if accompanied by feelings of isolation and alienation.3 Most research in suicidology has been quantitative, focusing almost exclusively on identifying factors that may be predictive of suicidal behaviours, and on explanation rather than understanding.4 Qualitative research, focusing on the lived experiences of individuals who have attempted suicide, may provide a better understanding of how to respond in empathic and helpful ways to prevent future attempts and death by suicide.4,5 Fitzpatrick6 advocates for narrative research as a valuable qualitative method in suicide research, enabling people to construct and make sense of the experiences and their world, and imbue it with meaning. A review of qualitative studies examining the experiences of recovering from or living with suicidal ideation identified 5 interconnected themes: suffering, struggle, connection, turning points, and coping.7 Several additional qualitative studies about attempted suicide have been reported in the literature. Participants have included patients hospitalized for attempting suicide8, and/or suicidal ideation,9 out-patients following a suicide attempt and their caregivers,10 veterans with serious mental illness and at least one hospitalization for a suicide attempt or imminent suicide plan.11 Relationships were a consistent theme in these studies.
Interpersonal relationships and an empathic environment were perceived as therapeutic and protective, enabling the expression of thoughts and self-understanding.8 Given the connection to relationship issues, the authors suggested it may be helpful to provide support for the relatives of patients who have attempted suicide. A sheltered, friendly environment and support systems, which included caring by family and friends, and treatment by mental health professionals, helped the suicidal healing process.10 Receiving empathic care led to positive changes and an increased level of insight; just one caring professional could make a tremendous difference.11 Kraft and colleagues9 concluded with the importance of hearing directly from those who are suicidal in order to help them, that only when we understand, “why suicide”, can we help with an alternative, “why life?” In a grounded theory study about help-seeking for self-injury, Long and colleagues12 identified that self-injury was not the problem for their participants, but a panacea, even if temporary, to painful life experiences. Participant narratives reflected a complex journey for those who self-injured: their wish when help-seeking was identified by the theme “to be treated like a person”. There has also been a focus on the role and potential impact of psychiatric/mental health nursing. Through interviews with experienced in-patient nurses, Carlen and Bengtsson13 identified the need to see suicidal patients as subjective human beings with unique experiences. This mirrors research with patients, which concluded that the interaction with personnel who are devoted, hope-mediating and committed may be crucial to a patient’s desire to continue living.14 Interviews with individuals who received mental health care for a suicidal crisis following a serious attempt led to the development of a theory for psychiatric nurses with the central variable, reconnecting the person with humanity across 3 phases: reflecting an image of humanity, guiding the individual back to humanity, and learning to live.15 Other research has identified important roles for nurses working with patients who have attempted suicide by enabling the expression of thoughts and developing self-understanding8, helping to see things differently and reconnecting with others,10 assisting the person in finding meaning from their experience to turn their lives around, and maintain/and develop positive connections with others.16 However, one literature review identified that negative attitudes toward self-harm were common among nurses, with more positive attitudes among mental health nurses than general nurses. The authors concluded that education, both reflective and interactive, could have a positive impact.17 This paper is one part of a larger web-based narrative study, the Reasons to go on Living Project (RTGOL), that seeks to understand the transition from making a suicide attempt to choosing life. When invited to tell their stories anonymously online, what information would people share about their suicide attempts? This paper reports on a secondary research question of the larger study: what stories do participants tell of the positive role/impact of the mental health system. The focus on the positive impact reflects an appreciative inquiry approach which can promote better practice.18 Methods Design and Sample A website created for The RTGOL Project (www.thereasons.ca) enabled participants to anonymously submit a story about their suicide attempt and recovery. 
Participants were required to read and agree with a consent form before being able to submit their story through a text box or by uploading a file. No demographic information was requested. Text submissions were embedded into an email and sent to an account created for the Project without collecting information about the IP address or other identifying information. The content of the website was reviewed by legal counsel before posting, and the study was approved by the local Research Ethics Board. Stories were collected for 5 years (July 2008-June 2013). The RTGOL Project enabled participation by a large, diverse audience, at their own convenience of time and location, providing they had computer access. The unstructured narrative format allowed participants to describe their experiences in their own words, to include and emphasize what they considered important. Of the 226 submissions to the website, 112 described involvement at some level with the mental health system, and 50 provided sufficient detail about positive experiences with mental health care to permit analysis. There were a range of suicidal behaviours in these 50 stories: 8 described suicidal ideation only; 9 met the criteria of medically severe suicide attempts3; 33 described one or more suicide attempts. For most participants, the last attempt had been some years in the past, even decades, prior to writing. Results Stories of positive experiences with mental health care described the idea of a door opening, a turning point, or helping the person to see their situation differently. Themes identified were: (1) relationship and trust with a Health Care Professional (HCP), (2) the role of family and friends (limited to in-hospital experiences), and (3) the opportunity to access a range of services. The many reflective submissions of experiences told many years after the suicide attempt(s) speaks to the lasting impact of the experience for that individual. Trust and Relationship with a Health Care Professional A trusting relationship with a health professional helped participants to see things in a different way, a more hopeful way and over time. “In that time of crisis, she never talked down to me, kept her promises, didn't panic, didn't give up, and she kept believing in me. I guess I essentially borrowed the hope that she had for me until I found hope for myself.” (Story# 35) My doctor has worked extensively with me. I now realize that this is what will keep me alive. To be able to feel in my heart that my doctor does care about me and truly wants to see me get better.” (Story 34). The writer in Story 150 was a nurse, an honours graduate. The 20 years following graduation included depression, hospitalizations and many suicide attempts. “One day after supper I took an entire bottle of prescription pills, then rode away on my bike. They found me late that night unconscious in a downtown park. My heart threatened to stop in the ICU.” Then later, “I finally found a person who was able to connect with me and help me climb out of the pit I was in. I asked her if anyone as sick as me could get better, and she said, “Yes”, she had seen it happen. Those were the words I had been waiting to hear! I quickly became very motivated to get better. I felt heard and like I had just found a big sister, a guide to help me figure out how to live in the world. This person was a nurse who worked as a trauma therapist.” At the time when the story was submitted, the writer was applying to a graduate program. 
Role of Family and Friends Several participants described being affected by their family’s response to their suicide attempt. Realizing the impact on their family and friends was, for some, a turning point. The writer in Story 20 told of experiences more than 30 years prior to the writing. She described her family of origin as “truly dysfunctional,” and she suffered from episodes of depression and hospitalization during her teen years. Following the birth of her second child, and many family difficulties, “It was at this point that I became suicidal.” She made a decision to kill herself by jumping off the balcony (6 stories). “At the very last second as I hung onto the railing of the balcony. I did not want to die but it was too late. I landed on the parking lot pavement.” She wrote that the pain was indescribable, due to many broken bones. “The physical pain can be unbearable. Then you get to see the pain and horror in the eyes of someone you love and who loves you. Many people suggested to my husband that he should leave me in the hospital, go on with life and forget about me. During the process of recovery in the hospital, my husband was with me every day…With the help of psychiatrists and a later hospitalization, I was actually diagnosed as bipolar…Since 1983, I have been taking lithium and have never had a recurrence of suicidal thoughts or for that matter any kind of depression.” The writer in Story 62 suffered childhood sexual abuse. When she came forward with it, she felt she was not heard. Self-harm on a regular basis was followed by “numerous overdoses trying to end my life.” Overdoses led to psychiatric hospitalizations that were unhelpful because she was unable to trust staff. “My way of thinking was that ending my life was the only answer. There had been numerous attempts, too many to count. My thoughts were that if I wasn’t alive I wouldn’t have to deal with my problems.” In her final attempt, she plunged over the side of a mountain, dropping 80 feet, resulting in several serious injuries. “I was so angry that I was still alive.” However, “During my hospitalization I began to realize that my family and friends were there by my side continuously, I began to realize that I wasn't only hurting myself. I was hurting all the important people in my life. It was then that I told myself I am going to do whatever it takes.” A turning point is not to say that the difficulties did not continue. The writer of Story 171 tells of a suicide attempt 7 years previous, and the ongoing anguish. She had been depressed for years and had thoughts of suicide on a daily basis. After a serious overdose, she woke up the next day in a hospital bed, her husband and 2 daughters at her bed. “Honestly, I was disappointed to wake up. But, then I saw how scared and hurt they were. Then I was sorry for what I had done to them. Since then I have thought of suicide but know that it is tragic for the family and is a hurt that can never be undone. Today I live with the thought that I am here for a reason and when it is God's time to take me then I will go. I do believe living is harder than dying. I do believe I was born for a purpose and when that is accomplished I will be released. 
…Until then I try to remind myself of how I am blessed and try to appreciate the wonders of the world and the people in it.” Range of Services The important role of mental health and recovery services was frequently mentioned, including dialectical behavioural therapy (DBT)/cognitive-behavioural therapy (CBT), recovery group, group therapy, Alcoholics Anonymous, accurate diagnosis, and medications. The writer in Story 30 was 83 years old when she submitted her story, reflecting on a life with both good and bad times. She first attempted suicide at age 10 or 12. A serious post-partum depression followed the birth of her second child, and over the years, she experienced periods of suicidal intent: “Consequently, a few years passed and I got to feeling suicidal again. I had pills in one pocket and a clipping for “The Recovery Group” in the other pocket. As I rode on the bus trying to make up my mind, I decided to go to the Recovery Group first. I could always take the pills later. I found the Recovery Group and yoga helpful; going to meetings sometimes twice a day until I got thinking more clearly and learned how to deal with my problems.” Several participants described the value of CBT or DBT in learning to challenge perceptions. “I have tools now to differentiate myself from the illness. I learned I'm not a bad person but bad things did happen to me and I survived.”(Story 3) “The fact is that we have thoughts that are helpful and thoughts that are destructive….. I knew it was up to me if I was to get better once and for all.” (Story 32): “In the hospital I was introduced to DBT. I saw a nurse (Tanya) every day and attended a group session twice a week, learning the techniques. I worked with the people who wanted to work with me this time. Tanya said the same thing my counselor did “there is no study that can prove whether or not suicide solves problems” and I felt as though I understood it then. If I am dead, then all the people that I kept pushing away and refusing their help would be devastated. If I killed myself with my own hand, my family would be so upset. DBT taught me how to ‘ride my emotional wave’. ……….. DBT has changed my life…….. My life is getting back in order now, thanks to DBT, and I have lots of reasons to go on living.”(Story 19) The writer of Story 67 described the importance of group therapy. “Group therapy was the most helpful for me. It gave me something besides myself to focus on. Empathy is such a powerful emotion and a pathway to love. And it was a huge relief to hear others felt the same and had developed tools of their own that I could try for myself! I think I needed to learn to communicate and recognize when I was piling everything up to build my despair. I don’t think I have found the best ways yet, but I am lifetimes away from that teenage girl.” (Story 67) The author of story 212 reflected on suicidal ideation beginning over 20 years earlier, at age 13. Her first attempt was at 28. “I thought everyone would be better off without me, especially my children, I felt like the worst mum ever, I felt like a burden to my family and I felt like I was a failure at life in general.” She had more suicide attempts, experienced the death of her father by suicide, and then finally found her doctor. “Now I’m on meds for a mood disorder and depression, my family watch me closely, and I see my doctor regularly. 
For the first time in 20 years, I love being a mum, a sister, a daughter, a friend, a cousin etc.” Discussion The 50 stories that describe positive experiences in the health care system constitute a larger group than most other similar studies, and most participants had made one or more suicide attempts. Several writers reflected back many years, telling stories of long ago, as with the 83-year old participant (Story 30) whose story provided the privilege of learning how the author’s life unfolded. In clinical practice, we often do not know – how did the story turn out? The stories that describe receiving health care speak to the impact of the experience, and the importance of the issues identified in the mental health system. We identified 3 themes, but it was often the combination that participants described in their stories that was powerful, as demonstrated in Story 20, the young new mother who had fallen from a balcony 30 years earlier. Voices from people with lived experience can help us plan and conceptualize our clinical work. Results are consistent with, and add to, the previous work on the importance of therapeutic relationships.8,10,11,14–16 It is from the stories in this study that we come to understand the powerful experience of seeing a family members’ reaction following a participant’s suicide attempt, and how that can be a potent turning point as identified by Lakeman and Fitzgerald.7 Ghio and colleagues8 and Lakeman16 identified the important role for staff/nurses in supporting families due to the connection to relationship issues. This research also calls for support for families to recognize the important role they have in helping the person understand how much they mean to them, and to promote the potential impact of a turning point. The importance of the range of services reflect Lakeman and Fitzgerald’s7 theme of coping, associating positive change by increasing the repertoire of coping strategies. These findings have implications for practice, research and education. Working with individuals who are suicidal can help them develop and tell a different story, help them move from a death-oriented to life-oriented position,15 from “why suicide” to “why life.”9 Hospitalization provides a person with the opportunity to reflect, to take time away from “the real world” to consider oneself, the suicide attempt, connections with family and friends and life goals, and to recover physically and emotionally. Hospitalization is also an opening to involve the family in the recovery process. The intensity of the immediate period following a suicide attempt provides a unique opportunity for nurses to support and coach families, to help both patients and family begin to see things differently and begin to create that different story. In this way, family and friends can be both a support to the person who has attempted suicide, and receive help in their own struggles with this experience. It is also important to recognize that this short period of opportunity is not specific to the nurses in psychiatric units, as the nurses caring for a person after a medically severe suicide attempt will frequently be the nurses in the ICU or Emergency departments. Education, both reflective and interactive, could have a positive impact.17 Helping staff develop the attitudes, skills and approach necessary to be helpful to a person post-suicide attempt is beginning to be reported in the literature.21 Further implications relate to nursing curriculum. 
Given the extent of suicidal ideation, suicide attempts and deaths by suicide, this merits an important focus. This could include specific scenarios, readings by people affected by suicide, both patients themselves and their families or survivors, and discussions with individuals who have made an attempt(s) and made a decision to go on living. All of this is, of course, not specific to nursing. All members of the interprofessional health care team can support the transition to recovery of a person after a suicide attempt using the strategies suggested in this paper, in addition to other evidence-based interventions and treatments. Findings from this study need to be considered in light of some specific limitations. First, the focus was on those who have made a decision to go on living, and we have only the information the participants included in their stories. No follow-up questions were possible. The nature of the research design meant that participants required access to a computer with Internet and the ability to communicate in English. This study does not provide a comprehensive view of in-patient care. However, it offers important inputs to enhance other aspects of care, such as assessing safety as a critical foundation to care. We consider these limitations were more than balanced by the richness of the many stories that a totally anonymous process allowed. Conclusion Stories open a window into the experiences of a person during the period after a suicide attempt. The RTGOL Project allowed for an understanding of how we might help suicidal individuals change the script, write a different story. The stories that participants shared give us some understanding of “how” to provide support at a most-needed critical juncture for people as they interact with health care providers immediately after a suicide attempt. While we cannot know the experiences of those who did not survive a suicide attempt, results of this study reinforce that just one caring professional can make a crucial difference to a person who has survived a suicide attempt. We end with where we began. Who will open the door? References 1. World Health Organization. Suicide prevention and special programmes. http://www.who.int/mental_health/prevention/suicide/suicideprevent/en/index.html Geneva: Author; 2013.2. Giner L, Jaussent I, Olie E, et al. Violent and serious suicide attempters: One step closer to suicide? J Clin Psychiatry 2014:73(3):3191–197.3. Levi-Belz Y, Gvion Y, Horesh N, et al. Mental pain, communication difficulties, and medically serious suicide attempts: A case-control study. Arch Suicide Res 2014:18:74–87.4. Hjelmeland H and Knizek BL. Why we need qualitative research in suicidology? Suicide Life Threat Behav 2010:40(1):74–80.5. Gunnell D. A population health perspective on suicide research and prevention: What we know, what we need to know, and policy priorities. Crisis 2015:36(3):155–60.6. Fitzpatrick S. Looking beyond the qualitative and quantitative divide: Narrative, ethics and representation in suicidology. Suicidol Online 2011:2:29–37.7. Lakeman R and FitzGerald M. How people live with or get over being suicidal: A review of qualitative studies. J Adv Nurs 2008:64(2):114–26.8. Ghio L, Zanelli E, Gotelli S, et al. Involving patients who attempt suicide in suicide prevention: A focus group study. J Psychiatr Ment Health Nurs 2011:18:510–18.9. Kraft TL, Jobes DA, Lineberry TW., Conrad, A., & Kung, S. Brief report: Why suicide? Perceptions of suicidal inpatients and reflections of clinical researchers. 
Arch Suicide Res 2010:14(4):375-382.10. Sun F, Long A, Tsao L, et al. The healing process following a suicide attempt: Context and intervening conditions. Arch Psychiatr Nurs 2014:28:66–61.11. Montross Thomas L, Palinkas L, et al. Yearning to be heard: What veterans teach us about suicide risk and effective interventions. Crisis 2014:35(3):161–67.12. Long M, Manktelow R, and Tracey A. The healing journey: Help seeking for self-injury among a community population. Qual Health Res 2015:25(7):932–44.13. Carlen P and Bengtsson A. Suicidal patients as experienced by psychiatric nurses in inpatient care. Int J Ment Health Nurs 2007:16:257–65.14. Samuelsson M, Wiklander M, Asberg M, et al. Psychiatric care as seen by the attempted suicide patient. J Adv Nurs 2000:32(3):635–43.15. Cutcliffe JR, Stevenson C, Jackson S, et al. A modified grounded theory study of how psychiatric nurses work with suicidal people. Int J Nurs Studies 2006:43(7):791–802.16. Lakeman, R. What can qualitative research tell us about helping a person who is suicidal? Nurs Times 2010:106(33):23–26.17. Karman P, Kool N, Poslawsky I, et al. Nurses’ attitudes toward self-harm: a literature review. J Psychiatr Ment Health Nurs 2015:22:65–75.18. Carter B. ‘One expertise among many’ – working appreciatively to make miracles instead of finding problems: Using appreciative inquiry as a way of reframing research. J Res Nurs 2006:11(1): 48–63.19. Lieblich A, Tuval-Mashiach R, Zilber T. Narrative research: Reading, analysis, and interpretation. Sage Publications; 1998.20. Braun V and Clarke V. Using thematic analysis in psychology. Qual Res Psychol 2006:3(2):77–101.21. Kishi Y, Otsuka K, Akiyama K, et al. Effects of a training workshop on suicide prevention among emergency room nurses. Crisis 2014:35(5):357–61.
Style APA, Harvard, Vancouver, ISO itp.
27

Fekete, Sándor P., Phillip Keldenich i Christian Scheffer. "Packing Disks into Disks with Optimal Worst-Case Density". Discrete & Computational Geometry, 15.09.2022. http://dx.doi.org/10.1007/s00454-022-00422-8.

Pełny tekst źródła
Streszczenie:
Abstract We provide a tight result for a fundamental problem arising from packing disks into a circular container: The critical density of packing disks in a disk is 0.5. This implies that any set of (not necessarily equal) disks of total area δ ≤ 1/2 can always be packed into a disk of area 1; on the other hand, for any ε > 0 there are sets of disks of area 1/2 + ε that cannot be packed. The proof uses a careful manual analysis, complemented by a minor automatic part that is based on interval arithmetic. Beyond the basic mathematical importance, our result is also useful as a blackbox lemma for the analysis of recursive packing algorithms.
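To make the stated bound concrete, here is a minimal sketch (not from the paper; function names are hypothetical) that checks the sufficient area condition implied by the theorem and illustrates why the 1/2 threshold is tight with the two-equal-disks example.

```python
import math

def packable_by_area_bound(radii, container_radius=1.0):
    """Sufficient condition from the critical-density result: any collection of
    disks whose total area is at most half the container's area can always be
    packed. Returns True when the bound guarantees packability; False means
    'no guarantee', not 'impossible'."""
    total_area = sum(math.pi * r * r for r in radii)
    container_area = math.pi * container_radius ** 2
    return total_area <= container_area / 2

# Tightness illustration: two disks of radius 0.5 fill exactly half the unit
# disk and fit side by side along a diameter; any epsilon more and packing can fail.
print(packable_by_area_bound([0.5, 0.5]))        # True  (guaranteed packable)
print(packable_by_area_bound([0.5001, 0.5001]))  # False (bound gives no guarantee)
```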
Style APA, Harvard, Vancouver, ISO itp.
28

Deng, Xiaotie, Yansong Gao i Jie Zhang. "Beyond the worst-case analysis of random priority: Smoothed and average-case approximation ratios in mechanism design". Information and Computation, maj 2022, 104920. http://dx.doi.org/10.1016/j.ic.2022.104920.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
29

Braun, David, Michael M. Marb, Jorg Angelov, Maximilian Wechner i Florian Holzapfel. "Worst-Case Analysis of Complex Nonlinear Flight Control Designs Using Deep Q-Learning". Journal of Guidance, Control, and Dynamics, 31.03.2023, 1–13. http://dx.doi.org/10.2514/1.g007335.

Pełny tekst źródła
Streszczenie:
With the objective of exposing hidden design deficiencies in complex nonlinear systems, this paper presents the use of reinforcement learning techniques for application in flight control law development and testing. Following the idea of worst-case testing, a deep Q-network agent is trained to identify input sequences that lead to detrimental system behavior. Because the analysis is based directly on the repeated interaction between the agent and the investigated system, no model simplifications are required, making the presented method applicable to highly complex systems. The capability of the learning-based worst-case analysis is demonstrated for the speed protection function of the hover flight control law of an electric vertical takeoff and landing (eVTOL) aircraft. The analysis discovers possible piloted maneuvers that violate the implemented protection algorithm. A root cause analysis of the emerging behavior reveals the neglect of an important flight mechanical coupling term in the design of the protection algorithm and ultimately leads to the revision and improvement of the controller. This demonstrates the benefits of the presented testing method for the design, verification, and validation of complex systems. The application to a high-fidelity system used for control law development of an actual eVTOL prototype currently under construction demonstrates the relevance of the method beyond academia.
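The paper trains a deep Q-network against a high-fidelity eVTOL simulation; as a heavily simplified, purely illustrative stand-in for that search idea, the sketch below uses tabular ε-greedy Q-learning to find an input sequence that drives a toy one-dimensional "protection margin" toward violation. The toy dynamics, reward, and all names are assumptions, not the authors' model or DQN setup.

```python
import random
from collections import defaultdict

# Toy stand-in for the system under test: the agent picks one "pilot input"
# per step and is rewarded for pushing a protected state variable to its limit.
ACTIONS = [-1, 0, +1]

def step(state, action):
    new_state = max(-5, min(5, state + action))
    reward = 1.0 if abs(new_state) >= 5 else 0.0   # "protection violated"
    return new_state, reward

Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.95, 0.2

for episode in range(2000):
    state = 0
    for _ in range(20):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# Greedy rollout = candidate worst-case input sequence found by the agent.
state, sequence = 0, []
for _ in range(20):
    action = max(ACTIONS, key=lambda a: Q[(state, a)])
    sequence.append(action)
    state, _ = step(state, action)
print(sequence)
```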
Style APA, Harvard, Vancouver, ISO itp.
30

Pfohl, Stephen R., Haoran Zhang, Yizhe Xu, Agata Foryciarz, Marzyeh Ghassemi i Nigam H. Shah. "A comparison of approaches to improve worst-case predictive model performance over patient subpopulations". Scientific Reports 12, nr 1 (28.02.2022). http://dx.doi.org/10.1038/s41598-022-07167-7.

Pełny tekst źródła
Streszczenie:
Abstract Predictive models for clinical outcomes that are accurate on average in a patient population may underperform drastically for some subpopulations, potentially introducing or reinforcing inequities in care access and quality. Model training approaches that aim to maximize worst-case model performance across subpopulations, such as distributionally robust optimization (DRO), attempt to address this problem without introducing additional harms. We conduct a large-scale empirical study of DRO and several variations of standard learning procedures to identify approaches for model development and selection that consistently improve disaggregated and worst-case performance over subpopulations compared to standard approaches for learning predictive models from electronic health records data. In the course of our evaluation, we introduce an extension to DRO approaches that allows for specification of the metric used to assess worst-case performance. We conduct the analysis for models that predict in-hospital mortality, prolonged length of stay, and 30-day readmission for inpatient admissions, and predict in-hospital mortality using intensive care data. We find that, with relatively few exceptions, no approach performs better, for each patient subpopulation examined, than standard learning procedures using the entire training dataset. These results imply that when it is of interest to improve model performance for patient subpopulations beyond what can be achieved with standard practices, it may be necessary to do so via data collection techniques that increase the effective sample size or reduce the level of noise in the prediction problem.
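As a rough sketch of the group-style DRO idea the abstract refers to — maximizing worst-subpopulation performance by up-weighting the group with the highest current loss — here is a minimal numpy logistic-regression example. It illustrates the general technique on synthetic data; it is not the authors' EHR pipeline or their metric-specific extension, and all names and parameters are assumptions.

```python
import numpy as np

def group_dro_logistic(X, y, groups, steps=2000, lr=0.1, eta=0.05):
    """Minimal group-DRO sketch for logistic regression: maintain a weight per
    subpopulation, up-weight the group with the highest current loss
    (exponentiated gradient), and descend on the re-weighted loss."""
    n, d = X.shape
    group_ids = np.unique(groups)
    w = np.zeros(d)
    q = np.ones(len(group_ids)) / len(group_ids)   # distribution over groups

    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        losses = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        group_losses = np.array([losses[groups == g].mean() for g in group_ids])

        q *= np.exp(eta * group_losses)            # up-weight the worst groups
        q /= q.sum()

        grad = np.zeros(d)
        for qi, g in zip(q, group_ids):
            mask = groups == g
            grad += qi * (X[mask].T @ (p[mask] - y[mask])) / mask.sum()
        w -= lr * grad
    return w, dict(zip(group_ids, group_losses))

# Hypothetical usage with synthetic data and two subpopulations.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
groups = rng.integers(0, 2, size=400)
y = (X[:, 0] + 0.5 * groups * X[:, 1] + rng.normal(scale=0.5, size=400) > 0).astype(float)
w, per_group = group_dro_logistic(X, y, groups)
print(per_group)   # final per-group losses; the worst-group loss is the DRO objective
```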
Style APA, Harvard, Vancouver, ISO itp.
31

Liao, Haitao, Jianjun Wang, Jianyao Yao i Qihan Li. "Mistuning Forced Response Characteristics Analysis of Mistuned Bladed Disks". Journal of Engineering for Gas Turbines and Power 132, nr 12 (20.08.2010). http://dx.doi.org/10.1115/1.4001054.

Pełny tekst źródła
Streszczenie:
The problem of determining the worst-case mistuning pattern and the robust maximum mistuned forced response of a mistuned bladed rotor is formulated and solved as an optimization problem. This approach is exemplified on a two-degrees-of-freedom per blade disk model, two three-degrees-of-freedom per blade disk models, and a mistuned two-stage bladed rotor. The results of the optimum search for the worst-case mistuning patterns for the lumped parameter models are analyzed, revealing that the maximum blade forced response in a mistuned bladed disk is associated with a mistuning jump, which causes strong localization of the vibration response in a particular blade. The mistuning jump-localization phenomenon has been observed for all of the numerical examples, and it is also demonstrated that the highest response is always experienced by a blade at which the mistuning value jumps. The two- and three-degrees-of-freedom per blade disk models are also used to determine their sensitivity coefficients with respect to mistuning variation. Studies show that there is no threshold of mistuning beyond which the maximum forced response levels off, or even drops, as the degree of mistuning is increased further. The maximum magnification factor is found to increase as the mistuning level is increased and reaches a maximum value at the upper limit of the mistuning level. The influence of multistage coupling is revealed by comparing the results of the single-stage analysis with those of the multistage case. The computed results have been compared with Monte Carlo simulation, demonstrating that the accuracy and efficiency of the maximum amplitude magnification factor computed by the presented method can be better than those of Monte Carlo simulation.
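As a rough illustration of the kind of lumped-parameter model and Monte Carlo comparison mentioned above (not the authors' formulation or optimization method), the sketch below assembles a one-degree-of-freedom-per-blade cyclic model with random stiffness mistuning under engine-order excitation and estimates the amplitude magnification factor by sampling; all parameter values are illustrative assumptions.

```python
import numpy as np

def blade_response(deltas, n_blades=24, m=1.0, k=1.0, kc=0.05, c=0.005, engine_order=3):
    """Peak forced-response amplitude of a one-DOF-per-blade cyclic lumped model.
    deltas: fractional stiffness mistuning per blade (illustrative parameters)."""
    n = n_blades
    K = np.zeros((n, n))
    for j in range(n):
        K[j, j] = k * (1 + deltas[j]) + 2 * kc     # blade + coupling stiffness
        K[j, (j + 1) % n] -= kc
        K[j, (j - 1) % n] -= kc
    M = m * np.eye(n)
    C = c * np.eye(n)
    omega = np.sqrt(k / m)                          # excite near the blade-alone frequency
    j = np.arange(n)
    F = np.exp(1j * 2 * np.pi * engine_order * j / n)   # engine-order travelling-wave force
    X = np.linalg.solve(K + 1j * omega * C - omega**2 * M, F)
    return np.abs(X).max()

tuned = blade_response(np.zeros(24))
rng = np.random.default_rng(1)
samples = [blade_response(rng.normal(scale=0.02, size=24)) / tuned for _ in range(2000)]
print("Monte Carlo estimate of the maximum magnification factor:", max(samples))
```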
Style APA, Harvard, Vancouver, ISO itp.
32

Cîrstoiu, Cristina, Zoë Holmes, Joseph Iosue, Lukasz Cincio, Patrick J. Coles i Andrew Sornborger. "Variational fast forwarding for quantum simulation beyond the coherence time". npj Quantum Information 6, nr 1 (18.09.2020). http://dx.doi.org/10.1038/s41534-020-00302-0.

Pełny tekst źródła
Streszczenie:
Abstract Trotterization-based, iterative approaches to quantum simulation (QS) are restricted to simulation times less than the coherence time of the quantum computer (QC), which limits their utility in the near term. Here, we present a hybrid quantum-classical algorithm, called variational fast forwarding (VFF), for decreasing the quantum circuit depth of QSs. VFF seeks an approximate diagonalization of a short-time simulation to enable longer-time simulations using a constant number of gates. Our error analysis provides two results: (1) the simulation error of VFF scales at worst linearly in the fast-forwarded simulation time, and (2) our cost function’s operational meaning as an upper bound on average-case simulation error provides a natural termination condition for VFF. We implement VFF for the Hubbard, Ising, and Heisenberg models on a simulator. In addition, we implement VFF on Rigetti’s QC to demonstrate simulation beyond the coherence time. Finally, we show how to estimate energy eigenvalues using VFF.
Style APA, Harvard, Vancouver, ISO itp.
33

Powell, Adrian, Richard Kubiak, Evan Parker, Keith Bowen i Milena Polcarova. "X-ray Characterisation of a V90S Sige MBE System". MRS Proceedings 208 (1990). http://dx.doi.org/10.1557/proc-208-161.

Pełny tekst źródła
Streszczenie:
ABSTRACT X-ray techniques have enabled fast characterisation of a new VG Semicon V90S MBE system for the growth of Si and SiGe material. X-ray rocking curves have allowed characterisation of SiGe layers to tolerances as tight as ±0.1 at.%. X-Y uniformity measurements demonstrated that layers are grown to ±0.5% over a 150 mm wafer. Information on the flux rate uniformity during growth can be obtained from analysis of superlattice structures. These enable calculation of the “worst case” flux variation during a growth run. An empirical relationship has been found that enables prediction of the degree of residual strain remaining in a buffer structure grown beyond the metastable critical thickness.
Style APA, Harvard, Vancouver, ISO itp.
34

Klootwijk, Stefan, i Bodo Manthey. "Probabilistic Analysis of Optimization Problems on Sparse Random Shortest Path Metrics". Algorithmica, 26.08.2023. http://dx.doi.org/10.1007/s00453-023-01167-3.

Pełny tekst źródła
Streszczenie:
Abstract Simple heuristics for (combinatorial) optimization problems often show a remarkable performance in practice. Worst-case analysis often falls short of explaining this performance. Because of this, “beyond worst-case analysis” of algorithms has recently gained a lot of attention, including probabilistic analysis of algorithms. The instances of many (combinatorial) optimization problems are essentially a discrete metric space. Probabilistic analysis for such metric optimization problems has nevertheless mostly been conducted on instances drawn from Euclidean space, which provides a structure that is usually heavily exploited in the analysis. However, most instances from practice are not Euclidean. Little work has been done on metric instances drawn from other, more realistic, distributions. Some initial results have been obtained in recent years, where random shortest path metrics generated from dense graphs (either complete graphs or Erdős–Rényi random graphs) have been used so far. In this paper we extend these findings to sparse graphs, with a focus on sparse graphs with ‘fast growing cut sizes’, i.e. graphs for which |δ(U)| = Ω(|U|^ε) for some constant ε ∈ (0, 1) for all subsets U of the vertices, where δ(U) is the set of edges connecting U to the remaining vertices. A random shortest path metric is constructed by drawing independent random edge weights for each edge in the graph and setting the distance between every pair of vertices to the length of a shortest path between them with respect to the drawn weights. For such instances generated from a sparse graph with fast growing cut sizes, we prove that the greedy heuristic for the minimum distance maximum matching problem, and the nearest neighbor and insertion heuristics for the traveling salesman problem all achieve a constant expected approximation ratio. Additionally, for instances generated from an arbitrary sparse graph, we show that the 2-opt heuristic for the traveling salesman problem also achieves a constant expected approximation ratio.
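To make the construction concrete, here is a minimal sketch (not the authors' code) that builds a random shortest path metric from a sparse graph with i.i.d. exponential edge weights and runs the nearest-neighbour TSP heuristic on it; the grid graph and Exp(1) weights are illustrative choices.

```python
import heapq
import random

def random_shortest_path_metric(n_side=6, seed=0):
    """Random shortest path metric on a sparse grid graph: draw i.i.d. Exp(1)
    edge weights and set d(u, v) to the shortest-path distance between u and v."""
    rng = random.Random(seed)
    nodes = [(i, j) for i in range(n_side) for j in range(n_side)]
    adj = {v: [] for v in nodes}
    for (i, j) in nodes:
        for (di, dj) in ((1, 0), (0, 1)):
            u = (i + di, j + dj)
            if u in adj:
                w = rng.expovariate(1.0)
                adj[(i, j)].append((u, w))
                adj[u].append(((i, j), w))
    dist = {}
    for s in nodes:                                # Dijkstra from every source
        d = {s: 0.0}
        pq = [(0.0, s)]
        while pq:
            du, u = heapq.heappop(pq)
            if du > d.get(u, float("inf")):
                continue
            for v, w in adj[u]:
                if du + w < d.get(v, float("inf")):
                    d[v] = du + w
                    heapq.heappush(pq, (d[v], v))
        dist[s] = d
    return nodes, dist

def nearest_neighbor_tour(nodes, dist):
    """Nearest-neighbour TSP heuristic on the resulting metric."""
    start = nodes[0]
    tour, unvisited = [start], set(nodes[1:])
    while unvisited:
        cur = tour[-1]
        nxt = min(unvisited, key=lambda v: dist[cur][v])
        tour.append(nxt)
        unvisited.remove(nxt)
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + [start]))

nodes, dist = random_shortest_path_metric()
print("NN tour length:", nearest_neighbor_tour(nodes, dist))
```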
Style APA, Harvard, Vancouver, ISO itp.
35

Hawkins, H. Gene. "Sensitivity Analysis of Sign Luminance as Function of Input Factors". Transportation Research Record: Journal of the Transportation Research Board, 15.04.2022, 036119812210785. http://dx.doi.org/10.1177/03611981221078573.

Pełny tekst źródła
Streszczenie:
The luminance, or brightness, produced by a traffic sign at night is a function of several different factors including the performance of the retroreflective sheeting, the light output of the headlamp, the geometry between the headlamps and driver, and the position of the sign relative to the vehicle. In this study, the author calculated luminance for 27 different combinations of the factors affecting sign luminance. The result is eight figures comparing the sign luminance for the analyzed conditions to the luminance required for a dark rural or suburban roadway environment. The results indicated that there can be a wide range in luminance performance depending on the input variables, many of which are beyond the control of transportation agencies. The final comparison found that the difference between the best- and worst-case combinations of input variables resulted in a luminance difference of over 2,000%. Such a wide range of performance measurement means that traffic sign sheeting that is selected based on performance for a specific set of circumstances may perform in a vastly different manner because of other factors such as the type of vehicle, the headlamps on the vehicle, the roadway geometry, or the position of a sign relative to the roadway and vehicle.
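As a rough illustration of this type of factor sweep (with hypothetical factor levels, not the paper's 27 combinations), the sketch below combines sheeting retroreflectivity, headlamp intensity, and viewing distance using the standard relation L = R_A · E with inverse-square illuminance, and reports the spread between the best- and worst-case combinations.

```python
from itertools import product

# Hypothetical factor levels (not the paper's values): retroreflectivity R_A in
# cd/lx/m^2 for three sheeting types, headlamp intensity toward the sign in cd,
# and headlamp-to-sign distance in m.
sheeting = {"Type I": 70, "Type IV": 300, "Type XI": 580}
headlamp = {"halogen": 12000, "HID": 20000, "LED": 28000}
distance = {"near": 100, "mid": 180, "far": 260}

def sign_luminance(r_a, intensity_cd, distance_m):
    # Illuminance at the sign from the inverse-square law, then L = R_A * E.
    illuminance = intensity_cd / distance_m ** 2      # lux
    return r_a * illuminance                          # cd/m^2

results = {}
for s, h, d in product(sheeting, headlamp, distance):
    results[(s, h, d)] = sign_luminance(sheeting[s], headlamp[h], distance[d])

best, worst = max(results.values()), min(results.values())
print(f"luminance range: {worst:.1f} to {best:.1f} cd/m^2 "
      f"({(best - worst) / worst:.0%} difference)")
```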
Style APA, Harvard, Vancouver, ISO itp.
36

Akinyemi, Ayodeji Stephen, Kabeya Musasa i Innocent E. Davidson. "Analysis of voltage rise phenomena in electrical power network with high concentration of renewable distributed generations". Scientific Reports 12, nr 1 (12.05.2022). http://dx.doi.org/10.1038/s41598-022-11765-w.

Pełny tekst źródła
Streszczenie:
Abstract The increasing penetration of renewable distributed generation (RDG) into power systems has proven to bring both positive and negative impacts. The occurrence of under-voltage at the far end of a conventional electrical distribution network (DN) may no longer raise concern once RDG is integrated into the power system. However, the penetration of RDG into a power system may cause problems such as voltage rise or over-voltage and reverse power flow at the Point of Common Coupling (PCC) between the RDG and the DN. This research paper presents the impact of the voltage rise effect and the reverse power flow constraint in a power system with a high concentration of RDG. The analysis is conducted on a sample DN, i.e., the IEEE 13-bus test system, with RDG penetration, considering the most critical scenario of low power demand in the DN and peak power injection by the RDG. For studying the impact of voltage rise and reverse power flow, a mathematical model of a DN integrating RDG is developed. Furthermore, a controller incorporating an advanced control algorithm, installed at the PCC between the DN and the RDG, is proposed to regulate the voltage rise effect and to mitigate the reverse power flow when operating in the worst-case scenario of minimum load and maximum generation from the RDG. The proposed control strategy also mitigates voltage and current harmonic distortion, improves the power factor, and maintains voltage stability at the PCC. The simulations are carried out using MATLAB/Simulink software. Finally, recommendations are provided for power producers to counteract the effects of voltage rise at the PCC. The study demonstrates that the voltage at the PCC can be sustained with a high concentration of RDG during a worst-case scenario, without reverse power flow or voltage rise beyond grid code limits.
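As a back-of-the-envelope illustration of the voltage rise effect at the PCC (not the paper's IEEE 13-bus model or proposed controller), the sketch below uses the common radial-feeder approximation ΔV ≈ (P·R + Q·X)/V for the worst-case scenario of minimum load and maximum RDG output; all numbers are hypothetical.

```python
def pcc_voltage_rise(p_inj_w, q_inj_var, r_ohm, x_ohm, v_nom=400.0):
    """Approximate voltage rise at the PCC of a radial feeder:
    dV ~= (P*R + Q*X) / V, with P the net active power injected by the RDG
    (generation minus local load) and Q the net reactive power injection.
    Textbook approximation for illustration, not the paper's detailed model."""
    return (p_inj_w * r_ohm + q_inj_var * x_ohm) / v_nom

# Worst-case scenario: minimum load, maximum generation (hypothetical numbers).
p_net = 50e3 - 5e3          # 50 kW generated, 5 kW local load
dv = pcc_voltage_rise(p_net, 0.0, r_ohm=0.08, x_ohm=0.06)
print(f"voltage rise: {dv:.1f} V ({dv / 400 * 100:.1f}% of nominal)")

# Absorbing reactive power (negative Q) at the PCC is one common mitigation:
print(f"with Q = -20 kvar: {pcc_voltage_rise(p_net, -20e3, 0.08, 0.06):.1f} V")
```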
Style APA, Harvard, Vancouver, ISO itp.
37

Jalota, Devansh, Dario Paccagnan, Maximilian Schiffer i Marco Pavone. "Online Routing Over Parallel Networks: Deterministic Limits and Data-driven Enhancements". INFORMS Journal on Computing, 22.02.2023. http://dx.doi.org/10.1287/ijoc.2023.1275.

Pełny tekst źródła
Streszczenie:
Over the past decade, GPS-enabled traffic applications such as Google Maps and Waze have become ubiquitous and have had a significant influence on billions of daily commuters’ travel patterns. A consequence of the online route suggestions of such applications, for example, via greedy routing, has often been an increase in traffic congestion since the induced travel patterns may be far from the system optimum. Spurred by the widespread impact of traffic applications on travel patterns, this work studies online traffic routing in the context of capacity-constrained parallel road networks and analyzes this problem from two perspectives. First, we perform a worst-case analysis to identify the limits of deterministic online routing. Although we find that deterministic online algorithms achieve finite, problem/instance-dependent competitive ratios in special cases, we show that for a general setting the competitive ratio is unbounded. This result motivates us to move beyond worst-case analysis. Here, we consider algorithms that exploit knowledge of past problem instances and show how to design data-driven algorithms whose performance can be quantified and formally generalized to unseen future instances. We then present numerical experiments based on an application case for the San Francisco Bay Area to evaluate the performance of the proposed data-driven algorithms compared with the greedy algorithm and two look-ahead heuristics with access to additional information on the values of time and arrival time parameters of users. Our results show that the developed data-driven algorithms outperform commonly used greedy online-routing algorithms. Furthermore, our work sheds light on the interplay between data availability and achievable solution quality. History: Accepted by Andrea Lodi, Area Editor for Design and Analysis of Algorithms–Discrete. Funding: This work was supported by National Science Foundation (NSF) Award 1830554 and by the German Research Foundation (DFG) under [Grant 449261765]. Supplemental Material: The e-companion is available at https://doi.org/10.1287/ijoc.2023.1275 .
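As a minimal illustration of the greedy online-routing baseline discussed above (not the proposed data-driven algorithms), the sketch below assigns each arriving user to the feasible parallel link with the lowest current travel time; the linear latency functions, capacities, and arrival count are hypothetical.

```python
# Greedy online routing over capacity-constrained parallel links: each arriving
# user is assigned to the feasible link with the lowest current travel time.
links = [
    {"free_flow": 10.0, "slope": 0.05, "capacity": 60, "load": 0},
    {"free_flow": 15.0, "slope": 0.02, "capacity": 80, "load": 0},
    {"free_flow": 25.0, "slope": 0.01, "capacity": 200, "load": 0},
]

def latency(link):
    # Linear congestion model: travel time grows with the number of users routed.
    return link["free_flow"] + link["slope"] * link["load"]

total_travel_time = 0.0
for _ in range(150):                               # online arrival of 150 users
    feasible = [l for l in links if l["load"] < l["capacity"]]
    best = min(feasible, key=latency)              # greedy choice
    best["load"] += 1
    total_travel_time += latency(best)

print("loads:", [l["load"] for l in links], "total travel time:", round(total_travel_time, 1))
```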
Style APA, Harvard, Vancouver, ISO itp.
38

Sala, Rafał, Kamil Węglarz i Andrzej Suchecki. "Analysis of lubricating oil degradation and its influence on brake specific fuel consumption of a light-duty compression-ignition engine running a durability cycle on a test stand". Combustion Engines, 5.08.2023. http://dx.doi.org/10.19206/ce-169488.

Pełny tekst źródła
Streszczenie:
The Euro 6 emission standard requires compliance with legal exhaust emission limits for newly registered vehicles and obligates light-duty vehicle manufacturers to respect the 160,000 km durability requirements for in-service conformity. Although there is no legal limit set for fuel consumption, manufacturers are obligated to decrease the carbon footprint of their vehicle fleets in order to achieve carbon-neutral mobility beyond 2035. The aim of this paper is to analyse the impact of the degradation of various oils and viscosity grades on the change in brake specific fuel consumption (BSFC) measured over a standardized durability test cycle. Each oil candidate underwent 300 h of durability test running performed on a test bed, without any oil changes. The purpose of the laboratory test was to reproduce the worst-case operating conditions and degradation process of the long-life engine oil type that can be experienced during extreme real-life driving of a vehicle. In order to define the influence of engine oil deterioration on the BSFC profile, the engine operating parameters were continually monitored throughout the test run. Additionally, chemical analysis of the oil was performed and the solid deposits formed on the turbocharger's compressor side were evaluated. The test results revealed differences of up to 3.5% in the BSFC values between the oil candidates tested over the durability cycle. The observed BSFC increase was directly related to the decrease in engine efficiency and can cause higher fuel consumption of the engine, which in turn has an adverse effect on environmental protection goals.
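For reference, BSFC is simply fuel mass flow divided by brake power; the short sketch below (with hypothetical readings, not the paper's measurements) computes it at a single test point and the percentage change between fresh and degraded oil.

```python
def bsfc_g_per_kwh(fuel_flow_kg_per_h, brake_power_kw):
    """Brake specific fuel consumption: fuel mass flow divided by brake power."""
    return fuel_flow_kg_per_h * 1000.0 / brake_power_kw

# Hypothetical test-point readings at the start and end of a durability run.
fresh = bsfc_g_per_kwh(fuel_flow_kg_per_h=12.0, brake_power_kw=55.0)
degraded = bsfc_g_per_kwh(fuel_flow_kg_per_h=12.4, brake_power_kw=55.0)
print(f"BSFC fresh oil: {fresh:.1f} g/kWh, after 300 h: {degraded:.1f} g/kWh "
      f"({(degraded - fresh) / fresh:.1%} increase)")
```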
Style APA, Harvard, Vancouver, ISO itp.
39

Citro, Rodolfo, Ospedale San Luca, Mario Previtali, Daniella Bovelli, Costantino Astarita, Quirino Ciampi, Olga Vriz i in. "Abstract 4005: Morning Preference In The Onset of Tako-Tsubo Cardiomyopathy". Circulation 118, suppl_18 (28.10.2008). http://dx.doi.org/10.1161/circ.118.suppl_18.s_816-b.

Pełny tekst źródła
Streszczenie:
Background: The aim of this study is to assess the circadian variation in the occurrence of Tako-tsubo cardiomyopathy (TTC). Methods: We evaluated 78 consecutive pts (76 F, median age 61 yy) with TTC occurring between 2002 and 2008 at the Tako-Tsubo Cardiomyopathy Italian Network (TIN) Centers. All pts fulfilled the following diagnostic criteria for TTC: transient akinesia/dyskinesia beyond a single major coronary artery vascular distribution; no angiographic evidence of significant coronary artery disease; new electrocardiographic changes; absence of intracranial bleeding, pheochromocytoma, myocarditis or hypertrophic cardiomyopathy. The time of symptom onset during the day was categorized into four 6-hour intervals (night: 00:00–06:00; morning: 06:00–12:00; afternoon: 12:00–18:00; evening: 18:00–24:00) for circadian analysis. Information on the timing of the event that allowed categorization into 1 of the 4 groups was available in 70 of 78 cases (90%). The distribution was tested for uniformity by the χ2 goodness-of-fit test. Results: A circadian pattern, characterized by a significant peak in the morning (χ2 = 13.38, p = 0.004), was found in patients with TTC (figure). This pattern persisted even after adjustment according to a worst-case scenario in which untimed cases were arbitrarily assigned equally to each period according to the null amplitude hypothesis. Conclusions: Our data indicate a morning prevalence in the onset of TTC, similar to other cardiovascular conditions, which might be related to circadian catecholamine activity.
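As an illustration of the uniformity test described (a χ2 goodness-of-fit test over the four 6-hour bins), here is a minimal sketch using scipy; the onset counts are hypothetical, since the abstract reports only the test statistic and p-value.

```python
from scipy.stats import chisquare

# Hypothetical onset counts for the four 6-hour intervals (night, morning,
# afternoon, evening); the study reports chi2 = 13.38, p = 0.004, but the raw
# counts are not given in the abstract, so these numbers only illustrate the test.
observed = [12, 29, 16, 13]          # 70 timed cases in total
stat, p = chisquare(observed)        # default null hypothesis: uniform across the 4 bins
print(f"chi2 = {stat:.2f}, p = {p:.3f}")
```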
Style APA, Harvard, Vancouver, ISO itp.
40

Hawkes, Martine. "Transmitting Genocide: Genocide and Art". M/C Journal 9, nr 1 (1.03.2006). http://dx.doi.org/10.5204/mcj.2592.

Pełny tekst źródła
Streszczenie:
In July 2005, while European heads of state attended memorials to mark the ten year anniversary of the Srebrenica genocide and court trials continued in The Hague at the International Criminal Tribunal for the former Yugoslavia (ICTY), Bosnian-American artist Aida Sehovic presented the aftermath of this genocide on a day-to-day level through her art installation in memory of the victims of Srebrenica. Drawing on the Bosnian tradition of coming together for coffee, this installation, ‘Što te Nema?’ (Why are you not here?), comprised a collection of tiny white porcelain cups (‘fildzans’ in Bosnian) arranged in the geographic shape of Srebrenica in the lobby of the United Nations building in New York. It was to represent Europe’s worst mass killing since the Second World War, which took place in July 1995 in the Bosnian town of Srebrenica. Up to 8,000 Bosnian Muslim (Bosniak) men and boys were killed when Bosnian Serb troops overran the internationally protected enclave (The Guardian). The cups were gathered from Bosnian families in the United States of America and Bosnia & Herzegovina, and in particular from members of ‘Zene Srebrenice’ (‘the women of Srebrenica’). Each of the 1,705 cups represented one exhumed, identified and re-buried victim of the Srebrenica genocide (1,705 at July 2005). The cups were filled either with coffee or, in the case of victims not yet 18 and therefore not old enough at the time of their death to have participated in the coffee tradition, with sugar cubes. The names and birth dates of the victims were recited on an audio loop. Genocide is the methodical destruction of the existence of a people. It is noted through the ‘UN Convention on the Prevention and Punishment of the Crime of Genocide’ that genocide has inflicted great losses on humanity throughout history (UNHCHR). Tribunals, such as the ICTY, with their focus on justice, are formal and responsibility-based modes of responding to genocide. Society seeks justice, but raising awareness around genocide through the telling and hearing of the individual story is also required. Responding to genocide and communicating its existence through artistic expression has been a valuable way of bearing witness to such a horrendous and immense crime against humanity. Art can address the gaps in healing and understanding that cannot be addressed through tribunals. From Picasso’s ‘Guernica’, to the children’s pictures triggered by the Rwandan genocide, to the ‘War Rugs’ of Afghanistan and to vast installations such as Peter Eisenman’s recently opened Holocaust memorial in Berlin; art has proved a powerful medium for representing such atrocities and attempting to find healing after genocide. Artworks such as Sehovic’s ‘Što te Nema?’ give insight into the personal experience of genocide while challenging indifference and maintaining memory. For the affected communities, this addresses the impact on individuals; the human cost and the loss of everyday experiences. As Srebrenica survivor Emir Suljagic comments, “when you tell someone that 10,000 people died, they cannot understand or imagine that. What I want to say is that these people were peasants, car mechanics or masons. That they had daughters, mothers, that they leave someone behind; that a lot of people are hurt by this person’s death” (qtd. in Vulliamy). 
‘Što te Nema?’ transmits this personal dimension of genocide by using an everyday situation of showing hospitality with family and friends, which is familiar and practised in most cultural experiences, juxtaposed with the loss of a family member who is missing as a result of genocide. This transmits the notion of genocide into the sphere of common experience, attachment and emotion. It acts as an invitation to explore the impact of genocide beyond the impersonal statistics and the aloof legalese of the courtroom drama. Beyond providing a representation of the facts or emotions around genocide, art provides a way of responding to a crime, which, by its nature, is generally difficult to comprehend. Art can offer a mode of giving testimony and providing catharsis about events which are not easily approached or discussed. As Sehovic says of ‘Što te Nema?’ (it) is a way of healing for Bosnians, coming to terms with this terrible thing that happened to us … it is building a bridge of understanding where Bosnian people are coming from, because it is very hard to talk about these things (qtd. in Vermont Quarterly Magazine). For its receiver, genocide art, with all its capacity to arouse our emotions and empathy, transmits something that we cannot see or engage with in the factual reporting of genocide or in a political analysis of the topic. Through art, it is possible to encounter genocide at an individual, personal level. As Mödersheim points out, we seem to need symbolic expressions to help us understand, and deal with the complex nature of events so horrific that reason and emotion fail to grasp their magnitude. To the intellect, many aspects of these experiences are unfathomable, and yet to keep our humanity we need to understand them … where words and explanations fail, we look for images (Mödersheim 18). An artist’s responses to genocide can vary from the need of survivors to create actual depictions of the atrocities, to more abstract portrayals of the emotional response to acts of genocide. Art that is created by survivors or witnesses to the genocide demonstrates a documentation and testament to what has occurred – a symbolic act of transmitting the personal experience of genocide. Artistic responses to genocide by those, such as Sehovic, who did not witness the event first hand, express how genocide “remains deeply felt to the point where we could not say it has ended” (Morris 329). Such art represents the continuation and global repercussions of genocide. The question of what ‘genocide art’ means to the neutral or removed viewer or society is also significant. Art is often associated with pleasure. Issues of mass killing and war are often not the types of topics one wishes to view on a trip to an art gallery. However, art has a more crucial function as a social reflector. It is often the reaction of non-acceptance of such artworks which indicates how society wishes to consider questions of genocide or of war in general. For example, Rayner Hoff’s 1932 war memorial ‘The Crucifixion of Civilisation 1914’ was rejected for display because it was considered too confronting and controversial in its depiction of a naked, tortured female victim of war in a Christ-like pose. As Picasso commented, “painting is not done to decorate apartments. It is an instrument of war for attack and defense against the enemy” (qtd. in Mödersheim 15). 
In discussing the art that emerged from the Sierra Leone Civil War, Ross notes, “as our stomachs and hearts turn over at such sights, we get a small taste of what the artists felt. Even as we look at the images and experience the horror, disgust and anger that comes with knowing that they really happened, we realise that if these images are to be understood as reports from the field, serving the same function as photojournalism, it means that we have been sheltered from this type of reporting from our own news sources” (Ross 39). Here, art can address the often cursory acknowledgment given to ‘events which happen in faraway places’ and lend an insight into the personal. As Adorno notes, “history in artworks is not something made, and history alone frees the work from being merely something posited or manufactured” (133). Here we see the indivisibility of the genocide (the ‘history’) from the artwork – that what is seen is not mere ‘depiction’ but art’s ability to turn the anonymous statistics or the unknown genocide into the realisation of a brutal annihilation of individual human beings – to bring history to life as it were. What the viewer does after viewing such art is perhaps immaterial; the important thing is that they now know. But why is it important to know and important to remember? It has been argued that genocides which occurred in places like Srebrenica and Rwanda happened because the international community did not know or refused to recognise the events to the point of initially declining to apply the term ‘genocide’ to Srebrenica and settling for the more sanitised term ‘ethnic cleansing’ (Bringa 196). It would be naïve and even condescending to argue that ‘Što te Nema?’ or any of the myriad other artistic responses to genocide have the possibility of undoing a genocide such as that which took place in Srebrenica, or even the hope of preventing another genocide. However, it is in transporting genocide into the personal realm that the message is transmitted and ignorance of the event can no longer be claimed. The concept of genocide can be too horrendous and vast to take in; art, whilst making it no less horrific, transmits the message to and confronts the viewer at a more direct and personal level. Such art provokes and provides a starting point for comment and debate. Art also stands as a lasting memorial to those who have lost their lives as a result of genocide and as a reminder to humanity that to ignore, underestimate or forget genocide makes possible its recurrence. References Adorno, Theodor. Aesthetic Theory. Trans. Robert Hullot-Kentor. Minneapolis: University of Minnesota Press, 1997. Bringa, Tone. “Averted Gaze: Genocide in Bosnia-Herzegovina 1992-1995.” Annihilating Difference: The Anthropology of Genocide. Ed. Alexander Laban Hinton. London: University of California Press, 2002. 194-225. Kohn, Rachael. “War Memorials, Sublime & Scandalous.” Radio National 14 August 2005. 12 December 2005 <http://www.abc.net.au/rn/relig/ark/stories/s1433477.htm>. Mödersheim, Sabine. “Art and War.” Representations of Violence: Art about the Sierra Leone Civil War. Eds. Chris Corcoran, Abu-Hassan Koroma, P.K. Muana. Chicago, 2004. 15-20. Morris, Daniel. “Jewish Artists in New York: The Holocaust Years.” American Jewish History 90.3 (September 2002): 329-331. Ross, Mariama. “Bearing Witness.” Representations of Violence: Art about the Sierra Leone Civil War. Eds. Chris Corcoran, Abu-Hassan Koroma, P.K. Muana. Chicago, 2004. 37-40. The Guardian. 
“Massacre at Srebrenica: Interactive Guide.” May 2005. 5 November 2005 <http://www.guardian.co.uk/flash/0,5860,474564,00.html>. United Nations. “International Criminal Tribunal for the Former Yugoslavia.” 10 January 2006 <http://www.un.org/icty/>. UNHCHR. “Convention on the Prevention and Punishment of the Crime of Genocide.” 1951. 3 January 2006 <http://www.unhchr.ch/html/menu3/b/p_genoci.htm>. Vermont Quarterly Magazine. “Cups of Memory.” Winter 2005. 1 December 2005 <http://www.uvm.edu/~uvmpr/vq/vqwinter05/aidasehovic.html>. Vulliamy, Ed. “Srebrenica Ten Years On.” June 2005. 10 February 2006 <http://www.opendemocracy.net/conflict-yugoslavia/srebrenica_2651.jsp>. Citation reference for this article MLA Style Hawkes, Martine. "Transmitting Genocide: Genocide and Art." M/C Journal 9.1 (2006). <http://journal.media-culture.org.au/0603/09-hawkes.php>. APA Style Hawkes, M. (Mar. 2006) "Transmitting Genocide: Genocide and Art," M/C Journal, 9(1). Retrieved from <http://journal.media-culture.org.au/0603/09-hawkes.php>.
Style APA, Harvard, Vancouver, ISO itp.
41

Habel, Chad Sean. "Doom Guy Comes of Age: Mediating Masculinities in Power Fantasy Video Games". M/C Journal 21, nr 2 (25.04.2018). http://dx.doi.org/10.5204/mcj.1383.

Pełny tekst źródła
Streszczenie:
Introduction: Game Culture and GenderAs texts with the potential to help mediate specific forms of identity, video games are rich and complex sites for analysis. A tendency, however, still exists in scholarship to treat video games as just another kind of text, and work that explores the expression of masculine identity persists in drawing from cinematic analysis without proper consideration of game design and how these games are played (Triana). For example, insights from studies into horror cinema may illuminate the relationship between players and game systems in survival horror video games (Habel & Kooyman), but further study is needed to explore how people interact with the game.This article aims to build towards a scholarly definition of the term “Power Fantasy”, a concept that seems well established in wider discourse but is not yet well theorised in the scholarly literature. It does so through a case of the most recent reboot of Doom (2016), a game that in its original incarnation established an enduring tradition for high-action Power Fantasy. In the first-person shooter game Doom, the player fills the role of the “Doom Guy”, a faceless hero who shuttles between Earth and Hell with the sole aim of eviscerating demonic hordes as graphically as possible.How, then, do we begin to theorise the kind of automediation that an iconic game text like Doom facilitates? Substantial work has been done to explore player identification in online games (see Taylor; Yee). Shaw (“Rethinking”) suggests that single-player games are unexplored territory compared to the more social spaces of Massively Multiplayer Online games and other multiplayer experiences, but it is important to distinguish between direct identification with the avatar per se and the ways in which the game text mediates broader gender constructions.Abstract theorisation is not enough, though. To effectively understand this kind of automediation we also need a methodology to gain insights into its processes. The final part of this article, therefore, proposes the analysis of “Let’s Play” videos as a kind of gender identity performance which gives insight into the automediation of dominant masculine gender identities through Power Fantasy video games like Doom. This reflexive performance works to denaturalise gender construction rather than reinforce stable hegemonic identities.Power Fantasy and Gender IdentityPower Fantasy has become an established trope in online critiques and discussions of popular culture. It can be simply defined as “character imagines himself taking revenge on his bullies” (TVTropes). This trope takes on special resonance in video games, where the players themselves live out the violent revenge fantasy in the world of the game.The “power fantasy” of games implies escapism and meaninglessness, evoking outsize explosions and equally outsized displays of dominance. A “power gamer” is one who plays with a single-minded determination to win, at the expense of nuance, social relationships between players, or even their own pleasure in play. (Baker)Many examples apply this concept of Power Fantasy in video games: from God of War to Metroid: Prime and Grand Theft Auto, this prevalent trope of game design uses a kind of “agency mechanics” (Habel & Kooyman) to convince the player that they are becoming increasingly skilful in the game, when in reality the game is simply decreasing in difficulty (PBS Digital Studios). 
The operation of the Power Fantasy trope is also gendered; in a related trope known as “I Just Want To Be a Badass”, “males are somewhat more prone to harbour [the] wish” to feel powerful (TVTropes). More broadly, even though the game world is obviously not real, playing it requires “an investment in and commitment to a type of masculine performance that is based on the Real (particularly if one is interested in ‘winning’, pummelling your opponent, kicking ass, etc.)” (Burrill 2).Indeed, there is a perceived correlation (if not causation) between the widespread presence of Power Fantasy video games and how “game culture as it stands is shot through with sexism, racism, homophobia, and other biases” (Baker). Golding and van Deventer undertake an extended exploration of this disconcerting side of game culture, concluding that games have “become a venue for some of the more unsophisticated forms of patriarchy” (213) evidenced in the highly-publicised GamerGate movement. This saw an alignment between the label of “gamer” and extreme misogyny, abuse and harassment of women and other minorities in the industry.We have, then, a tentative connection between dominant gameplay forms based on high skill that may be loosely characterised as “Power Fantasy” and some of the most virulent toxic gender expressions seen in recent times. More research is needed to gain a clearer understanding of precisely what Power Fantasy is. Baker’s primary argument is that “power” in games can also be characterised as “power to” or “power with”, as well as the more traditional “power over”. Kurt Squire uses the phrase “Power Fantasy” as a castaway framing for a player who seeks an alternative reward to the usual game progression in Sid Meier’s Civilisation. More broadly, much scholarly work concerning gameplay design and gender identity has been focussed on the hot-button question of videogame violence and its connection to real-world violence, a question that this article avoids since it is well covered elsewhere. Here, a better understanding of the mediation of gender identity through Power Fantasy in Doom can help to illuminate how games function as automedia.Auto-Mediating Gender through Performance in Doom (2016) As a franchise, Doom commands near-incomparable respect as a seminal text of the first-person shooter genre. First released in 1993, it set the benchmark for 3D rendered graphics, energetic sound design, and high-paced action gameplay that was visceral and deeply immersive. It is impossible to mention more recent reboots without recourse to its first seminal instalment and related game texts: Kim Justice suggests a personal identification with it in a 29-minute video analysis entitled “A Personal History of Demon Slaughter”. Doom is a cherished game for many players, possibly because it evokes memories of “boyhood” gaming and all its attendant gender identity formation (Burrill).This identification also arises in livestreams and playthroughs of the game. YouTuber and game reviewer Markiplier describes nostalgically and at lengths his formative experiences playing it (and recounts a telling connection with his father who, he explains, introduced him to gaming), saying “Doom is very important to me […] this was the first game that I sat down and played over and over and over again.” In contrast, Wanderbots confesses that he has never really played Doom, but acknowledges its prominent position in the gaming community by designating himself outside the identifying category of “Doom fan”. 
He states that he has started playing due to “gushing” recommendations from other gamers. The nostalgic personal connection is important, even in absentia.For the most part, the critical and community response to the 2016 version of Doom was approving: Gilroy admits that it “hit all the right power chords”, raising the signature trope in reference to both gameplay and music (a power chord is a particular technique of playing heavy metal guitar often used in heavy metal music). Doom’s Metacritic score is currently a respectable 85, and, the reception is remarkably consistent between critics and players, especially for such a potentially divisive game (Metacritic). Commentators tend to cite its focus on its high action, mobility, immersion, sound design, and general faithfulness to the spirit of the original Doom as reasons for assessments such as “favourite game ever” (Habel). Game critic Yahtzee’s uncharacteristically approving video review in the iconic Zero Punctuation series is very telling in its assessment of the game’s light narrative framing:Doom seems to have a firm understanding of its audience because, while there is a plot going on, the player-character couldn’t give a half an ounce of deep fried shit; if you want to know the plot then pause the game and read all the fluff text in the character and location database, sipping daintily from your pink teacup full of pussy juice, while the game waits patiently for you to strap your bollocks back on and get back in the fray. (Yahtzee)This is a strident expression of the gendered expectations and response to Doom’s narratological refusal, which is here cast as approvingly masculine and opposed to a “feminine” desire for plot or narrative. It also feeds into a discourse which sees the game as one which demands skill, commitment, and an achievement orientation cast within an exclusivist ideology of “toxic meritocracy” (Paul).In addition to examining reception, approaches to understanding how Doom functions as a “Power Fantasy” or “badass” trope could take a variety of forms. It is tempting to undertake a detailed analysis of its design and gameplay, especially since these feed directly into considerations of player interaction. This could direct a critical focus towards gameplay design elements such as traversal and mobility, difficulty settings, “glory kills”, and cinematic techniques in the same vein as Habel and Kooyman’s analysis of survival horror video games in relation to horror cinema. However, Golding and van Deventer warn against a simplistic analysis of decontextualized gameplay (29-30), and there is a much more intriguing possibility hinted at by Harper’s notion of “Play Practice”.It is useful to analyse a theoretical engagement with a video game as a thought-experiment. But with the rise of gaming as spectacle, and particularly gaming as performance through “Let’s Play” livestreams (or video on demand) on platforms such as TwitchTV and YouTube, it becomes possible to analyse embodied performances of the gameplay of such video games. This kind of analysis allows the opportunity for a more nuanced understanding of how such games mediate gender identities. For Judith Butler, gender is not only performed, it is also performative:Because there is neither an “essence” that gender expresses or externalizes nor an objective ideal to which gender aspires, and because gender is not a fact, the various acts of gender create the idea of gender, and without those acts, there would be no gender at all. 
(214)Let’s Play videos—that show a player playing a game in real time with their commentary overlaying the on screen action—allow us to see the performative aspects of gameplay. Let’s Plays are a highly popular and developing form: they are not simple artefacts by any means, and can be understood as expressive works in their own right (Lee). They are complex and multifaceted, and while they do not necessarily provide direct insights into the player’s perception of their own identification, with sufficient analysis and unpacking they help us to explore both the construction and denaturalisation of gender identity. In this case, we follow Josef Nguyen’s analysis of Let’s Plays as essential for expression of player identity through performance, but instead focus on how some identity construction may narrow rather than expand the diversity range. T.L. Taylor also has a monograph forthcoming in 2018 titled Watch Me Play: Twitch and the Rise of Game Live Streaming, suggesting the time is ripe for such analysis.These performances are clear in ways we have already discussed: for example, both SplatterCat and Markiplier devote significant time to describing their formative experiences playing Doom as a background to their gameplay performance, while Wanderbots is more distanced. There is no doubt that these videos are popular: Markiplier, for example, has attracted nearly 5 million views of his Doom playthrough. If we see gameplay as automediation, though, these videos become useful artefacts for analysis of gendered performance through gameplay.When SplatterCat discovers the suit of armor for the game’s protagonist, Doom Guy, he half-jokingly remarks “let us be all of the Doom Guy that we can possibly be” (3:20). This is an aspirational mantra, a desire for enacting the game’s Power Fantasy. Markiplier speaks at length about his nostalgia for the game, specifically about how his father introduced him to Doom when he was a child, and he expresses hopes that he will again experience “Doom’s original super-fast pace and just pure unadulterated action; Doom Guy is a badass” (4:59). As the action picks up early into the game, Markiplier expresses the exhilaration and adrenaline that accompany performances of this fastpaced, highly mobile kind of gameplay, implying that he is becoming immersed in the character and, by performing Doom Guy, inhabiting the “badass” role and thus enacting a performance of Power Fantasy:Doom guy—and I hope I’m playing Doom guy himself—is just the embodiment of kickass. He destroys everything and he doesn’t give a fuck about what he breaks in the process. (8:45)This performance of gender through the skilled control of Doom Guy is, initially, unambiguously mediated as Power Fantasy: in control, highly skilled, suffused with Paul’s ‘toxic meritocracy. A similar sentiment is expressed in Wanderbots’ playthrough when the player-character dispenses with narrative/conversation by smashing a computer terminal: “Oh I like this guy already! Alright. Doom Guy does not give a shit. It’s like Wolf Blascowicz [sic], but like, plus plus” (Wanderbots). This is a reference to another iconic first-person shooter franchise, Wolfenstein, which also originated in the 1980s and has experienced a recent successful reboot, and which operates in a similar Power Fantasy mode. 
This close alignment between these two streamers’ performances suggests significant coherence in both genre and gameplay design and the ways in which players engage with the game as a gendered performative space.Nonetheless, there is no simple one-to-one relationship here—there is not enough evidence to argue that this kind of gameplay experience leads directly to the kind of untrammelled misogyny we see in game culture more broadly. While Gabbiadini et al. found evidence in an experimental study that a masculinist ideology combined with violent video game mechanics could lead to a lack of empathy for women and girls who are victims of violence, Ferguson and Donnellan dispute this finding based on poor methodology, arguing that there is no evidence for a causal relationship between gender, game type and lack of empathy for women and girls. This inconclusiveness in the research is mirrored by an ambiguity in the gendered performance of males playing through Doom, where the Power Fantasy is profoundly undercut in multiple ways.Wanderbots’ Doom playthrough is literally titled ‘I have no idea what I’m Dooming’ and he struggles with particular mechanics and relatively simple progression tools early in the game: this reads against masculinist stereotypes of superior and naturalised gameplay skill. Markiplier’s performance of the “badass” Doom Guy is undercut at various stages: in encountering the iconic challenge of the game, he mentions that “I am halfway decent… not that good at video games” (9:58), and on the verge of the protagonist’s death he admits “If I die this early into my first video I’m going to be very disappointed, so I’m going to have to kick it up a notch” (15:30). This suggests that rather than being an unproblematic and simple expression of male power in a fantasy video game world, the gameplay performances of Power Fantasy games are ambiguous and contested, and not always successfully performed via the avatar. They therefore demonstrate a “kind of gender performance [which] will enact and reveal the performativity of gender itself in a way that destabilizes the naturalized categories of identity and desire” (Butler 211). This cuts across the empowered performance of videogame mastery and physical dominance over the game world, and suggests that the automediation of gender identity through playing video games is a complex phenomenon urgently in need of further theorisation.ConclusionUltimately, this kind of analysis of the mediation of hegemonic gender identities is urgent for a cultural product as ubiquitous as video games. The hyper-empowered “badass” digital avatars of Power Fantasy video games can be expected to have some shaping effect on the identities of those who play them, evidenced by the gendered gameplay performance of Doom briefly explored here. This is by no means a simple or unproblematic process, though. Much further research is needed to test the methodological insights possible by using video performances of gameplay as explorations of the auto-mediation of gender identities through video games.ReferencesBaker, Meguey. “Problematizing Power Fantasy.” The Enemy 1.2 (2015). 18 Feb. 2018 <http://theenemyreader.org/problematizing-power-fantasy/>.Burrill, Derrick. Die Tryin’: Video Games, Masculinity, Culture. New York: Peter Lang, 2008.Butler, Judith. Gender Trouble. 10th ed. London: Routledge, 2002.Ferguson, Christopher, and Brent Donellan. “Are Associations between “Sexist” Video Games and Decreased Empathy toward Women Robust? A Reanalysis of Gabbiadini et al. 
2016.” Journal of Youth and Adolescence 46.12 (2017): 2446–2459.Gabbiadini, Alessandro, Paolo Riva, Luca Andrighetto, Chiara Volpato, and Brad J. Bushman. “Acting like a Tough Guy: Violent-Sexist Video Games, Identification with Game Characters, Masculine Beliefs, & Empathy for Female Violence Victims.” PloS One 11.4 (2016). 14 Apr. 2018 <http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0152121>.Gilroy, Joab. “Doom: Review.” IGN, 16 May 2016. 22 Feb. 2018 <http://au.ign.com/articles/2016/05/16/doom-review-2?page=1>.Golding, Dan, and Leena van Deventer. Game Changers: From Minecraft to Misogyny, the Fight for the Future of Videogames. South Melbourne: Affirm, 2016.Habel, Chad, and Ben Kooyman. “Agency Mechanics: Gameplay Design in Survival Horror Video Games”. Digital Creativity 25.1 (2014):1-14.Habel, Chad. “Doom: Review (PS4).” Game Truck Australia, 2017. 22 Feb. 2018 <http://www.gametruckaustralia.com.au/review-doom-2016-ps4/>.Harper, Todd. The Culture of Digital Fighting Games: Performance and Practice. London: Routledge, 2013.Kim Justice. “Doom: A Personal History of Demon Slaughter.” YouTube, 16 Jan. 2017. 22 Feb. 2018 <https://www.youtube.com/watch?v=JtvoENhvkys>.Lee, Patrick. “The Best Let’s Play Videos Offer More than Vicarious Playthroughs.” The A.V. Club, 24 Apr. 2015. 22 Feb. 2018 <https://games.avclub.com/the-best-let-s-play-videos-offer-more-than-vicarious-pl-1798279027>.Markiplier. “KNEE-DEEP IN THE DEAD | DOOM – Part 1.” YouTube, 13 May 2016. 21 Feb. 2018 <https://www.youtube.com/watch?v=pCygvprsgIk>.Metacritic. “Doom (PS4).” 22 Feb. 2018 <http://www.metacritic.com/game/playstation-4/doom>.Nguyen, Josef. “Performing as Video Game Players in Let’s Plays.” Transformative Works and Cultures 22 (2016). <http://journal.transformativeworks.org/index.php/twc/article/view/698>.Paul, Christopher. The Toxic Meritocracy of Video Games: Why Gaming Culture Is the Worst. Minneapolis: U of Minnesota P, 2018.PBS Digital Studios. “Do Games Give Us Too Much Power?” 2017. 18 Feb. 2018 <https://www.youtube.com/watch?v=9COt-_3C0xI>.Shaw, Adrienne. “Do You Identify as a Gamer? Gender, Race, Sexuality, and Gamer Identity.” New Media and Society 14.1 (2012): 28-44.———. “Rethinking Game Studies: A Case Study Approach to Video Game Play and Identification.” Critical Studies in Media Communication 30.5 (2013): 347-361.SplatterCatGaming. “DOOM 2016 PC – Gameplay Intro – #01 Let's Play DOOM 2016 Gameplay.” YouTube, 13 May 2016. 21 Feb. 2018 <https://www.youtube.com/watch?v=tusgsunWEIs>.Squire, Kurt. “Open-Ended Video Games: A Model for Developing Learning for the Interactive Age.” The Ecology of Games: Connecting Youth, Games, and Learning. Ed. Katie Salen. Cambridge, MA: MIT P, 2008. 167–198.Boellstorff, Tom, Bonnie Nardi, Celia Pearce, and T.L. Taylor. Ethnography and Virtual Worlds: A Handbook of Method. Princeton, NJ: Princeton UP, 2012.Triana, Benjamin. “Red Dead Maculinity: Constructing a Conceptual Framework for Analysing the Narrative and Message Found in Video Games.” Journal of Games Criticism 2.2 (2015). 12 Apr. 2018 <http://gamescriticism.org/articles/triana-2-2/>.TVTropes. Playing With / Power Fantasy. 18 Feb. 2018 <http://tvtropes.org/pmwiki/pmwiki.php/PlayingWith/PowerFantasy>.———. I Just Want to Be a Badass. 18 Feb. 2018 <http://tvtropes.org/pmwiki/pmwiki.php/Main/IJustWantToBeBadass>.Wanderbots. “Let’s Play Doom (2016) – PC Gameplay Part I – I Have No Idea What I’m Dooming!” YouTube, 18 June 2016. 14 Apr. 2018 <https://www.youtube.com/watch?v=AQSMNhWAf0o>.Yee, Nick. 
“Maps of Digital Desires: Exploring the Topography of Gender and Play in Online Games.” Beyond Barbie and Mortal Kombat: New Perspectives on Gender and Gaming. Eds. Yasmin B. Kafai, Carrie Heeter, Jill Denner, and Jennifer Sun. Cambridge, MA: MIT P, 2008. 83-96. Yahtzee. “Doom (2016) Review.” Zero Punctuation, 8 June 2016. 14 Apr. 2018 <https://www.youtube.com/watch?v=HQGxC8HKCD4>.
Style APA, Harvard, Vancouver, ISO itp.
42

Lam, Ryan. "Escaping the Shadow". Voices in Bioethics 8 (25.09.2022). http://dx.doi.org/10.52214/vib.v8i.9966.

Pełny tekst źródła
Streszczenie:
“After Buddha was dead, they still showed his shadow in a cave for centuries – a tremendous, gruesome shadow. God is dead; but given the way people are, there may still for millennia be caves in which they show his shadow. – And we – we must still defeat his shadow as well!” – Friedrich Nietzsche[1] INTRODUCTION Friedrich Nietzsche famously declared that “God is dead!”[2] but lamented that his contemporaries remained living in the shadow of God. For Nietzsche, the morality of his time was still based in the Christian tradition, even though faith in God was waning. Bioethics lives under a similar shadow: the shadow of Enlightenment Era-rationalism. Bioethics curricula focus on principles derived from Kantian deontology and utilitarianism. The allure of maintaining a moral framework that provides a rational method that can be handily applied to any situation remains strong. The principlist approach advanced by Tom Beauchamp and James Childress is taught to nearly all medical students in the United States,[3] and is essentially the canonical ethical framework of bioethics. In this model, the principle of autonomy is Kantian in nature, and the principles of beneficence and non-maleficence are utilitarian in nature.[4] Moreover, the presented framework is an approach that, when applied rationally to any healthcare scenario, will yield an outcome “considered moral.”[5] This reflects a faulty conception of philosophy that plagues much of bioethics, wherein the only contribution of philosophy pertinent to bioethics is moral philosophy elucidated by European thinkers in the Enlightenment Era. The landscape of moral philosophy has evolved significantly from the 18th century. However, the bioethical world has not kept up with the philosophical world, remaining instead in the shadow of antiquated moral thinking. Also lacking in bioethics are other disciplines of philosophy, such as philosophy of language, existentialism, and aesthetics, which are often given no consideration at all. The inclusion of both modern moral philosophy and other philosophical fields is necessary if bioethics is to survive its transition into modernity. l. The Shadow of Enlightenment Enlightenment Era philosophers such as Immanuel Kant argued that one need only employ reason to obtain knowledge; emotion bore no relevance when determining ethical behavior. Kant’s moral theories thus privileged a duty to act according to moral imperatives over feelings. Other Enlightenment Era philosophers such as John Locke developed systems that attempted to quantify human goods and human ills. This quantification potentially reduces human welfare and suffering to utility. Today, in the world of philosophy, such a “neutral analysis,” as Cora Diamond noted, is “dead or moribund.”[6] Bernard Williams remarked that such moral philosophy is “empty and boring,”[7] and G. E. M. 
Anscombe stated that it “no longer generally survives.”[8] And yet, just as the atheists in Nietzsche’s world dwelt in the moral code of a dead God, bioethicists still pursue a unified moral system that takes an input, applies some moral rules, and generates a moral outcome, like the four principles approach that Beauchamp and Childress laid out.[9] Some detractors of principlism take issue with their approach for not being unified enough and want to replace it with a procedural framework that is even more systematic and complicated. They argue that the resulting moral framework would be a “comprehensive decision procedure for arriving at answers”[10] that retains the “impartiality that is an essential part of morality.”[11] The shadow of rationalist morality has caused bioethical decision making to become detached and rigid when bioethics should concern itself with the humans whose lives it affects. A rational, divorced-from-emotion way of thinking ultimately fails to yield satisfactory results when decisions are made by and for emotional beings. Dr. Paul Farmer, among others, championed the idea that bioethics should be de-philosophized, as philosophy, cold and calculated, fails to adequately respond to the realities of those worst off.[12] Instead, Dr. Farmer emphasized the inclusion of the social sciences, like sociology and anthropology, in bioethics. Undoubtedly, Dr. Farmer was on the right track; bioethics should certainly engage directly with the people whom its decisions involve. If the narrow band of moral philosophy currently found in bioethics – that of stringent rationalism – were all that philosophy had to offer, I, too, would advocate for a de-philosophization. Ludwig Wittgenstein notes that to attempt to capture the complexity of moral thinking in a manner that employs reason alone and casts aside emotion is a “hopeless task,” like reconstructing a sharp image “from a blurred one.”[13] Unfortunately, bioethics is mired in the remnants of this hopeless task. To Dr. Farmer, the dominant moral framework was too restrictive and was unresponsive to the social and humanitarian needs of those whom bioethics is meant to help. As such, he wished to free bioethics from the shadow of a morality derived from rationalist thinkers. ll. Beyond Rationalism Like Nietzsche, who tried to resolve Europe’s post-religion vacuum by providing his society with a new way to live, Dr. Farmer wanted to replace the rationalist philosophy upon which bioethics was built with a “resocialization” of the field.[14] I agree with Dr. Farmer’s call for resocialization, as well as his denouncement of philosophy as it exists in bioethics. Evaluating risks and benefits along a predetermined array of moral principles is far too rigid and impersonal to guide what are often the most important decisions one will make. For Dr. Farmer, the most needed change was restoring the social element of bioethics. However, in advocating for this resocialization, Dr. Farmer casts philosophy as the antithesis of social science, noting that “few would regard philosophy … as a socializing discipline.”[15] I disagree. Rationalist moral philosophy may be lacking in socializing force, but there are other fields of philosophy that are responsive to our social reality. Rather than de-philosophizing bioethics, it makes more sense to replace the antisocial philosophies predominant in bioethics with prosocial philosophies better suited to it. 
Of course, the contribution of philosophy to bioethics is more than moral theories from the Enlightenment Era. There are more recent philosophical contributions from outside the field of moral philosophy that have roused bioethical interest. Jennifer Blumenthal-Barby, et al., argue for philosophy’s continued place in bioethics, citing Derek Parfit’s “non-identity problem,” which altered the landscape of reproductive ethics, and David Chalmers’ contributions to philosophy of consciousness, which have implications for the moral status of brain organoids.[16] Still, these are narrow applications of philosophy to highly specialized areas of bioethics, which not all bioethicists are inclined to delve into. Philosophy in bioethics should not be confined to niche applications in specialist fields but should influence all bioethical thought. Fortunately, there remains untapped a wealth of philosophical disciplines that pertains to exactly this. Philosophy of language investigates the nature of meaning and understanding in communication, which is a necessary social action. Successfully deciphering and conveying moral values in discourse is a bioethicist’s bread and butter, as is resolving disagreements and reaching agreements. Indeed, it is often the case that miscommunication lies at the root of an impasse between a doctor and a patient. An understanding of the nature of the disagreement would help resolve the conflict, as different types of disagreements require different interventions for resolution. For instance, a “substantive disagreement,”[17] in which two parties use the same terms in the same ways and have a fundamental disagreement on which outcome is more desirable, can be resolved only if one party yields to the other. On the other hand, a “merely verbal dispute,”[18] in which two parties use the same terms to represent entirely different concepts and values, requires standardization of terminological usage for its resolution. As such, no one can overstate the moral importance of successful communication in bioethics, and an exploration of language itself would prove invaluable to a bioethicist’s training. Existentialism is another subset of philosophy that acknowledges the social nature of human existence, noting that one’s being in the universe is concomitant with the existence of others sharing the same universe.[19] Thus, there is the recognition that whatever existence is, it is not complete without the existence of others. With this as a starting point, existentialists examined how to live meaningfully with others in this world. Since ethics crucially involves others, it is no surprise that existentialists pondered how to live moral lives. Existentialist conceptions of morality did not revolve around acting in accordance with a set of rules, but rather, recognized individual freedom in choosing how to act and emphasized acting authentically. In this vein, bioethicists should commit to doing what is right rather than committing to applying a set of principles. Existentialism, while part of the broader bioethics literature, is less common throughout bioethics curricula and deserves more prominence. 
Martin Heidegger, for instance, emphasized the difference between two types of thinking: “calculative thinking” and “meditative thinking.” Heidegger characterizes calculative thinking as a computation, wherein from some given starting conditions “definite results”[20] are determined, and contrasts this with meditative thinking, which he describes as “thinking which contemplates the meaning which reigns in everything that is.”[21] Heidegger was critical of the pervasiveness of calculative thinking, seeing it as the “ground of thoughtlessness,”[22] in which we only relate to the world in a meaningless, mechanical way. This is the emphasized type of thinking in rationalist conceptions of morality popular in bioethics; from a set of starting conditions, a series of rules are applied, and a moral outcome is calculated. Such a technique, however, discounts the personal meaning individuals place on the aspects of their lives relevant to their decision making, as well as the meaning in committing to doing what is right. Under calculative thinking, such a commitment is reduced to rote rule-following. A turn to meditative thinking would ensure that bioethical decisions comport with living meaningful lives. Even aesthetics, a discipline devoted to examining beauty and taste,[23] has a place in bioethics. Just as the viewing of a painting, the listening of a song, or the reading of a book elicits an affective response, hearing a patient’s story leaves an emotional imprint. The recounting of a traumatic moment imparts sadness, and a joyous occasion begets joy in the listener as well. As acknowledged in the field of everyday aesthetics, these aesthetic experiences often spur us to act:[24] The unsightly appearance of a polluted riverbank drives us to remove the trash; the presence of sorrow in one’s life drives us to ameliorate it. To be mindful of aesthetic experiences and allow them to affect us emotionally is paramount to the motivation of a bioethicist to serve the patient, not out of an obligation to a job description, but out of a desire to truly relieve the patient of their anguish. For example, the new field of narrative medicine utilizes critical reading and literary techniques to train clinicians and bioethicists in emotional understanding and listening skills that stress the social aspects of medicine beyond rational analysis and decision making. CONCLUSION Dr. Farmer is absolutely correct; bioethics is in dire need of resocialization. It should not be the case that the justification for a moral action is essentially that “the rules say so,” or that simply by teaching such rules to medical students, the very act of making bioethical decisions that diverge from those determined by principles can be seen as an act of “bad faith … hubris or, worse, malpractice.”[25] As bioethicists are coming to realize, the rationalist philosophical traditions that bioethics was founded upon are past their expiry, and the time for change is now. Indeed, as Dr. Farmer urges, “socializing disciplines” like anthropology, history, political economy, and sociology are necessary to humanize the field of bioethics.[26] So too, however, can philosophy be a socializing discipline, if we know where to look. Bioethics should evolve. Its new goal should be to focus on meaningful human relationships, and to phase out rigid, impersonal modes of moral thinking. 
The limited sampling of unsatisfying moral theories from hundreds of years ago leaves many bioethics students cold, and it is easy to see why bioethicists are ready to part ways with philosophy. I believe this is a move in the wrong direction; there is a place for philosophy in the future of bioethics. Just as bioethics needs a resocialization, it is also needs of a re-philosophization. These enrichments complement one another. There is more to bioethics than mechanically determining the right course of action in a healthcare setting. Bioethics engages with the most ancient of philosophical questions: questions of what makes human existence meaningful, what makes us who we are, how we want to relate to others, how and why we feel, what our place in the world is, how we can communicate what we think, and why our moral intuitions are so compelling. We would be remiss if we did not begin to investigate additional contributions to morality from a wider range of philosophies that try to provide answers to such questions, as they offer a richness to moral thinking that cannot be gleaned from traditional bioethical approaches alone. - [1] Friedrich Wilhelm Nietzsche, The Gay Science: With a Prelude in German Rhymes and an Appendix of Songs, ed. Bernard Williams, Josefine Nauckhoff, and Adrian Del Caro, Cambridge Texts in the History of Philosophy (Cambridge, U.K. ; New York: Cambridge University Press, 2001), 109. [2] Nietzsche, 120. [3] Daniel C O’Brien, “Medical Ethics as Taught and as Practiced: Principlism, Narrative Ethics, and the Case of Living Donor Liver Transplantation,” The Journal of Medicine and Philosophy: A Forum for Bioethics and Philosophy of Medicine 47, no. 1 (February 1, 2022): 97, https://doi.org/10.1093/jmp/jhab039. [4] K. D. Clouser and B. Gert, “A Critique of Principlism,” Journal of Medicine and Philosophy 15, no. 2 (April 1, 1990): 219–36, https://doi.org/10.1093/jmp/15.2.219. [5] O’Brien, “Medical Ethics as Taught and as Practiced,” 97. [6] Cora Diamond, “Having a Rough Story about What Moral Philosophy Is,” New Literary History 15, no. 1 (1983): 168, https://doi.org/10.2307/468998. [7] Bernard Williams, Morality: An Introduction to Ethics, Canto ed (Cambridge ; New York: Cambridge University Press, 1993), xvii. [8] G. E. M. Anscombe, “Modern Moral Philosophy,” Philosophy 33, no. 124 (January 1958): 1, https://doi.org/10.1017/S0031819100037943. [9] Tom L. Beauchamp and James F. Childress, Principles of Biomedical Ethics, 7th ed (New York: Oxford University Press, 2013), 13. [10] Clouser and Gert, 233. [11] Clouser and Gert, “A Critique of Principlism,” 235. [12] Paul Farmer and Nicole Gastineau Campos, “Rethinking Medical Ethics: A View from Below,” Developing World Bioethics 4, no. 1 (May 2004): 17–41, https://doi.org/10.1111/j.1471-8731.2004.00065.x. [13] Ludwig Wittgenstein, Philosophical Investigations, ed. Joachim Schulte, trans. P. M. S. Hacker, 4th edition (Chichester, West Sussex, U.K. ; Malden, MA: Wiley-Blackwell, 2009), 40. [14] Farmer and Campos, “Rethinking Medical Ethics,” 20. [15] Farmer and Campos, 20. [16] Jennifer Blumenthal-Barby et al., “The Place of Philosophy in Bioethics Today,” The American Journal of Bioethics: AJOB, June 30, 2021, 3–5, https://doi.org/10.1080/15265161.2021.1940355. [17] Brendan Balcerak Jackson, “Verbal Disputes and Substantiveness,” Erkenntnis 79, no. S1 (March 2014): 31–54, https://doi.org/10.1007/s10670-013-9444-5. [18] C. S. I. Jenkins, “Merely Verbal Disputes,” Erkenntnis 79, no. 
1 (March 1, 2014): 11–30, https://doi.org/10.1007/s10670-013-9443-6. [19] Steven Crowell, “Existentialism,” in The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta, Summer 2020 (Metaphysics Research Lab, Stanford University, 2020), https://plato.stanford.edu/archives/sum2020/entries/existentialism/; Anita Avramides, “Other Minds,” in The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta, Winter 2020 (Metaphysics Research Lab, Stanford University, 2020), https://plato.stanford.edu/archives/win2020/entries/other-minds/. [20] Martin Heidegger, Discourse on Thinking, Harper Torchbooks (New York, NY: Harper & Row, 1969), 46. [21] Heidegger, 46. [22] Heidegger, 45. [23] Nick Zangwill, “Aesthetic Judgment,” in The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta, Winter 2021 (Metaphysics Research Lab, Stanford University, 2021), https://plato.stanford.edu/archives/win2021/entries/aesthetic-judgment/. [24] Yuriko Saito, “Aesthetics of the Everyday,” in The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta, Spring 2021 (Metaphysics Research Lab, Stanford University, 2021), https://plato.stanford.edu/archives/spr2021/entries/aesthetics-of-everyday/. [25] O’Brien, “Medical Ethics as Taught and as Practiced,” 112. [26] Farmer and Campos, “Rethinking Medical Ethics,” 20.
Style APA, Harvard, Vancouver, ISO itp.
43

Chapman, Owen. "Mixing with Records". M/C Journal 4, nr 2 (1.04.2001). http://dx.doi.org/10.5204/mcj.1900.

Pełny tekst źródła
Streszczenie:
Introduction "Doesn't that wreck your records?" This is one of the first things I generally get asked when someone watches me at work in my home or while spinning at a party. It reminds me of a different but related question I once asked someone who worked at Rotate This!, a particularly popular Toronto DJ refuge, a few days after I had bought my first turntable: DJO: "How do you stop that popping and crackling sound your record gets when you scratch back and forth on the same spot for a while?" CLERK: "You buy two copies of everything, one you keep at home all wrapped-up nice and never use, and the other you mess with." My last $150 had just managed to pay for an old Dual direct drive record player. The precious few recently-released records I had were gifts. I nodded my head and made my way over to the rows of disks which I flipped through to make it look like I was maybe going to buy something. Lp cover after lp cover stared back at me all with names I had absolutely never heard of before, organised according to a hyper- hybridised classification scheme that completely escaped my dictionary-honed alphabetic expectations. Worst of all, there seemed to be only single copies of everything left! A sort of outsider's vertigo washed over me, and 3 minutes after walking into unfamiliar territory, I zipped back out onto the street. Thus was to begin my love/hate relationship with the source of all DJ sounds, surliness and misinformation--the independent record shop. My query had (without my planning) boldly pronounced my neophyte status. The response it solicited challenged my seriousness. How much was I willing to invest in order to ride "the wheels of steel"? Sequence 1 Will Straw describes the meteoric rise to prominence of the CD format, If the compact disk has emerged as one of the most dazzlingly effective of commodity forms, this has little to do with its technical superiority to the vinyl record (which we no longer remember to notice). Rather, the effectiveness has to do with its status as the perfect crossover consumer object. As a cutting-edge audiophile invention, it seduced the technophilic, connoisseurist males who typically buy new sound equipment and quickly build collections of recordings. At the same time, its visual refinement and high price rapidly rendered it legitimate as a gift. In this, the CD has found a wide audience among the population of casual record buyers.(61) Straw's point has to do with the fate of musical recordings within contemporary commodity culture. In the wake of a late 70's record industry slump, music labels turned their attention toward the recapturing of casual record sales (read: aging baby boomers). The general shape of this attempt revolved around a re-configuring of the record- shopping experience dedicated towards reducing "the intimidation seen as endemic to the environment of the record store."(59) The CD format, along with the development of super-sized, general interest (all-genre) record outlets has worked (according to Straw) to streamline record sales towards more-predictable patterns, all the while causing less "selection stress."(59) Re-issues and compilations, special-series trademarks, push-button listening stations, and maze-like display layouts, combined with department store-style service ("Can I help you find anything?") all work towards eliminating the need for familiarity with particular music "scenes" in order to make personally gratifying (and profit engendering) musical choices. 
Straw's analysis is exemplary in its dissatisfaction with treating the arena of personal musical choice as unaffected by any constraints apart from subjective matters of taste. Straw's evaluation also isolates the vinyl record as an object eminently ready (post-digital revolution) for subcultural appropriation. Its displacement by the CD as the dominant medium for collecting recorded music involved the recasting of the turntable as outdated and inferior, thereby relegating it to the dusty attic, basement or pawn shop (along with crates upon crates upon crates of records). These events set the stage for vinyl's spectacular rise from the ashes. The most prominent feature of this re-emergence has to do not simply with possession of the right kind of stuff (the cachet of having a music collection difficult for others to borrow aside), but with what vinyl and turntable technology can do. Bridge In Subculture: The Meaning of Style, Dick Hebdige claims that subcultures are cultures of conspicuous consumption...and it is through the distinctive rituals of consumption, through style, that the subculture at once reveals its "secret identity" and communicates its forbidden meanings. It is basically the way in which commodities are used in subculture which mark the subculture off from more orthodox cultural formations. (103) Hebdige borrows the notion of bricolage from Lévi-Strauss in order to describe the particular kind of use subcultures make of the commodities they appropriate. Relationships of identity, difference and order are developed from out of the minds of those who make use of the objects in question and are not necessarily determined by particular qualities inherent to the objects themselves. Henceforth a safety pin more often used for purposes like replacing missing buttons or temporarily joining pieces of fabric can become a punk fashion statement once placed through the nose, ear or torn Sex Pistols tee-shirt. In the case of DJ culture, it is the practice of mixing which most obviously presents itself as definitive of subcultural participation. The objects of conspicuous consumption in this case--record tracks. If mixing can be understood as bricolage, then attempts "to discern the hidden messages inscribed in code"(18) by such a practice are not in vain. Granting mixing the power of meaning sets a formidable (semiotic) framework in place for investigating the practice's outwardly visible (spectacular) form and structure. Hebdige's description of bricolage as a particularly conspicuous and codified type of using, however, runs the risk of privileging an account of record collecting and mixing which interprets it entirely on the model of subjective expression.(1.) What is necessary is a means of access to the dialogue which takes place between a DJ and her records as such. The contents of a DJ's record bag (like Straw's CD shopping bag) are influenced by more than just her imagination, pocket book and exposure to different kinds of music. They are also determined in an important way by each other. Audio mixing is not one practice, it is many, and the choice to develop or use one sort of skill over another is intimately tied up with the type and nature of track one is working with. Sequence 2 The raw practice of DJing relies heavily on a slider integral to DJ mixers known as the cross-fader. With the standard DJ set up, when the cross-fader is all the way to the left, the left turntable track plays through the system; vice versa when the fader is all the way to the right. 
In between is the "open" position which allows both inputs to be heard simultaneously. The most straightforward mixing technique, "cutting," involves using this toggle to quickly switch from one source to another--resulting in the abrupt end of one sound- flow followed by its instantaneous replacement. This technique can be used to achieve a variety of different effects--from the rather straightforward stringing together of the final beat of a four bar sequence from one track with a strong downbeat from something new in order to provide continuous, but sequential musical output, to the thoroughly difficult practice of "beat juggling," where short excerpts of otherwise self-contained tracks ("breaks") are isolated and then extended indefinitely through the use of two copies of the same record (while one record plays, the DJ spins the other back to the downbeat of the break in question, which is then released in rhythm). In both cases timing and rhythm are key. These features of the practice help to explain DJ predilections for tracks which make heavy, predictable use of their rhythm sections. "Blending" is a second technique which uses the open position on the cross-fader to mix two inputs into a live sonic collage. Tempo, rhythm and "density" of source material have an enormous impact on the end result. While any two tracks can be layered in this way, beats that are not synchronized are quick to create cacophony, and vocals also tend to clash dramatically. Melodic lines in general pose certain challenges here since these are in particular keys and have obvious starts and finishes. This is one reason why tracks produced specifically for DJing often have such long, minimal intros and exits. This makes it much easier to create "natural" sounding blends. Atmospheric sounds, low-frequency hums, speech samples and repetitive loops with indeterminate rhythm structures are often used for these segments in order to allow drawn-out, subtle transitions when moving between tracks. If an intro contains a fixed beat (as is the case often with genres constructed specifically for non-stop dancing like house, techno and to some extent drum and bass), then those who want seamless blends need to "beat match" if they want to maintain a dancer's groove. The roots of this technique go back to disco and demand fairly strict genre loyalty in order to insure that a set's worth of tracks all hover around the same tempo, defined in beats-per- minute, or BPMs. The basic procedure involves finding the downbeat of the track one wishes to mix through a set of headphones, releasing that beat in time with the other record while making fine tempo- adjustments via the turntable's pitch control to the point where the track coming through the earphones and the track being played over the system are in synch. The next step is "back-spinning" or "needle dropping" to the start of the track to be mixed, then releasing it again, this time with the cross-fader open. Volume levels can then be adjusted in order to allow the new track to slowly take prominence (the initial track being close to its end at this point) before the cross-fader is closed into the new position and the entire procedure is repeated. Scratching is perhaps the most notorious mixing technique and involves the most different types of manipulations. The practice is most highly developed in hip hop (and related genres like drum and bass) and is used both as an advanced cutting technique for moving between tracks as well as a sonic end-in-itself. 
Its genesis is attributed to a South Bronx DJ known as Grand Wizard Theodore who was the first (1977) to try to make creative use of the sound associated with moving a record needle back and forth over the same drumbeat, a phenomenon familiar to DJs used to cueing up downbeats through headphones. This trick is now referred to as the "baby scratch," and it, along with an ever-increasing host of mutations and hybrids, makes up the skills that pay the bills for hip hop DJs. In the case of many of these techniques, the cross-fader is once again used heavily in order to remove unwanted elements of particular scratches from the mix, as well as adding certain staccato and volume-fading effects. Isolated, "pure" sounds are easiest to scratch with and are therefore highly sought after by this sort of DJ--a pastime affectionately referred to as "digging in the crates." Sources of such sounds are extremely diverse, but inevitably revolve around genres which use minimal orchestration (like movie soundtracks), accentuated rhythms with frequent breakdowns (like funk or jazz), or which eschew musical form altogether (like sound-effects, comedy and children's records). Exit To answer the question which started this investigation, in the end, how wrecked my records get depends a lot on what I'm using them for. To be sure, super-fast scratching patterns and tricks that use lots of back-spinning like beat-juggling will eventually "burn" static into spots on one's records. But with used records costing as little as $1 for three, and battle records (2.) widely available, the effect of this feature of the technology on the actual pursuit of the practice is negligible. And most techniques don't noticeably burn records at all, especially if a DJ's touch is light enough to allow for minimal tone-arm weight (a parameter which controls a turntable's groove-tracking ability). This is the kind of knowledge which comes from interaction with objects. It is also the source of a great part of the subcultural bricoleur's stylistic savvy. Herein lies the essence of the intimidating power of the indie record shop--its display of intimate, physical familiarity with the hidden particularities of the new vinyl experience. Investigators confronted with such familiarity need to find ways to go beyond analyses which stop at the level of acknowledgment of the visible logic displayed by spectacular subcultural practices if they wish to develop nuanced accounts of subcultural life. Such plumbing of the depths often requires listening in the place of observing--whether to first-hand accounts collected through ethnography or to the subtle voice of the objects themselves. (1.) An example of such an account: "DJ-ing is evangelism; a desire to share songs. A key skill is obviously not just to drop the popular, well-known songs at the right part of the night, but to pick the right new releases, track down the obscurer tunes and newest imports, get hold of next month's big tune this month; you gather this pile, this tinder, together, then you work the records, mix them, drop them, cut them, scratch them, melt them, beat them all together until they unite. Voilà; disco inferno." Dave Haslam, "DJ Culture," p. 169. (2.) Records specifically designed by and for scratch DJs and which consist of long strings of scratchable sounds. References Haslam, David. "DJ Culture." The Clubcultures Reader. Oxford: Blackwell Publishers. 1997. Hebdige, Dick. Subculture: The Meaning of Style. London: Methuen & Co. Ltd. 1979. Straw, Will. 
"Organized Disorder: The Changing Space of the Record Shop." The Clubcultures Reader. Oxford: Blackwell Publishers. 1997
Style APA, Harvard, Vancouver, ISO itp.
44

Pearce, Lynne. "Diaspora". M/C Journal 14, nr 2 (1.05.2011). http://dx.doi.org/10.5204/mcj.373.

Pełny tekst źródła
Streszczenie:
For the past twenty years, academics and other social commentators have, by and large, shared the view that the phase of modernity through which we are currently passing is defined by two interrelated catalysts of change: the physical movement of people and the virtual movement of information around the globe. As we enter the second decade of the new millennium, it is certainly a timely moment to reflect upon the ways in which the prognoses of the scholars and scientists writing in the late twentieth century have come to pass, especially since—during the time this special issue has been in press—the revolutions that are gathering pace in the Arab world appear to be realising the theoretical prediction that the ever-increasing “flows” of people and information would ultimately bring about the end of the nation-state and herald an era of transnationalism (Appadurai, Urry). For writers like Arjun Appadurai, moreover, the concept of diaspora was key to grasping how this new world order would take shape, and how it would operate: Diasporic public spheres, diverse amongst themselves, are the crucibles of a postnational political order. The engines of their discourse are mass media (both interactive and expressive) and the movement of refugees, activists, students, laborers. It may be that the emergent postnational order proves not to be a system of homogeneous units (as with the current system of nation-states) but a system based on relations between heterogeneous units (some social movements, some interest groups, some professional bodies, some non-governmental organizations, some armed constabularies, some judicial bodies) ... In the short run, as we can see already, it is likely to be a world of increased incivility and violence. In the longer run, free from the constraints of the nation form, we may find that cultural freedom and sustainable justice in the world do not presuppose the uniform and general existence of the nation-state. This unsettling possibility could be the most exciting dividend of living in modernity at large. (23) In this editorial, we would like to return to the “here and now” of the late 1990s in which theorists like Arjun Appadurai, Ulrich Beck, John Urry, Zygmunt Bauman, Roland Robertson and others were “imagining” the consequences of both globalisation and glocalisation for the twenty-first century in order that we may better assess what is, indeed, coming to pass. While most of their prognoses for this “second modernity” have proven remarkably accurate, it is their—self-confessed—inability to forecast either the nature or the extent of the digital revolution that most vividly captures the distance between the mid-1990s and now; and it is precisely the consequences of this extraordinary technological revolution on the twin concepts of “glocality” and “diaspora” that the research featured in this special issue seeks to capture. Glocal Imaginaries Appadurai’s endeavours to show how globalisation was rapidly making itself felt as a “structure of feeling” (Williams in Appadurai 189) as well as a material “fact” were also implicit in our conceptualisation of the conference, “Glocal Imaginaries: Writing/Migration/Place,” which gave rise to this special issue. 
This conference, which was the culmination of the AHRC-funded project “Moving Manchester: Literature/Migration/Place (2006-10)”, constituted a unique opportunity to gain an international, cross-disciplinary perspective on urgent and topical debates concerning mobility and migration in the early twenty-first century and the strand “Networked Diasporas” was one of the best represented on the program. Attracting papers on broadcast media as well as the new digital technologies, the strand was strikingly international in terms of the speakers’ countries of origin, as is this special issue which brings together research from six European countries, Australia and the Indian subcontinent. The “case-studies” represented in these articles may therefore be seen to constitute something of a “state-of-the-art” snapshot of how Appadurai’s “glocal imaginary” is being lived out across the globe in the early years of the twenty-first century. In this respect, the collection proves that his hunch with regards to the signal importance of the “mass-media” in redefining our spatial and temporal coordinates of being and belonging was correct: The third and final factor to be addressed here is the role of the mass-media, especially in its electronic forms, in creating new sorts of disjuncture between spatial and virtual neighborhoods. This disjuncture has both utopian and dystopian potentials, and there is no easy way to tell how these may play themselves out in the future of the production of locality. (194) The articles collected here certainly do serve as testament to the “bewildering plethora of changes in ... media environments” (195) that Appadurai envisaged, and yet it can clearly also be argued that this agent of glocalisation has not yet brought about the demise of the nation-state in the way (or at the speed) that many commentators predicted. Digital Diasporas in a Transnational World Reviewing the work of the leading social science theorists working in the field during the late 1990s, it quickly becomes evident that: (a) the belief that globalisation presented a threat to the nation-state was widely held; and (b) that the “jury” was undecided as to whether this would prove a good or bad thing in the years to come. While the commentators concerned did their best to complexify both their analysis of the present and their view of the future, it is interesting to observe, in retrospect, how the rhetoric of both utopia and dystopia invaded their discourse in almost equal measure. We have already seen how Appadurai, in his 1996 publication, Modernity at Large, looks beyond the “increased incivility and violence” of the “short term” to a world “free from the constraints of the nation form,” while Roger Bromley, following Agamben and Deleuze as well as Appadurai, typifies a generation of literary and cultural critics who have paid tribute to the way in which the arts (and, in particular, storytelling) have enabled subjects to break free from their national (af)filiations (Pearce, Devolving 17) and discover new “de-territorialised” (Deleuze and Guattari) modes of being and belonging. Alongside this “hope,” however, the forces and agents of globalisation were also regarded with a good deal of suspicion and fear, as is evidenced in Ulrich Beck’s What is Globalization? 
In his overview of the theorists who were then perceived to be leading the debate, Beck draws distinctions between what was perceived to be the “engine” of globalisation (31), but is clearly most exercised by the manner in which the transformation has taken shape: Without a revolution, without even any change in laws or constitutions, an attack has been launched “in the normal course of business”, as it were, upon the material lifelines of modern national societies. First, the transnational corporations are to export jobs to parts of the world where labour costs and workplace obligations are lowest. Second, the computer-generation of worldwide proximity enables them to break down and disperse goods and services, and produce them through a division of labour in different parts of the world, so that national and corporate labels inevitably become illusory. (3; italics in the original) Beck’s concern is clearly that all these changes have taken place without the nation-states of the world being directly involved in any way: transnational corporations began to take advantage of the new “mobility” available to them without having to secure the agreement of any government (“Companies can produce in one country, pay taxes in another and demand state infrastructural spending in yet another”; 4-5); the export of the labour market through the use of digital communications (stereotypically, call centres in India) was similarly unregulated; and the world economy, as a consequence, was in the process of becoming detached from the processes of either production or consumption (“capitalism without labour”; 5-7). Vis-à-vis the dystopian endgame of this effective “bypassing” of the nation-state, Beck is especially troubled about the fate of the human rights legislation that nation-states around the world have developed, with immense effort and over time (e.g. employment law, trade unions, universal welfare provision) and cites Zygmunt Bauman’s caution that globalisation will, at worst, result in widespread “global wealth” and “local poverty” (31). Further, he ends his book with a fully apocalyptic vision, “the Brazilianization of Europe” (161-3), which unapologetically calls upon the conventions of science fiction to imagine a worst-case scenario for a Europe without nations. While fourteen or fifteen years is evidently not enough time to put Beck’s prognosis to the test, most readers would probably agree that we are still some way away from such a Europe. Although the material wealth and presence of the transnational corporations strikes a chord, especially if we include the world banks and finance organisations in their number, the financial crisis that has rocked the world for the past three years, along with the wars in Iraq and Afghanistan, and the ascendancy of Al-Qaida (all things yet to happen when Beck was writing in 1997), has arguably resulted in the nations of Europe reinforcing their (respective and collective) legal, fiscal, and political might through rigorous new policing of their physical borders and regulation of their citizens through “austerity measures” of an order not seen since World War Two. 
In other words, while the processes of globalisation have clearly been instrumental in creating the financial crisis that Europe is presently grappling with and does, indeed, expose the extent to which the world economy now operates outside the control of the nation-state, the nation-state still exists very palpably for all its citizens (whether permanent or migrant) as an agent of control, welfare, and social justice. This may, indeed, cause us to conclude that Bauman’s vision of a world in which globalisation would make itself felt very differently for some groups than others came closest to what is taking shape: true, the transnationals have seized significant political and economic power from the nation-state, but this has not meant the end of the nation-state; rather, the change is being experienced as a re-trenching of whatever power the nation-state still has (and this, of course, is considerable) over its citizens in their “local”, everyday lives (Bauman 55). If we now turn to the portrait of Europe painted by the articles that constitute this special issue, we see further evidence of transglobal processes and practices operating in a realm oblivious to local (including national) concerns. While our authors are generally more concerned with the flows of information and “identity” than business or finance (Appadurai’s “ethnoscapes,” “technoscapes,” and “ideoscapes”: 33-7), there is the same impression that this “circulation” (Latour) is effectively bypassing the state at one level (the virtual), whilst remaining very materially bound by it at another. In other words, and following Bauman, we would suggest that it is quite possible for contemporary subjects to be both the agents and subjects of globalisation: a paradox that, as we shall go on to demonstrate, is given particularly vivid expression in the case of diasporic and/or migrant peoples who may be able to bypass the state in the manufacture of their “virtual” identities/communities but who (Cohen) remain very much its subjects (or, indeed, “non-subjects”) when attempting movement in the material realm. Two of the articles in the collection (Leurs & Ponzanesi and Marcheva) deal directly with the exponential growth of “digital diasporas” (sometimes referred to as “e-diasporas”) since the inception of Facebook in 2004, and both provide specific illustrations of the way in which the nation-state both has, and has not, been transcended. First, it quickly becomes clear that for the (largely) “youthful” (Leurs & Ponzanesi) participants of nationally inscribed networking sites (e.g. “discovernikkei” (Japan), “Hyves” (Netherlands), “Bulgarians in the UK” (Bulgaria)), shared national identity is a means and not an end. In other words, although the participants of these sites might share in and actively produce a fond and nostalgic image of their “homeland” (Marcheva), they are rarely concerned with it as a material or political entity and an expression of their national identities is rapidly supplemented by the sharing of other (global) identity markers. Leurs & Ponzanesi invoke Deleuze and Guattari’s concept of the “rhizome” to describe the way in which social networkers “weave” a “rhizomatic path” to identity, gradually accumulating a hybrid set of affiliations. Indeed, the extent to which the “nation” disappears on such sites can be remarkable, as was also observed in our investigation of the digital storytelling site, “Capture Wales” (BBC) (Pearce, "Writing"). 
Although this BBC site was set up to capture the voices of the Welsh nation in the early twenty-first century through a collection of (largely) autobiographical stories, very few of the participants mention either Wales or their “Welshness” in the stories that they tell. Further, where the “home” nation is (re)imagined, it is generally in an idealised, or highly personalised, form (e.g. stories about one’s own family) or through a sharing of (perceived and actual) cultural idiosyncrasies (Marcheva on “You know you’re a Bulgarian when …”) rather than an engagement with the nation-state per se. As Leurs & Ponzanesi observe: “We can see how the importance of the nation-state gets obscured as diasporic youth, through cultural hybridisation of youth culture and ethnic ties initiate subcultures and offer resistance to mainstream cultural forms.” Both the articles just discussed also note the shading of the “national” into the “transnational” on the social networking sites they discuss, and “transnationalism”—in the sense of many different nations and their diasporas being united through a common interest or cause—is also a focus of Pikner’s article on “collective actions” in Europe (notably, “EuroMayDay” and “My Estonia”) and Harb’s highly topical account of the role of both broadcast media (principally, Al-Jazeera) and social media in the revolutions and uprisings currently sweeping through the Arab world (spring 2011). On this point, it should be noted that Harb identifies this as the moment when Facebook’s erstwhile predominantly social function was displaced by a manifestly political one. From this we must conclude that both transnationalism and social media sites can be put to very different ends: while young people in relatively privileged democratic countries might embrace transnationalism as an expression of their desire to “rise above” national politics, the youth of the Arab world have engaged it as a means of generating solidarity for nationalist insurgency and liberation. Another instance of “g/local” digital solidarity exceeding national borders is to be found in Johanna Sumiala’s article on the circulatory power of the Internet in the Kauhajoki school shooting which took place in Finland in 2008. As well as using the Internet to “stage-manage” his rampage, the Kauhajoki shooter (whose name the author chose to withhold for ethical reasons) was subsequently found to have been a member of numerous Web-based “hate groups”, many of them originating in the United States, and, as a consequence, may be understood to have committed his crime on behalf of a transnational community: what Sumiala has defined as a “networked community of destruction.” It must also be noted, however, that the school shootings were experienced as a very local tragedy in Finland itself and, although the shooter may have been psychically located in a transnational hyper-reality when he undertook the killings, it is his nation-state that has had to deal with the trauma and shame in the long term. Woodward and Brown & Rutherford, meanwhile, show that it remains the tendency of public broadcast media to uphold the raison d’être of the nation-state at the same time as embracing change. 
Woodward’s feature article (which reports on the AHRC-sponsored “Tuning In” project which has researched the BBC World Service) shows how the representation of national and diasporic “voices” from around the world, either in opposition to or in dialogue with the BBC’s own reporting, is key to the way in which the Corporation has changed and modernised in recent times; however, she is also clear that many of the objectives that defined the service in its early days—such as its commitment to a distinctly “English” brand of education—still remain. Similarly, Brown & Rutherford’s article on the innovative Australian ABC children’s television series, My Place (which has combined traditional broadcasting with online, interactive websites) may be seen to be positively promoting the Australian nation by making visible its commitment to multiculturalism. Both articles nevertheless reveal the extent to which these public service broadcasters have recognised the need to respond to their nations’ changing demographics and, in particular, the fact that “diaspora” is a concept that refers not only to their English and Australian audiences abroad but also to their now manifestly multicultural audiences at home. When it comes to commercial satellite television, however, the relationship between broadcasting and national and global politics is rather harder to pin down. Subramanian exposes a complex interplay of national and global interests through her analysis of the Malayalee “reality television” series, Idea Star Singer. Exported globally to the Indian diaspora, the show is shamelessly exploitative in the way in which it combines residual and emergent ideologies (i.e. nostalgia for a traditional Keralan way of life vs aspirational “western lifestyles”) in pursuit of its (massive) audience ratings. Further, while the ISS series is ostensibly a g/local phenomenon (the export of Kerala to the rest of the world rather than “India” per se), Subramanian passionately laments all the progressive national initiatives (most notably, the campaign for “women’s rights”) that the show is happy to ignore: an illustration of one of the negative consequences of globalisation predicted by Beck (31) and noted at the start of this editorial. Harb, meanwhile, reflects upon a rather different set of political concerns with regard to commercial satellite broadcasting in her account of the role of Al-Jazeera and Al Arabiya in the recent (2011) Arab revolutions. Despite Al-Jazeera’s reputation for “two-sided” news coverage, recent events have exposed its complicity with the Qatari government; further, the uprisings have revealed the speed with which social media—in particular Facebook and Twitter—are replacing broadcast media. It is now possible for “the people” to bypass both governments and news corporations (public and private) in relaying the news. 
Taken together, then, what our articles would seem to indicate is that, while the power of the nation-state has notionally been transcended via a range of new networking practices, this has yet to undermine its material power in any guaranteed way (witness recent counter-insurgencies in Libya, Bahrain, and Syria). True, the Internet may be used to facilitate transnational “actions” against the nation-state (individual or collective) through a variety of non-violent or violent actions, but nation-states around the world, and especially in Western Europe, are currently wielding immense power over their subjects through aggressive “austerity measures” which have the capacity to severely compromise the freedom and agency of the citizens concerned through widespread unemployment and cuts in social welfare provision. This said, several of our articles provide evidence that Appadurai’s more utopian prognoses are also taking shape. Alongside the troubling possibility that globalisation, and the technologies that support it, is effectively eroding “difference” (be this national or individual), there are the ever-increasing (and widely reported) instances of how digital technology is actively supporting local communities and actions around the world in ways that bypass the state. These range from the relatively modest collective action, “My Estonia”, featured in Pikner’s article, to the ways in which the Libyan diaspora in Manchester have made use of social media to publicise and support public protests in Tripoli (Harb). In other words, there is compelling material evidence that the heterogeneity that Appadurai predicted and hoped for has come to pass through the people’s active participation in (and partial ownership of) media practices. Citizens are now able to “interfere” in the representation of their lives as never before and, through the digital revolution, communicate with one another in ways that circumvent state-controlled broadcasting. We are therefore pleased to present the articles that follow as a lively, interdisciplinary and international “state-of-the-art” commentary on how the ongoing revolution in media and communication is responding to, and bringing into being, the processes and practices of globalisation predicted by Appadurai, Beck, Bauman, and others in the 1990s. The articles also speak to the changing nature of the world’s “diasporas” during this fifteen-year time frame (1996-2011) and, we trust, will activate further debate (following Cohen) on the conceptual tensions that now manifestly exist between “virtual” and “material” diasporas and also between the “transnational” diasporas whose objective is to transcend the nation-state altogether and those that deploy social media for specifically local or national/ist ends. Acknowledgements With thanks to the Arts and Humanities Research Council (UK) for their generous funding of the “Moving Manchester” project (2006-10). Special thanks to Dr Kate Horsley (Lancaster University) for her invaluable assistance as “Web Editor” in the production of this special issue (we could not have managed without you!) and also to Gail Ferguson (our copy-editor) for her expertise in the preparation of the final typescript. References Appadurai, Arjun. Modernity at Large: Cultural Dimensions of Globalisation. Minneapolis: U of Minnesota P, 1996. Bauman, Zygmunt. Globalization. Cambridge: Polity, 1998. Beck, Ulrich. What is Globalization? Trans. Patrick Camiller. Cambridge: Polity, 2000 (1997). Bromley, Roger. 
Narratives for a New Belonging: Diasporic Cultural Fictions. Edinburgh: Edinburgh UP, 2000. Cohen, Robin. Global Diasporas. 2nd ed. London and New York: Routledge, 2008. Deleuze, Gilles, and Felix Guattari. A Thousand Plateaus: Capitalism and Schizophrenia. Trans. Brian Massumi. Minneapolis: U of Minnesota P, 1987. Latour, Bruno. Reassembling the Social: An Introduction to Actor-Network Theory. Oxford: Oxford UP, 2005. Pearce, Lynne, ed. Devolving Identities: Feminist Readings in Home and Belonging. London: Ashgate, 2000. Pearce, Lynne. “‘Writing’ and ‘Region’ in the Twenty-First Century: Epistemological Reflections on Regionally Located Art and Literature in the Wake of the Digital Revolution.” European Journal of Cultural Studies 13.1 (2010): 27-41. Robertson, Roland. Globalization: Social Theory and Global Culture. London: Sage, 1992. Urry, John. Sociology beyond Societies. London: Routledge, 1999. Williams, Raymond. Dream Worlds: Mass Consumption in Late Nineteenth-Century France. Berkeley: U of California P, 1982.
Style APA, Harvard, Vancouver, ISO itp.
45

Deer, Patrick, i Toby Miller. "A Day That Will Live In … ?" M/C Journal 5, nr 1 (1.03.2002). http://dx.doi.org/10.5204/mcj.1938.

Pełny tekst źródła
Streszczenie:
By the time you read this, it will be wrong. Things seemed to be moving so fast in these first days after airplanes crashed into the World Trade Center, the Pentagon, and the Pennsylvania earth. Each certainty is as carelessly dropped as it was once carelessly assumed. The sounds of lower Manhattan that used to serve as white noise for residents—sirens, screeches, screams—are no longer signs without a referent. Instead, they make folks stare and stop, hurry and hustle, wondering whether the noises we know so well are in fact, this time, coefficients of a new reality. At the time of writing, the events themselves are also signs without referents—there has been no direct claim of responsibility, and little proof offered by accusers since the 11th. But it has been assumed that there is a link to US foreign policy, its military and economic presence in the Arab world, and opposition to it that seeks revenge. In the intervening weeks the US media and the war planners have supplied their own narrow frameworks, making New York’s “ground zero” into the starting point for a new escalation of global violence. We want to write here about the combination of sources and sensations that came that day, and the jumble of knowledges and emotions that filled our minds. Working late the night before, Toby was awoken in the morning by one of the planes right overhead. That happens sometimes. I have long expected a crash when I’ve heard the roar of jet engines so close—but I didn’t this time. Often when that sound hits me, I get up and go for a run down by the water, just near Wall Street. Something kept me back that day. Instead, I headed for my laptop. Because I cannot rely on local media to tell me very much about the role of the US in world affairs, I was reading the British newspaper The Guardian on-line when it flashed a two-line report about the planes. I looked up at the calendar above my desk to see whether it was April 1st. Truly. Then I got off-line and turned on the TV to watch CNN. That second, the phone rang. My quasi-ex-girlfriend I’m still in love with called from the mid-West. She was due to leave that day for the Bay Area. Was I alright? We spoke for a bit. She said my cell phone was out, and indeed it was for the remainder of the day. As I hung up from her, my friend Ana rang, tearful and concerned. Her husband, Patrick, had left an hour before for work in New Jersey, and it seemed like a dangerous separation. All separations were potentially fatal that day. You wanted to know where everyone was, every minute. She told me she had been trying to contact Palestinian friends who worked and attended school near the event—their ethnic, religious, and national backgrounds made for real poignancy, as we both thought of the prejudice they would (probably) face, regardless of the eventual who/what/when/where/how of these events. We agreed to meet at Bruno’s, a bakery on La Guardia Place. For some reason I really took my time, though, before getting to Ana. I shampooed and shaved under the shower. This was a horror, and I needed to look my best, even as men and women were losing and risking their lives. I can only interpret what I did as an attempt to impose normalcy and control on the situation, on my environment. When I finally made it down there, she’d located our friends. They were safe. We stood in the street and watched the Towers. 
Horrified by the sight of human beings tumbling to their deaths, we turned to buy a tea/coffee—again some ludicrous normalization—but were drawn back by chilling screams from the street. Racing outside, we saw the second Tower collapse, and clutched at each other. People were streaming towards us from further downtown. We decided to be with our Palestinian friends in their apartment. When we arrived, we learnt that Mark had been four minutes away from the WTC when the first plane hit. I tried to call my daughter in London and my father in Canberra, but to no avail. I rang the mid-West, and asked my maybe-former novia to call England and Australia to report in on me. Our friend Jenine got through to relatives on the West Bank. Israeli tanks had commenced a bombardment there, right after the planes had struck New York. Family members spoke to her from under the kitchen table, where they were taking refuge from the shelling of their house. Then we gave ourselves over to television, like so many others around the world, even though these events were happening only a mile away. We wanted to hear official word, but there was just a huge absence—Bush was busy learning to read in Florida, then leading from the front in Louisiana and Nebraska. As the day wore on, we split up and regrouped, meeting folks. One guy was in the subway when smoke filled the car. No one could breathe properly, people were screaming, and his only thought was for his dog DeNiro back in Brooklyn. From the panic of the train, he managed to call his mom on a cell to ask her to feed DeNiro that night, because it looked like he wouldn’t get home. A pregnant woman feared for her unborn as she fled the blasts, pushing the stroller with her baby in it as she did so. Away from these heart-rending tales from strangers, there was the fear: good grief, what horrible price would the US Government extract for this, and who would be the overt and covert agents and targets of that suffering? What blood-lust would this generate? What would be the pattern of retaliation and counter-retaliation? What would become of civil rights and cultural inclusiveness? So a jumble of emotions came forward, I assume in all of us. Anger was not there for me, just intense sorrow, shock, and fear, and the desire for intimacy. Network television appeared to offer me that, but in an ultimately unsatisfactory way. For I think I saw the end-result of reality TV that day. I have since decided to call this ‘emotionalization’—network TV’s tendency to substitute analysis of US politics and economics with a stress on feelings. Of course, powerful emotions have been engaged by this horror, and there is value in addressing that fact and letting out the pain. I certainly needed to do so. But on that day and subsequent ones, I looked to the networks, traditional sources of current-affairs knowledge, for just that—informed, multi-perspectival journalism that would allow me to make sense of my feelings, and come to a just and reasoned decision about how the US should respond. I waited in vain. No such commentary came forward. Just a lot of asinine inquiries from reporters that were identical to those they pose to basketballers after a game: Question—‘How do you feel now?’ Answer—‘God was with me today.’ For the networks were insistent on asking everyone in sight how they felt about the end of las torres gemelas. 
In this case, we heard the feelings of survivors, firefighters, viewers, media mavens, Republican and Democrat hacks, and vacuous Beltway state-of-the-nation pundits. But learning of the military-political economy, global inequality, and ideologies and organizations that made for our grief and loss—for that, there was no space. TV had forgotten how to do it. My principal feeling soon became one of frustration. So I headed back to where I began the day—The Guardian web site, where I was given insightful analysis of the messy factors of history, religion, economics, and politics that had created this situation. As I dealt with the tragedy of folks whose lives had been so cruelly lost, I pondered what it would take for this to stop. Or whether this was just the beginning. I knew one thing—the answers wouldn’t come from mainstream US television, no matter how full of feelings it was. And that made Toby anxious. And afraid. He still is. And so the dreams come. In one, I am suddenly furloughed from my job with an orchestra, as audience numbers tumble. I make my evening-wear way to my locker along with the other players, emptying it of bubble gum and instrument. The next night, I see a gigantic, fifty-foot-high wave heading for the city beach where I’ve come to swim. Somehow I am sheltered behind a huge wall, as all the people around me die. Dripping, I turn to find myself in a media-stereotype “crack house” of the early ’90s—desperate-looking black men, endless doorways, sudden police arrival, and my earnest search for a passport that will explain away my presence. I awake in horror, to the realization that the passport was already open and stamped—racialization at work for Toby, every day and in every way, as a white man in New York City. Ana’s husband, Patrick, was at work ten miles from Manhattan when “it” happened. In the hallway, I overheard some talk about two planes crashing, but went to teach anyway in my usual morning stupor. This was just the usual chatter of disaster junkies. I didn’t hear the words “World Trade Center” until ten thirty, at the end of the class at the college where I teach in New Jersey, across the Hudson river. A friend and colleague walked in and told me the news of the attack, to which I replied “You must be fucking joking.” He was a little offended. Students were milling haphazardly on the campus in the late summer weather, some looking panicked like me. My first thought was of some general failure of the air-traffic control system. There must be planes falling out of the sky all over the country. Then the height of the towers: how far towards our apartment in Greenwich Village would the towers fall? Neither of us worked in the financial district a mile downtown, but was Ana safe? Where on the college campus could I see what was happening? I recognized the same physical sensation I had felt the morning after Hurricane Andrew in Miami, seeing at a distance the wreckage of our shattered apartment across a suburban golf course strewn with debris and flattened power lines. Now I was trapped in the suburbs again at an unbridgeable distance from my wife and friends who were witnessing the attacks first hand. Were they safe? What on earth was going on? This feeling of being cut off, my path to the familiar places of home blocked, remained for weeks my dominant experience of the disaster. In my office, phone calls to the city didn’t work. 
There were six voice-mail messages from my teenaged brother Alex in small-town England giving a running commentary on the attack and its aftermath that he was witnessing live on television while I dutifully taught my writing class. “Hello, Patrick, where are you? Oh my god, another plane just hit the towers. Where are you?” The web was choked: no access to newspapers online. Email worked, but no one was wasting time writing. My office window looked out over a soccer field to the still woodlands of western New Jersey: behind me to the east the disaster must be unfolding. Finally I found a website with a live stream from ABC television, which I watched flickering and stilted on the tiny screen. It had all already happened: both towers already collapsed, the Pentagon attacked, another plane shot down over Pennsylvania, unconfirmed reports said, there were other hijacked aircraft still out there unaccounted for. Manhattan was sealed off. George Washington Bridge, Lincoln and Holland tunnels, all the bridges and tunnels from New Jersey I used to mock shut down. Police actions sealed off the highways into “the city.” The city I liked to think of as the capital of the world was cut off completely from the outside, suddenly vulnerable and under siege. There was no way to get home. The phone rang abruptly and Alex, three thousand miles away, told me he had spoken to Ana earlier and she was safe. After a dozen tries, I managed to get through and spoke to her, learning that she and Toby had seen people jumping and then the second tower fall. Other friends had been even closer. Everyone was safe, we thought. I sat for another couple of hours in my office uselessly. The news was incoherent, stories contradictory, loops of the planes hitting the towers only just ready for recycling. The attacks were already being transformed into “the World Trade Center Disaster,” not yet the ahistorical singularity of the emergency “nine one one.” Stranded, I had to spend the night in New Jersey at my boss’s house, reminded again of the boundless generosity of Americans to relative strangers. In an effort to protect his young son from the as yet unfiltered images saturating cable and Internet, my friend kept the TV set turned off and we did our best to reassure. We listened surreptitiously to news bulletins on AM radio, hoping that the roads would open. Walking the dog with my friend’s wife and son, we crossed a park on the ridge on which Upper Montclair sits. Ten miles away a huge column of smoke was rising from lower Manhattan, where the stunning absence of the towers was clearly visible. The summer evening was unnervingly still. We kicked a soccer ball around on the front lawn and a woman walked by, distracted, shocked and pale, up the tree-lined suburban street, suffering her own wordless trauma. I remembered that though most of my students were ordinary working people, Montclair is a well-off dormitory for the financial sector and high rises of Wall Street and Midtown. For the time being, this was a white-collar disaster. I slept a short night in my friend’s house, waking to hope I had dreamed it all, and took the commuter train in with shell-shocked bankers and corporate types. All men, all looking nervously across the river toward glimpses of the Manhattan skyline as the train neared Hoboken. “I can’t believe they’re making us go in,” one guy had repeated on the station platform. He had watched the attacks from his office in Midtown, “The whole thing.” Inside the train we all sat in silence. 
Up from the PATH train station on 9th Street I came onto a carless 6th Avenue. At 14th Street, barricades now sealed off downtown from the rest of the world. I walked down the middle of the avenue to a newspaper stand; the Indian proprietor shrugged “No deliveries below 14th.” I had not realized that the closer to the disaster you came, the less information would be available. Except, I assumed, for the evidence of my senses. But at 8 am the Village was eerily still, few people about, nothing in the sky, including the twin towers. I walked to Houston Street, which was full of trucks and police vehicles. Tractor trailers sat carrying concrete barriers. Below Houston, each street into Soho was barricaded and manned by huddles of cops. I had walked effortlessly up into the “lockdown,” but this was the “frozen zone.” There was no going further south towards the towers. I walked the few blocks home, found my wife sleeping, and climbed into bed, still in my clothes from the day before. “Your heart is racing,” she said. I realized that I hadn’t known if I would get back, and now I never wanted to leave again; it was still only eight thirty am. Lying there, I felt the terrible wonder of a distant bystander for the first-hand witness. Ana’s face couldn’t tell me what she had seen. I felt I needed to know more, to see and understand. Even though I knew the effort was useless: I could never bridge that gap that had trapped me ten miles away, my back turned to the unfolding disaster. The television was useless: we don’t have cable, and the mast on top of the North Tower, which Ana had watched fall, had relayed all the network channels. I knew I had to go down and see the wreckage. Later I would realize how lucky I had been not to suffer from “disaster envy.” Unbelievably, in retrospect, I commuted into work the second day after the attack, dogged by the same unnerving sensation that I would not get back—to the wounded, humbled former center of the world. My students were uneasy, all talked out. I was a novelty, a New Yorker living in the Village a mile from the towers, but I was forty-eight hours late. Out of place in both places. I felt torn up, but not angry. Back in the city at night, people were eating and drinking with a vengeance, the air filled with acrid sickly-sweet smoke from the burning wreckage. Eyes stung and noses ran with a bitter acrid taste. Who knows what we’re breathing in, we joked nervously. A friend’s wife had fallen out with him for refusing to wear a protective mask in the house. He shrugged a wordlessly reassuring smile. What could any of us do? I walked with Ana down to the top of West Broadway from where the towers had commanded the skyline over SoHo; downtown, dense smoke blocked the view to the disaster. A crowd of onlookers pushed up against the barricades all day, some weeping, others gawping. A tall guy was filming the grieving faces with a video camera, which was somehow the worst thing of all, the first sign of the disaster tourism that was already mushrooming downtown. Across the street an Asian artist sat painting the street scene in streaky black and white; he had scrubbed out two white columns where the towers would have been. “That’s the first thing I’ve seen that’s made me feel any better,” Ana said. We thanked him, but he shrugged blankly, still in shock I supposed. On the Friday, the clampdown. I watched the Mayor and Police Chief hold a press conference in which they angrily told the stream of volunteers to “ground zero” that they weren’t needed. 
“We can handle this ourselves. We thank you. But we don’t need your help,” Commissioner Kerik said. After the free-for-all of the first couple of days, with its amazing spontaneities and common gestures of goodwill, the clampdown was going into effect. I decided to go down to Canal Street and see if it was true that no one was welcome anymore. So many paths through the city were blocked now. “Lock down, frozen zone, war zone, the site, combat zone, ground zero, state troopers, secured perimeter, national guard, humvees, family center”: a disturbing new vocabulary that seemed to stamp the logic of Giuliani’s sanitized and over-policed Manhattan onto the wounded hulk of the city. The Mayor had been magnificent in the heat of the crisis; Churchillian, many were saying—and indeed, Giuliani quickly appeared on the cover of Cigar Aficionado, complete with wing collar and the misquotation from Kipling, “Captain Courageous.” Churchill had not believed in peacetime politics either, and he never got over losing his empire. Now the regime of command and control over New York’s citizens and its economy was being stabilized and reimposed. The sealed-off, disfigured, and newly militarized spaces of the New York through which I have always loved to wander at all hours seemed to have been put beyond reach for the duration. And, in the new post-“9/11” post-history, the duration could last forever. The violence of the attacks seemed to have elicited a heavy-handed official reaction that sought to contain and constrict the best qualities of New York. I felt more anger at the clampdown than I did at the demolition of the towers. I knew this was unreasonable, but I feared the reaction, the spread of the racial harassment and racial profiling that I had already heard of from my students in New Jersey. This militarizing of the urban landscape seemed to negate the sprawling, freewheeling, boundless largesse and tolerance on which New York had complacently claimed a monopoly. For many the towers stood for that as well, not just as the monumental outposts of global finance that had been attacked. Could the American flag mean something different? For a few days, perhaps—on the helmets of firemen and construction workers. But not for long. On the Saturday, I found an unmanned barricade way east along Canal Street and rode my bike past throngs of Chinatown residents, by the Federal jail block where prisoners from the first World Trade Center bombing were still being held. I headed south and west towards Tribeca; below the barricades in the frozen zone, you could roam freely, the cops and soldiers assuming you belonged there. I felt uneasy, doubting my own motives for being there, feeling the blood drain from my head in the same numbing shock I’d felt every time I headed downtown towards the site. I looped towards Greenwich Avenue, passing an abandoned bank full of emergency supplies and boxes of protective masks. Crushed cars still smeared with pulverized concrete and encrusted with paperwork strewn by the blast sat on the street near the disabled telephone exchange. On one side of the avenue stood a horde of onlookers, on the other, television crews, all looking two blocks south towards a colossal pile of twisted and smoking steel, seven stories high. We were told to stay off the street by long-suffering national guardsmen and women with southern accents, kids. Nothing happening, just the aftermath. 
The TV crews were interviewing worn-out, dust-covered volunteers and firemen who sat quietly leaning against the railings of a park filled with scraps of paper. Out on the West Side Highway, a high-tech truck was offering free cellular phone calls. The six lanes by the river were full of construction machinery and military vehicles. Ambulances rolled slowly uptown, bodies inside? I locked my bike redundantly to a lamppost and crossed under the hostile gaze of plainclothes police to another media encampment. On the path by the river, two camera crews were complaining bitterly in the heat. “After five days of this I’ve had enough.” They weren’t talking about the trauma, bodies, or the wreckage, but censorship. “Any blue light special gets to roll right down there, but they see your press pass and it’s get outta here. I’ve had enough.” I fronted out the surly cops and ducked under the tape onto the path, walking onto a pier on which we’d spent many lazy afternoons watching the river at sunset. Dust everywhere, police boats docked and waiting, a crane ominously dredging mud into a barge. I walked back past the camera operators onto the highway and walked up to an interview in progress. Perfectly composed, a fire chief and his crew from some small town in upstate New York were politely declining to give details about what they’d seen at “ground zero.” The men’s faces were dust-streaked, their eyes slightly dazed with the shock of a horror previously unimaginable to most Americans. They were here to help the best they could, now they’d done as much as anyone could. “It’s time for us to go home.” The chief was eloquent, almost rehearsed in his precision. It was like a Magnum press photo. But he was refusing to cooperate with the media’s obsessive emotionalism. I walked down the highway, joining construction workers, volunteers, police, and firemen in their hundreds at Chambers Street. No one paid me any attention; it was absurd. I joined several other watchers on the stairs by Stuyvesant High School, which was now the headquarters for the recovery crews. Just two or three blocks away, the huge jagged teeth of the towers’ beautiful tracery lurched out onto the highway above huge mounds of debris. The TV images of the shattered scene made sense as I placed them into what was left of a familiar Sunday afternoon geography of bike rides and walks by the river, picnics in the park lying on the grass and gazing up at the infinite solidity of the towers. Demolished. It was breathtaking. If “they” could do that, they could do anything. Across the street, at tables, military policemen were checking credentials of the milling volunteers and issuing the pink and orange tags that gave access to ground zero. Without warning, there was a sudden stampede running full pelt up from the disaster site, men and women in fatigues, burly construction workers, firemen in bunker gear. I ran a few yards then stopped. Other people milled around idly, ignoring the panic, smoking and talking in low voices. It was a mainly white, blue-collar scene. All these men wearing flags and carrying crowbars and flashlights. In their company, the intolerance and rage I associated with flags and construction sites was nowhere to be seen. They were dealing with a torn and twisted otherness that dwarfed machismo or bigotry. I talked to a moustachioed, pony-tailed construction worker who’d hitched a ride from the mid-west to “come and help out.” He was staying at the Y, he said, it was kind of rough. 
“Have you been down there?” he asked, pointing towards the wreckage. “You’re British, you weren’t in World War Two, were you?” I replied in the negative. “It’s worse ’n that. I went down last night and you can’t imagine it. You don’t want to see it if you don’t have to.” Did I know any welcoming ladies? he asked. The Y was kind of tough. When I saw TV images of President Bush speaking to the recovery crews and steelworkers at “ground zero” a couple of days later, shouting through a bullhorn to chants of “USA, USA,” I knew nothing had changed. New York’s suffering was subject to a second hijacking by the brokers of national unity. New York had never been America, and now its terrible human loss and its great humanity were redesignated in the name of the nation, of the coming war. The signs without a referent were being forcibly appropriated, locked into an impoverished patriotic framework, interpreted for “us” by a compliant media and an opportunistic regime eager to rein in civil liberties, to unloose its war machine and tighten its grip on the Muslim world. That day, drawn to the river again, I had watched F-18 fighter jets flying patterns over Manhattan as Bush’s helicopters came in across the river. Otherwise empty of air traffic, “our” skies were being torn up by the military jets: it was somehow the worst sight yet, worse than the wreckage or the bands of disaster tourists on Canal Street, a sign of further violence yet to come. There was a carrier out there beyond New York harbor, there to protect us: the bruising, blustering city once open to all comers. That felt worst of all. In the intervening weeks, we have seen other, more unstable ways of interpreting the signs of September 11 and its aftermath. Many have circulated on the Internet, past the blockages and blockades placed on urban spaces and intellectual life. Karlheinz Stockhausen’s work was banished (at least temporarily) from the canon of avant-garde electronic music when he described the attack on las torres gemelas as akin to a work of art. If Jacques Derrida had described it as an act of deconstruction (turning technological modernity literally in on itself), or Jean Baudrillard had announced that the event was so thick with mediation it had not truly taken place, something similar would have happened to them (and still may). This is because, as Don DeLillo so eloquently put it in implicit reaction to the plaintive cry “Why do they hate us?”: “it is the power of American culture to penetrate every wall, home, life and mind”—whether via military action or cultural iconography. All these positions are correct, however grisly and annoying they may be. What G.K. Chesterton called the “flints and tiles” of nineteenth-century European urban existence were rent asunder like so many victims of high-altitude US bombing raids. As a First-World disaster, it became knowable as the first-ever US “ground zero” precisely through the high premium immediately set on the lives of Manhattan residents and the rarefied discussion of how to commemorate the high-altitude towers. When, a few weeks later, an American Airlines plane crashed on take-off from Queens, that borough was left open to all comers. Manhattan was locked down, flown over by “friendly” bombers. In stark contrast to the open if desperate faces on the street of 11 September, people went about their business with heads bowed even lower than is customary. 
Contradictory deconstructions and valuations of Manhattan lives mean that September 11 will live in infamy and hyper-knowability. The vengeful United States government and population continue on their way. Local residents must ponder insurance claims, real-estate values, children’s terrors, and their own roles in something beyond their ken. New York had been forced beyond being the center of the financial world. It had become a military target, a place that was receiving as well as dispatching the slings and arrows of global fortune.
Style APA, Harvard, Vancouver, ISO itp.
46

Nielsen, Hanne E. F., Chloe Lucas i Elizabeth Leane. "Rethinking Tasmania’s Regionality from an Antarctic Perspective: Flipping the Map". M/C Journal 22, nr 3 (19.06.2019). http://dx.doi.org/10.5204/mcj.1528.

Pełny tekst źródła
Streszczenie:
Introduction Tasmania hangs from the map of Australia like a drop in freefall from the substance of the mainland. Often the whole state is mislaid from Australian maps and logos (Reddit). Tasmania has, at least since federation, been considered peripheral—a region seen as isolated, a ‘problem’ economically, politically, and culturally. However, Tasmania not only cleaves to the ‘north island’ of Australia but is also subject to the gravitational pull of an even greater land mass—Antarctica. In this article, we upturn the political conventions of map-making that place both Antarctica and Tasmania in obscure positions at the base of the globe. We show how a changing global climate re-frames Antarctica and the Southern Ocean as key drivers of worldwide environmental shifts. The liquid and solid water between Tasmania and Antarctica is revealed not as a homogenous barrier, but as a dynamic and relational medium linking the Tasmanian archipelago with Antarctica. When Antarctica becomes the focus, the script is flipped: Tasmania is no longer on the edge, but core to a network of gateways into the southern land. The state’s capital of Hobart can from this perspective be understood as an “Antarctic city”, central to the geopolitics, economy, and culture of the frozen continent (Salazar et al.). Viewed from the south, we argue, Tasmania is not a problem, but an opportunity for a form of ecological, cultural, economic, and political sustainability that opens up the southern continent to science, discovery, and imagination. A Centre at the End of the Earth? Tasmania as Paradox The islands of Tasmania owe their existence to climate change: a period of warming at the end of the last ice age melted the vast sheets of ice covering the polar regions, causing sea levels to rise by more than one hundred metres (Tasmanian Climate Change Office 8). Eleven thousand years ago, Aboriginal people would have witnessed the rise of what is now called Bass Strait, turning what had been a peninsula into an archipelago, with the large island of Tasmania at its heart. The heterogeneous practices and narratives of Tasmanian regional identity have been shaped by the geography of these islands, and their connection to the Southern Ocean and Antarctica. Regions, understood as “centres of collective consciousness and sociospatial identities” (Paasi 241), are constantly reproduced and reimagined through place-based social practices and communications over time. As we will show, diverse and contradictory narratives of Tasmanian regionality often co-exist, interacting in complex and sometimes complementary ways. Ecocritical literary scholar C.A. Cranston considers duality to be embedded in the textual construction of Tasmania, writing “it was hell, it was heaven, it was penal, it was paradise” (29). Tasmania is multiply polarised: it is both isolated and connected; close and far away; rich in resources and poor in capital; the socially conservative birthplace of radical green politics (Hay 60). The weather, as if sensing the fine balance of these paradoxes, blows hot and cold at a moment’s notice. Tasmania has wielded extraordinary political influence at times in its history—notably during the settlement of Melbourne in 1835 (Boyce), and during protests against damming the Franklin River in the early 1980s (Mercer). However, twentieth-century historical and political narratives of Tasmania portray the Bass Strait as a barrier, isolating Tasmanians from the mainland (Harwood 61). 
Sir Bede Callaghan, who headed one of a long line of federal government inquiries into “the Tasmanian problem” (Harwood 106), was clear that Tasmania was a victim of its own geography: the major disability facing the people of Tasmania (although some residents may consider it an advantage) is that Tasmania is an island. Separation from the mainland adversely affects the economy of the State and the general welfare of the people in many ways. (Callaghan 3). This perspective may stem from the fact that Tasmania has maintained the lowest Gross Domestic Product per capita of all states since federation (Bureau of Infrastructure Transport and Regional Economics 9). Socially, economically, and culturally, Tasmania consistently ranks among the worst regions of Australia. Statistical comparisons with other parts of Australia reveal the population’s high unemployment, low wages, poor educational outcomes, and bad health (West 31). The state’s remoteness and isolation from the mainland states and its reliance on federal income have contributed to the whole of Tasmania, including Hobart, being classified as ‘regional’ by the Australian government, in an attempt to promote immigration and economic growth (Department of Infrastructure and Regional Development 1). Tasmania is indeed both regional and remote. However, in this article we argue that, while regionality may be cast as a disadvantage, the island’s remote location is also an asset, particularly when viewed from a far southern perspective (Image 1). Image 1: Antarctica (Orthographic Projection). Image Credit: Wikimedia Commons, Modified Shading of Tasmania and Addition of Captions by H. Nielsen. Connecting Oceans/Collapsing Distance Tasmania and Antarctica have been closely linked in the past—the future archipelago formed a land bridge between Antarctica and northern land masses until the opening of the Tasman Seaway some 32 million years ago (Barker et al.). The far south was tangible to the Indigenous people of the island in the weather blowing in from the Southern Ocean, while the southern lights, or “nuyina”, formed a visible connection (Australia’s new icebreaker vessel is named RSV Nuyina in recognition of these links). In the contemporary Australian imagination, Tasmania tends to be defined by its marine boundaries, the sea around the islands represented as flat, empty space against which to highlight the topography of its landscape and the isolation of its position (Davies et al.). A more relational geographic perspective illuminates the “power of cross-currents and connections” (Stratford et al. 273) across these seascapes. The sea country of Tasmania is multiple and heterogeneous: the rough, shallow waters of the island-scattered Bass Strait flow into the Tasman Sea, where the continental shelf descends toward an abyssal plain studded with volcanic seamounts. To the south, the Southern Ocean provides nutrient-rich upwellings that attract fish and cetacean populations. Tasmania’s coast is a dynamic, liminal space, moving and changing in response to the global currents that are driven by the shifting, calving and melting ice shelves and sheets in Antarctica. Oceans have long been a medium of connection between Tasmania and Antarctica. In the early colonial period, when the seas were the major thoroughfares of the world and inland travel was treacherous and slow, Tasmania’s connection with the Southern Ocean made it a valuable hub for exploration and exploitation of the south.
Between 1642 and 1900, early European explorers were followed by British penal colonists, convicts, sealers, and whalers (Kriwoken and Williamson 93). Tasmania was well known to polar explorers, with expeditions led by Jules Dumont d’Urville, James Clark Ross, Roald Amundsen, and Douglas Mawson all transiting through the port of Hobart. Now that the city is no longer a whaling hub, growing populations of cetaceans continue to migrate past the islands on their annual journeys from the tropics, across the Sub-Antarctic Front and Antarctic Circumpolar Current, and into the south polar region, while southern species such as leopard seals are occasionally seen around Tasmania (Tasmania Parks and Wildlife). Although the water surrounding Tasmania and Antarctica is at times homogenised as a ‘barrier’, rendering these places isolated, the bodies of water that surround both are in fact permeable, and regularly crossed by both humans and marine species. The waters are diverse in their physical characteristics, underlying topography, sea life, and relationships, and serve to connect many different ocean regions, ecosystems, and weather patterns. Views from the Far South When considered in terms of its relative proximity to Antarctica, rather than its distance from Australia’s political and economic centres, Tasmania’s identity undergoes a significant shift. A sign at Cockle Creek, in the state’s far south, reminds visitors that they are closer to Antarctica than to Cairns, invoking a discourse of connectedness that collapses the standard ten-day ship voyage to Australia’s closest Antarctic station into a unit comparable with the routinely scheduled 5.5-hour flight to North Queensland. Hobart is the logistical hub for the Australian Antarctic Division and the French Institut Polaire Francais (IPEV), and has hosted Antarctic vessels belonging to the USA, South Korea, and Japan in recent years. From a far southern perspective, Hobart is not a regional Australian capital but a global polar hub. This alters the city’s geographic imaginary not only in a latitudinal sense—from “top down” to “bottom up”—but also in a longitudinal one. Via its southward connection to Antarctica, Hobart is also connected east and west to four other recognised gateways: Cape Town in South Africa; Christchurch in New Zealand; Punta Arenas in Chile; and Ushuaia in Argentina (Image 2). The latter cities are considered small by international standards, but play an outsized role in relation to Antarctica. Image 2: H. Nielsen with a Sign Announcing Distances between Antarctic ‘Gateway’ Cities and Antarctica, Ushuaia, Argentina, 2018. Image Credit: Nicki D'Souza. These five cities form what might be called—to adapt geographer Klaus Dodds’ term—a ‘Southern Rim’ around the South Polar region (Dodds Geopolitics). They exist in an ambiguous relationship to each other. Although the five cities signed a Statement of Intent in 2009 committing them to collaboration, they continue to compete vigorously for northern hemisphere traffic and the brand identity of the most prominent global gateway. A state government brochure spruiks Hobart, for example, as the “perfect Antarctic Gateway”, emphasising its uniqueness and “natural advantages” in this regard (Tasmanian Government, 2016). In practice, the cities are automatically differentiated by their geographic position with respect to Antarctica. Although the ‘ice continent’ is often conceived as one entity, it too has regions, in both scientific and geographical senses (Terauds and Lee; Antonello).
Hobart provides access to parts of East Antarctica, where the Australian, French, Japanese, and Chinese programs (among others) have bases; Cape Town is a useful access point for Europeans going to Dronning Maud Land; Christchurch is closest to the Ross Sea region, site of the largest US base; and Punta Arenas and Ushuaia neighbour the Antarctic Peninsula, home to numerous bases as well as a thriving tourist industry. The Antarctic sector is important to the Tasmanian economy, contributing $186 million (AUD) in 2017/18 (Wells; Gutwein; Tasmanian Polar Network). Unsurprisingly, Tasmania’s gateway brand has been actively promoted, with the 2016 Australian Antarctic Strategy and 20 Year Action Plan foregrounding the need to “Build Tasmania’s status as the premier East Antarctic Gateway for science and operations” and the state government releasing a “Tasmanian Antarctic Gateway Strategy” in 2017. The Chinese Antarctic program has been a particular focus: a Memorandum of Understanding focussed on Australia and China’s Antarctic relations includes a “commitment to utilise Australia, including Tasmania, as an Antarctic ‘gateway’” (Australian Antarctic Division). These efforts towards a closer relationship with China have more recently come under attack as part of a questioning of China’s interests in the region (without, it should be noted, a concomitant questioning of Australia’s own considerable interests) (Baker 9). In these exchanges, a global power and a state of Australia generally classed as regional and peripheral are brought into direct contact via the even more remote Antarctic region. This connection was particularly visible when Chinese President Xi Jinping travelled to Hobart in 2014, in a visit described as both “strategic” and “incongruous” (Burden). There can be differences in how this relationship is narrated to domestic and international audiences, with issues of sovereignty and international cooperation variously foregrounded, laying the ground for what Dodds terms “awkward Antarctic nationalism” (1). Territory and Connections The awkwardness comes to a head in Tasmania, where domestic and international views of connections with the far south collide. Australia claims sovereignty over almost 6 million km² of the Antarctic continent—a claim that in area is “roughly the size of mainland Australia minus Queensland” (Bergin). This geopolitical context elevates the importance of a regional part of Australia: the claims to Antarctic territory (which are recognised only by four other claimant nations) are performed not only in Antarctic localities, where they are made visible “with paraphernalia such as maps, flags, and plaques” (Salazar 55), but also in Tasmania, particularly in Hobart and surrounds. A replica of Mawson’s Huts in central Hobart makes Australia’s historic territorial interests in Antarctica visible in an urban setting, foregrounding the figure of Douglas Mawson, the well-known Australian scientist and explorer who led the expeditions that proclaimed Australia’s sovereignty in the region of the continent roughly to its south (Leane et al.). Tasmania is caught in a balancing act, as it fosters international Antarctic connections (such as hosting vessels from other national programs), while also playing a key role in administering what is domestically referred to as the Australian Antarctic Territory.
The rhetoric of protection can offer common ground: island studies scholar Godfrey Baldacchino notes that as island narratives have moved “away from the perspective of the ‘explorer-discoverer-colonist’” they have been replaced by “the perspective of the ‘custodian-steward-environmentalist’” (49), but reminds readers that a colonising disposition still lurks beneath the surface. It must be remembered that terms such as “stewardship” and “leadership” can undertake sovereignty labour (Dodds “Awkward”), and that Tasmania’s Antarctic connections can be mobilised for a range of purposes. When Environment Minister Greg Hunt proclaimed at a press conference that “Hobart is the gateway to the Antarctic for the future” (26 Apr. 2016), the remark had meaning within discourses of both sovereignty and economics. Tasmania’s capital was leveraged as a way to position Australia as a leader in the Antarctic arena. From ‘Gateway’ to ‘Antarctic City’ While discussion of Antarctic ‘Gateway’ Cities often focuses on the economic and logistical benefit of their Antarctic connections, Hobart’s “gateway” identity, like those of its counterparts, stretches well beyond this, encompassing geological, climatic, historical, political, cultural and scientific links. Even the southerly wind, according to cartoonist Jon Kudelka, “has penguins in it” (Image 3). Hobart residents feel a high level of connection to Antarctica. In 2018, a survey of 300 randomly selected residents of Greater Hobart was conducted under the umbrella of the “Antarctic Cities” Australian Research Council Linkage Project led by Assoc. Prof. Juan Francisco Salazar (and involving all three present authors). Fourteen percent of respondents reported having been involved in an economic activity related to Antarctica, and 36% had attended a cultural event about Antarctica. Connections between the southern continent and Hobart were recognised as important: 71.9% agreed that “people in my city can influence the cultural meanings that shape our relationship to Antarctica”, while 90% agreed or strongly agreed that Hobart should play a significant role as a custodian of Antarctica’s future, and 88.4% agreed or strongly agreed that “How we treat Antarctica is a test of our approach to ecological sustainability.” Image 3: “The Southerly” Demonstrates How Weather Connects Hobart and Antarctica. Image Credit: Jon Kudelka, Reproduced with Permission. Hobart, like the other gateways, activates these connections in its conscious place-branding. The city is particularly strong as a centre of Antarctic research: signs at the cruise-ship terminal on the waterfront claim that “There are more Antarctic scientists based in Hobart […] than at any other one place on earth, making Hobart a globally significant contributor to our understanding of Antarctica and the Southern Ocean.” Researchers are based at the Institute for Marine and Antarctic Studies (IMAS), the Commonwealth Scientific and Industrial Research Organisation (CSIRO), and the Australian Antarctic Division (AAD), with several working between institutions.
Many Antarctic researchers located elsewhere in the world also have a connection with the place through affiliations and collaborations, leading journalist Jo Chandler to assert that “the breadth and depth of Hobart’s knowledge of ice, water, and the life forms they nurture […] is arguably unrivalled anywhere in the world” (86). Hobart also plays a significant role in Antarctica’s governance, as the site of the secretariats for the Commission for the Conservation of Antarctic Marine Living Resources (CCAMLR) and the Agreement on the Conservation of Albatrosses and Petrels (ACAP), and as host of the Antarctic Treaty Consultative Meetings on more than one occasion (1986, 2012). The cultural domain is active, with the Tasmanian Museum and Art Gallery (TMAG) featuring a permanent exhibit, “Islands to Ice”, emphasising the ocean as connecting the two places; the Mawson’s Huts Replica Museum aiming (among other things) to “highlight Hobart as the gateway to the Antarctic continent for the Asia Pacific region”; and a biennial Australian Antarctic Festival drawing over twenty thousand visitors, about a sixth of them from interstate or overseas (Hingley). Antarctic links are evident in the city’s natural and built environment: the dolerite columns of Mt Wellington, the statue of the Tasmanian Antarctic explorer Louis Bernacchi on the waterfront, and the wharfs that regularly accommodate icebreakers such as the Aurora Australis and the Astrolabe. Antarctica is figured as a southern neighbour; as historian Tom Griffiths puts it, Tasmanians “grow up with Antarctica breathing down their necks” (5). As an Antarctic City, Hobart mediates access to Antarctica both physically and in the cultural imaginary. Perhaps in recognition of the diverse ways in which a region or a city might be connected to Antarctica, researchers have recently been suggesting critical approaches to the ‘gateway’ label. C. Michael Hall points to a fuzziness in the way the term is applied, noting that it has drifted from its initial definition (drawn from economic geography) as denoting an access and supply point to a hinterland that produces a certain level of economic benefits. While Hall looks to keep the term robustly defined to avoid empty “local boosterism” (272–73), Gabriela Roldan aims to move the concept “beyond its function as an entry and exit door”, arguing that, among other things, the local community should be actively engaged in the Antarctic region (57). Leane, examining the representation of Hobart as a gateway in historical travel texts, concurs that “ingress and egress” are insufficient descriptors of Tasmania’s relationship with Antarctica, suggesting that at least discursively the island is positioned as “part of an Antarctic rim, itself sharing qualities of the polar region” (45). The ARC Linkage Project described above, supported by the Hobart City Council, the State Government and the University of Tasmania, as well as other national and international partners, aims to foster the idea of Hobart and its counterparts as ‘Antarctic cities’ whose citizens act as custodians for the South Polar region, with a genuine concern for and investment in its future. Near and Far: Local Perspectives A changing climate may once again herald a shift in the identity of the Tasmanian islands. Recognition of the central role of Antarctica in regulating the global climate has generated scientific and political re-evaluation of the region.
Antarctica is not only the planet’s largest heat sink but also the engine of global water currents and wind patterns that drive weather patterns and biodiversity across the world (Convey et al. 543). For example, Tas van Ommen’s research into Antarctic glaciology shows the tangible connection between increased snowfall in coastal East Antarctica and patterns of drought in southwest Western Australia (van Ommen and Morgan). Hobart has become a global centre of marine and Antarctic science, bringing investment and development to the city. As the global climate heats up, Tasmania—thanks to its southerly latitude and weather patterns—is one of the few regions in Australia likely to remain temperate. This is already leading to migration from the mainland that is impacting house prices and rental availability (Johnston; Landers 1). The region’s future is therefore closely entangled with its proximity to the far south. Salazar writes that “we cannot continue to think of Antarctica as the end of the Earth” (67). Shifting Antarctica into focus also brings Tasmania in from the margins. As an Antarctic city, Hobart assumes a privileged position on the global stage. This allows the city to present itself as central to international research efforts—in contrast to domestic views of the place as a small regional capital. The city inhabits dual identities; it is both on the periphery of Australian concerns and at the centre of Antarctic activity. Tasmania, then, is not in freefall, but rather at the forefront of a push to recognise Antarctica as entangled with its neighbours to the north. Acknowledgements This work was supported by the Australian Research Council under LP160100210. References Antonello, Alessandro. “Finding Place in Antarctica.” Antarctica and the Humanities. Eds. Peder Roberts, Lize-Marie van der Watt, and Adrian Howkins. London: Palgrave Macmillan, 2016. 181–204. Australian Government. Australian Antarctic Strategy and 20 Year Action Plan. Canberra: Commonwealth of Australia, 2016. 15 Apr. 2019. <http://www.antarctica.gov.au/__data/assets/pdf_file/0008/180827/20YearStrategy_final.pdf>. Australian Antarctic Division. “Australia-China Collaboration Strengthens.” Australian Antarctic Magazine 27 Dec. 2014. 15 Apr. 2019. <http://www.antarctica.gov.au/magazine/2011-2015/issue-27-december-2014/in-brief/australia-china-collaboration-strengthens>. Baker, Emily. “Worry at Premier’s Defence of China.” The Mercury 15 Sep. 2018: 9. Baldacchino, G. “Studying Islands: On Whose Terms?” Island Studies Journal 3.1 (2008): 37–56. Barker, Peter F., Gabriel M. Filippelli, Fabio Florindo, Ellen E. Martin, and Howard D. Schere. “Onset and Role of the Antarctic Circumpolar Current.” Deep Sea Research Part II: Topical Studies in Oceanography 54.21–22 (2007): 2388–98. Bergin, Anthony. “Australia Needs to Strengthen Its Strategic Interests in Antarctica.” Australian Strategic Policy Institute. 29 Apr. 2016. 21 Feb. 2019 <https://www.aspi.org.au/index.php/opinion/australia-needs-strengthen-its-strategic-interests-antarctica>. Boyce, James. 1835: The Founding of Melbourne and the Conquest of Australia. Melbourne: Black Inc., 2011. Burden, Hilary. “Xi Jinping's Tasmania Visit May Seem Trivial, But Is Full of Strategy.” The Guardian 18 Nov. 2014. 19 May 2019 <https://www.theguardian.com/world/2014/nov/18/xi-jinpings-tasmania-visit-lacking-congruity-full-of-strategy>. Bureau of Infrastructure Transport and Regional Economics (BITRE). A Regional Economy: A Case Study of Tasmania. Canberra: Commonwealth of Australia, 2008.
14 May 2019 <http://www.bitre.gov.au/publications/86/Files/report116.pdf>. Chandler, Jo. “The Science Laboratory: From Little Things, Big Things Grow.” Griffith Review: Tasmania: The Tipping Point? 29 (2013): 83–101. Christchurch City Council. Statement of Intent between the Southern Rim Gateway Cities to the Antarctic: Ushuaia, Punta Arenas, Christchurch, Hobart and Cape Town. 25 Sep. 2009. 11 Apr. 2019 <http://archived.ccc.govt.nz/Council/proceedings/2009/September/CnclCover24th/Clause8Attachment.pdf>. Convey, P., R. Bindschadler, G. di Prisco, E. Fahrbach, J. Gutt, D.A. Hodgson, P.A. Mayewski, C.P. Summerhayes, J. Turner, and ACCE Consortium. “Antarctic Climate Change and the Environment.” Antarctic Science 21.6 (2009): 541–63. Cranston, C. “Rambling in Overdrive: Travelling through Tasmanian Literature.” Tasmanian Historical Studies 8.2 (2003): 28–39. Davies, Lynn, Margaret Davies, and Warren Boyles. Mapping Van Diemen’s Land and the Great Beyond: Rare and Beautiful Maps from the Royal Society of Tasmania. Hobart: The Royal Society of Tasmania, 2018. Department of Infrastructure and Regional Development. Guidelines for Analysing Regional Australia Impacts and Developing a Regional Australia Impact Statement. Canberra: Commonwealth of Australia, 2017. 11 Apr. 2019 <https://regional.gov.au/regional/information/rais/>. Dodds, Klaus. “Awkward Antarctic Nationalism: Bodies, Ice Cores and Gateways in and beyond Australian Antarctic Territory/East Antarctica.” Polar Record 53.1 (2016): 16–30. ———. Geopolitics in Antarctica: Views from the Southern Oceanic Rim. Chichester: John Wiley, 1997. Griffiths, Tom. “The Breath of Antarctica.” Tasmanian Historical Studies 11 (2006): 4–14. Gutwein, Peter. “Antarctic Gateway Worth $186 Million to Tasmanian Economy.” Hobart: Tasmanian Government, 20 Feb. 2019. 21 Feb. 2019 <http://www.premier.tas.gov.au/releases/antarctic_gateway_worth_$186_million_to_tasmanian_economy>. Hall, C. Michael. “Polar Gateways: Approaches, Issues and Review.” The Polar Journal 5.2 (2015): 257–77. Harwood, Andrew. “The Political Constitution of Islandness: The ‘Tasmanian Problem’ and Ten Days on the Island.” PhD Thesis. U of Tasmania, 2011. <http://eprints.utas.edu.au/11855/>. Hay, Peter. “Destabilising Tasmanian Politics: The Key Role of the Greens.” Bulletin of the Centre for Tasmanian Historical Studies 3.2 (1991): 60–70. Hingley, Rebecca. Personal Communication, 28 Nov. 2018. Johnston, P. “Is the First Wave of Climate Migrants Landing in Hobart?” The Fifth Estate 11 Sep. 2018. 15 Mar. 2019 <https://www.thefifthestate.com.au/urbanism/climate-change-news/climate-migrants-landing-hobart>. Kriwoken, L., and J. Williamson. “Hobart, Tasmania: Antarctic and Southern Ocean Connections.” Polar Record 29.169 (1993): 93–102. Kudelka, John. “The Southerly.” Kudelka Cartoons. 27 Jun. 2014. 21 Feb. 2019 <https://www.kudelka.com.au/2014/06/the-southerly/>. Leane, E., T. Winter, and J.F. Salazar. “Caught between Nationalism and Internationalism: Replicating Histories of Antarctica in Hobart.” International Journal of Heritage Studies 22.3 (2016): 214–27. Leane, Elizabeth. “Tasmania from Below: Antarctic Travellers’ Accounts of a Southern ‘Gateway’.” Studies in Travel Writing 20.1 (2016): 34–48. Mawson’s Huts Replica Museum. “Mission Statement.” 15 Apr. 2019 <http://www.mawsons-huts-replica.org.au/>. Mercer, David. “Australia's Constitution, Federalism and the ‘Tasmanian Dam Case’.” Political Geography Quarterly 4.2 (1985): 91–110. Paasi, A.
“Deconstructing Regions: Notes on the Scales of Spatial Life.” Environment and Planning A: Economy and Space 23.2 (1991): 239–56. Reddit. “Maps without Tasmania.” 15 Apr. 2019 <https://www.reddit.com/r/MapsWithoutTasmania/>. Roldan, Gabriela. “A Door to the Ice?: The Significance of the Antarctic Gateway Cities Today.” Journal of Antarctic Affairs 2 (2015): 57–70. Salazar, Juan Francisco. “Geographies of Place-Making in Antarctica: An Ethnographic Approach.” The Polar Journal 3.1 (2013): 53–71. ———, Elizabeth Leane, Liam Magee, and Paul James. “Five Cities That Could Change the Future of Antarctica.” The Conversation 5 Oct. 2016. 19 May 2019 <https://theconversation.com/five-cities-that-could-change-the-future-of-antarctica-66259>. Stratford, Elaine, Godfrey Baldacchino, Elizabeth McMahon, Carol Farbotko, and Andrew Harwood. “Envisioning the Archipelago.” Island Studies Journal 6.2 (2011): 113–30. Tasmanian Climate Change Office. Derivation of the Tasmanian Sea Level Rise Planning Allowances. Aug. 2012. 17 Apr. 2019 <http://www.dpac.tas.gov.au/__data/assets/pdf_file/0003/176331/Tasmanian_SeaLevelRisePlanningAllowance_TechPaper_Aug2012.pdf>. Tasmanian Government Department of State Growth. “Tasmanian Antarctic Gateway Strategy.” Hobart: Tasmanian Government, 12 Dec. 2017. 21 Feb. 2019 <https://www.antarctic.tas.gov.au/__data/assets/pdf_file/0004/164749/Tasmanian_Antarctic_Gateway_Strategy_12_Dec_2017.pdf>. ———. “Tasmania Delivers…” Apr. 2016. 15 Apr. 2019 <https://www.antarctic.tas.gov.au/__data/assets/pdf_file/0005/66461/Tasmania_Delivers_Antarctic_Southern_Ocean_web.pdf>. ———. “Antarctic Tasmania.” 17 Feb. 2019. 15 Apr. 2019 <https://www.antarctic.tas.gov.au/about/hobarts_antarctic_attractions>. Tasmanian Polar Network. “Welcome to the Tasmanian Polar Network.” 28 Feb. 2019 <https://www.tasmanianpolarnetwork.com.au/>. Terauds, Aleks, and Jasmine Lee. “Antarctic Biogeography Revisited: Updating the Antarctic Conservation Biogeographic Regions.” Diversity and Distributions 22 (2016): 836–40. Van Ommen, Tas, and Vin Morgan. “Snowfall Increase in Coastal East Antarctica Linked with Southwest Western Australian Drought.” Nature Geoscience 3 (2010): 267–72. Wells Economic Analysis. The Contribution of the Antarctic and Southern Ocean Sector to the Tasmanian Economy 2017. 18 Nov. 2018. 15 Apr. 2019 <https://www.stategrowth.tas.gov.au/__data/assets/pdf_file/0010/185671/Wells_Report_on_the_Value_of_the_Antarctic_Sector_2017_18.pdf>. West, J. “Obstacles to Progress: What’s Wrong with Tasmania, Really?” Griffith Review: Tasmania: The Tipping Point? 39 (2013): 31–53.
Style APA, Harvard, Vancouver, ISO itp.
47

May, Lawrence. "Confronting Ecological Monstrosity". M/C Journal 24, nr 5 (5.10.2021). http://dx.doi.org/10.5204/mcj.2827.

Pełny tekst źródła
Streszczenie:
Introduction Amidst ecological collapse and environmental catastrophe, humankind is surrounded by indications that our habitat is turning against us in monstrous ways. The very environments we live within now evoke existential terror, and this state of ecological monstrosity has permeated popular media, including video games. Such cultural manifestations of planetary catastrophe are particularly evident in video game monsters. These virtual figures continue monsters’ long-held role in reflecting the socio-cultural anxieties of their particular era. The horrific figures that monsters present play a culturally reflexive role, echoing the fears and anxieties of their social, political and cultural context. Media monsters closely reflect their surrounding cultural conditions (Cohen 47), representing “a symptom of or a metaphor for something bigger and more significant than the ostensible reality of the monster itself” (Hutchings 37). Society’s deepest anxieties culminate in these figures in forms that are “threatening and impure” (Carroll 28), “unnatural, transgressive, obscene, contradictory” (Kearney 4–5), and abject (Kristeva 4). In this article I ask how the appearance of the monstrous within contemporary video games reflects an era of climate change and ecological collapse, and how this could inform the engagement of players with discourse concerning climate change. Central to this inquiry is the literary practice of ecocriticism, which seeks to examine environmental rather than human representation in cultural artefacts, increasingly including accounts of contemporary ecological decay and disorder (Bulfin 144). I build on such perspectives to address play encounters that foreground figures of monstrosity borne of the escalating climate crisis, and summarise case studies of two recent video games undertaken as part of this project — The Legend of Zelda: Breath of the Wild (Nintendo EPD) and The Last of Us Part II (Naughty Dog). An ecocritical approach to the monsters that populate these case studies reveals the emergence of a ludic form of ecological monstrosity tied closely to our contemporary climatic conditions and taking two significant forms: one accentuating a visceral otherness and aberrance, and the other marked by the uncanny recognition of human authorship of climate change. Horrors from the Anthropocene A growing climate emergency surrounds us, enveloping us in the abject and aberrant conditions of what could be described as an ecological monstrosity. Monstrous threats to our environment and human survival are experienced on a planetary scale and research evidence plainly illustrates a compounding catastrophe. The United Nations Intergovernmental Panel on Climate Change (IPCC), a relatively cautious and conservative body (Parenti 5), reports that a human-made emergency has developed since the Industrial Revolution. The multitude of crises that confront us include: changes in the Earth’s atmosphere driving up global temperatures, ice sheets in retreat, sea levels rising, natural ecosystems and species in collapse, and an unprecedented frequency and magnitude of heatwaves, droughts, flooding, winter storms, hurricanes, and wildfires (United Nations Environment Programme). 
Further human activity, including a post-war addiction to the plastics that have now spread their way across our oceans like a “liquid smog” (Robles-Anderson and Liboiron 258), or short-sighted enthusiasm for pesticides, radiation energy, and industrial chemicals (Robles-Anderson and Liboiron 254), has ensured a damaging shift in the nature of the feedback loops that Earth’s ecosystems depend upon for stability (Parenti 6). Climatic equilibrium has been disrupted, and growing damage to the ecosystems that sustain human life suggests an inexorable, entropic path to decay. To understand Earth’s profound crisis requires thinking beyond just climate and to witness the interconnected “extraordinary burdens” placed on our planet by “toxic chemistry, mining, nuclear pollution, depletion of lakes and rivers under and above ground, ecosystem simplification, vast genocides of people” which will continue to lead to the recursive collapse of interlinked major systems (Haraway 100). To speak of climate change is really to speak of the ruin of ecologies, those “living systems composed of many moving parts” that make up the tapestry of organic life on Earth (Robles-Anderson and Liboiron 251). The emergency that presents itself, as Renata Tyszczuk observes, comprises a pervasiveness, uncertainty, and interdependency that together “affect every aspect of human lives, politics and culture” (47). The emergence of the term Anthropocene (or the Age of the Humans) to describe our current geological epoch (and to supersede the erstwhile and more stable Holocene) (Zalasiewicz et al. 1036–7; Chang 7) reflects a contemporary impossibility with talking about planet Earth without acknowledging the damaging impact of humankind on its ecosystems (Bulfin 142). This recognition of human complicity in the existential crisis engulfing our planet once again connects ecological monstrosity to the socio-cultural history of the monstrous. Monsters, Jeffrey Jerome Cohen points out, “are our children” and despite our repressive efforts, “always return” in order to “ask us why we have created them” (20). Ecological monstrosity declares to us that our relegation of greenhouse gases, rising sea levels, toxic waste, species extinction, and much more, to the discursive periphery has only been temporary. Monsters, when examined closely, start to look a lot like ourselves in terms of biological origins (Perron 357), as well as other abject cultural and social markers that signal these horrific figures as residing “too close to the borders of our [own] subjectivity for comfort” (Spittle 314). Isabel Pinedo sees this uncanny nature of the horror genre’s antagonists as a postmodern condition, a ghoulish reminder of the era’s breakdown of categories, blurring of boundaries, and collapse of master narratives that combine to ensure “mastery is lost … and the stable, unified, coherent self acquires the status of a fiction” (17–18). In standing in for anxiety, the other, and the aberrant, the figure of the monster deftly turns the mirror back on its human victims. Ecocritical Play The vast scale of ecological collapse has complicated effective public communication on the subject. The scope involved is unsettling, even paralysing, to its audiences: climate change might just be “too here, too there, too everywhere, too weird, too much, too big, too everything” to bring oneself to engage with (Tyszczuk 47). 
The detail involved has also been captured by scientific discourse, a detached communicative mode which too easily obviates the everyday human experience of the emergency (Bulfin 140; Abraham and Jayemanne 74–76). Considerable effort has been focussed upon producing higher-fidelity models of ecological catastrophe (Robles-Anderson and Liboiron 248), rather than addressing the more significant “trouble with representing largely intangible linkages” between micro-environmental actions and macro-environmental repercussions (Chang 86). Ecocriticism is, however, emerging as a cultural means by which the crisis, and restorative possibilities, may be rendered more legible to a wider audience. Representations of ecology and catastrophe not only sustain genres such as Eco-Disaster and Cli-Fi (Bulfin 140), but are also increasingly becoming a precondition for fiction centred upon human life (Tyszczuk 47). Media artefacts concerned with environment are able to illustrate the nature of the emergency alongside “a host of related environmental issues that the technocratic ‘facts and figures’ approach … is unlikely to touch” (Abraham and Jayemanne 76) and encourage in audiences a suprapersonal understanding of the environmental impact of individual actions (Chang 70). Popular culture offers a chance to foster ‘ecological thought’ wherein it becomes “frighteningly easy … to join the dots and see that everything is interconnected” (Morton, Ecological Thought 1) rather than founder before the inexplicability of the temporalities and spatialities involved in ecological collapse. An ecocritical approach is “one of the most crucial—yet under-researched—ways of looking into the possible cultural impact of the digital entertainment industry” upon public discourse relating to the environment crisis (Felczak 185). Video games demand this closer attention because, in a mirroring of the interconnectedness of Earth’s own ecosystems, “the world has also inevitably permeated into our technical artefacts, including games” (Chang 11), and recent scholarship has worked to investigate this very relationship. Benjamin Abraham has extended Morton’s arguments to outline a mode of ecological thought for games (What Is an Ecological Game?), Alenda Chang has closely examined how games model natural environments, and Benjamin Abraham and Darshana Jayemanne have outlined four modes in which games manifest players’ ecological relationships. Close analysis of texts and genres has addressed the capacity of game mechanics to persuade players about matters of sustainability (Kelly and Nardi); implicated Minecraft players in an ecological practice of writing upon landscapes (Bohunicky); argued that Final Fantasy VII’s plot fosters ecological responsibility (Milburn); and, identified in ARMA III’s ambient, visual backdrops of renewable power generation the potential to reimagine cultural futures (Abraham, Video Game Visions). Video games allow for a particular form of ecocriticism that has been overlooked in existing efforts to speak about ecological crisis: “a politics that includes what appears least political—laughter, the playful, even the silly” (Morton, Dark Ecology 113). Play is liminal, emergent, and necessarily incomplete, and this allows its various actors—players, developers, critics and texts themselves—to come together in non-authoritarian, imaginative and potentially radical ways. Through play, audiences are offered new and novel modes for envisioning ecological problems, solutions, and futures. 
To return, then, to encounters with ecological monstrosity, I next consider the visions of crisis that emerge through the video game monsters that draw upon the aberrant nature of ecological collapse, as well as those that foreground our own complicity as humans in the climate crisis, declaring that we players might ourselves be monstrous. The two case studies that follow are necessarily brief, but indicate the value of further research and textual analysis to more fully uncover the role of ecological monstrosity in contemporary video games. Breath of the Wild’s Corrupted Ecology The Legend of Zelda: Breath of the Wild (Nintendo EPD) is a fantasy action-adventure game in which players adopt the role of the series’ long-running protagonist, Link, and explore the virtual landscapes of fictional Hyrule in unstructured and nonlinear ways. Landscape is immediately striking to players of Breath of the Wild, with the game using a distinctive, high-definition cel-shaded animation style to vividly render natural environments. Within the first ten minutes of play, lush green grass sways around the player’s avatar, densely treed forests interrupt rolling vistas, and finely detailed mountains tower over the player’s perspective. The player soon learns, however, that behind these inviting landscapes lies a catastrophic corruption of natural order, and that their virtual enemies will manifest a powerful monstrosity that seems to mirror Earth’s own ecological crises. The game’s backstory centres around the Zelda series’ persistent antagonist, Ganon, and his use of a primal form of evil to overwhelm a highly evolved and industrialised Hyrulian civilisation, in an event dubbed the Great Calamity. Hyrule’s dependency on mechanical technology in its defences is misjudged, and Ganon’s re-appearance causes widespread devastation. The parallel between Hyrule’s fate and humankind’s own unsustainable commitment to heavy industry and agriculture, and faith in technological approaches to mitigation in the face of looming catastrophe, is immediately recognisable. Visible, too, is the echo of the revenge of Earth’s climate in the organic and primal force of Ganon’s destructive power. Ganon leaves in his wake an array of impossible, aberrant creatures hostile to the player, including the deformed humanoid figure of the Bokoblin (bearing snouts, arrow-shaped tails, and a horn), the sand-swimming spike-covered whale known as a Molduga, and the Stone Talus, an anthropomorphic rock formation that bursts into life out of otherwise innocuous geological features. One particularly apposite monster, known simply as Malice, is a glowing black and purple substance that oozes its way through environments in Hyrule, spreading to cover and corrupt organic material. Malice is explained by in-game introductory text as “poisonous bogs formed by water that was sullied during the Great Calamity”—an environmental element thrown out of equilibrium by pollution. Monstrosity in Breath of the Wild is decidedly ecological, and its presentation of unstable biologies, poisoned waters, and a collapsed natural order offers a conspicuous display of our contemporary climate crisis. Breath of the Wild places players in a traditional position in relation to its virtual monsters: direct opposition (Taylor 31), with a clear mandate to eliminate the threat(s) and restore equilibrium (Krzywinska 12).
The game communicates its collection of biological impossibilities and inexorable corruptions as clear aberrations of a once-balanced natural order, with Hyrule’s landscapes needing purification at the player’s hands. Video games are driven, according to Jaroslav Švelch, by a logic of informatic control when it comes to virtual monsters, where our previously “inscrutable and abject” antagonists can be analysed, defined and defeated as “the medium’s computational and procedural nature makes monstrosity fit into databases and algorithms” (194). In requiring Link, and players, to scrutinise and come to “know” monsters, the game suggests a particular ecocritical possibility. Ecological monstrosity becomes educative, placing the terrors of the climate crisis directly before players’ avatars, screens, and eyes and connecting, in visceral ways, mastery over these threats with pleasure and achievement. The monsters of Breath of the Wild offer the possibility of affectively preparing players for versions of the future by mediating such engagements with disaster and catastrophe. Recognising the Monstrosity Within Set in the aftermath of the outbreak of a mutant strain of the Cordyceps fungus (through exposure to which humans transform into aggressive, zombified ‘Infected’), The Last of Us Part II (Naughty Dog) is a post-apocalyptic action-adventure game. Players alternate between two playable human characters, Ellie and Abby, whose travels through the infection-ravaged states of Wyoming, Washington, and California overlap and intertwine. At first glance, The Last of Us Part II appears to construct similar forms of ecology and monstrosity as Breath of the Wild. Players are thrust into an experience of the sublime in the game’s presentation of natural environments that are vastly capacious and highly fidelitous in their detailing. Players begin the game scrambling across snowbound ranges and fleeing through thick forests, and later encounter lush grass, rushing rivers, and wild animals reclaiming once-urban environments. And, as in Breath of the Wild, monstrosity in this gameworld appears to embody impurity and corruption, whether through the horrific deformations of various types of zombie bodies, or the fungal masses that carpet many of the game’s abandoned buildings in a reclamation of human environments by nature. Closer analysis, however, demonstrates that the monstrosity that defines the play experience of The Last of Us Part II uncannily reflects the more uncomfortable truths of the Anthropocentric era. A key reason why zombies are traditionally frightening is because they are us. The semblance of human faces and bodies that remain etched into these monsters’ decaying forms act as portents for our own fates when faced with staggering hordes and overwhelmingly poor odds of survival. Impure biologies are presented to players in these zombies, but rather than represent a distant ‘other’ they stand as more-than-likely futures for the game’s avatars, just as Earth’s climate crisis is intimately bound up in human origins and inexorable futures. The Last of Us Part II further pursues its line of anthropocentric critique, as both Ellie and Abby interact during the game with different groupings of human survivors, including hubristic militia and violent religious cultists. 
The player comes to understand through these encounters that it is the distrust, dogmatism, and depravity of their fellow humans that pose immediate threats to avatarial survival, rather than the scrutable, reliable, and predictable horrors of the mindless zombies. In keeping with the appearance of monsters in both interactive and cinematic texts, monsters’ most important lessons emerge when the boundaries between reality and fiction, human and nonhuman, and normality and abnormality become blurred. The Last of Us Part II utilises this underlying ambiguity in monstrosity to suggest a confronting ecological claim: that monstrous culpability belongs to us—the inhabitants of Earth. For video game users in particular, this is a doubly pointed accusation. As Thomas Apperley and Darshana Jayemanne observe of digital games, “however much their digital virtuality is celebrated they are enacted and produced in strikingly visceral—ontologically virtual—ways”, and such a materialist consideration “demands that they are also understood as objects in the world” (15). The ecological consequences of the production of such digital objects are too often taken for granted, despite critical work examining the damaging impact of resource extraction, electronic waste, energy transfer, telecommunications transfer, and the logics of obsolescence involved (Dyer-Witheford and de Peuter; Newman; Chang 152). By foregrounding humanity’s own monstrosity, The Last of Us Part II illustrates what Timothy Morton describes as the “weirdly weird” consequences of human actions during the Anthropocene; those uncanny, unexpected, and planetarily destructive outcomes of the post-industrial myth of progress (Morton, Dark Ecology 7). The ecocritical work of video games could remind players that so many of our worst contemporary nightmares result from human hubris (Weinstock 286), a realisation played out in first-person perspective by Morton: “I am the criminal. And I discover this via scientific forensics … I’m the detective and the criminal!” (Dark Ecology 9). Playing with Ecological Monstrosity The Legend of Zelda: Breath of the Wild and The Last of Us Part II confront players with an ecological form of monstrosity, which is deeply recursive in its nature. Players encounter monsters that stand in for socio-political anxieties about ecological disaster as well as those that reflect humanity’s own monstrously destructive hubris. Attention is further drawn to the player’s own, lived role as a contributor to climate crisis, a consequence of not only the material characteristics of digital games, but also their broader participation in the unsustainable economics of the post-industrial age. To begin to make the connections between these recursive monsters and analogies is to engage in the type of ecological thought that lets us see the very interconnectedness that defines the ecosystems we have damaged so fatally. In understanding that video games are the “point of convergence for a whole array of technical, cultural, and promotional dynamics of which [players] are, at best, only partially aware” (Kline, Dyer-Witheford, and de Peuter 19), we see that the nested layering of anxieties, fears, fictions, and realities is fundamental to the very fabric of digital games. Recursion, Donna Haraway observes in relation to the interlinked failure of ecosystems, “can be a drag” (100), but I want to suggest that playing with ecological monstrosity instead turns recursion into opportunity. 
An ecocritical approach to the examination of contemporary videogame monsters demonstrates that these horrific figures, through their primordial aesthetic and affective impacts, are adept at foregrounding the ecosystemic nature of the relationship between games and our own world. Videogames play a role in representing both desirable and objectionable versions of the world, and such “utopian and dystopian projections of the future can shape our acts in the present” (Fordyce 295). By confronting players with viscerally accessible encounters with the horror of an aberrant and abjected near future (so near that it is, in fact, already the present), games such as Breath of the Wild and The Last of Us Part II can critically position players in relation to discourse and wider public debate about ecological issues and climate change (and further research could more closely examine players’ engagements with ecological monstrosity). Drawing attention to the symmetry between monstrosity and ecological catastrophe is a crucial way that contemporary games might encourage players to untangle the recursive environmental consequences of our anthropocentric era. Morton argues that beneath the abjectness that has come to define our human co-existence with other ecological actors there lies a perverse form of pleasure, a “delicious guilt, delicious shame, delicious melancholy, delicious horror [and] delicious sadness” (Dark Ecology 129). This bitter form of “pleasure” aptly describes an ecocritical encounter with ecological monstrosity: the pleasure of battling and defeating virtual monsters, complemented by desolate (and possibly motivating) reflections of the ongoing ruination of our planet provided through the development of ecological thought on the part of players. References Abraham, Benjamin. “Video Game Visions of Climate Futures: ARMA 3 and Implications for Games and Persuasion.” Games and Culture 13.1 (2018): 71–91. Abraham, Benjamin. “What Is an Ecological Game? Examining Gaming’s Ecological Dynamics and Metaphors through the Survival-Crafting Genre.” TRACE: A Journal of Writing Media and Ecology 2 (2018). 1 Oct. 2021 <http://tracejournal.net/trace-issues/issue2/01-Abraham.html>. Abraham, Benjamin, and Darshana Jayemanne. “Where Are All the Climate Change Games? Locating Digital Games’ Response to Climate Change.” Transformations 30 (2017): 74–94. Apperley, Thomas H., and Darshana Jayemane. “Game Studies’ Material Turn.” Westminster Papers in Communication and Culture 9.1 (2012). 1 Oct. 2021 <http://www.westminsterpapers.org/article/10.16997/wpcc.145/>. Bohunicky, Kyle Matthew. “Ecocomposition: Writing Ecologies in Digital Games.” Green Letters 18.3 (2014): 221–235. Bulfin, Ailise. “Popular Culture and the ‘New Human Condition’: Catastrophe Narratives and Climate Change.” Global and Planetary Change 156 (2017): 140–146. Carroll, Noël. The Philosophy of Horror, or, Paradoxes of the Heart. New York: Routledge, 1990. Chang, Alenda Y. Playing Nature: Ecology in Video Games. Minneapolis: U of Minnesota P, 2019. Cohen, Jeffrey Jerome. “Monster Culture (Seven Theses)”. Monster Theory: Reading Culture. Ed. Jeffrey Jerome Cohen. Minneapolis: U of Minnesota P, 1996. 3–25. Dyer-Witheford, Nick, and Greig de Peuter. Games of Empire: Global Capitalism and Video Games. Minneapolis: U of Minnesota P, 2009. Felczak, Mateusz. “Ludic Guilt, Paidian Joy: Killing and Ecocriticism in the TheHunter Series.” Journal of Gaming & Virtual Worlds 12.2 (2020): 183–200. Fordyce, Robbie. 
“Play, History and Politics: Conceiving Futures beyond Empire.” Games and Culture 16.3 (2021): 294–304. Haraway, Donna J. Staying with the Trouble: Making Kin in the Chthulucene. Durham: Duke UP, 2016. Hutchings, Peter. The Horror Film. Harlow: Pearson Longman, 2002. Kearney, Richard. Strangers, Gods and Monsters: Interpreting Otherness. London: Routledge, 2002. Kelly, Shawna, and Bonnie Nardi. “Playing with Sustainability: Using Video Games to Simulate Futures of Scarcity.” First Monday 19.5 (2014). 1 Oct. 2021 <https://firstmonday.org/ojs/index.php/fm/article/view/5259>. Kline, Stephen, Nick Dyer-Witheford, and Greig de Peuter. Digital Play: The Interaction of Technology, Culture, and Marketing. Montreal: McGill-Queen’s UP, 2003. Kristeva, Julia. Powers of Horror: An Essay on Abjection. New York: Columbia UP, 1982. Krzywinska, Tanya. “Hands-on Horror.” Spectator 22.2 (2003): 12–23. Milburn, Colin. “’There Ain’t No Gettin’ offa This Train’: Final Fantasy VII and the Pwning of Environmental Crisis.” Sustainable Media. Ed. Nicole Starosielski and Janet Walker. New York: Routledge, 2016. 77–93. Morton, Timothy. Dark Ecology: For a Logic of Future Coexistence. New York: Columbia UP, 2016. Morton, Timothy. The Ecological Thought. Cambridge: Harvard UP, 2010. Newman, James. Best Before: Videogames, Supersession and Obsolescence. New York: Routledge, 2012. Nintendo EPD. The Legend of Zelda: Breath of the Wild. Kyoto, Japan: Nintendo, 2017. Parenti, Christian. Tropic of Chaos: Climate Change and the New Geography of Violence. New York: Nation Books, 2011. Perron, Bernard. The World of Scary Video Games: A Study in Videoludic Horror. London: Bloomsbury Academic, 2018. Pinedo, Isabel. “Recreational Terror: Postmodern Elements of the Contemporary Horror Film.” Journal of Film and Video 48.1 (1996): 17–31. Robles-Anderson, Erica, and Max Liboiron. “Coupling Complexity: Ecological Cybernetics as a Resource for Nonrepresentational Moves to Action.” Sustainable Media. Ed. Nicole Starosielski and Janet Walker. New York: Routledge, 2016. 248–263. Spittle, Steve. “‘Did This Game Scare You? Because It Sure as Hell Scared Me!’ F.E.A.R., the Abject and the Uncanny.” Games and Culture 6.4 (2011): 312–326. Švelch, Jaroslav. “Monsters by the Numbers: Controlling Monstrosity in Video Games.” Monster Culture in the 21st Century: A Reader. Eds. Marina Levina and Diem-My T. Bui. New York: Bloomsbury Academic, 2013. 193–208. Taylor, Laurie N. “Not of Woman Born: Monstrous Interfaces and Monstrosity in Video Games.” PhD Thesis. University of Florida, 2006. 1 Oct. 2021 <http://ufdcimages.uflib.ufl.edu/uf/00/08/11/73/00001/taylor_l.pdf>. The Last of Us Part II. Naughty Dog. San Mateo, California: Sony Interactive Entertainment, 2020. Tyszczuk, Renata. “Cautionary Tales: The Sky Is Falling! The World Is Ending!” Culture and Climate Change: Narratives. Eds. Joe Smith, Renata Tyszczuk, and Robert Butler. Cambridge: Shed, 2014. 45–57. United Nations Environment Programme. “Facts about the Climate Emergency.” UNEP – UN Environment Programme. 1 Oct. 2021 <http://www.unep.org/explore-topics/climate-change/facts-about-climate-emergency>. Weinstock, Jeffrey Andrew. “Invisible Monsters: Vision, Horror, and Contemporary Culture.” The Ashgate Research Companion to Monsters and the Monstrous. Eds. Asa Simon Mittman and Peter J. Dendle. Abingdon: Routledge, 2013. 275–289. Zalasiewicz, Jan, et al. 
“Stratigraphy of the Anthropocene.” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 369.1938 (2011): 1036–1055.
Style APA, Harvard, Vancouver, ISO itp.
48

Broady, Timothy. "Resilience across the Continuum of Care". M/C Journal 16, nr 5 (28.08.2013). http://dx.doi.org/10.5204/mcj.698.

Pełny tekst źródła
Streszczenie:
Who Are Carers? A carer is any individual who provides unpaid care and support to a family member or friend who has a disability, mental illness, drug and/or alcohol dependency, chronic condition, terminal illness or who is frail. Carers come from all walks of life, cultural backgrounds and age groups. For many, caring is a 24-hour-a-day job with emotional, physical and financial impacts, with implications for their participation in employment, education and community activities. Carers exist in all communities, including amongst Aboriginal communities, those of culturally and linguistically diverse backgrounds, amongst Gay, Lesbian, Bisexual, Transgender, Intersex communities, and throughout metropolitan, regional and rural areas (Carers NSW). These broad characteristics mean that caring occurs across a wide variety of situations, and care responsibilities can impact an even wider group of people. The ubiquitous nature of informal care warrants its consideration as a major social issue, as do the potential impacts that these roles can have on carers in both the short and long term. Caring for a loved one is often an unseen component of people’s domestic lives. As will be outlined below, the potentially burdensome nature of care can have negative influences on carers’ wellbeing. As such, factors that can enhance the resilience of carers in the face of such adversity have been widely investigated. This being said, individual differences exist in carers’ responses to their caring responsibilities. The caring experience can therefore be argued to exist on a continuum, from adversity in the face of stressful challenges through to prosperity in light of caring responsibilities. By considering the experience of care as existing along this continuum, the place of resilience within people’s domestic spaces can be viewed as a mechanism towards identifying and developing supportive practices. Negative Impacts of Care A significant body of research has identified potential negative impacts of caring. Many of the most commonly cited outcomes relate to negative effects on mental health and/or psychological functioning, including stress, anxiety and depression (e.g. Baker et al.; Barlow, Cullen-Powell and Cheshire; Cheshire, Barlow and Powell; Dunn et al.; Gallagher et al.; Hastings et al.; Lach et al.; Singer; Sörensen et al.; Vitaliano, Zhang and Scanlan; Whittingham et al.; Yamada et al.). These feelings can be exacerbated when caring responsibilities become relentlessly time consuming, as demonstrated by this comment from a carer of a person with dementia: “I can’t get away from it” (O'Dwyer, Moyle and van Wyk 758). Similarly, emotional responses such as sorrow, grief, anger, frustration, and guilt can result from caring for a loved one (Heiman; Whittingham et al.). Negative emotional responses are not necessarily a direct result of caring responsibilities as such, but can arise from an understanding of the challenges faced by the person requiring care. The following quote from the carer of a child with autism exemplifies the experience of sorrow: “It was actually the worst day of our lives, that was the day we came to terms with the fact that we had this problem” (Midence and O’Neill 280). Alongside these psychological and emotional outcomes, physical health may also be negatively impacted due to certain demands of the caring role (Lach et al.; Sörensen et al.; Vitaliano, Zhang and Scanlan).
Outcomes such as these are likely to vary across individual caring circumstances, dictated by variables such as the specific tasks required of the carer, and individual personality characteristics of both the carer and the person for whom they care. Nevertheless, an awareness of these potential outcomes is particularly important when considering the place of resilience in the domestic space of individuals caring for a loved one. This conceptualisation of caring as being a burdensome task reflects many publicly held perceptions. If caring is widely viewed as compromising carers’ wellbeing, then carers are more likely to view themselves as victims. This is particularly true amongst children and adolescents with caring responsibilities, since young people are most susceptible to having their personal identities shaped by others’ perceptions (Andreouli, Skovdal and Campbell). Resilience in Caring Adversity Despite the widely acknowledged potential for caring to have negative consequences for carers, it must be noted that the occurrence of these outcomes is not inevitable. In fact, much of the research that has identified increased stress amongst carers also finds that the majority cope well with the demands of their role (Barnett et al.). These carers have been considered by many researchers to demonstrate resilience (e.g. Barnett et al.; O'Dwyer, Moyle and van Wyk). The ability to respond positively despite exposure to risk or adversity is a key feature of most definitions of resilience (Luthar, Cicchetti and Becker; Masten and Obradović; Zauszniewski, Bekhet and Suresky). Resilience in this context can thus be defined as a psychological process that facilitates healthy functioning in response to intense life stressors (Johnson et al.). Since caring experiences are likely to continue for an extended period of time, resilience is likely to be necessary on an ongoing basis, rather than in response to a single traumatic event. A resilient carer is therefore one who is able to cope effectively and adaptively with the ongoing pressures of caring for a loved one. This involves the presence of personal, social, familial, or institutional protective factors that enable carers to resist stress (Kaplan et al.). For example, support from health professionals, family, or community has been found to effectively support carers in coping with their role (Bekhet, Johnson and Zauszniewski; Gardiner and Iarocci; Heiman; Whittingham et al.). The benefit of support networks in assisting carers to cope in their role is widely reported in the associated research, reinforced by many examples such as the following from a carer of a person with dementia: “It’s a social thing, like, I’ve got friends on there… I find that is my escape” (O'Dwyer, Moyle and van Wyk 758). At an individual level, those who are resilient in the face of adversity demonstrate optimistic or hopeful outlooks (Ekas, Lickenbrock and Whitman; Lloyd and Hastings; Whittingham et al.), while simultaneously holding realistic expectations of the future (Rasmussen et al.; Wrosch, Miller, et al.; Wrosch, Scheier, et al.). Such attitudes are particularly significant amongst people caring for family members or friends with disabilities or illnesses. The following attitude held by a carer of a child with cerebral palsy exemplifies this optimistic outlook: “I look at the glass half full and say that ‘well, it’s only his walking, everything else is fine’. So, get over [it] and deal with it” (Whittingham et al. 1451). 
Those who cognitively process information, rather than reacting in a highly emotional way, have also been found to cope better (Bekhet, Johnson and Zauszniewski; Heiman; Monin et al.; Pennebaker, Mayne and Francis), as have those with a greater sense of self-efficacy or an internal locus of control (Bekhet, Johnson and Zauszniewski; Kuhn and Carter). However effective these coping strategies prove to be, this is unlikely to provide the full picture of caring experiences, or the place of resilience within that space. Associating resilience with adversity presumes a consensus on what constitutes adversity. Taking the typical approach to investigating resilience amongst carers risks making undue assumptions about the nature of individual carers’ experiences – namely, that caring equates to adversity. The following paragraphs will outline how this is not necessarily the case and, furthermore, how the concept of resilience still has a place in considering informal caring, regardless of whether adversity is considered to be present. Benefits of Care While a great deal of evidence suggests that caring for a loved one can be a stressful experience, research has also demonstrated the existence of positive impacts of care. In many instances, carers not only cope, but also thrive in their caring roles (Turnbull et al.). Elements such as positive relationships within caring relationships can both challenge and strengthen individuals – factors that only exist due to the specific nature of the individual caring role (Bayat; Heiman). Such positive elements of the caring experience have been reflected in the literature, illustrated by quotes such as: “In some sense, this makes our family closer” (Bayat 709). Rather than viewing carers from a perspective of victimisation (which is particularly prominent in relation to children and young people with caring responsibilities), recognising the prevalence of positive wellbeing within this population provides a more nuanced understanding of the lived experiences of all carers (Aldridge). Reported benefits of caring tend to revolve around personal relationships, particularly in reference to parents caring for their children with special needs. Reflective of the parental relationship, carers of children with disabilities or chronic illnesses generally report feelings of love, joy, optimism, strength, enjoyment, and satisfaction with their role (Barnett et al.; Heiman). The views of such carers do not reflect an attitude of coping with adversity, but rather a perspective that considers their children to be positive contributors to carers’ quality of life and the wellbeing of the wider family (King et al.). This point of view suggests an additional dimension to resilience; in particular, that resilience, in the relative absence of risk factors, can cause carers to flourish within their caring role and relationships. In addition to benefits in relationships, carers may also prosper through their own personal growth and development in the course of their caring (Knight). This includes factors such as the development of life skills, maturity, purpose, social skills, a sense of responsibility, and recognition – particularly amongst young people in caring roles (Earley, Cushway and Cassidy; Early, Cushway and Cassidy; Jurkovic, Thirkield and Morrell; Skovdal and Andreouli; Stein, Rotheram-Borus and Lester; Tompkins). 
Recognition of the potential personal benefits of caring for a loved one is not intended to suggest that the view of carers coping with adversity is universally applicable. While it is likely that individual caring situations will have an impact on the extent to which a carer faces adversity (e.g. intensity of caring responsibilities, severity of loved one’s impairment, etc.), it is important to recognise the benefits that carers can experience alongside any challenges they may face. Circumstances that appear adverse may not be thought of as such by those within that context. A definition of resilience as an ability to cope with adversity therefore will not apply in such contexts. Rather, the concept of resilience needs to incorporate those who not only cope, but also prosper. Carers who do not perceive their role as burdensome, but identify positive outcomes, can therefore be said to demonstrate resilience, albeit of a contextually different kind from that shown by carers coping with adversity. This is not to suggest that resilience is the sole contributing factor in terms of prospering in the caring role. We must also consider how individual circumstances and nuances differ between carers, those they care for, interpersonal relationships, and wider caring situations. Continuum of Care Awareness of the range of impacts that caring can have on carers leads to a recognition of the broad spectrum of experience that this role entails. Not only do caring experiences exhibit large variations in terms of practical issues (such as functional capacities, or type and severity of illness, disability, or condition), they also include carers’ diverse personal responses to caring responsibilities. These responses can reflect either positive or negative dimensions, or a combination of both (Faso, Neal-Beevers and Carlson). In this way, caring experiences can be conceptualised as existing along a continuum. At one end of the spectrum, experiences align with the traditional view of caring as a struggle with and over adversity. More specifically, carers experience burdens as a result of their additional caring responsibilities, with negative outcomes likely to occur. At the other end of the spectrum, however, carers prosper in the role, experiencing significant personal benefits that would not have been possible without the caring role. This continuum makes a case for an expanded approach to stress and coping models of resilience to include positive concepts and a benefit-orientated perspective (Cassidy and Giles). In contrast to research that has argued for a progression from stress and coping models to strengths-based approaches (e.g. Glidden, Billings and Jobe; Knight), the continuum of care acknowledges the benefits of each of these theoretical positions, and thus may prove more comprehensive in attempting to understand the everyday lived experiences of carers. The framework provided by a representation of a continuum allows for the individual differences in caring situations and carers’ personal responses to be acknowledged, as well as accounting for any changes in these circumstances. Further, the experience and benefits of resilience in different contextual spheres can be identified. The flexibility afforded by such an approach is particularly important in light of individual differences in the ways carers respond to their situations, their changing caring contexts, and their subsequent individual needs (Monin et al.; Walsh; Whittingham et al.). 
As the caring experience can be dynamic and fluctuate in both directions along the continuum, resilience may be seen as the mechanism by which such movement occurs. In line with stress and coping models, resilience can assist carers to cope with adverse circumstances at that end of the continuum. Similarly, it may be argued that those who prosper in their caring role exhibit characteristics of resilience. In other words, it is resilience that enables carers to cope with adversity at one end of the continuum and also to prosper at the other. Furthermore, by supporting the development of resilient characteristics, carers may be assisted in shifting their experiences along the continuum, from adversity to prosperity. This view extends traditional approaches reported in the stress and coping literature by contending that caring experiences may progress beyond positions of coping with adversity, to a position where caring is not understood in terms of adversity at all, but rather in terms of benefits. The individual circumstances of any carer must be taken into consideration within this framework of resilience and the continuum of care. It is unrealistic to assume that all caring situations will allow for the possibility of reaching the end point of this continuum. Carers with particularly high demands in terms of time, resources, effort, or energy may not reach a stage where they no longer consider their caring role to involve any personal burden. However, the combination of a coping and strengths-based approach suggests that there is always the possibility of moving away from perceptions of adversity and further towards an attitude of prosperity. Implications for Supportive Practice From the perspective of this continuum of care, the protective factors and coping strategies identified in previous literature provide a valuable starting point for the facilitation of resilience amongst carers. Enhancing factors such as these can assist carers to move from situations of adversity towards experiences of prosperity (Benzies and Mychasiuk). Research has suggested that carers who are less analytical in their thinking and less optimistic about their personal situations may find particular benefit from support systems that assist them in redirecting their attention towards positive aspects of their daily lives, such as the benefits of caring outlined earlier (Monin et al.). The principle of focusing on positive experiences and reframing negative thoughts is thought to benefit carers across all levels of functioning and adaptive experience (Monin et al.). While those entrenched in more burdensome mindsets are likely to experience the greatest benefit from supportive interventions, there is still merit in providing similar supports to carers who do not appear to experience similar levels of burden, or who demonstrate greater resilience or adaptation to their situation. The dynamic view of caring situations and resilience suggested by a continuum of care incorporates benefits of stress and coping models as well as strengths-based approaches. This has implications for supportive practice in that the focus is not on determining whether or not a carer is resilient, but on identifying the ways in which they already are resilient (Simon, Murphy and Smith). For carers who experience their role through a lens of adversity, resilience may need to be purposefully fostered in order to better enable them to cope and develop through the ongoing stresses of their role. 
For carers at the other end of the spectrum, resilience is likely to take on a substantially different meaning. Under these circumstances, caring for a loved one is not considered a burdensome task; rather, the positive impact of the role is pre-eminent. This point of view suggests that carers are resilient, not only in terms of an ability to thrive despite adversity, but in prospering to the extent that adversity is not considered to exist. The attitudes and approaches of services, support networks, and governments towards carers should remain flexible enough to acknowledge the wide variety of caring circumstances that exist. The continuum of care provides a framework through which certain aspects of caring and variations in resilience can be interpreted, as well as the type of support required by individual carers. Furthermore, it must be noted that caring circumstances can change – either gradually or suddenly – with the extent to which carers experience adversity, coping or prosperity also changing. Any attempts to provide support to carers or acknowledge their resilience should demonstrate an awareness of the potential for such fluctuation. The fundamental view that carers always have the potential to move towards more positive outcomes has the potential to reframe perceptions of carers as victims, or as simply coping, to one that embraces the personal strengths and resilience of the individual. As such, carers can be supported when faced with adversity, and to flourish beyond that position. This in turn has the potential to safeguard against any detrimental effects of adversity that may arise in the future. References Aldridge, Jo. "All Work and No Play? Understanding the Needs of Children with Caring Responsibilities." Children & Society 22.4 (2008): 253-264. Andreouli, Eleni, Morten Skovdal, and Catherine Campbell. "‘It Made Me Realise That I Am Lucky for What I Got’: British Young Carers Encountering the Realities of Their African Peers." Journal of Youth Studies (2013): 1-16. Baker, Bruce L., et al. "Behavior Problems and Parenting Stress in Families of Three-Year-Old Children with and without Developmental Delays." American Journal on Mental Retardation 107.6 (2002): 433-44. Barlow, J. H., L. A. Cullen-Powell, and A. Cheshire. "Psychological Well-Being among Mothers of Children with Cerebral Palsy." Early Child Development and Care 176.3-4 (2006): 421-428. Barnett, Douglas, et al. "Building New Dreams: Supporting Parents' Adaptation to Their Child with Special Needs." Infants and Young Children 16.3 (2003): 184. Bayat, M. "Evidence of Resilience in Families of Children with Autism." Journal of Intellectual Disability Research 51.9 (2007): 702-714. Bekhet, Abir K., Norah L. Johnson, and Jaclene A. Zauszniewski. "Resilience in Family Members of Persons with Autism Spectrum Disorder: A Review of the Literature." Issues in Mental Health Nursing 33.10 (2012): 650-656. Benzies, Karen, and Richelle Mychasiuk. "Fostering Family Resiliency: A Review of the Key Protective Factors." Child and Family Social Work 14 (2009): 103-114. Carers NSW. Carers NSW Strategic Directions 2012-2015. 2012. Cassidy, Tony, and Melanie Giles. "Further Exploration of the Young Carers Perceived Stress Scale: Identifying a Benefit-Finding Dimension." British Journal of Health Psychology 18.3 (2013): 642-655. Cheshire, Anna, Julie H. Barlow, and Lesley A. Powell. "The Psychosocial Well-Being of Parents of Children with Cerebral Palsy: A Comparison Study." Disability and Rehabilitation 32.20 (2010): 1673-1677. 
Dunn, Michael E., et al. "Moderators of Stress in Parents of Children with Autism." Community Mental Health Journal 37.1 (2001): 39-52. Earley, Louise, Delia Cushway, and Tony Cassidy. "Children's Perceptions and Experiences of Care Giving: A Focus Group Study." Counselling Psychology Quarterly 20.1 (2007): 69-80. Early, Louise, Delia Cushway, and Tony Cassidy. "Perceived Stress in Young Carers: Development of a Measure." Journal of Child and Family Studies 15.2 (2006): 165-176. Ekas, Naomi V., Diane M. Lickenbrock, and Thomas L. Whitman. "Optimism, Social Support, and Well-Being in Mothers of Children with Autism Spectrum Disorder." Journal of Autism and Developmental Disorders 40.10 (2010): 1274-1284. Faso, Daniel J., A. Rebecca Neal-Beevers, and Caryn L. Carlson. "Vicarious Futurity, Hope, and Well-Being in Parents of Children with Autism Spectrum Disorder." Research in Autism Spectrum Disorders 7.2 (2013): 288-297. Gallagher, Stephen, et al. "Predictors of Psychological Morbidity in Parents of Children with Intellectual Disabilities." Journal of Pediatric Psychology 33.10 (2008): 1129-1136. Gardiner, Emily, and Grace Iarocci. "Unhappy (and Happy) in Their Own Way: A Developmental Psychopathology Perspective on Quality of Life for Families Living with Developmental Disability with and without Autism." Research in Developmental Disabilities 33.6 (2012): 2177-2192. Glidden, L. M., F. J. Billings, and B. M. Jobe. "Personality, Coping Style and Well-Being of Parents Rearing Children with Developmental Disabilities." Journal of Intellectual Disability Research 50.12 (2006): 949-962. Hastings, Richard P., et al. "Coping Strategies in Mothers and Fathers of Preschool and School-Age Children with Autism." Autism 9.4 (2005): 377-91. Heiman, Tali. "Parents of Children with Disabilities: Resilience, Coping, and Future Expectations." Journal of Developmental and Physical Disabilities 14.2 (2002): 159-171. Johnson, Douglas C., et al. "Development and Initial Validation of the Response to Stressful Experiences Scale." Military Medicine 176.2 (2011): 161-169. Jurkovic, GregoryJ, Alison Thirkield, and Richard Morrell. "Parentification of Adult Children of Divorce: A Multidimensional Analysis." Journal of Youth and Adolescence 30.2 (2001): 245-257. Kaplan, Carol P., et al. "Promoting Resilience Strategies: A Modified Consultation Model." Children & Schools 18.3 (1996): 158-168. King, G. A., et al. "A Qualitative Investigation of Changes in the Belief Systems of Families of Children with Autism or Down Syndrome." Child: Care, Health and Development 32.3 (2006): 353-369. Knight, Kathryn. "The Changing Face of the ‘Good Mother’: Trends in Research into Families with a Child with Intellectual Disability, and Some Concerns." Disability & Society 28.5 (2013): 660-673. Kuhn, Jennifer C., and Alice S. Carter. "Maternal Self-Efficacy and Associated Parenting Cognitions among Mothers of Children with Autism." American Journal of Orthopsychiatry 76.4 (2006): 564-575. Lach, Lucyna M., et al. "The Health and Psychosocial Functioning of Caregivers of Children with Neurodevelopmental Disorders." Disability and Rehabilitation 31.8 (2009): 607-18. Lloyd, T. J., and R. Hastings. "Hope as a Psychological Resilience Factor in Mothers and Fathers of Children with Intellectual Disabilities." Journal of Intellectual Disability Research 53.12 (2009): 957-68. Luthar, Suniya S., Dante Cicchetti, and Bronwyn Becker. "The Construct of Resilience: A Critical Evaluation and Guidelines for Future Work." 
Child Development 71.3 (2000): 543-62. Masten, Ann S., and Jelena Obradović. "Competence and Resilience in Development." Annals of the New York Academy of Sciences 1094.1 (2006): 13-27. Midence, Kenny, and Meena O’Neill. "The Experience of Parents in the Diagnosis of Autism: A Pilot Study." Autism 3.3 (1999): 273-85. Monin, Joan K., et al. "Linguistic Markers of Emotion Regulation and Cardiovascular Reactivity among Older Caregiving Spouses." Psychology and Aging 27.4 (2012): 903-11. O'Dwyer, Siobhan, Wendy Moyle, and Sierra van Wyk. "Suicidal Ideation and Resilience in Family Carers of People with Dementia: A Pilot Qualitative Study." Aging & Mental Health 17.6 (2013): 753-60. Pennebaker, James W., Tracy J. Mayne, and Martha E. Francis. "Linguistic Predictors of Adaptive Bereavement." Journal of Personality and Social Psychology 72.4 (1997): 863-71. Rasmussen, Heather N., et al. "Self-Regulation Processes and Health: The Importance of Optimism and Goal Adjustment." Journal of Personality 74.6 (2006): 1721-48. Simon, Joan B., John J. Murphy, and Shelia M. Smith. "Understanding and Fostering Family Resilience." The Family Journal 13.4 (2005): 427-36. Singer, George H. S. "Meta-Analysis of Comparative Studies of Depression in Mothers of Children with and without Developmental Disabilities." American Journal on Mental Retardation 111.3 (2006): 155-69. Skovdal, Morten, and Eleni Andreouli. "Using Identity and Recognition as a Framework to Understand and Promote the Resilience of Caregiving Children in Western Kenya." Journal of Social Policy 40.03 (2011): 613-30. Sörensen, Silvia, et al. "Dementia Care: Mental Health Effects, Intervention Strategies, and Clinical Implications." The Lancet Neurology 5.11 (2006): 961-73. Stein, Judith A., Mary Jane Rotheram-Borus, and Patricia Lester. "Impact of Parentification on Long-Term Outcomes among Children of Parents with Hiv/Aids." Family Process 46.3 (2007): 317-33. Tompkins, Tanya L. "Parentification and Maternal HIV Infection: Beneficial Role or Pathological Burden?" Journal of Child and Family Studies 16.1 (2007): 108-18. Turnbull, Ann P., et al. "Conceptualization and Measurement of Family Outcomes Associated with Families of Individuals with Intellectual Disabilities." Mental Retardation and Developmental Disabilities Research Reviews 13.4 (2007): 346-56. Vitaliano, Peter P., Jianping Zhang, and James M. Scanlan. "Is Caregiving Hazardous to One's Physical Health? A Meta-Analysis." Psychological Bulletin 129.6 (2003): 946-72. Walsh, Froma. "Family Resilience: A Framework for Clinical Practice." Family Process 42.1 (2003): 1-18. Whittingham, Koa, et al. "Sorrow, Coping and Resiliency: Parents of Children with Cerebral Palsy Share Their Experiences." Disability and Rehabilitation 35.17 (2013): 1447-52. Wrosch, Carsten, et al. "Giving Up on Unattainable Goals: Benefits for Health?" Personality and Social Psychology Bulletin 33.2 (2007): 251-65. Wrosch, Carsten, et al. "The Importance of Goal Disengagement in Adaptive Self-Regulation: When Giving Up Is Beneficial." Self and Identity 2.1 (2003): 1-20. Yamada, Atsurou, et al. "Emotional Distress and Its Correlates among Parents of Children with Pervasive Developmental Disorders." Psychiatry and Clinical Neurosciences 61.6 (2007): 651-57. Zauszniewski, Jaclene A., Abir K. Bekhet, and M. J. Suresky. "Resilience in Family Members of Persons with Serious Mental Illness." Nursing Clinics of North America 45.4 (2010): 613-26.
APA, Harvard, Vancouver, ISO and other styles
49

Allen, Rob. "Lost and Now Found: The Search for the Hidden and Forgotten". M/C Journal 20, no. 5 (13.10.2017). http://dx.doi.org/10.5204/mcj.1290.

Full text of the source
Abstract:
The Digital Turn Much of the 19th century disappeared from public view during the 20th century. Historians recovered what they could from archives and libraries, with the easy pickings – the famous and the fortunate – coming first. Latterly, social and political historians of different hues determinedly sought out the more hidden, forgotten, and marginalised. However, there were always limitations to resources – time, money, location – as well as purpose, opportunity, and permission. 'History' was principally a professionalised and privileged activity dominated by academics who had preferential access to, and significant control over, the resources, technologies and skills required, as well as the social, economic and cultural framework within which history was recovered, interpreted, approved and disseminated. Digitisation and the broader development of new communication technologies have, however, transformed historical research processes and practice dramatically, removing many constraints, opening up many opportunities, and allowing many others beyond the professional historian to trace and track what would have remained hidden, forgotten, or difficult to find, as well as to verify (or otherwise) what has already been claimed and concluded. In the 21st century, the SEARCH button has become a dominant tool of research. This, along with other technological and media developments, has altered the practice of historians – professional or 'public' – who can now range deep and wide in the collection, portrayal and dissemination of historical information, in and out of the confines of the traditional institutional walls of retained information, academia, location, and national boundaries. This incorporation of digital technologies into academic historical practice generally has raised, as Cohen and Rosenzweig identified a decade ago in their book Digital History, not just promises, but perils. For the historian, there has been the move, through digitisation, from the relative scarcity and inaccessibility of historical material to its (over) abundance, but also the emerging acceptance that, out of both necessity and preference, a hybridity of sources will be the foreseeable way forward. There has also been a significant shift, as De Groot notes in his book Consuming History, in the often conflicted relationship between popular/public history and academic history, and the professional and the 'amateur' historian. This has brought a potentially beneficial democratization of historical practice but also an associated set of concerns around the loss of control of both practice and product of the professional historian. Additionally, the development of digital tools for the collection and dissemination of 'history' has raised fears around the commercialised development of the subject's brand, products and commodities. This article considers the significance and implications of some of these changes through one protracted act of recovery and reclamation in which the digital made the difference: the life of a notorious 19th century professional agitator on both sides of the Atlantic, John De Morgan. A man thought lost, but now found. "Who Is John De Morgan?" The search began in 1981, linked to the study of contemporary "race riots" in South East London. The initial purpose was to determine whether there was a history of rioting in the area. In the Local History Library, a calm and dusty backwater, an early find was a fading, but evocative and puzzling, photograph of "The Plumstead Common Riots" of 1876. 
It showed a group of men and women, posing for the photographer on a hillside – the technology required stillness, even in the middle of a riot – spades in hand, filling in a Mr. Jacob's sandpits, illegally dug from what was supposed to be common land. The leader of this, and other similar riots around England, was John De Morgan. A local journalist who covered the riots commented: "Of Mr. De Morgan little is known before or since the period in which he flashed meteorlike through our section of the atmosphere, but he was indisputably a remarkable man" (Vincent 588). Thus began a trek, much interrupted, sometimes unmapped and haphazard, to discover more about this 'remarkable man'. "Who is John De Morgan" was a question frequently asked by his many contemporary antagonists, and by subsequent historians, and one to which De Morgan deliberately gave few answers. The obvious place to start the search was the British Museum Reading Room, resplendent in its Victorian grandeur, the huge card catalogue still in the 1980s the dominating technology. Together with the Library's newspaper branch at Colindale, this was likely to be the repository of all that might then easily be known about De Morgan. From 1869, at the age of 21, it appeared that De Morgan had embarked on a life of radical politics that took him through the UK, made him notorious, led to accusations of treasonable activities, and sent him to jail twice, before he departed unexpectedly to the USA in 1880. During that period, he was involved with virtually every imaginable radical cause, at various times a temperance advocate, a spiritualist, a First Internationalist, a Republican, a Tichbornite, a Commoner, an anti-vaccinator, an advanced Liberal, a parliamentary candidate, a Home Ruler. As a radical, he, like many radicals of the period, "zigzagged nomadically through the mayhem of nineteenth century politics fighting various foes in the press, the clubs, the halls, the pulpit and on the street" (Kazin 202). He promoted himself as the "People's Advocate, Champion and Friend" (Allen). Never a joiner or follower, he established a variety of organizations, became a professional agitator and orator, and supported himself and his politics through lecturing and journalism. Able to attract huge crowds to "monster meetings", he achieved fame, or more correctly notoriety. And then, in 1880, broke and in despair, he disappeared from public view by emigrating to the USA. Lost The view of De Morgan as a "flashing meteor" was held by many in the 1870s. Historians of the 20th century took a similar position and, while considering him intriguing and culturally interesting, normally dispatched him to the footnotes. By the latter part of the 20th century, he was described as "one of the most notorious radicals of the 1870s yet remains a shadowy figure" and was generally dismissed as "a swashbuckling demagogue," a "democratic messiah," and "if not a bandit … at least an adventurer" (Allen 684). His politics were deemed to be reactionary, peripheral, and, worst of all, populist. He was certainly not of sufficient interest to pursue across the Atlantic. In this dismissal, he fell foul of the highly politicised professional culture of mid-to-late 20th-century academic historians. In particular, the lack of any significant direct linkage to the story of the rise of a working class, and specifically the British Labour party, left individuals like De Morgan in the margins and footnotes. 
However, in terms of historical practice, it was also the case that his mysterious entry into public life, his rapid rise to brief notability and notoriety, and his sudden disappearance, made the investigation of his career too technically difficult to be worthwhile. The footprints of the forgotten may occasionally turn up in the archived papers of the important, or in distant public archives and records, but the primary sources are the newspapers of the time. De Morgan was a regular, almost daily, visitor to the pages of the multitude of newspapers, local and national, that were published in Victorian Britain and Gilded Age USA. He also published his own, usually short-lived and sometimes eponymous, newspapers: De Morgan's Monthly and De Morgan's Weekly as well as the splendidly titled People's Advocate and National Vindicator of Right versus Wrong and the deceptively titled, highly radical, House and Home. He was highly mobile: he noted, without too much hyperbole, that in the 404 days between his English prison sentences in the mid-1870s, he had 465 meetings, travelled 32,000 miles, and addressed 500,000 people. Thus the newspapers of the time are littered with often detailed and vibrant accounts of his speeches, demonstrations, and riots. Nonetheless, the 20th-century technologies of access and retrieval continued to limit discovery. The white gloves, cradles, pencils and paper of the library or archive, sometimes supplemented by the century-old 'new' technology of the microfilm, all enveloped in a culture of hallowed (and pleasurable) silence, restricted the researcher looking to move into the lesser known and certainly the unknown. The fact that most of De Morgan's life was spent, it was thought, outside of England, and outside the purview of the British Library, only exacerbated the problem. At a time when a historian had to travel to the sources and then work directly on them, pencil in hand, it needed more than curiosity to keep searching. Even as many historians in the late part of the century shifted their centre of gravity from the known to the unknown and from the great to the ordinary, in any form of intellectual or resource cost-benefit analysis, De Morgan was a non-starter. Unknown On the subject of his early life, De Morgan was tantalisingly and deliberately vague. In his speeches and newspapers, he often leaked his personal and emotional struggles as well as his political battles. However, when it came to his biographical story, he veered between the untruthful, the denial, and the obscure. To the twentieth century observer, his life began in 1869 at the age of 21 and ended at the age of 32. His various political campaign "biographies" gave some hints, but what little he did give away was often vague, coy and/or unlikely. His name was actually John Francis Morgan, but he never formally acknowledged it. He claimed, and was very proud, to be Irish and to have been educated in London and at Cambridge University (possible but untrue), and also to have been "for the first twenty years of his life directly or indirectly a railway servant," and to have been a "boy orator" from the age of ten (unlikely but true). He promised that "Some day – nay any day – that the public desire it, I am ready to tell the story of my strange life from earliest recollection to the present time" (St. Clair 4). 
He never did and the 20th century could unearth little evidence in relation to any of his claims. The blend of the vague, the unlikely and the unverifiable – combined with an inclination to self-glorification and hyperbole – surrounded De Morgan with an aura, for historians as well as contemporaries, of the self-seeking, untrustworthy charlatan with something to hide and little to say. Therefore, as the 20th century moved to closure, the search for John De Morgan did so as well. Though interesting, he gave most value in contextualising the lives of Victorian radicals more generally. He headed back to the footnotes. Now Found Meanwhile, the technologies underpinning academic practice generally, and history specifically, had changed. The photocopier, personal computer, Internet, and mobile device had arrived. They formed the basis for both resistance and revolution in academic practices. For a while, the analytical skills of the academic community were concentrated on the perils as much as the promises of a "digital history" (Cohen and Rosenzweig Digital). But as the Millennium turned, and the academic community itself spawned, inter alia, Google, the practical advantages of digitisation for history forced themselves on people. Google enabled confident searching from a neutral place for things known and unknown; information moved to the user more easily in both time and space. The culture and technologies of gathering, retrieval, analysis, presentation and preservation altered dramatically and, as a result, the traditional powers of gatekeepers, institutions and professional historians were redistributed (De Groot). Access and abundance, arguably over-abundance, became the platform for the management of historical information. For the search for De Morgan, the door reopened. The increased global electronic access to extensive databases, catalogues, archives, and public records, as well as to people who knew, or wanted to know, something, opened up opportunities that have been rapidly utilised and expanded over the last decade. Both professional and "amateur" historians moved into a space that made the previously difficult to know or unknowable now accessible. Inevitably, the development of digital newspaper archives was particularly crucial to seeking and finding John De Morgan. After some faulty starts in the early 2000s, characterised as a "wild west" and a "gold rush" (Fyfe 566), comprehensive digitised newspaper archives became available. While still not perfect in terms of coverage and quality, these archives are a transforming technology. In the UK, the British Newspaper Archive (BNA) – in pursuit of the goal of digitising all UK newspapers – now has over 20 million pages. Each month presents some more of De Morgan. Similarly, in the US, Fulton History, a free newspaper archive run by retired computer engineer Tom Tryniski, now has nearly 40 million pages of New York newspapers. 
The almost daily footprints of De Morgan's radical life can now be seen, and the lives of the social networks within which he worked on both sides of the Atlantic come easily into view, even from a desk in New Zealand. The Internet also allows connections between researchers, both academic and 'public', bringing into reach resources not otherwise knowable: a Scottish genealogist with a mass of data on De Morgan's family; a Californian with the historian's pot of gold, a collection of over 200 letters received by De Morgan over a 50 year period; a Leeds Public Library blogger uncovering spectacular, but rarely seen, Victorian electoral cartoons which explain De Morgan's precipitate departure to the USA. These discoveries would not have happened without the infrastructure of the Internet, web site, blog, and e-mail. Just how different searching is can be seen in the following recent scenario, one of many now occurring. An addition in 2017 to the BNA shows a Master J.F. Morgan, aged 13, giving lectures on temperance in Ledbury in 1861, luckily a census year. A check of the census through Ancestry shows that Master Morgan was born in Lincolnshire in England, and a quick look at the 1851 census shows him living on an isolated blustery hill in Yorkshire in a railway encampment, along with 250 navvies, as his father, James, works on the construction of a tunnel. Suddenly, literally within the hour, the 20-year search for the childhood of John De Morgan, the supposedly Irish-born "gentleman who repudiated his class," has taken a significant turn. At the end of the 20th century, despite many efforts, John De Morgan was therefore a partial character bounded by what he said and didn't say, what others believed, and the intellectual and historiographical priorities, technologies, tools and processes of that century. In effect, he "lived" historically for less than a quarter of his life. Without digitisation, much would have remained hidden; with it there has been, and will still be, much to find. De Morgan hid himself and the 20th century forgot him. But as the technologies have changed, and with them the structures of historical practice, the question that even De Morgan himself posed – "Who is John De Morgan?" – can now be addressed. Searching Digitisation brings undoubted benefits, but its impact goes a long way beyond the improved search and detection capabilities, into a range of technological developments of communication and media that impact on practice, practitioners, institutions, and 'history' itself. A dominant issue for the academic community is the control of "history." De Groot, in his book Consuming History, considers how history now works in contemporary popular culture and, in particular, examines the development of the sometimes conflicted relationship between popular/public history and academic history, and the professional and the 'amateur' historian. The traditional legitimacy of professional historians has, many argue, been eroded by shifts in technology and access, with the power of traditional cultural gatekeepers being undermined, bypassing the established control of institutions and the professional historian. While most academics now embrace the primary tools of so-called "digital history," they remain, De Groot argues, worried that "history" is in danger of becoming part of a discourse of leisure, not a professionalized arena (18). 
An additional concern is the role of the global capitalist market, which is developing, or even taking over, 'history' as a brand, product and commodity with overt fiscal value. Here the huge impact of newspaper archives and genealogical software (sometimes owned in tandem) is of particular concern. There is also the new challenge of "navigating the chaos of abundance in online resources" (De Groot 68). By 2005, it had become clear that: "the digital era seems likely to confront historians – who were more likely in the past to worry about the scarcity of surviving evidence from the past – with a new 'problem' of abundance. A much deeper and denser historical record, especially one in digital form seems like an incredible opportunity and a gift. But its overwhelming size means that we will have to spend a lot of time looking at this particular gift horse in mouth" (Cohen and Rosenzweig, Web). This easily accessible abundance imposes much higher standards of evidence on the historian. The acceptance within the traditional model that much could simply not be done or known with the resources available meant that there was a greater allowance for not knowing. But with a search button and public access, democratizing the process, the consumer as well as the producer can see, and find, for themselves. Taking on some of these challenges, Zaagsma, having reminded us that the history of digital humanities goes back at least 60 years, notes the need to get rid of the "myth that historical practice can be uncoupled from technological, and thus methodological developments, and that going digital is a choice, which, I cannot emphasise strongly enough, it is not" (14). There is no longer a digital history which is separate from history, and with digital technologies that are now ubiquitous and pervasive, historians have accepted or must quickly face a fundamental break with past practices. However, also noting that the great majority of archival material is not digitised and is unlikely to be so, Zaagsma concludes that hybridity will be the "new normal," combining "traditional/analogue and new/digital practices at least in information gathering" (17). Conclusion A decade on from Cohen and Rosenzweig's "Perils and Promises," the digital is a given. Both historical practice and historians have changed, though it is a work in progress. An early pioneer of the use of computers in the humanities, Robert Busa wrote in 1980 that "the principal aim is the enhancement of the quality, depth and extension of research and not merely the lessening of human effort and time" (89). Twenty years later, as Google was launched, Jordanova, taking on those who would dismiss public history as "mere" popularization, entertainment or propaganda, argued for the "need to develop coherent positions on the relationships between academic history, the media, institutions…and popular culture" (149). As the digital turn continues, and the SEARCH button is just one part of that, all historians – professional or "amateur" – will take advantage of opportunities that technologies have opened up. Looking across the whole range of transformations in recent decades, De Groot concludes: "Increasingly users of history are accessing the past through complex and innovative media and this is reconfiguring their sense of themselves, the world they live in and what history itself might be about" (310). References Allen, Rob. "'The People's Advocate, Champion and Friend': The Transatlantic Career of Citizen John De Morgan (1848-1926)." 
Historical Research 86.234 (2013): 684-711. Busa, Roberto. "The Annals of Humanities Computing: The Index Thomisticus." Computers and the Humanities 14.2 (1980): 83-90. Cohen, Daniel J., and Roy Rosenzweig. Digital History: A Guide to Gathering, Preserving, and Presenting the Past on the Web. Philadelphia, PA: U Pennsylvania P, 2005. ———. "Web of Lies? Historical Knowledge on the Internet." First Monday 10.12 (2005). De Groot, Jerome. Consuming History: Historians and Heritage in Contemporary Popular Culture. 2nd ed. Abingdon: Routledge, 2016. De Morgan, John. Who Is John De Morgan? A Few Words of Explanation, with Portrait. By a Free and Independent Elector of Leicester. London, 1877. Fyfe, Paul. "An Archaeology of Victorian Newspapers." Victorian Periodicals Review 49.4 (2016): 546-77. "Interchange: The Promise of Digital History." Journal of American History 95.2 (2008): 452-91. Johnston, Leslie. "Before You Were Born, We Were Digitizing Texts." The Signal 9 Dec. 2012, Library of Congress. <https://blogs.loc.gov/thesignal/292/12/before-you-were-born-we-were-digitizing-texts>. Jordanova, Ludmilla. History in Practice. 2nd ed. London: Arnold, 2000. Kazin, Michael. A Godly Hero: The Life of William Jennings Bryan. New York: Anchor Books, 2006. Saint-Clair, Sylvester. Sketch of the Life and Labours of J. De Morgan, Elocutionist, and Tribune of the People. Leeds: De Morgan & Co., 1880. Vincent, William T. The Records of the Woolwich District, Vol. II. Woolwich: J.P. Jackson, 1890. Zaagsma, Gerban. "On Digital History." BMGN-Low Countries Historical Review 128.4 (2013): 3-29.
APA, Harvard, Vancouver, ISO and other styles
50

Currie, Susan, and Donna Lee Brien. "Mythbusting Publishing: Questioning the ‘Runaway Popularity’ of Published Biography and Other Life Writing". M/C Journal 11, no. 4 (1.07.2008). http://dx.doi.org/10.5204/mcj.43.

Full text of the source
Abstract:
Introduction: Our current obsession with the lives of others “Biography—that is to say, our creative and non-fictional output devoted to recording and interpreting real lives—has enjoyed an extraordinary renaissance in recent years,” writes Nigel Hamilton in Biography: A Brief History (1). Ian Donaldson agrees that biography is back in fashion: “Once neglected within the academy and relegated to the dustier recesses of public bookstores, biography has made a notable return over recent years, emerging, somewhat surprisingly, as a new cultural phenomenon, and a new academic adventure” (23). For over a decade now, commentators having been making similar observations about our obsession with the intimacies of individual people’s lives. In a lecture in 1994, Justin Kaplan asserted the West was “a culture of biography” (qtd. in Salwak 1) and more recent research findings by John Feather and Hazel Woodbridge affirm that “the undiminished human curiosity about other peoples lives is clearly reflected in the popularity of autobiographies and biographies” (218). At least in relation to television, this assertion seems valid. In Australia, as in the USA and the UK, reality and other biographically based television shows have taken over from drama in both the numbers of shows produced and the viewers these shows attract, and these forms are also popular in Canada (see, for instance, Morreale on The Osbournes). In 2007, the program Biography celebrated its twentieth anniversary season to become one of the longest running documentary series on American television; so successful that in 1999 it was spun off into its own eponymous channel (Rak; Dempsey). Premiered in May 1996, Australian Story—which aims to utilise a “personal approach” to biographical storytelling—has won a significant viewership, critical acclaim and professional recognition (ABC). It can also be posited that the real home movies viewers submit to such programs as Australia’s Favourite Home Videos, and “chat” or “confessional” television are further reflections of a general mania for biographical detail (see Douglas), no matter how fragmented, sensationalized, or even inane and cruel. A recent example of the latter, the USA-produced The Moment of Truth, has contestants answering personal questions under polygraph examination and then again in front of an audience including close relatives and friends—the more “truthful” their answers (and often, the more humiliated and/or distressed contestants are willing to be), the more money they can win. Away from television, but offering further evidence of this interest are the growing readerships for personally oriented weblogs and networking sites such as MySpace and Facebook (Grossman), individual profiles and interviews in periodical publications, and the recently widely revived newspaper obituary column (Starck). Adult and community education organisations run short courses on researching and writing auto/biographical forms and, across Western countries, the family history/genealogy sections of many local, state, and national libraries have been upgraded to meet the increasing demand for these services. Academically, journals and e-mail discussion lists have been established on the topics of biography and autobiography, and North American, British, and Australian universities offer undergraduate and postgraduate courses in life writing. 
The commonly aired wisdom is that published life writing in its many text-based forms (biography, autobiography, memoir, diaries, and collections of personal letters) is enjoying unprecedented popularity. It is our purpose to examine this proposition. Methodological problems There are a number of problems involved in investigating genre popularity, growth, and decline in publishing. Firstly, it is not easy to gain access to detailed statistics, which are usually only available within the industry. Secondly, it is difficult to ascertain how publishing statistics are gathered and what they report (Eliot). There is the question of whether bestselling booklists reflect actual book sales or are manipulated marketing tools (Miller), although the move from surveys of booksellers to electronic reporting at point of sale in new publishing lists such as BookScan will hopefully obviate this problem. Thirdly, some publishing lists categorise by subject and form, some by subject only, and some do not categorise at all. This means that in any analysis of these statistics, a decision has to be made whether to use the publishing list’s system or impose a different mode. If the publishing list is taken at face value, the question arises of whether to use categorisation by form or by subject. Fourthly, there is the bedevilling issue of terminology. Traditionally, there reigned a simple dualism in the terminology applied to forms of telling the true story of an actual life: biography and autobiography. Publishing lists that categorise their books, such as BookScan, have retained it. But with postmodern recognition of the presence of the biographer in a biography and of the presence of other subjects in an autobiography, the dichotomy proves false. There is the further problem of how to categorise memoirs, diaries, and letters. In the academic arena, the term “life writing” has emerged to describe the field as a whole. Within the genre of life writing, there are, however, still recognised sub-genres. Academic definitions vary, but generally a biography is understood to be a scholarly study of a subject who is not the writer; an autobiography is the story of an entire life written by its subject; while a memoir is a segment or particular focus of that life told, again, by its own subject. These terms are, however, often used interchangeably even by significant institutions such as the USA Library of Congress, which utilises the term “biography” for all. Different commentators also use differing definitions. Hamilton uses the term “biography” to include all forms of life writing. Donaldson discusses how the term has been co-opted to include biographies of place such as Peter Ackroyd’s London: The Biography (2000) and of things such as Lizzie Collingham’s Curry: A Biography (2005). This reflects, of course, a writing/publishing world in which non-fiction stories of places, creatures, and even foodstuffs are called biographies, presumably in the belief that this will make them more saleable. The situation is further complicated by the emergence of hybrid publishing forms such as, for instance, the “memoir-with-recipes” or “food memoir” (Brien, Rutherford and Williamson). Are such books to be classified as autobiography or put in the “cookery/food & drink” category? We mention in passing the further confusion caused by novels with a subtitle of The Biography such as Virginia Woolf’s Orlando. 
The fifth methodological problem that needs to be mentioned is the increasing globalisation of the publishing industry, which raises questions about the validity of the majority of studies available (including those cited herein) which are nationally based. Whether book sales reflect what is actually read (and by whom), raises of course another set of questions altogether. Methodology In our exploration, we were fundamentally concerned with two questions. Is life writing as popular as claimed? And, if it is, is this a new phenomenon? To answer these questions, we examined a range of available sources. We began with the non-fiction bestseller lists in Publishers Weekly (a respected American trade magazine aimed at publishers, librarians, booksellers, and literary agents that claims to be international in scope) from their inception in 1912 to the present time. We hoped that this data could provide a longitudinal perspective. The term bestseller was coined by Publishers Weekly when it began publishing its lists in 1912; although the first list of popular American books actually appeared in The Bookman (New York) in 1895, based itself on lists appearing in London’s The Bookman since 1891 (Bassett and Walter 206). The Publishers Weekly lists are the best source of longitudinal information as the currently widely cited New York Times listings did not appear till 1942, with the Wall Street Journal a late entry into the field in 1994. We then examined a number of sources of more recent statistics. We looked at the bestseller lists from the USA-based Amazon.com online bookseller; recent research on bestsellers in Britain; and lists from Nielsen BookScan Australia, which claims to tally some 85% or more of books sold in Australia, wherever they are published. In addition to the reservations expressed above, caveats must be aired in relation to these sources. While Publishers Weekly claims to be an international publication, it largely reflects the North American publishing scene and especially that of the USA. Although available internationally, Amazon.com also has its own national sites—such as Amazon.co.uk—not considered here. It also caters to a “specific computer-literate, credit-able clientele” (Gutjahr: 219) and has an unashamedly commercial focus, within which all the information generated must be considered. In our analysis of the material studied, we will use “life writing” as a genre term. When it comes to analysis of the lists, we have broken down the genre of life writing into biography and autobiography, incorporating memoir, letters, and diaries under autobiography. This is consistent with the use of the terminology in BookScan. Although we have broken down the genre in this way, it is the overall picture with regard to life writing that is our concern. It is beyond the scope of this paper to offer a detailed analysis of whether, within life writing, further distinctions should be drawn. Publishers Weekly: 1912 to 2006 1912 saw the first list of the 10 bestselling non-fiction titles in Publishers Weekly. It featured two life writing texts, being headed by an autobiography, The Promised Land by Russian Jewish immigrant Mary Antin, and concluding with Albert Bigelow Paine’s six-volume biography, Mark Twain. The Publishers Weekly lists do not categorise non-fiction titles by either form or subject, so the classifications below are our own with memoir classified as autobiography. 
In a decade-by-decade tally of these listings, there were 3 biographies and 20 autobiographies in the lists between 1912 and 1919; 24 biographies and 21 autobiographies in the 1920s; 13 biographies and 40 autobiographies in the 1930s; 8 biographies and 46 autobiographies in the 1940s; 4 biographies and 14 autobiographies in the 1950s; 11 biographies and 13 autobiographies in the 1960s; 6 biographies and 11 autobiographies in the 1970s; 3 biographies and 19 autobiographies in the 1980s; 5 biographies and 17 autobiographies in the 1990s; and 2 biographies and 7 autobiographies from 2000 up until the end of 2006. See Appendix 1 for the relevant titles and authors. Breaking down the most recent figures for 1990–2006, we find a not radically different range of figures and trends across years in the contemporary environment. The validity of looking only at the top ten books sold in any year is, of course, questionable, as are all the issues regarding sources discussed above. But one thing is certain in terms of our inquiry: there is no upwards curve obvious here. If anything, the decade breakdown suggests that sales are trending downwards. This is in keeping with the findings of Michael Korda in his history of twentieth-century bestsellers, in which he suggests a consistent longitudinal picture across all genres: “In every decade, from 1900 to the end of the twentieth century, people have been reliably attracted to the same kind of books […] Certain kinds of popular fiction always do well, as do diet books […] self-help books, celebrity memoirs, sensationalist scientific or religious speculation, stories about pets, medical advice (particularly on the subjects of sex, longevity, and child rearing), folksy wisdom and/or humour, and the American Civil War” (xvii).

Amazon.com since 2000

The USA-based Amazon.com online bookselling site provides listings of its own top 50 bestsellers since 2000, although only the top 14 bestsellers are recorded for 2001. As fiction and non-fiction are not separated out on these lists and no genre categories are specified, we have again made our own decisions about what books fall into the category of life writing. Generally, we erred on the side of inclusion (see Appendix 2). However, when it came to books dealing with political events, we excluded books dealing with specific aspects of political practice/policy. This meant excluding books on, for instance, George Bush’s so-called “war on terror,” of which there were a number of bestsellers listed.

In summary, these listings reveal that of the top 364 books sold by Amazon from 2000 to 2007, 46 (or some 12.6%) were, according to our judgment, either biographical or autobiographical texts. This is not far from the 10% of the 1912 Publishers Weekly listing, although, as above, the proportion of bestsellers that can be classified as life writing varied dramatically from year to year, with no discernible pattern of peaks and troughs. This proportion tallied to 4% auto/biographies in 2000, 14% in 2001, 10% in 2002, 18% in 2003 and 2004, 4% in 2005, 14% in 2006, and 20% in 2007. This could suggest a rising trend, although it does not offer any consistent trend data to suggest sales figures may either continue to grow, or fall again, in 2008 or afterwards. Looking at the particular texts in these lists (see Appendix 2) also suggests that there is no general trend in the popularity of life writing in relation to other genres.
For instance, in these listings in Amazon.com, life writing texts only rarely figure in the top 10 books sold in any year. So rarely, indeed, that from 2001 there were only five in this category. In 2001, John Adams by David McCullough was the best selling book of the year; in 2003, Hillary Clinton’s autobiographical Living History was 7th; in 2004, My Life by Bill Clinton reached number 1; in 2006, Nora Ephron’s I Feel Bad About My Neck: and Other Thoughts on Being a Woman was 9th; and in 2007, Ishmael Beah’s discredited A Long Way Gone: Memoirs of a Boy Soldier came in at 8th. Apart from McCullough’s biography of Adams, all the above are autobiographical texts, while the focus on leading political figures is notable.

Britain: Feather and Woodbridge

With regard to the British situation, we did not have actual lists and relied on recent analysis. John Feather and Hazel Woodbridge find considerably higher levels for life writing in Britain than above, with, from 1998 to 2005, 28% of British published non-fiction comprising autobiography, while 8% of hardback and 5% of paperback non-fiction was biography (2007). Furthermore, although Feather and Woodbridge agree with commentators that life writing is currently popular, they do not agree that this is a growth state, finding the popularity of life writing “essentially unchanged” since their previous study, which covered 1979 to the early 1990s (Feather and Reid).

Australia: Nielsen BookScan 2006 and 2007

In the Australian publishing industry, where producing books remains an “expensive, risky endeavour which is increasingly market driven” (Galligan 36) and “an inherently complex activity” (Carter and Galligan 4), the most recent Australian Bureau of Statistics figures reveal that the total number of books sold in Australia has remained relatively static over the past decade (130.6 million in the financial year 1995–96 and 128.8 million in 2003–04) (ABS). During this time, however, sales volumes of non-fiction publications have grown markedly, with a trend towards “non-fiction, mass market and predictable” books (Corporall 41), resulting in general non-fiction sales in 2003–2004 outselling general fiction by factors as high as ten, depending on the format—hard- or paperback, and trade or mass market paperback (ABS 2005). However, while non-fiction has increased in popularity in Australia, the same does not seem to hold true for life writing. Here, in utilising data for the top 5,000 selling non-fiction books in both 2006 and 2007, we are relying on Nielsen BookScan’s categorisation of texts as either biography or autobiography.

In 2006, no works of life writing made the top 10 books sold in Australia. In looking at the top 100 books sold for 2006, in some cases the subjects of these works vary markedly from those extracted from the Amazon.com listings. In Australia in 2006, life writing makes its first appearance at number 14 with convicted drug smuggler Schapelle Corby’s My Story. This is followed by another My Story at 25, this time by retired Australian army chief Peter Cosgrove. Jonestown: The Power and Myth of Alan Jones comes in at 34 for the Australian broadcaster’s biographer Chris Masters; the biography The Innocent Man by John Grisham at 38; and Li Cunxin’s autobiographical Mao’s Last Dancer at 45.
Australian Susan Duncan’s memoir of coping with personal loss, Salvation Creek: An Unexpected Life, makes 50; bestselling USA travel writer Bill Bryson’s autobiographical memoir of his childhood, The Life and Times of the Thunderbolt Kid, 69; Mandela: The Authorised Portrait by Rosalind Coward, 79; and Joanne Lees’s memoir of dealing with her kidnapping, the murder of her partner and the justice system in Australia’s Northern Territory, No Turning Back, 89. These books reveal a market preference for autobiographical writing, and an almost even split between Australian and overseas subjects in 2006.

2007 similarly saw no life writing in the top 10. The books in the top 100 sales reveal a downward trend, with fewer titles making this band overall. In 2007, Terri Irwin’s memoir of life with her famous husband, wildlife warrior Steve Irwin, My Steve, came in at number 26; musician Andrew Johns’s memoir of mental illness, The Two of Me, at 37; Ayaan Hirsi Ali’s autobiography Infidel at 39; John Grogan’s biography/memoir, Marley and Me: Life and Love with the World’s Worst Dog, at 42; Sally Collings’s biography of the inspirational young survivor Sophie Delezio, Sophie’s Journey, at 51; and Elizabeth Gilbert’s hybrid food, self-help and travel memoir, Eat, Pray, Love: One Woman’s Search for Everything, at 82. Mao’s Last Dancer, published the year before, remained in the top 100 in 2007 at 87.

When moving to a consideration of the top 5,000 books sold in Australia in 2006, BookScan reveals only 62 books categorised as life writing in the top 1,000, and only 222 in the top 5,000 (with 34 titles between 1,000 and 1,999, 45 between 2,000 and 2,999, 48 between 3,000 and 3,999, and 33 between 4,000 and 5,000). 2007 shows a similar total of 235 life writing texts in the top 5,000 bestselling books (75 titles in the first 1,000, 27 between 1,000 and 1,999, 51 between 2,000 and 2,999, 39 between 3,000 and 3,999, and 43 between 4,000 and 5,000). In both years, 2006 and 2007, life writing thus constituted only some 4% of the bestselling 5,000 titles in Australia; it also showed only minimal change between these years and, therefore, no significant growth.

Conclusions

Our investigation using various instruments that claim to reflect levels of book sales reveals that Western readers’ willingness to purchase published life writing has not changed significantly over the past century. We find no evidence of either a short, or longer, term growth or boom in sales of such books. Instead, it appears that what has been widely heralded as a new golden age of life writing may well be more the result of an expanded understanding of what is included in the genre than of an increased interest in it by either book readers or publishers. What recent years do appear to have seen, however, is a significantly increased interest by public commentators, critics, and academics in this genre of writing.

We have also discovered that the issue of our current obsession with the lives of others tends to be discussed in academic as well as popular fora as if what applies to one sub-genre or production form applies to another: if biography is popular, then autobiography will also be, and vice versa. If reality television programming is attracting viewers, then readers will be flocking to life writing as well. Our investigation reveals that such propositions are questionable, and that there is significant research to be completed in mapping such audiences against each other.
This work has also highlighted the difficulty of separating out the categories of written texts in publishing studies, firstly in terms of determining what falls within the category of life writing as distinct from other forms of non-fiction (the hybrid problem) and, secondly, in terms of separating out the categories within life writing. Although we have continued to use the terms biography and autobiography as sub-genres, we are aware that they are less useful as descriptors than they are often assumed to be. In order to obtain a more complete and accurate picture, publishing categories may need to be agreed upon, redefined and utilised across the publishing industry and within academia. This is of particular importance in the light of the suggestions (from total sales volumes) that the audiences for books are limited, and that, therefore, the rise of one sub-genre may be directly responsible for the fall of another. Bair argues, for example, that in the 1980s and 1990s, the popularity of what she categorises as memoir had direct repercussions on the numbers of birth-to-death biographies that were commissioned, contracted, and published, as “sales and marketing staffs conclude[d] that readers don’t want a full-scale life any more” (17). Finally, although we have highlighted the difficulty of using publishing statistics when there is no common understanding as to what such data is reporting, we hope this study shows that the utilisation of such material does add a depth to such enquiries, especially in interrogating the anecdotal evidence that is often quoted as data in publishing and other studies.

Appendix 1: Publishers Weekly listings 1990–2006

1990 included two autobiographies, Bo Knows Bo by professional athlete Bo Jackson (with Dick Schaap) and Ronald Reagan’s An American Life: An Autobiography. In 1991, there were further examples of life writing with unimaginative titles: Me: Stories of My Life by Katharine Hepburn, Nancy Reagan: The Unauthorized Biography by Kitty Kelley, and Under Fire: An American Story by Oliver North with William Novak; as indeed there were again in 1992 with It Doesn’t Take a Hero: The Autobiography of Norman Schwarzkopf, Sam Walton: Made in America (the autobiography of the founder of Wal-Mart), Diana: Her True Story by Andrew Morton, Every Living Thing (yet another veterinary outpouring from James Herriot), and Truman by David McCullough. In 1993, radio shock-jock Howard Stern was successful with the autobiographical Private Parts, as was Betty Eadie with her detailed recounting of her alleged near-death experience, Embraced by the Light. Eadie’s book remained on the list in 1994 next to Don’t Stand too Close to a Naked Man, comedian Tim Allen’s autobiography. Flag-waving titles continued in 1995 with Colin Powell’s My American Journey and Miss America, Howard Stern’s follow-up to Private Parts. 1996 saw two autobiographical works, basketball superstar Dennis Rodman’s Bad as I Wanna Be and figure-skater Ekaterina Gordeeva’s (with E. M. Swift) My Sergei: A Love Story. In 1997, Diana: Her True Story returned to the top 10, joining Frank McCourt’s Angela’s Ashes and prolific biographer Kitty Kelley’s The Royals, while in 1998, there was only the part-autobiography, part travel-writing A Pirate Looks at Fifty by musician Jimmy Buffett. There is no biography or autobiography included in either the 1999 or 2000 top 10 lists in Publishers Weekly, nor in that for 2005.
In 2001, David McCullough’s biography John Adams and Jack Welch’s business memoir Jack: Straight from the Gut featured. In 2002, Let’s Roll!, Lisa Beamer’s tribute to her husband, one of the heroes of 9/11, written with Ken Abraham, joined Rudolph Giuliani’s autobiography, Leadership. 2003 saw Hillary Clinton’s autobiography Living History and Paul Burrell’s memoir of his time as Princess Diana’s butler, A Royal Duty, on the list. In 2004, it was Bill Clinton’s turn with My Life. In 2006, we find John Grisham’s true crime (arguably a biography), The Innocent Man, at the top, Grogan’s Marley and Me at number three, and the autobiographical The Audacity of Hope by Barack Obama in fourth place.

Appendix 2: Amazon.com listings since 2000

In 2000, there were only two auto/biographies in the top Amazon 50 bestsellers, with Lance Armstrong’s It’s Not about the Bike: My Journey Back to Life, about his battle with cancer, at 20, and Dave Eggers’s self-consciously fictionalised memoir, A Heartbreaking Work of Staggering Genius, at 32. In 2001, only the top 14 bestsellers were recorded. At number 1 was John Adams by David McCullough and, at 11, Jack: Straight from the Gut by businessman Jack Welch. In 2002, Leadership by Rudolph Giuliani was at 12; Master of the Senate: The Years of Lyndon Johnson by Robert Caro at 29; Portrait of a Killer: Jack the Ripper by Patricia Cornwell at 42; Blinded by the Right: The Conscience of an Ex-Conservative by David Brock at 48; and Louis Gerstner’s autobiographical Who Says Elephants Can’t Dance: Inside IBM’s Historic Turnaround at 50. In 2003, Living History by Hillary Clinton was 7th; Benjamin Franklin: An American Life by Walter Isaacson 14th; Dereliction of Duty: The Eyewitness Account of How President Bill Clinton Endangered America’s Long-Term National Security by Robert Patterson 20th; Under the Banner of Heaven: A Story of Violent Faith by Jon Krakauer 32nd; Leap of Faith: Memoirs of an Unexpected Life by Queen Noor of Jordan 33rd; Kate Remembered, Scott Berg’s biography of Katharine Hepburn, 37th; Who’s Your Caddy?: Looping for the Great, Near Great and Reprobates of Golf by Rick Reilly 39th; The Teammates: A Portrait of a Friendship, about a winning baseball team, by David Halberstam 42nd; and Every Second Counts by Lance Armstrong 49th. In 2004, My Life by Bill Clinton was the best selling book of the year; American Soldier by General Tommy Franks was 16th; Kevin Phillips’s American Dynasty: Aristocracy, Fortune and the Politics of Deceit in the House of Bush 18th; Timothy Russert’s Big Russ and Me: Father and Son: Lessons of Life 20th; Tony Hendra’s Father Joe: The Man who Saved my Soul 23rd; Ron Chernow’s Alexander Hamilton 27th; Cokie Roberts’s Founding Mothers: The Women Who Raised our Nation 31st; Kitty Kelley’s The Family: The Real Story of the Bush Dynasty 42nd; and Chronicles, Volume 1 by Bob Dylan 43rd. In 2005, auto/biographical texts were well down the list, with only The Year of Magical Thinking by Joan Didion at 45 and The Glass Castle: A Memoir by Jeannette Walls at 49.
In 2006, there was a resurgence of life writing with Nora Ephron’s I Feel Bad About My Neck: and Other Thoughts on Being a Woman at 9; Grisham’s The Innocent Man at 12; Bill Buford’s food memoir Heat: An Amateur’s Adventures as Kitchen Slave, Line Cook, Pasta-Maker, and Apprentice to a Dante-Quoting Butcher in Tuscany at 23; more food writing with Julia Child’s My Life in France at 29; Immaculée Ilibagiza’s Left to Tell: Discovering God amidst the Rwandan Holocaust at 30; CNN anchor Anderson Cooper’s Dispatches from the Edge: A Memoir of War, Disasters and Survival at 43; and Isabella Hatkoff’s Owen & Mzee: The True Story of a Remarkable Friendship (between a baby hippo and a giant tortoise) at 44. In 2007, Ishmael Beah’s discredited A Long Way Gone: Memoirs of a Boy Soldier came in at 8; Walter Isaacson’s Einstein: His Life and Universe 13; Ayaan Hirsi Ali’s autobiography of her life in Muslim society, Infidel, 18; The Reagan Diaries 25; Jesus of Nazareth by Pope Benedict XVI 29; Mother Teresa: Come Be My Light 36; Clapton: The Autobiography 40; Tina Brown’s The Diana Chronicles 45; Tony Dungy’s Quiet Strength: The Principles, Practices & Priorities of a Winning Life 47; and Daniel Tammet’s Born on a Blue Day: Inside the Extraordinary Mind of an Autistic Savant at 49.

Acknowledgements

A sincere thank you to Michael Webster at RMIT for assistance with access to Nielsen BookScan statistics, and to the reviewers of this article for their insightful comments. Any errors are, of course, our own.

References

Australian Broadcasting Commission (ABC). “About Us.” Australian Story 2008. 1 June 2008 ‹http://www.abc.net.au/austory/aboutus.htm›.
Australian Bureau of Statistics. “1363.0 Book Publishers, Australia, 2003–04.” 2005. 1 June 2008 ‹http://www.abs.gov.au/ausstats/abs@.nsf/mf/1363.0›.
Bair, Deirdre. “Too Much S & M.” Sydney Morning Herald 10–11 Sept. 2005: 17.
Bassett, Troy J., and Christina M. Walter. “Booksellers and Bestsellers: British Book Sales as Documented by The Bookman, 1891–1906.” Book History 4 (2001): 205–36.
Brien, Donna Lee, Leonie Rutherford, and Rosemary Williamson. “Hearth and Hotmail: The Domestic Sphere as Commodity and Community in Cyberspace.” M/C Journal 10.4 (2007). 1 June 2008 ‹http://journal.media-culture.org.au/0708/10-brien.php›.
Carter, David, and Anne Galligan. “Introduction.” Making Books: Contemporary Australian Publishing. St Lucia: U of Queensland P, 2007. 1–14.
Corporall, Glenda. Project Octopus: Report Commissioned by the Australian Society of Authors. Sydney: Australian Society of Authors, 1990.
Dempsey, John. “Biography Rewrite: A&E’s Signature Series Heads to Sib Net.” Variety 4 Jun. 2006. 1 June 2008 ‹http://www.variety.com/article/VR1117944601.html?categoryid=1238&cs=1›.
Donaldson, Ian. “Matters of Life and Death: The Return of Biography.” Australian Book Review 286 (Nov. 2006): 23–29.
Douglas, Kate. “‘Blurbing’ Biographical: Authorship and Autobiography.” Biography 24.4 (2001): 806–26.
Eliot, Simon. “Very Necessary but not Sufficient: A Personal View of Quantitative Analysis in Book History.” Book History 5 (2002): 283–93.
Feather, John, and Hazel Woodbridge. “Bestsellers in the British Book Industry.” Publishing Research Quarterly 23.3 (Sept. 2007): 210–23.
Feather, J. P., and M. Reid. “Bestsellers and the British Book Industry.” Publishing Research Quarterly 11.1 (1995): 57–72.
Galligan, Anne. “Living in the Marketplace: Publishing in the 1990s.” Publishing Studies 7 (1999): 36–44.
Grossman, Lev. “Time’s Person of the Year: You.” Time 13 Dec. 2006. Online edition. 1 June 2008 ‹http://www.time.com/time/magazine/article/0%2C9171%2C1569514%2C00.html›.
Gutjahr, Paul C. “No Longer Left Behind: Amazon.com, Reader Response, and the Changing Fortunes of the Christian Novel in America.” Book History 5 (2002): 209–36.
Hamilton, Nigel. Biography: A Brief History. Cambridge, MA: Harvard UP, 2007.
Kaplan, Justin. “A Culture of Biography.” The Literary Biography: Problems and Solutions. Ed. Dale Salwak. Basingstoke: Macmillan, 1996. 1–11.
Korda, Michael. Making the List: A Cultural History of the American Bestseller 1900–1999. New York: Barnes & Noble, 2001.
Miller, Laura J. “The Bestseller List as Marketing Tool and Historical Fiction.” Book History 3 (2000): 286–304.
Morreale, Joanne. “Revisiting The Osbournes: The Hybrid Reality-Sitcom.” Journal of Film and Video 55.1 (Spring 2003): 3–15.
Rak, Julie. “Bio-Power: CBC Television’s Life & Times and A&E Network’s Biography on A&E.” LifeWriting 1.2 (2005): 1–18.
Starck, Nigel. “Capturing Life—Not Death: A Case For Burying The Posthumous Parallax.” Text: The Journal of the Australian Association of Writing Programs 5.2 (2001). 1 June 2008 ‹http://www.textjournal.com.au/oct01/starck.htm›.
