Journal articles on the topic "Wikis (Computer science) – Access control – United States"

Follow this link to see other types of publications on the topic: Wikis (Computer science) – Access control – United States.

Cite a source in APA, MLA, Chicago, Harvard, and many other styles

Choose the source type:

Consult the top 24 journal articles for research on the topic "Wikis (Computer science) – Access control – United States."

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication as a .pdf and read the abstract of the work online, if it is included in the metadata.

Browse journal articles from many areas of science and compile an accurate bibliography.

1

Rosenbaum, S., A. Somodevilla, and M. Casoni. "Will EMTALA Be There for People With Pregnancy-related Emergencies?" Obstetric Anesthesia Digest 43, no. 3 (August 23, 2023): 113–14. http://dx.doi.org/10.1097/01.aoa.0000946244.38317.d1.

Abstract:
(N Engl J Med. 2022;387:863–865) The Emergency Medical Treatment and Labor Act (EMTALA), a statute enacted in 1986 to prevent hospitals from refusing to treat pregnancy-related emergencies, is important to health care in the United States. EMTALA helps safeguard nondiscriminatory access to hospital emergency care for anyone in need. With the June 24, 2022, decision in Dobbs v. Jackson Women's Health Organization, the Supreme Court returned control over the constitutional right to an abortion to the states. The decision raises the question of whether EMTALA will continue to serve as a barricade against state laws that would prevent emergent hospital care short of life-threatening situations.
2

McHugh, Douglas, Richard Feinn, Jeff McIlvenna, and Matt Trevithick. "A Random Controlled Trial to Examine the Efficacy of Blank Slate: A Novel Spaced Retrieval Tool with Real-Time Learning Analytics." Education Sciences 11, no. 3 (February 25, 2021): 90. http://dx.doi.org/10.3390/educsci11030090.

Abstract:
Learner-centered coaching and feedback are relevant to various educational contexts. Spaced retrieval enhances long-term knowledge retention. We examined the efficacy of Blank Slate, a novel spaced retrieval software application, to promote learning and prevent forgetting, while gathering and analyzing data in the background about learners’ performance. A total of 93 students from 6 universities in the United States were assigned randomly to control, sequential or algorithm conditions. Participants watched a video on the Republic of Georgia before taking a 60 multiple-choice-question assessment. Sequential (non-spaced retrieval) and algorithm (spaced retrieval) groups had access to Blank Slate and 60 digital cards. The algorithm group reviewed subsets of cards daily based on previous individual performance. The sequential group reviewed all 60 cards daily. All 93 participants were re-assessed 4 weeks later. Sequential and algorithm groups were significantly different from the control group but not from each other with regard to after and delta scores. Blank Slate prevented anticipated forgetting; authentic learning improvement and retention happened instead, with spaced retrieval incurring one-third of the time investment experienced by non-spaced retrieval. Embedded analytics allowed for real-time monitoring of learning progress that could form the basis of helpful feedback to learners for self-directed learning and educators for coaching.
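The abstract does not disclose Blank Slate's scheduling rule. As a hedged illustration of how a performance-based spaced-retrieval condition differs from sequential review, here is a minimal Leitner-style scheduler in Python; the class, function names, and box intervals are all hypothetical, not Blank Slate's implementation.

```python
from dataclasses import dataclass

# Hypothetical review intervals (days) per Leitner box; Blank Slate's
# actual algorithm is not described in the abstract.
BOX_INTERVALS = [1, 2, 4, 8, 16]

@dataclass
class Card:
    prompt: str
    box: int = 0      # higher box = stronger past performance
    due_in: int = 0   # days until the card is due again

def review(card: Card, correct: bool) -> None:
    """Promote the card on a correct answer, reset it on a miss, reschedule."""
    card.box = min(card.box + 1, len(BOX_INTERVALS) - 1) if correct else 0
    card.due_in = BOX_INTERVALS[card.box]

def due_today(deck: list) -> list:
    """Spaced condition: review only the performance-dependent subset.

    A sequential (non-spaced) condition, as in the study's comparison
    group, would simply return the whole deck every day.
    """
    return [card for card in deck if card.due_in <= 0]

def advance_day(deck: list) -> None:
    for card in deck:
        card.due_in -= 1
```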
3

Ravnyushkin, A. V. "The Legal Issues of Firearms Trafficking in the United States of America." Siberian Law Review 19, no. 4 (January 8, 2023): 356–73. http://dx.doi.org/10.19073/2658-7602-2022-19-4-356-373.

Abstract:
Relevance and subject of research. The circulation of firearms, as a source of increased danger, is subject to legal regulation and control in the Russian Federation, and the use of weapons by police officers is no exception. The norms of Federal Law No. 3-FZ of February 7, 2011, "About the Police" (hereinafter, the Law "About the Police"), are an achievement of domestic administrative science. In systemic connection with the norms of criminal law, they regulate the conditions and limits of coercive measures used by police officers, including firearms. The fundamental ideas underlying Russian police activity align well with the norms of international law. By contrast, in the so-called "leading" democratic state, the United States of America, such alignment does not look well coordinated. The author substantiates this by examining the origins of US citizens' right to own firearms, the statutory regulation of the circulation of weapons in the United States, the negative consequences of that regulation (drawing on research by American scholars and on statistical data), and the activities of the US police in countering armed attacks, together with their legal regulation. One US attempt to comply with international law in this area is analyzed, namely the newly adopted US Customs and Border Protection policy on the use of force, including firearms. The purpose of the study is to assess the state of legal regulation of the circulation of civilian firearms in the United States and of police use of these weapons as a coercive measure, in order to identify positive aspects and to decide whether they could be introduced into Russian legislation. This led to the following tasks: to study the constitutional foundations of US citizens' right to own firearms (the historical aspect); to determine the current state of legal regulation of the civilian circulation of firearms in the United States and its consequences; to analyze the activities of the US police in countering armed attacks and their legal regulation, evaluate them, and determine prospects for their improvement; and to identify provisions of American legislation that are of scientific interest and assess whether they could be implemented in Russian legislation. The methodological basis of the study was a dialectical approach to scientific knowledge of the social relations associated with the circulation of firearms and their state regulation, together with analysis and synthesis of the results obtained, which made it possible to formulate and substantiate the conclusions. The special methods used include the study of normative legal acts and documents, the empirical method, and the processing, analysis, and generalization of data. Findings. The study shows that the constitutional foundations of US citizens' right to own firearms developed together with statehood itself: first in the individual states, and then in the union of those states formed into a single US government. The existing multi-layered legal framework regulating the circulation of firearms has created a wide circle of owners under a relatively simple system of access, which feeds a criminal environment in which armed attacks with mass casualties figure prominently. Armed attacks and other negative illegal acts have, to a certain extent, driven the militarization of the police, the creation and strengthening of special operations units, and the adoption by the police of various types of military equipment, weapons, and special means. Detailed legal regulation of the use of lethal force by the police is developing belatedly. The 2014 adoption of the US Customs and Border Protection manual on the use of force did not prompt other law enforcement agencies to adopt similar rules, indicating the fragmentation of US law enforcement. That manual is of particular scientific interest, and after careful analysis certain of its provisions could be introduced into the legal regulation of Russian police activity, especially regarding the use of lethal force. The fundamental ideas of police activity developed in Russia can serve as guideposts for the development of the American police. The relatively small number of firearm owners in Russia and the high requirements placed on the circulation of firearms act as a deterrent to the negative developments taking place in the United States.
4

Abdallah, Abdelrahman M., Mehmet E. Ozbek, and Rebecca A. Atadero. "Transferring Research Innovations in Bridge Inspection Planning to Bridge Inspection Practice: A Qualitative Study." Infrastructures 8, no. 11 (November 20, 2023): 164. http://dx.doi.org/10.3390/infrastructures8110164.

Abstract:
Over the last two decades, many researchers have focused on providing new ideas and frameworks to help improve conventional bridge inspection planning; however, little guidance is provided for implementing these new ideas in practice, resulting in limited change. Accordingly, this qualitative study aims to identify the factors that can help improve research products and accelerate research transfer to bridge inspection departments, with the goal of enhancing bridge inspection practice. The study used semi-structured interviews, written interviews, and questionnaires for data collection to provide rich results. Responses from twenty-six bridge personnel from state Departments of Transportation (DOTs) across the United States (U.S.) were included. The study found that most participants support a fixed inspection interval over a variable interval, since fixed intervals are easier to schedule and budget for. Participants also indicated that the barriers hindering the use of nondestructive techniques are the training required of inspectors, traffic control, and the required access equipment. The study presents the factors change leaders should focus on to facilitate organizational change in DOTs, such as enhancing the capacity of DOT staff members and gaining support from the Federal Highway Administration (FHWA).
5

Rahman, Wasifur, Masum Hasan, Md Saiful Islam, Titilayo Olubajo, Jeet Thaker, Abdel-Rahman Abdelkader, Phillip Yang, et al. "Auto-Gait." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, no. 1 (March 27, 2022): 1–19. http://dx.doi.org/10.1145/3580845.

Abstract:
Many patients with neurological disorders such as ataxia do not have easy access to neurologists, especially those living in remote localities and in developing or underdeveloped countries. Ataxia is a degenerative disease of the nervous system that surfaces as difficulty with motor control, such as walking imbalance. Previous studies have attempted automatic diagnosis of ataxia with the help of wearable biomarkers, Kinect, and other sensors. These sensors, while accurate, do not scale well to naturalistic deployment settings. In this study, we propose a method for identifying ataxic symptoms by analyzing videos of participants walking down a hallway, captured with a standard monocular camera. In collaboration with 11 medical sites located in 8 states across the United States, we collected a dataset of 155 videos, along with severity ratings, from 89 participants (24 controls and 65 diagnosed with, or pre-manifest for, spinocerebellar ataxias). The participants performed the gait task of the Scale for the Assessment and Rating of Ataxia (SARA). We developed a computer vision pipeline to detect, track, and separate the participants from their surroundings, and constructed several features from their body-pose coordinates to capture gait characteristics such as step width, step length, swing, stability, and speed. Our system is able to identify and track a patient in complex scenarios, for example, when multiple people are present in the video or a passerby interrupts. Our ataxia risk-prediction model achieves 83.06% accuracy and an 80.23% F1 score, and our ataxia severity-assessment model achieves a mean absolute error (MAE) of 0.6225 and a Pearson's correlation coefficient of 0.7268. The models performed competitively when evaluated on data from medical sites not used during training. Through feature importance analysis, we found that our models associate wider steps, decreased walking speed, and increased instability with greater ataxia severity, which is consistent with established clinical knowledge. Furthermore, we are releasing the models and the body-pose coordinate dataset to the research community, to our knowledge the largest dataset on ataxic gait. Our models could help improve health access by enabling remote ataxia assessment in non-clinical settings without requiring any sensors or special cameras, and our dataset will help the computer science community analyze different characteristics of ataxia and develop better algorithms for diagnosing other movement disorders.
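The paper's actual feature pipeline is not reproduced here; as a rough sketch of how gait features such as step width and stability can be derived from body-pose coordinates, consider the following, where the (frames, 2, 2) array layout for left/right ankle positions is an assumption made purely for illustration.

```python
import numpy as np

def gait_features(ankles: np.ndarray) -> dict:
    """Toy gait features from pose coordinates.

    ankles: array of shape (frames, 2, 2) holding (x, y) positions of the
    left and right ankle per frame, an assumed layout, not Auto-Gait's.
    """
    left, right = ankles[:, 0, :], ankles[:, 1, :]
    # Step width: mean lateral (x) separation between the ankles.
    step_width = np.abs(left[:, 0] - right[:, 0]).mean()
    # Step length: maximum forward (y) separation across frames.
    step_length = np.abs(left[:, 1] - right[:, 1]).max()
    # Instability proxy: variance of the frame-to-frame midpoint motion.
    midpoint = (left + right) / 2.0
    instability = np.var(np.diff(midpoint, axis=0))
    return {"step_width": float(step_width),
            "step_length": float(step_length),
            "instability": float(instability)}
```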
6

Charpignon, Marie-Laure, Leo Anthony Celi, Marisa Cobanaj, Rene Eber, Amelia Fiske, Jack Gallifant, Chenyu Li, Gurucharan Lingamallu, Anton Petushkov, and Robin Pierce. "Diversity and inclusion: A hidden additional benefit of Open Data." PLOS Digital Health 3, no. 7 (July 23, 2024): e0000486. http://dx.doi.org/10.1371/journal.pdig.0000486.

Abstract:
The recent imperative by the National Institutes of Health to share scientific data publicly underscores a significant shift in academic research. Effective as of January 2023, it emphasizes that transparency in data collection and dedicated efforts towards data sharing are prerequisites for translational research, from the lab to the bedside. Given the role of data access in mitigating potential bias in clinical models, we hypothesize that researchers who leverage open-access datasets rather than privately-owned ones are more diverse. In this brief report, we proposed to test this hypothesis in the transdisciplinary and expanding field of artificial intelligence (AI) for critical care. Specifically, we compared the diversity among authors of publications leveraging open datasets, such as the commonly used MIMIC and eICU databases, with that among authors of publications relying exclusively on private datasets, unavailable to other research investigators (e.g., electronic health records from ICU patients accessible only to Mayo Clinic analysts). To measure the extent of author diversity, we characterized gender balance as well as the presence of researchers from low- and middle-income countries (LMIC) and minority-serving institutions (MSI) located in the United States (US). Our comparative analysis revealed a greater contribution of authors from LMICs and MSIs among researchers leveraging open critical care datasets (treatment group) than among those relying exclusively on private data resources (control group). The participation of women was similar between the two groups, albeit slightly larger in the former. Notably, although over 70% of all articles included at least one author inferred to be a woman, less than 25% had a woman as a first or last author. Importantly, we found that the proportion of authors from LMICs was substantially higher in the treatment than in the control group (10.1% vs. 6.2%, p<0.001), including as first and last authors. Moreover, we found that the proportion of US-based authors affiliated with a MSI was 1.5 times higher among articles in the treatment than in the control group, suggesting that open data resources attract a larger pool of participants from minority groups (8.6% vs. 5.6%, p<0.001). Thus, our study highlights the valuable contribution of the Open Data strategy to underrepresented groups, while also quantifying persisting gender gaps in academic and clinical research at the intersection of computer science and healthcare. In doing so, we hope our work points to the importance of extending open data practices in deliberate and systematic ways.
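For readers who want to run the same style of comparison of proportions reported above (e.g., 10.1% vs. 6.2% LMIC authorship), a two-proportion z-test is one standard choice; the counts below are placeholders, not the study's data.

```python
from statsmodels.stats.proportion import proportions_ztest

# Placeholder counts: LMIC authors out of all authors in each group.
# These are illustrative numbers, not the paper's underlying data.
successes = [1010, 620]    # treatment (open data), control (private data)
totals = [10000, 10000]

stat, pvalue = proportions_ztest(successes, totals)
print(f"z = {stat:.2f}, p = {pvalue:.4g}")
```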
7

Kovalchuk, V. P. "The method of obtaining soil's water-physical properties via their granulometric composition." PLANT AND SOIL SCIENCE 12, no. 4 (2021): 115–25. http://dx.doi.org/10.31548/agr2021.04.115.

Abstract:
The article presents a method of obtaining the water-physical properties of soils (the basic hydrophysical characteristic (BHC) and the moisture conductivity function). These properties, or functions, allow us to describe the vertical movement of moisture in unsaturated soils as one of the components of the expenditure side of the water balance. They are widely used in the substantiation of water reclamation and in the modeling of moisture transfer in the soil. The method is based on laboratory analysis of the granulometric composition of soil samples taken in the field. In Ukraine, laboratory results are usually obtained by Kaczynski's method, with two components: the percentages of clay and sand. With the help of integral (cumulative) curves, they are transformed graphically into data conforming to the international classification, with three components: the content of sand, silt, and clay. The latter fractional distribution is the one used by the world community of soil scientists. Using data on the content of sand, silt, and clay, the open-access computer program "Rosetta" of the USDA (United States Department of Agriculture) calculates the water-physical properties in the form of water constants (the saturated soil moisture, the residual soil moisture, and the saturated hydraulic conductivity) and the coefficients of the van Genuchten mathematical model. The publication provides examples of calculating the water-physical properties of dark chestnut soils and ordinary chernozems by the presented method. The advantages of the proposed method include the low complexity of the experimental studies, the availability of the analyses, and the existence of many published studies of the granulometric composition of soils. As a development of this line of research, the author shows the application of the obtained dependence to modeling moisture transfer during water reclamation (irrigation in irrigation control systems). Regarding directions for future research, the publication recommends comparing the accuracy of obtaining the water-physical properties of soils by different methods, as well as obtaining an important water constant, the field capacity (FC), as the lowest field moisture content.
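The van Genuchten (1980) retention model whose coefficients Rosetta estimates is standard; a minimal Python sketch follows, with illustrative loam-like parameters rather than actual Rosetta output.

```python
import numpy as np

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Soil water retention curve theta(h) of van Genuchten (1980).

    h: matric suction (positive, in the same length units as 1/alpha)
    theta_r, theta_s: residual and saturated water contents
    alpha, n: shape parameters (as estimated, e.g., by Rosetta); m = 1 - 1/n
    """
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.abs(h)) ** n) ** m

# Illustrative loam-like parameters, not Rosetta output:
h = np.logspace(0, 4, 50)   # suction from 1 to 10,000 cm
theta = van_genuchten_theta(h, theta_r=0.078, theta_s=0.43, alpha=0.036, n=1.56)
```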
8

Ashad-Bishop, Kilan C., Jordan A. Baeker-Bispo, Zinzi D. Bailey, and Erin K. Kobetz. "Abstract C083: Exploring relationships between neighborhood social vulnerability and cancer screening in Miami-Dade County." Cancer Epidemiology, Biomarkers & Prevention 32, no. 1_Supplement (January 1, 2023): C083. http://dx.doi.org/10.1158/1538-7755.disp22-c083.

Abstract:
Purpose: Social and structural contributors to social vulnerability have been associated with cancer disparities across the continuum. This study aimed to explore relationships between indicators of neighborhood social vulnerability and participation in breast, cervical and colorectal cancer screening in Miami-Dade County. Methods: Data were obtained at the census tract level from the United States Census Bureau American Community Survey (2014-2018), the Centers for Disease Control and Prevention (CDC) Social Vulnerability Index (2018), and the CDC PLACES dataset (2018). This analysis was restricted to Miami-Dade census tracts for which PLACES data was available on mammography (n=135), cervical cancer screening (n=115), and colorectal screening (n=136) participation. Census tracts were stratified into tertiles based on screening participation, then social vulnerability indicators were assessed among the tertiles. Principal component analysis (PCA) was used to identify characteristics responsible for most variability in breast, cervical and colorectal cancer screening. Results: Mammography participation was 51.76%, 58.80%, and 65.65% in the lower, middle, and upper tertiles, respectively. Among these tracts, per capita income (p<.001), earning an income below poverty (p<.001), educational attainment below earning an HS diploma (p<.001), the proportion of non-Hispanic White residents (p<.001), unemployed residents (p<.001), residents with a disability (p<.001), and people with no computer or limited access to the internet (p<.001) were significantly different between the tertiles. Cervical cancer screening participation was 79.60%, 84.36%, and 87.80% in the lower, middle, and upper tertiles, respectively. Among these tracts, per capita income (p<.001), earning an income below poverty (p<.001), educational attainment below earning an HS diploma (p<.001), and the proportion of single-parent households with children under age 17 (p<.001), non-Hispanic White residents (p<.001), unemployed residents (p<.001), residents with a disability (p<.001), and people with no computer or limited access to the internet (p<.001) were significantly different between the screening tertiles. Colorectal cancer screening participation was 79.26%, 81.06%, and 85.26% in the lower, middle, and upper tertiles, respectively. Among these tracts, per capita income (p<.01), earning an income below poverty (p<.004), educational attainment below earning an HS diploma (p<.001), the proportion of residents with a disability (p<.001), and people with no computer or limited access to the internet (p<.001) were significantly different between the screening tertiles. Conclusions: These data suggest that social vulnerability is associated with cancer screening uptake, namely mammography, cervical cancer screening, and colorectal cancer screening. Further investigation of the social and structural factors contributing to disparities in cancer screening will help appropriately allocate resources and craft effective interventions to reduce the burden of cancer among those most vulnerable. Citation Format: Kilan C. Ashad-Bishop, Jordan A. Baeker-Bispo, Zinzi D. Bailey, Erin K. Kobetz. Exploring relationships between neighborhood social vulnerability and cancer screening in Miami-Dade County [abstract].
In: Proceedings of the 15th AACR Conference on the Science of Cancer Health Disparities in Racial/Ethnic Minorities and the Medically Underserved; 2022 Sep 16-19; Philadelphia, PA. Philadelphia (PA): AACR; Cancer Epidemiol Biomarkers Prev 2022;31(1 Suppl):Abstract nr C083.
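As a hedged sketch of the PCA step described in the Methods, scikit-learn can extract the components that explain most variability across tract-level indicators; the variable names and values below are invented for illustration, not the study's data.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical tract-level indicators; the study's actual variables come
# from the ACS, the CDC Social Vulnerability Index, and CDC PLACES.
df = pd.DataFrame({
    "per_capita_income": [21000, 35000, 18000, 42000],
    "pct_below_poverty": [28.0, 9.0, 33.0, 6.0],
    "pct_no_hs_diploma": [22.0, 7.0, 25.0, 5.0],
    "pct_no_computer_or_internet": [18.0, 4.0, 21.0, 3.0],
})

# Standardize, then extract the components explaining most variability.
X = StandardScaler().fit_transform(df)
pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)
# Loadings indicate which indicators drive each component.
print(pd.DataFrame(pca.components_, columns=df.columns))
```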
9

Fuller, Thomas F., and Leshinka Molel. "(Invited) Application of Open-Source, Python-Based Tools for the Simulation of Electrochemical Systems." ECS Meeting Abstracts MA2023-01, no. 25 (August 28, 2023): 1630. http://dx.doi.org/10.1149/ma2023-01251630mtgabs.

Abstract:
There is a rich history of mathematical modeling of electrochemical systems. These simulations are useful 1) to refine our understanding of systems that contain complex, coupled phenomena, 2) to design and control electrochemical devices, and 3) to help novices develop confidence and intuition for the behavior of electrochemical systems. Regardless of the application (cyclic voltammetry, storage batteries, secondary current distributions, or corrosion, to name a few), elucidating the relationship between current and potential is central to understanding how electrochemical systems behave. Here, we report on historical and future perspectives of simulating electrochemical systems with open-source, Python-based tools. The presentation includes a tutorial on the formulation of problems based on underlying engineering and electrochemistry principles. Within R1 universities in the United States, excellent resources are available at little to no cost for the simulation of electrochemical systems. However, the price of these tools can be prohibitive for most engineers and scientists working in industry, and access to them is even worse in low- and lower-middle-income countries. Actively supporting open-source software promotes a more inclusive scientific and research community that is essential to confronting the challenges facing society. Python was chosen because it is open-source. FEniCSx, a popular open-source computing platform for solving partial differential equations (1-2), is applied to the solution of primary and secondary current distributions for two- and three-dimensional geometries. FEniCSx runs on desktop computers as well as within high-performance computing environments, such as Georgia Tech's PACE. Simulations have long been known to increase interactions between instructors and students as well as to help students visualize content (3-4). Recently, tools developed in Python have been applied to simple electrochemical systems (5-6). Because of the low barrier to entry and access to numerous computational packages, such as numpy, matplotlib, and scipy, the Anaconda distribution of Python is promoted. A series of dynamic simulations are designed to help students improve their understanding of electrochemical systems. These simulations feature animation and extensive use of widgets that allow students to adjust parameters and immediately observe the results. References: (1) A. Logg, K. A. Mardal, G. N. Wells. Automated solution of differential equations by the finite element method, Lecture Notes in Computational Science and Engineering, 84 LNCSE (2012). (2) A. Logg and G. N. Wells. DOLFIN: Automated finite element computing, ACM Transactions on Mathematical Software, 37.2 (2010). (3) T. de Jong, W. R. van Joolingen, Scientific Discovery Learning with Computer Simulations of Conceptual Domains, Review of Educational Research, 68, 179-201 (1998). (4) R. E. West, C. R. Graham, Five Powerful Ways Technology Can Enhance Teaching and Learning in Higher Education, Educational Technology, 45, 20-27 (2005). (5) X. Wang, Z. Wang, Animated Electrochemistry Simulation Modules, J. Chem. Educ., 99, 752-758 (2022). (6) T. F. Fuller, J. N. Harb, Using Python Simulations for Inquiry-Based Learning of Electrochemical Systems, ECS Meeting Abstracts, (2021). DOI 10.1149/MA2021-02511503mtgabs.
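In the spirit of the open-source tools discussed above, though using plain NumPy rather than FEniCSx, the following sketch solves Laplace's equation for a primary current distribution in a 2D rectangular cell by finite differences; the geometry, potentials, and conductivity are illustrative values, not taken from the presentation.

```python
import numpy as np

# Solve Laplace's equation d2V/dx2 + d2V/dy2 = 0 on a rectangular cell
# by Jacobi iteration. Primary current distribution: ohmic drop only,
# with no kinetic (Butler-Volmer) resistance at the electrodes.
nx, ny = 50, 30
V = np.zeros((ny, nx))
V[:, 0] = 1.0     # anode held at 1 V at x = 0 (illustrative)
V[:, -1] = 0.0    # cathode held at 0 V at x = L

for _ in range(5000):
    V[1:-1, 1:-1] = 0.25 * (V[2:, 1:-1] + V[:-2, 1:-1] +
                            V[1:-1, 2:] + V[1:-1, :-2])
    # Insulating walls (zero normal flux) at the top and bottom edges:
    V[0, 1:-1], V[-1, 1:-1] = V[1, 1:-1], V[-2, 1:-1]

# Current density at the cathode is proportional to -dV/dx there
# (one-sided difference, per unit grid spacing).
kappa = 1.0  # electrolyte conductivity, S/m (illustrative)
i_cathode = -kappa * (V[:, -1] - V[:, -2])
```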
10

Freedman, Neal D., Liliana Brown, Lori M. Newman, Jefferson M. Jones, Tina J. Benoit, Francisco Averhoff, Xiangning Bu, et al. "COVID-19 SeroHub, an online repository of SARS-CoV-2 seroprevalence studies in the United States." Scientific Data 9, no. 1 (November 26, 2022). http://dx.doi.org/10.1038/s41597-022-01830-4.

Abstract:
Seroprevalence studies provide useful information about the proportion of the population either vaccinated against SARS-CoV-2, previously infected with the virus, or both. Numerous studies have been conducted in the United States, but differ substantially by dates of enrollment, target population, geographic location, age distribution, and assays used. This can make it challenging to identify and synthesize available seroprevalence data by geographic region or to compare infection-induced versus combined infection- and vaccination-induced seroprevalence. To facilitate public access and understanding, the National Institutes of Health and the Centers for Disease Control and Prevention developed the COVID-19 Seroprevalence Studies Hub (COVID-19 SeroHub, https://covid19serohub.nih.gov/), a data repository in which seroprevalence studies are systematically identified, extracted using a standard format, and summarized through an interactive interface. Within COVID-19 SeroHub, users can explore and download data from 178 studies as of September 1, 2022. Tools allow users to filter results and visualize trends over time, geography, population, age, and antigen target. Because COVID-19 remains an ongoing pandemic, we will continue to identify and include future studies.
11

Saintila, Jacksaint, Cristian Ramos-Vera, Yaquelin E. Calizaya-Milla, Veronica Ileana Hidalgo Villarreal, Antonio Serpa-Barrientos, and Wilter C. Morales-García. "Access to and use of health information technology among obese and non-obese Americans: Analysis of the Health Information National Trends Survey data." Malaysian Journal of Nutrition 29, no. 2 (July 26, 2022). http://dx.doi.org/10.31246/mjn-2022-0058.

Abstract:
Introduction: Health information technology (HIT) is essential in the prevention, management, and treatment of obesity because of the medical data and information it makes available to health care providers and patients. However, exploration of HIT access and use among obese individuals remains limited. Objective: The purpose of this study was to compare access to and use of HIT among obese and non-obese Americans. Methods: We considered cross-sectional secondary data from 3,865 United States adults collected through the Health Information National Trends Survey in 2020. Contingency tables were constructed, stratified by men and women, to assess whether the groups differed across body mass index (BMI) levels with respect to HIT categories. Results: Elevated BMI in women was associated with the use of a computer, smartphone, or other electronic device to e-mail or use the Internet to communicate with a doctor or a doctor's office. In addition, elevated BMI in both genders was associated with sharing information from a smartphone or electronic device with a health professional. Finally, the use of an electronic device to monitor or track health or activity was found to be more prevalent among women with elevated BMI compared to those with normal BMI. Conclusion: Future studies should expand research on health information technology interventions in adults with obesity by considering the gender factor. Moreover, the expansion of research into electronic health (eHealth) interventions is particularly important because it would favour the prevention, management, control, and treatment of obesity.
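A minimal sketch of the kind of contingency-table test the Methods describe, using SciPy; the counts are invented for illustration, not survey data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: BMI level (normal/elevated) vs. use of an
# electronic device to track health, stratified here for women only.
table = np.array([[120,  80],    # normal BMI: non-users, users
                  [ 90, 140]])   # elevated BMI: non-users, users

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```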
12

Garcia-Reyero, Natàlia, Mark A. Arick, E. Alice Woolard, Mitchell Wilbanks, John E. Mylroie, Kathleen Jensen, Michael Kahl, et al. "Male fathead minnow transcriptomes and associated chemical analytes in the Milwaukee estuary system." Scientific Data 9, no. 1 (August 4, 2022). http://dx.doi.org/10.1038/s41597-022-01553-6.

Abstract:
Contaminants of Emerging Concern (CECs) can be measured in waters across the United States, including the tributaries of the Great Lakes. The extent to which these contaminants affect gene expression in aquatic wildlife is unclear. This dataset presents the full hepatic transcriptomes of laboratory-reared fathead minnows (Pimephales promelas) caged at multiple sites within the Milwaukee Estuary Area of Concern and control sites. Following 4 days of in situ exposure, liver tissue was removed from males at each site for RNA extraction and sequencing, yielding a total of 116 samples from which libraries were prepared, pooled, and sequenced. For each exposure site, 179 chemical analytes were also assessed. These data were created with the intention of inviting research on possible transcriptomic changes observed in aquatic species exposed to CECs. Access to both full sequencing reads of animal samples as well as water contaminant data across multiple Great Lakes sites will allow others to explore the health of these ecosystems in support of the aims of the Great Lakes Restoration Initiative.
13

Livingstone, Randall M. "Let's Leave the Bias to the Mainstream Media: A Wikipedia Community Fighting for Information Neutrality." M/C Journal 13, no. 6 (November 23, 2010). http://dx.doi.org/10.5204/mcj.315.

Abstract:
Although I'm a rich white guy, I'm also a feminist anti-racism activist who fights for the rights of the poor and oppressed. (Carl Kenner) Systemic bias is a scourge to the pillar of neutrality. (Cerejota) Count me in. Let's leave the bias to the mainstream media. (Orcar967) Because this is so important. (CuttingEdge)

These are a handful of comments posted by online editors who have banded together in a virtual coalition to combat Western bias on the world's largest digital encyclopedia, Wikipedia. This collective action by Wikipedians both acknowledges the inherent inequalities of a user-controlled information project like Wikipedia and highlights the potential for progressive change within that same project. These community members are taking the responsibility of social change into their own hands (or, more aptly, their own keyboards). In recent years much research has emerged on Wikipedia from varying fields, ranging from computer science, to business and information systems, to the social sciences. While critical at times of Wikipedia's growth, governance, and influence, most of this work observes with optimism that barriers to improvement are not firmly structural, but rather socially constructed, leaving open the possibility of important and lasting change for the better.

WikiProject: Countering Systemic Bias (WP:CSB) considers one such collective effort. Close to 350 editors have signed on to the project, which began in 2004 and itself emerged from a similar project named CROSSBOW, or the "Committee Regarding Overcoming Serious Systemic Bias on Wikipedia." As a WikiProject, the term used for a loose group of editors who collaborate around a particular topic, these editors work within the Wikipedia site and collectively create a social network that is unified around one central aim, representing the un- and underrepresented, and yet they are bound by no particular unified set of interests. The first stage of a multi-method study, this paper looks at a snapshot of WP:CSB's activity from both content analysis and social network perspectives to discover "who," geographically, this coalition of the unrepresented is inserting into the digital annals of Wikipedia.

Wikipedia and Wikipedians

Developed in 2001 by Internet entrepreneur Jimmy Wales and academic Larry Sanger, Wikipedia is an online collaborative encyclopedia hosting articles in nearly 250 languages (Cohen). The English-language Wikipedia contains over 3.2 million articles, each of which is created, edited, and updated solely by users (Wikipedia "Welcome"). At the time of this study, Alexa, a website tracking organisation, ranked Wikipedia as the 6th most accessed site on the Internet. Unlike the five sites ahead of it, though (Google, Facebook, Yahoo, YouTube (owned by Google), and live.com (owned by Microsoft)), all of which are multibillion-dollar businesses that deal more with information aggregation than information production, Wikipedia is a non-profit that operates on less than $500,000 a year and staffs only a dozen paid employees (Lih). Wikipedia is financed and supported by the WikiMedia Foundation, a charitable umbrella organisation with an annual budget of $4.6 million, mainly funded by donations (Middleton).

Wikipedia editors and contributors have the option of creating a user profile and participating via a username, or they may participate anonymously, with only an IP address representing their actions. Despite the option for total anonymity, many Wikipedians have chosen to visibly engage in this online community (Ayers, Matthews, and Yates; Bruns; Lih), and researchers across disciplines are studying the motivations of these new online collectives (Kane, Majchrzak, Johnson, and Chenisern; Oreg and Nov). The motivations of open source software contributors, such as UNIX programmers and programming groups, have been shown to be complex and tied to both extrinsic and intrinsic rewards, including online reputation, self-satisfaction and enjoyment, and obligation to a greater common good (Hertel, Niedner, and Herrmann; Osterloh and Rota). Investigation into why Wikipedians edit has indicated multiple motivations as well, with community engagement, task enjoyment, and information sharing among the most significant (Schroer and Hertel). Additionally, Wikipedians seem to be taking up the cause of generativity (a concern for the ongoing health and openness of the Internet's infrastructures) that Jonathan Zittrain notably called for in The Future of the Internet and How to Stop It.

Governance and Control

Although the technical infrastructure of Wikipedia is built to support and perhaps encourage an equal distribution of power on the site, Wikipedia is not a land of "anything goes." The popular press has covered recent efforts by the site to reduce vandalism through a layer of editorial review (Cohen), a tightening of control cited as a possible reason for the recent dip in the number of active editors (Edwards). A number of regulations are already in place that prevent the open editing of certain articles and pages, such as the site's disclaimers and pages that have suffered large amounts of vandalism. Editing wars can also cause temporary restrictions to editing, and Ayers, Matthews, and Yates point out that these wars can happen anywhere, even to Burt Reynolds's page.

Academic studies have begun to explore the governance and control that has developed in the Wikipedia community, generally highlighting how order is maintained not through particular actors, but through established procedures and norms. Konieczny tested whether Wikipedia's evolution can be defined by Michels' Iron Law of Oligopoly, which predicts that the everyday operations of any organisation cannot be run by a mass of members, and ultimately control falls into the hands of the few. Through exploring a particular WikiProject on information validation, he concludes: "There are few indicators of an oligarchy having power on Wikipedia, and few trends of a change in this situation. The high level of empowerment of individual Wikipedia editors with regard to policy making, the ease of communication, and the high dedication to ideals of contributors succeed in making Wikipedia an atypical organization, quite resilient to the Iron Law" (189). Butler, Joyce, and Pike support this assertion, though they emphasise that instead of oligarchy, control becomes encapsulated in a wide variety of structures, policies, and procedures that guide involvement with the site. A virtual "bureaucracy" emerges, but one that should not be viewed with the negative connotation often associated with the term.

Other work considers control on Wikipedia through the framework of commons governance, where "peer production depends on individual action that is self-selected and decentralized rather than hierarchically assigned. Individuals make their own choices with regard to resources managed as a commons" (Viegas, Wattenberg and McKeon). The need for quality standards and quality control largely dictates this commons governance, though interviewing Wikipedians with various levels of responsibility revealed that policies and procedures are only as good as those who maintain them. Forte, Larco, and Bruckman argue that "the Wikipedia community has remained healthy in large part due to the continued presence of 'old-timers' who carry a set of social norms and organizational ideals with them into every WikiProject, committee, and local process in which they take part" (71). Thus governance on Wikipedia is a strong representation of a democratic ideal, where actors and policies are closely tied in their evolution.

Transparency, Content, and Bias

The issue of transparency has proved to be a double-edged sword for Wikipedia and Wikipedians. The goal of a collective body of knowledge created by all, the "expert" and the "amateur," can only be upheld if equal access to page creation and development is allotted to everyone, including those who prefer anonymity. And yet this very option for anonymity, or even worse, false identities, has been a sore subject for some in the Wikipedia community as well as a source of concern for some scholars (Santana and Wood). The case of a 24-year-old college dropout who represented himself as a multiple-Ph.D.-holding theology scholar and edited over 16,000 articles brought these issues into the public spotlight in 2007 (Doran; Elsworth). Wikipedia itself has set up standards for content that include expectations of a neutral point of view, verifiability of information, and the publishing of no original research, but Santana and Wood argue that self-policing of these policies is not adequate: "The principle of managerial discretion requires that every actor act from a sense of duty to exercise moral autonomy and choice in responsible ways. When Wikipedia's editors and administrators remain anonymous, this criterion is simply not met. It is assumed that everyone is behaving responsibly within the Wikipedia system, but there are no monitoring or control mechanisms to make sure that this is so, and there is ample evidence that it is not so" (141).

At the theoretical level, some downplay these concerns of transparency and autonomy as logistical issues in lieu of the potential for information systems to support rational discourse and emancipatory forms of communication (Hansen, Berente, and Lyytinen), but others worry that the questionable "realities" created on Wikipedia will become truths once circulated to all areas of the Web (Langlois and Elmer). With the number of articles on the English-language version of Wikipedia reaching well into the millions, the task of mapping and assessing content has become a tremendous endeavour, one mostly taken on by information systems experts. Kittur, Chi, and Suh have used Wikipedia's existing hierarchical categorisation structure to map change in the site's content over the past few years. Their work revealed that in early 2008 "Culture and the arts" was the most dominant category of content on Wikipedia, representing nearly 30% of total content. People (15%) and geographical locations (14%) represent the next largest categories, while the natural and physical sciences showed the greatest increase in volume between 2006 and 2008 (+213%, with "Culture and the arts" close behind at +210%). This data may indicate that contributing to Wikipedia, and thus spreading knowledge, is growing amongst the academic community while maintaining its importance to the greater popular-culture-minded community. Further work by Kittur and Kraut has explored the collaborative process of content creation, finding that too many editors on a particular page can reduce the quality of content, even when a project is well coordinated.

Bias in Wikipedia content is a generally acknowledged and somewhat conflicted subject (Giles; Johnson; McHenry). The Wikipedia community has created numerous articles and pages within the site to define and discuss the problem. Citing a survey conducted by the University of Würzburg, Germany, the "Wikipedia:Systemic bias" page describes the average Wikipedian as: male, technically inclined, formally educated, an English speaker, white, aged 15-49, from a majority-Christian country, from a developed nation, from the Northern Hemisphere, and likely a white-collar worker or student. Bias in content is thought to be perpetuated by this demographic of contributor, and the "founder effect," a concept from genetics linking the original contributors to this same demographic, has been used to explain the origins of certain biases. Wikipedia's "About" page discusses the issue as well, in the context of the open platform's strengths and weaknesses: "in practice editing will be performed by a certain demographic (younger rather than older, male rather than female, rich enough to afford a computer rather than poor, etc.) and may, therefore, show some bias. Some topics may not be covered well, while others may be covered in great depth. No educated arguments against this inherent bias have been advanced." Royal and Kapila's study of Wikipedia content tested some of these assertions, finding identifiable bias in both their purposive and random sampling. They conclude that bias favoring larger countries is positively correlated with the size of the country's Internet population, and corporations with larger revenues work in much the same way, garnering more coverage on the site. The researchers remind us that Wikipedia is "more a socially produced document than a value-free information source" (Royal & Kapila).

WikiProject: Countering Systemic Bias

As a coalition of current Wikipedia editors, WikiProject: Countering Systemic Bias (WP:CSB) attempts to counter trends in content production and points of view deemed harmful to the democratic ideals of a valueless, open online encyclopedia. WP:CSB's mission is not one of policing the site, but rather deepening it: "Generally, this project concentrates upon remedying omissions (entire topics, or particular sub-topics in extant articles) rather than on either (1) protesting inappropriate inclusions, or (2) trying to remedy issues of how material is presented. Thus, the first question is 'What haven't we covered yet?', rather than 'how should we change the existing coverage?'" (Wikipedia, "Countering"). The project lays out a number of content areas lacking adequate representation, geographically highlighting the dearth in coverage of Africa, Latin America, Asia, and parts of Eastern Europe. WP:CSB also includes a "members" page that editors can sign to show their support, along with space to voice their opinions on the problem of bias on Wikipedia (the quotations at the beginning of this paper are taken from this "members" page).

At the time of this study, 329 editors had self-selected and self-identified as members of WP:CSB, and this group constitutes the population sample for the current study. To explore the extent to which WP:CSB addressed these self-identified areas for improvement, each editor's last 50 edits were coded for their primary geographical country of interest, as well as the conceptual category of the page itself ("P" for person/people, "L" for location, "I" for idea/concept, "T" for object/thing, or "NA" for indeterminate). For example, edits to the Wikipedia page for a single person like Tony Abbott (Australian federal opposition leader) were coded "Australia, P," while an edit for a group of people like the Manchester United football team would be coded "England, P." Coding was based on information obtained from the header paragraphs of each article's Wikipedia page. After coding was completed, corresponding information on each country's associated continent was added to the dataset, based on the United Nations Statistics Division listing.

A total of 15,616 edits were coded for the study. Nearly 32% (n = 4,962) of these edits were on articles for persons or people (see Table A for complete coding results). From within this sub-sample of edits, a majority of the people (68.67%) represented are associated with North America and Europe (Figure A). If we break these statistics down further, nearly half of WP:CSB's edits concerning people were associated with the United States (36.11%) and England (10.16%), with India (3.65%) and Australia (3.35%) following at a distance. These figures make sense for the English-language Wikipedia; over 95% of the population in the three Westernised countries speak English, and while India is still often regarded as a developing nation, its colonial British roots and the emergence of a market economy with large, technology-driven cities are logical explanations for its representation here (and some estimates make India the largest English-speaking nation by population on the globe today).

Table A. Coding results

Total edits: 15,616
(I) Ideas: 2,881 (18.45%)
(L) Location: 2,240 (14.34%)
(T) Thing: 5,200 (33.30%)
(P) People: 4,962 (31.78%)
NA: 333 (2.13%)

People by continent:
Africa: 315 (6.35%)
Asia: 827 (16.67%)
Australia: 175 (3.53%)
Europe: 1,411 (28.44%)
North America: 1,996 (40.23%)
South America: 128 (2.58%)
NA: 110 (2.22%)

The areas of the globe of main concern to WP:CSB proved to be much less represented by the coalition itself. Asia, far and away the most populous continent with more than 60% of the globe's people (GeoHive), was represented in only 16.67% of edits. Africa (6.35%) and South America (2.58%) were equally underrepresented compared to both their real-world populations (15% and 9% of the globe's population respectively) and the aforementioned dominance of the advanced Westernised areas. However, while these percentages may seem low, in aggregate they do meet the quota set on the WP:CSB project page calling for one out of every twenty edits to be "a subject that is systematically biased against the pages of your natural interests." By this standard, the coalition is indeed making headway in adding content that strategically counterbalances the natural biases of Wikipedia's average editor.

[Figure A]

Social network analysis allows us to visualise multifaceted data in order to identify relationships between actors and content (Vego-Redondo; Watts). Similar to Davis's well-known sociological study of Southern American socialites in the 1930s (Scott), our Wikipedia coalition can be conceptualised as individual actors united by common interests, and a network of relations can be constructed with software such as UCINET. A mapping algorithm that considers both the relationship between all sets of actors and each actor to the overall collective structure produces an image of our network. This initial network is bimodal, as both our Wikipedia editors and their edits (again, coded for country of interest) are displayed as nodes (Figure B). Edge-lines between nodes represent a relationship, and here that relationship is the act of editing a Wikipedia article. We see from our network that the "U.S." and "England" hold central positions in the network, with a mass of editors crowding around them. A perimeter of nations is then held in place by their ties to editors through the U.S. and England, with a second layer of editors and poorly represented nations (Gabon, Laos, Uzbekistan, etc.) around the boundaries of the network.

[Figure B]

We are reminded from this visualisation both of the centrality of the two Western powers even among WP:CSB editors, and of the peripheral nature of most other nations in the world. But we also learn which editors in the project are contributing most to underrepresented areas, and which are less "tied" to the Western core. Here we see "Wizzy" and "Warofdreams" among the second layer of editors who act as a bridge between the core and the periphery; these are editors with interests in both the Western and marginalised nations. Located along the outer edge, "Gallador" and "Gerrit" have no direct ties to the U.S. or England, concentrating all of their edits on less represented areas of the globe. Identifying editors at these key positions in the network will help with future research, informing interview questions that will investigate their interests further, but more significantly, probing motives for participation and action within the coalition.

Additionally, we can break the network down further to discover editors who appear to have similar interests in underrepresented areas. Figure C strips down the network to only editors and edits dealing with Africa and South America, the least represented continents. From this we can easily find three types of editors again: those who have singular interests in particular nations (the outermost layer of editors), those who have interests in a particular region (the second layer moving inward), and those who have interests in both of these underrepresented regions (the center layer in the figure). This last group of editors may prove to be the most crucial to understand, as they are carrying the full load of WP:CSB's mission.

[Figure C]

The End of Geography, or the Reclamation?

In The Internet Galaxy, Manuel Castells writes that "the Internet Age has been hailed as the end of geography," a bold suggestion, but one that has gained traction over the last 15 years as the excitement for the possibilities offered by information communication technologies has often overshadowed structural barriers to participation like the Digital Divide (207). Castells goes on to amend the "end of geography" thesis by showing how global information flows and regional Internet access rates, while creating a new "map" of the world in many ways, are still closely tied to power structures in the analog world. The Internet Age "redefines distance but does not cancel geography" (207). The work of WikiProject: Countering Systemic Bias emphasises the importance of place and representation in the information environment that continues to be constructed in the online world. This study looked at only a small portion of this coalition's efforts (~16,000 edits), a snapshot of their labor frozen in time, which itself is only a minute portion of the information being dispatched through Wikipedia on a daily basis (~125,000 edits). Further analysis of WP:CSB's work over time, as well as qualitative research into the identities, interests and motivations of this collective, is needed to understand more fully how information bias is understood and challenged in the Internet galaxy. The data here indicate this is a fight worth fighting for at least a growing few.

References

Alexa. "Top Sites." Alexa.com, n.d. 10 Mar. 2010 <http://www.alexa.com/topsites>.
Ayers, Phoebe, Charles Matthews, and Ben Yates. How Wikipedia Works: And How You Can Be a Part of It. San Francisco, CA: No Starch, 2008.
Bruns, Axel. Blogs, Wikipedia, Second Life, and Beyond: From Production to Produsage. New York: Peter Lang, 2008.
Butler, Brian, Elisabeth Joyce, and Jacqueline Pike. Don't Look Now, But We've Created a Bureaucracy: The Nature and Roles of Policies and Rules in Wikipedia. Paper presented at the 2008 CHI Annual Conference, Florence.
Castells, Manuel. The Internet Galaxy: Reflections on the Internet, Business, and Society. Oxford: Oxford UP, 2001.
Cohen, Noam. "Wikipedia." New York Times, n.d. 12 Mar. 2010 <http://www.nytimes.com/info/wikipedia/>.
Doran, James. "Wikipedia Chief Promises Change after 'Expert' Exposed as Fraud." The Times, 6 Mar. 2007 <http://technology.timesonline.co.uk/tol/news/tech_and_web/article1480012.ece>.
Edwards, Lin. "Report Claims Wikipedia Losing Editors in Droves." Physorg.com, 30 Nov. 2009. 12 Feb. 2010 <http://www.physorg.com/news178787309.html>.
Elsworth, Catherine. "Fake Wikipedia Prof Altered 20,000 Entries." London Telegraph, 6 Mar. 2007 <http://www.telegraph.co.uk/news/1544737/Fake-Wikipedia-prof-altered-20000-entries.html>.
Forte, Andrea, Vanessa Larco, and Amy Bruckman. "Decentralization in Wikipedia Governance." Journal of Management Information Systems 26 (2009): 49-72.
Giles, Jim. "Internet Encyclopedias Go Head to Head." Nature 438 (2005): 900-901.
Hansen, Sean, Nicholas Berente, and Kalle Lyytinen. "Wikipedia, Critical Social Theory, and the Possibility of Rational Discourse." The Information Society 25 (2009): 38-59.
Hertel, Guido, Sven Niedner, and Stefanie Herrmann. "Motivation of Software Developers in Open Source Projects: An Internet-Based Survey of Contributors to the Linux Kernel." Research Policy 32 (2003): 1159-1177.
Johnson, Bobbie. "Rightwing Website Challenges 'Liberal Bias' of Wikipedia." The Guardian, 1 Mar. 2007. 8 Mar. 2010 <http://www.guardian.co.uk/technology/2007/mar/01/wikipedia.news>.
Kane, Gerald C., Ann Majchrzak, Jeremaih Johnson, and Lily Chenisern. A Longitudinal Model of Perspective Making and Perspective Taking within Fluid Online Collectives. Paper presented at the 2009 International Conference on Information Systems, Phoenix, AZ, 2009.
Kittur, Aniket, Ed H. Chi, and Bongwon Suh. What's in Wikipedia? Mapping Topics and Conflict Using Socially Annotated Category Structure. Paper presented at the 2009 CHI Annual Conference, Boston, MA.
Kittur, Aniket, and Robert E. Kraut. Harnessing the Wisdom of Crowds in Wikipedia: Quality through Collaboration. Paper presented at the 2008 Association for Computing Machinery's Computer Supported Cooperative Work Annual Conference, San Diego, CA.
Konieczny, Piotr. "Governance, Organization, and Democracy on the Internet: The Iron Law and the Evolution of Wikipedia." Sociological Forum 24 (2009): 162-191.
Konieczny, Piotr. "Wikipedia: Community or Social Movement?" Interface: A Journal for and about Social Movements 1 (2009): 212-232.
Langlois, Ganaele, and Greg Elmer. "Wikipedia Leeches? The Promotion of Traffic through a Collaborative Web Format." New Media & Society 11 (2009): 773-794.
Lih, Andrew. The Wikipedia Revolution. New York, NY: Hyperion, 2009.
McHenry, Robert. "The Real Bias in Wikipedia: A Response to David Shariatmadari." OpenDemocracy.com, 2006. 8 Mar. 2010 <http://www.opendemocracy.net/media-edemocracy/wikipedia_bias_3621.jsp>.
Middleton, Chris. "The World of Wikinomics." Computer Weekly, 20 Jan. 2009: 22-26.
Oreg, Shaul, and Oded Nov. "Exploring Motivations for Contributing to Open Source Initiatives: The Roles of Contribution, Context and Personal Values." Computers in Human Behavior 24 (2008): 2055-2073.
Osterloh, Margit, and Sandra Rota. "Trust and Community in Open Source Software Production." Analyse & Kritik 26 (2004): 279-301.
Royal, Cindy, and Deepina Kapila. "What's on Wikipedia, and What's Not…?: Assessing Completeness of Information." Social Science Computer Review 27 (2008): 138-148.
Santana, Adele, and Donna J. Wood. "Transparency and Social Responsibility Issues for Wikipedia." Ethics of Information Technology 11 (2009): 133-144.
Schroer, Joachim, and Guido Hertel. "Voluntary Engagement in an Open Web-Based Encyclopedia: Wikipedians and Why They Do It." Media Psychology 12 (2009): 96-120.
Scott, John. Social Network Analysis. London: Sage, 1991.
Vego-Redondo, Fernando. Complex Social Networks. Cambridge: Cambridge UP, 2007.
Viegas, Fernanda B., Martin Wattenberg, and Matthew M. McKeon. "The Hidden Order of Wikipedia." Online Communities and Social Computing (2007): 445-454.
Watts, Duncan. Six Degrees: The Science of a Connected Age. New York, NY: W. W. Norton & Company, 2003.
Wikipedia. "About." n.d. 8 Mar. 2010 <http://en.wikipedia.org/wiki/Wikipedia:About>.
Wikipedia. "Welcome to Wikipedia." n.d. 8 Mar. 2010 <http://en.wikipedia.org/wiki/Main_Page>.
Wikipedia. "Wikiproject:Countering Systemic Bias." n.d. 12 Feb. 2010 <http://en.wikipedia.org/wiki/Wikipedia:WikiProject_Countering_systemic_bias#Members>.
Zittrain, Jonathan. The Future of the Internet and How to Stop It. New Haven, CT: Yale UP, 2008.
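The study built its network in UCINET; as a hedged open-source analogue, the bimodal editor-country network it describes can be assembled with networkx's bipartite tools. The editor names below appear in the abstract, but the specific edges are invented placeholders, not the study's data.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Bimodal (two-mode) network: one node set for editors, one for the
# countries their edits concern. Edges here are invented placeholders.
edits = [("Wizzy", "South Africa"), ("Wizzy", "England"),
         ("Warofdreams", "England"), ("Warofdreams", "Laos"),
         ("Gallador", "Gabon"), ("Gerrit", "Uzbekistan")]

G = nx.Graph()
editors = {e for e, _ in edits}
countries = {c for _, c in edits}
G.add_nodes_from(editors, bipartite=0)
G.add_nodes_from(countries, bipartite=1)
G.add_edges_from(edits)

# Degree centrality within the two-mode graph hints at core vs. periphery.
print(bipartite.degree_centrality(G, editors))
# Projecting onto countries shows which nations share editors.
country_net = bipartite.weighted_projected_graph(G, countries)
```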
APA, Harvard, Vancouver, ISO, and other styles
14

Kuntz, Alan, Maxwell Emerson, Tayfun Efe Ertop, Inbar Fried, Mengyu Fu, Janine Hoelscher, Margaret Rox et al. "Autonomous medical needle steering in vivo". Science Robotics 8, no. 82 (20 September 2023). http://dx.doi.org/10.1126/scirobotics.adf7614.

Full text
Abstract (summary):
The use of needles to access sites within organs is fundamental to many interventional medical procedures, both for diagnosis and treatment. Safely and accurately navigating a needle through living tissue to a target is currently often challenging or infeasible because of the presence of anatomical obstacles, high levels of uncertainty, and natural tissue motion. Medical robots capable of automating needle-based procedures have the potential to overcome these challenges and enable enhanced patient care and safety. However, autonomous navigation of a needle around obstacles to a predefined target in vivo has not been shown. Here, we introduce a medical robot that autonomously navigates a needle through living tissue around anatomical obstacles to a target in vivo. Our system leverages a laser-patterned, highly flexible steerable needle capable of maneuvering along curvilinear trajectories. The autonomous robot accounts for anatomical obstacles, uncertainty in tissue/needle interaction, and respiratory motion using replanning, control, and safe insertion time windows. We applied the system to lung biopsy, which is critical for diagnosing lung cancer, the leading cause of cancer-related deaths in the United States. We demonstrated successful performance of our system in multiple in vivo porcine studies, achieving targeting errors smaller than the radius of clinically relevant lung nodules. We also demonstrated that our approach offers greater accuracy compared with a standard manual bronchoscopy technique. Our results show the feasibility and advantage of deploying autonomous steerable needle robots in living tissue and how these systems can extend the current capabilities of physicians to further improve patient care.
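The abstract's control recipe (replanning on every cycle, and steering only during safe insertion time windows synchronized to respiration) can be illustrated with a small, purely hypothetical sketch. Nothing below is the authors' planner: the breathing period, window length, step size, and the crude obstacle-avoidance turn are invented placeholders that only show how gating and replanning compose in a control loop.

```python
import math

BREATH_PERIOD_S = 4.0   # hypothetical respiratory period
SAFE_WINDOW_S = 1.2     # hypothetical quiescent (end-exhale) window
STEP_MM = 1.0           # insertion advance per control tick
TICK_S = 0.1

def in_safe_window(t: float) -> bool:
    """Treat the first SAFE_WINDOW_S of each breath as the safe insertion window."""
    return (t % BREATH_PERIOD_S) < SAFE_WINDOW_S

def replan(tip, target, obstacles):
    """Toy replanner: head straight for the target, veering when an obstacle is near."""
    heading = math.atan2(target[1] - tip[1], target[0] - tip[0])
    for ox, oy, radius in obstacles:
        if math.hypot(ox - tip[0], oy - tip[1]) < radius + 5.0:
            heading += 0.5  # crude avoidance turn, stands in for curvilinear planning
    return heading

def steer(tip, target, obstacles, timeout_s=60.0):
    """Advance the needle only inside safe windows, replanning on every tick."""
    t = 0.0
    while math.hypot(target[0] - tip[0], target[1] - tip[1]) > 1.0:
        if t > timeout_s:
            raise RuntimeError("failed to reach target within the time budget")
        if in_safe_window(t):  # outside the window the needle simply holds position
            heading = replan(tip, target, obstacles)
            tip = (tip[0] + STEP_MM * math.cos(heading),
                   tip[1] + STEP_MM * math.sin(heading))
        t += TICK_S
    return tip, t

if __name__ == "__main__":
    final_tip, elapsed = steer((0.0, 0.0), (40.0, 10.0), [(20.0, 5.0, 4.0)])
    print(f"reached {final_tip} after {elapsed:.1f}s of simulated breathing")
```

The design point the sketch tries to mirror is that sensing and planning run on every tick, while actuation is suppressed outside the quiescent window.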
APA, Harvard, Vancouver, ISO, and other styles
15

Li, Xiaojin, Shiqiang Tao, Samden D. Lhatoo, Licong Cui, Yan Huang, Johnson P. Hampson and Guo-Qiang Zhang. "A multimodal clinical data resource for personalized risk assessment of sudden unexpected death in epilepsy". Frontiers in Big Data 5 (17 August 2022). http://dx.doi.org/10.3389/fdata.2022.965715.

Full text
Abstract (summary):
Epilepsy affects ~2–3 million individuals in the United States, a third of whom have uncontrolled seizures. Sudden unexpected death in epilepsy (SUDEP) is a catastrophic and fatal complication of poorly controlled epilepsy and is the primary cause of mortality in such patients. Despite its huge public health impact, with a ~1/1,000 incidence rate in persons with epilepsy, it is an uncommon enough phenomenon to require multi-center efforts for well-powered studies. We developed the Multimodal SUDEP Data Resource (MSDR), a comprehensive system for sharing multimodal epilepsy data in the NIH-funded Center for SUDEP Research. The MSDR aims to accelerate research to address critical questions about personalized risk assessment of SUDEP. We used a metadata-guided approach, with a set of common epilepsy-specific terms enforcing uniform semantic interpretation of data elements across three main components: (1) multi-site annotated datasets; (2) user interfaces for capturing, managing, and accessing data; and (3) computational approaches for the analysis of multimodal clinical data. We incorporated the process for managing dataset-specific data use agreements, evidence of Institutional Review Board review, and the corresponding access control in the MSDR web portal. The metadata-guided approach facilitates structural and semantic interoperability, ultimately leading to enhanced data reusability and scientific rigor. The MSDR prospectively integrated and curated epilepsy patient data from seven institutions, and it currently contains data on 2,739 subjects and 10,685 multimodal clinical data files in different data formats. In total, 55 users have registered in the current MSDR data repository, and 6 projects have been funded to apply the MSDR in epilepsy research, including three R01 projects and three R21 projects.
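The abstract's combination of data use agreements, IRB evidence, and per-dataset access control suggests a simple policy check. The sketch below, with entirely hypothetical class and field names (it is not the MSDR schema), shows one way such metadata-guided gating might look in Python: a file is released only if the requesting user has both a signed agreement and unexpired IRB evidence on record for that dataset.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Dataset:
    dataset_id: str
    modality: str                      # e.g., "EEG", "ECG", "clinical-notes"
    requires_dua: bool = True

@dataclass
class User:
    user_id: str
    signed_duas: set = field(default_factory=set)     # dataset_ids with a signed DUA
    irb_expiry: dict = field(default_factory=dict)    # dataset_id -> approval expiry

def may_access(user: User, ds: Dataset, today: date) -> bool:
    """Grant access only when both the agreement and the IRB evidence check out."""
    if ds.requires_dua and ds.dataset_id not in user.signed_duas:
        return False
    expiry = user.irb_expiry.get(ds.dataset_id)
    return expiry is not None and expiry >= today

# usage (all values invented)
eeg = Dataset("site3-eeg", "EEG")
alice = User("alice", {"site3-eeg"}, {"site3-eeg": date(2026, 1, 31)})
print(may_access(alice, eeg, date.today()))  # True until the IRB approval lapses
```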
APA, Harvard, Vancouver, ISO, and other styles
16

Luwesi, Cush Ngonzo. "Foresight in Agriculture, Food and Nutrition for Planning Freshwater in the Course of Climate Change in Africa". Journal of Food Technology & Nutrition Sciences, 31 August 2022, 1–4. http://dx.doi.org/10.47363/jftns/2022(4)150.

Full text
Abstract (summary):
Integrated Water Resources Management (IWRM) has been designed as a foresight process for world leaders to solve communities' issues dealing with water uncertainty in agriculture, food and nutrition, as well as other related industries. That is why a Global Water Partnership (GWP) was initiated in 1992 by the United Nations to develop water and place it at the center of the political and economic concerns of the member States, with the aim of mobilizing the resources necessary to manage water rationally. A focus was put on the more than two billion poor people living without access to adequate potable drinking water, among whom more than three-quarters (¾) are African populations living in poor areas and unurbanized cities. Predictions show that by 2050, most of this population will be living in African megacities. This will amplify the "3As" of water issues: Water Availability, Accessibility and Affordability. Solving this major crisis in prospect requires foresight, both as a process and as an analytical tool to address these key issues in the course of climate change. As a process, foresight involves consultation among stakeholders to ensure socio-political, economic, agro-natural and engineering technological solutions to "Develop and Avail Water to All!". This process would later require an evaluation of the feedback to and from these proposed solutions and their tools. These may include, among others, strategies and legislation for water policies; innovative techniques for irrigation (production, storage, transport and distribution of water) and hydro-power generation; payments for water ecosystem services (PWES); and various management operating systems for risk control and mitigation at the watershed and community levels. However, the uncoordinated efforts of scientists working in the climate adaptation, mitigation and amelioration spheres have generated another threat: that of climate intervention in the form of solar geoengineering. African leaders thus need foresight to check closely the opportunities and dangers arising from these technologies. They require a neutral organization to conduct rigorous socio-economic and environmental impact assessments prior to embracing these technologies. That is the only way they may ensure climate justice to peasants and farmers so that they can leave a legacy in the agriculture, food and nutrition niche for the next generations.
APA, Harvard, Vancouver, ISO, and other styles
17

Marshall, P. David. "Thinking through New". M/C Journal 1, no. 1 (1 July 1998). http://dx.doi.org/10.5204/mcj.1696.

Full text
Abstract (summary):
A friend of mine once tried to capture the feeling that one gets from a new thing. He decided that there was no word to describe the sensation of having an unblemished eraser when you were in primary school, but nevertheless it produced a kind of fascinating awe in the apparent perfection of the new. A similar feeling captures the new car owner in smelling the interior's recently minted plastic. Used car dealers would doubtless love to bottle that smell because it produces the momentary pleasure of new ownership. And I am sure there are certain people who are addicted to that smell, and go test drive new cars with no intention of buying just for the experience of the "new" smell. New clothes produce that same sensation: most of us ignore the label which says "wash before wearing" because we want to experience the incredible stiff tactile sensation of a new shirt. My friend called this gle-gle, and it is a pervasive relationship to New in a variety of guises. New implies two kinds of objects or practices: it implies either the replacement of the old or it points to the emergence of something that has not existed before. In both cases, new always heralds change and has the potential for social or cultural transformation. As a result, popular writers and ad copy editors often link new with revolution. For example, the advent of the computer was seen to be revolutionary. Similarly a new detergent which worked in cold water promised cataclysmic change in the 1960s. But these promises of revolution through some innovation have not necessarily led to massive social upheaval; rather they have identified a discursive trope of contemporary culture which links new with rejuvenation. The claim that something is new is the mantra of modernity and the kitsch of the postmodern. This double-play of the concept of the new is best untangled through thinking about how a once-new object becomes the contemporary way of expressing the former hope of progress and change -- with raised and knowing eyebrow. I recently stumbled into one of these double-plays. While searching for bedding for yet another birthday slumber party, I picked up an old mattress which still had its 1950s label, where it proudly announced that the cushioning was the wonderful new revolutionary foam system called the Dunlopillo. The Dunlopillo system was certainly trademarked and no doubt patented for its then unique system of troughs and cones of army green foam; but in its current incarnation the foam was weak and the bed easily crumpled in half. All that was left of the sentiment of newness was the label, which in its graphics expressed the necessary connection to science as the future, and an authoritative zeal in the seriousness of its revolutionary potential. But seen from 1998, the claims seemed bombastic and beautifully optimistic. Modernity's relationship to the new is to celebrate the potential for change. It is a cultural project that has enveloped the sentiments of capitalism and socialism from their origins in the 18th and 19th centuries, and manifested itself in what Schudson labelled "capitalist realism" in advertising, and what is known as socialist realism as a state-sanctioned artistic movement in the Soviet Union. Both representations provided their systems with the capacity to repaint the cultural canvas with each new product such as Dunlopillo, or in the Soviet system with each new five-year productivity plan for the collective.
Maintaining the unity of the cultural project was a challenge to each system's representational regime; sustaining the power of the new as a revolutionary force is the fundamental link between capitalist and socialist systems throughout the twentieth century. These representational regimes were in fact connected to the production of new phenomena, new materials, new social formations. However, the message of the new has gradually weakened over the last thirty years. Think of the way in which the Space Race produced all sorts of new technologies of computing, calculation and the integration of electronics into the running of the automobile. It also produced the breakfast orange-juice substitute, 'Tang'. Indeed, the first advertisements for Tang intoned that it was the drink that astronauts enjoyed in space. Tang and its flavour crystals provided the ultimate form of efficiency and convenience, and provided a clear link between the highly ideologically driven space program and the everyday lives of citizens of the "free" world. In the 60s and 70s the link between the general project of modernity and improving everyday life was made evident every time you added water to your Tang flavour crystals. One has to ask: where is Tang today? Not only is it difficult to find in my supermarket, but even if it were available it would not operate as the same representation of progress and the project of modernity. Instead, it would have little more than a nostalgic -- or kitsch -- hold on a generation that has seen too many representations of the new and too many attempts at indicating improvement. The decay of the cultural power of the new is clearly linked to consumer culture's dependence on and overuse of the concept. The entire century has been enveloped by an accelerating pattern of symbolic change. Symbolic change is not necessarily the same as the futurologist Toffler claiming that we are in a constant state of "future shock"; rather it is much more the introduction of new designs as if there were not only transformed designs, but fundamentally transformed products. This perpetual 'new' is a feature of the fashion industry as it works toward seasonal transformation. Toothbrushes have also been the object of this design therapy, which has produced both continual change over the last twenty years and claims of new revolutionary designs. Central to this notion of symbolic change is advertising. Advertising plays with the hopes and desires of its audience by providing the contradictory symbolic materiality of progressive change. The cultural and political power of the new is the symbolic terrain that advertising has mined to present its "images of well-being". What one can now detect in the circulation of advertising is at least two responses to the decay of the power of the new. First, instead of advertising invoking the wonders of science and its technological offspring providing you with something revolutionary, advertising has moved increasingly towards personal transformation, echoing the 30-year-old self-help, self-discovery book industry. In Australia, GM-Holden's Barina television ads provide a typical example. No technical detail about the car is given in the ads, but a great deal of information -- via the singing, the superimposed dancers, and the graphics employed -- signifies that the car is designed for the young female driver. Symbolically, the car is transformed into a new space of feminine subjectivity. Second, advertising plays with the cynicism of the cognoscenti.
If the new itself can no longer work to signify genuine change and improvement in contemporary culture, it is instead represented as a changed attitude to the contemporary world that only a particular demographic will actually comprehend. The level of sophistication in reading the new as a cultural phenomenon by advertisers (or by proxy, their agencies) is sometimes astounding. A recent Coca-Cola radio ad played with a singing style of ennui and anger that embodied punk, but only as punk has been reinvented in the mid-90s through such groups as Green Day. The lyrics were identical to the rest of the "Always Coca-Cola" campaign that has been circulating internationally for the last five years; however, the cynicism of the singers, the bare tunefulness, and even the use of a popular culture icon such as Coke as the object of a song (and ridicule), try to capture a particular new cultural moment with a different audience. Advertising as a cultural discourse on its own expresses a malaise within the transforming promise of the new that has been so much a part of modernity. However, the myths of modernity -- its clear association with social progress -- have never completely dissipated. In contemporary culture, it has fallen on new computer technologies to keep the ember of modernity and progress glowing. Over the last two decades the personal computer has maintained the naiveté of the new that was central to mid-twentieth century advertising, if not post-war culture in general. Very much like the Space Race stitched together an ideological weave that connected the populace to the interests of what Eisenhower first described as a military-industrial complex, the computer has ignited a new generation of optimism. It has been appropriated by governments from Singapore and Malaysia (think of the Multimedia Super Corridor) to the United States (think of Vice President Al Gore's NII) as the rescue package for the organisation of capitalism. Through Microsoft's hegemony there is a sense of coherence in "operating systems" which makes their slogan "where do you want to go today?", in its evocation of choice, also an invocation of unity of purpose. The wonderful synergy of the personal computer is that it weaves the conception of personal desire back into a generalisable social system of value. Despite all these efforts at harnessing the new computer technologies into established political and economic forces, the new nature of computer technology draws us back to the reason why new is intrinsically exciting: the defining nature of the new is that it offers the potential for some form of social change. The Internet has been the source for this new discourse of utopia. If we follow Howard Rheingold's logic, new "virtual communities" are formed online. A disequilibrium in who controls the flow of information is part of the appeal of the Internet, and the very appearance of this journal stems from that sense of new access. The Internet is said to challenge the boundaries of nations and states (although English language hegemony and pure economic access continue to operate to control the flow of those boundaries), with regulation devolving out of state policy towards the individual. Transforming identities are also very much an element of online communities: if nothing else, the play of gender in online game and chat programs identifies the constructed nature of our identities.
All of this energy, and what I would call affect, refers to how computer technology and the Internet have managed to produce a sensation of agency. What I mean by agency is not necessarily attached to the project of modernity; rather it is the sense of being able to produce the new itself, as opposed to just living in the architecture of the new provided by someone else. On one level, the Internet and personal computers do provide a way to make your information look as if it is more significant and of a higher quality. The continuing proliferation of personal websites attests to this narcissistic drive of contemporary culture. On another level, the narcissism also identifies activity and agency in engaging in a form of communication with others. The Internet then can be thought of as paralleling movements in contemporary music, where the ability to construct soundscapes through computer interfaces has given the musician greater agency in the production of new electronic music. The new is intrinsically an odd phenomenon. It continually threatens established patterns. What is different about the new and its meaning in the twentieth century is that it has become part of the central ideology of western culture in its characterised representation of modernity. In a strange mix, the new reinforces the old and established. Nonetheless, the new, like culture itself, is never completely contained by any overarching architecture. The new expresses the potential, and occasionally the enactment, of significant cultural change. The fatigue that I have identified in our thinking about the new identifies a decline in the power of modernity to capture change, difference and transformation. That very fatigue may indicate in and of itself something profoundly new.
References Rheingold, Howard. The Virtual Community: Homesteading on the Electronic Frontier. New York: HarperPerennial, 1994. Schudson, Michael. Advertising, the Uneasy Persuasion: Its Dubious Impact on American Society. New York: Basic Books, 1984. Toffler, Alvin. Future Shock. London: Pan Books, 1971.
APA, Harvard, Vancouver, ISO, and other styles
18

Stevens, Ian. "The Epistemological Consequences of Artificial Intelligence, Precision Medicine, and Implantable Brain-Computer Interfaces". Voices in Bioethics 10 (30 June 2024). http://dx.doi.org/10.52214/vib.v10i.12654.

Full text
Abstract (summary):
ABSTRACT I argue that this examination and appreciation for the shift to abductive reasoning should be extended to the intersection of neuroscience and novel brain-computer interfaces too. This paper highlights the implications of applying abductive reasoning to personalized implantable neurotechnologies. Then, it explores whether abductive reasoning is sufficient to justify insurance coverage for devices absent widespread clinical trials, which are better applied to one-size-fits-all treatments. INTRODUCTION In contrast to the classic model of randomized controlled trials, often with a large number of subjects enrolled, precision medicine attempts to optimize therapeutic outcomes by focusing on the individual.[i] A recent publication highlights the strengths and weaknesses of both traditional evidence-based medicine and precision medicine.[ii] It also outlines a tension in the shift from evidence-based medicine's inductive reasoning style (the collection of data to postulate general theories) to precision medicine's abductive reasoning style (the generation of an idea from the limited data available).[iii] The paper's main example is the application of precision medicine to the treatment of cancer.[iv] I argue that this examination and appreciation for the shift to abductive reasoning should be extended to the intersection of neuroscience and novel brain-computer interfaces too. As the name suggests, brain-computer interfaces are a significant advancement in neurotechnology that directly connects someone's brain to external or implanted devices.[v] Among the various kinds of brain-computer interfaces, adaptive deep brain stimulation devices require numerous personalized adjustments to their settings during the implantation and computation stages in order to provide adequate relief to patients with treatment-resistant disorders. What makes these devices unique is how adaptive deep brain stimulation integrates a sensory component to initiate the stimulation. While not commonly at the level of sophistication of self-supervised or generative large language models,[vi] they currently allow for a semi-autonomous form of neuromodulation. This paper highlights the implications of applying abductive reasoning to personalized implantable neurotechnologies. Then, it explores whether abductive reasoning is sufficient to justify insurance coverage for devices absent widespread clinical trials, which are better applied to one-size-fits-all treatments.[vii] ANALYSIS I. The State of Precision Medicine in Oncology and the Epistemological Shift While a thorough overview of precision medicine for the treatment of cancer is beyond the scope of this article, its practice can be roughly summarized as identifying clinically significant characteristics a patient possesses (e.g., genetic traits) to land on a specialized treatment option that, theoretically, should benefit the patient the most.[viii] However, in such a practice of stratification, patients fall into smaller and smaller populations, and the quality of evidence that can be applied to anyone outside these groups decreases in turn.[ix] As inductive logic helps to articulate, the greater the number of patients that respond to a particular therapy, the higher the probability of its efficacy.
By straying from this logical framework, precision medicine opens the treatment of cancer to more uncertainty about the validity of these approaches to the resulting disease subcategories.[x] Thus, while contemporary medical practices explicitly describe some treatments as "personalized", they ought not be viewed as inherently better founded than other therapies.[xi] A relevant contemporary case of precision medicine out of Norway focuses on the care of a patient with cancer between the ventricles of the heart and esophagus, which had failed to respond to the standard regimen of therapies over four years.[xii] In a last-ditch effort, the patient elected to pay out-of-pocket for an experimental immunotherapy (nivolumab) at a private hospital. He experienced marked improvements and a reduction in the size of the tumor. Understandably, the patient tried to pursue further rounds of nivolumab at a public hospital. However, the hospital initially declined to pay for it given the "lack of evidence from randomised clinical trials for this drug relating to this [patient's] condition."[xiii] In rebuttal to this claim, the patient countered that he was actually similar to a subpopulation of patients who responded in "open‐label, single arm, phase 2 studies on another immune therapy drug" (pembrolizumab).[xiv] Given this interpretation of the prior studies and the patient's response, further rounds of nivolumab were approved. Had the patient not had improvements in the tumor's size following a round of nivolumab, then pembrolizumab's prior empirical evidence in isolation would have been insufficient, inductively speaking, to justify his continued use of nivolumab.[xv] The case demonstrates a shift in reasoning from the traditional induction to abduction. The phenomenon of 'cancer improvement' is considered causally linked to nivolumab and its underlying physiological mechanisms.[xvi] However, "the weakness of abductions is that there may always be some other better, unknown explanation for an effect. The patient may for example belong to a special subgroup that spontaneously improves, or the change may be a placebo effect. This does not mean, however, that abductive inferences cannot be strong or reasonable, in the sense that they can make a conclusion probable."[xvii] To demonstrate the limitations of relying on the abductive standard in isolation, commentators have pointed out that side effects in precision medicine are hard to rule out as being related to the initial intervention itself unless trends from a group of patients are taken into consideration.[xviii] As artificial intelligence (AI) assists the development of precision medicine for oncology, this uncertainty ought to be taken into consideration. The implementation of AI has been crucial to the development of precision medicine by providing a way to combine large patient datasets, or a single patient with a large number of unique variables, with machine learning to recommend matches based on statistics and the probability of success, upon which practitioners can base medical recommendations.[xix] The AI is usually not establishing a causal relationship[xx] – it is predicting. So, as AI bleeds into medical devices, like brain-computer interfaces, the same cautions about using abductive reasoning alone should be carried over. II. Responsive Neurostimulation, AI, and Personalized Medicine Like precision medicine in cancer treatment, brain-computer interface technology similarly focuses on the individual patient through personalized settings.
In order to properly expose the intersection of AI, precision medicine, abductive reasoning, and implantable neurotechnologies, the descriptions of adaptive deep brain stimulation systems need to deepen.[xxi] As a broad summary of adaptive deep brain stimulation, to provide a patient with the therapeutic stimulation, a neural signal, typically referred to as a local field potential,[xxii] must first be detected and then interpreted by the device. The main adaptive deep brain stimulation device with premarket approval, the NeuroPace Responsive Neurostimulation system, is used to treat epilepsy by detecting and storing "programmer-defined phenomena."[xxiii] Providers can optimize the detection settings of the device to align with the patient's unique electrographic seizures as well as personalize the reacting stimulation's parameters.[xxiv] The provider adjusts the technology based on trial and error. One day machine learning algorithms will be able to regularly aid this process in myriad ways, such as by identifying the specific stimulation settings a patient may respond to ahead of time based on their electrophysiological signatures.[xxv] Either way, with AI or programmers, adaptive neurostimulation technologies are individualized and therefore operate in line with precision medicine rather than standard treatments based on large clinical trials. Contemporary neurostimulation devices are not usually sophisticated enough to be prominent in AI discussions where the topics of neural networks, deep learning, generative models, and self-attention dominate the conversation. However, implantable high-density electrocorticography arrays (a much more sensitive version than adaptive deep brain stimulation systems use) have been used in combination with neural networks to help patients with neurologic deficits from a prior stroke "speak" through a virtual avatar.[xxvi] In some experimental situations, algorithms are optimizing stimulation parameters with increasing levels of independence.[xxvii] An example of neurostimulation that is analogous to the use of nivolumab in Norway surrounds a patient in the United States who was experiencing both treatment-resistant OCD and temporal lobe epilepsy.[xxviii] Given the refractory nature of her epilepsy, implantation of an adaptive deep brain stimulation system was indicated. As a form of experimental therapy, her treatment-resistant OCD was also indicated for the off-label use of an adaptive deep brain stimulation set-up. Another deep brain stimulation lead, other than the one implanted for epilepsy, was placed in the patient's right nucleus accumbens and ventral pallidum region given the correlation these nuclei had with OCD symptoms in prior research. Following this, the patient underwent "1) ambulatory, patient-initiated magnet-swipe storage of data during moments of obsessive thoughts; (2) lab-based, naturalistic provocation of OCD-related distress (naturalistic provocation task); and (3) lab-based, VR [virtual reality] provocation of OCD-related distress (VR provocation task)."[xxix] Such signals were used to identify when to deliver the therapeutic stimulation in order to counter the OCD symptoms. Thankfully, following the procedure and calibration the patient exhibited marked improvements in her OCD symptoms and recently shared her results publicly.[xxx] In both cases, there is a similar level of abductive justification for the efficacy of the delivered therapy.
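For readers unfamiliar with closed-loop devices, the detect-then-stimulate cycle described above can be caricatured in a few lines of Python. This is emphatically not NeuroPace's detector: the sampling rate, window, band-power stand-in, and threshold are hypothetical placeholders for the programmer-defined, per-patient settings the text discusses.

```python
import math
from collections import deque

FS_HZ = 250              # hypothetical sampling rate
WINDOW_S = 1.0           # sliding detection window
POWER_THRESHOLD = 4.0    # stands in for a programmer-tuned, per-patient setting

def band_power(samples) -> float:
    """Crude stand-in for a band-limited power estimate of the LFP."""
    return sum(s * s for s in samples) / len(samples)

def closed_loop(lfp_stream):
    """Yield (sample_index, stimulate?) decisions over a sliding window."""
    window = deque(maxlen=int(FS_HZ * WINDOW_S))
    for i, sample in enumerate(lfp_stream):
        window.append(sample)
        full = len(window) == window.maxlen
        detect = full and band_power(window) > POWER_THRESHOLD
        yield i, detect  # a real device would deliver a stimulation burst when True

# usage: quiet background activity followed by a high-amplitude burst
signal = [0.5 * math.sin(0.3 * i) for i in range(500)]
signal += [5.0 * math.sin(0.8 * i) for i in range(250)]
fired = [i for i, stim in closed_loop(signal) if stim]
print(f"stimulation triggered at samples {fired[0]}..{fired[-1]}" if fired
      else "no detections")
```

In an implanted system the detector runs on-device and the positive branch triggers a charge-balanced stimulation burst rather than a print statement; the point of the sketch is only that the "sensory component" is a continuously evaluated, individually tuned rule.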
In the case study in which the patient was treated with adaptive deep brain stimulation, she at least had her neural activity tested in various settings to determine the optimum parameters for treatment, to avoid basing them on guesswork. Additionally, the adaptive deep brain stimulation lead was already placed before the calibration trials were conducted, meaning that the patient had already taken on the bulk of the procedural risk before the efficacy could be determined. Such an efficacy test could have been replicated in the first patient's cancer treatment, had the tumor been biopsied and tested against the remaining immunotherapies in vitro. Yet, in the case of cancer with few options, one previous dose of a drug that appeared to work on the patient may justify further doses. However, as the Norwegian case presents, corroboration with known responses to a similar drug (from a clinical trial) could be helpful to validate the treatment strategy. (It should be noted that both patients were resigned to these last resort options regardless of the efficacy of treatment.) There are some elements of inductive logic seen with adaptive deep brain stimulation research in general. For example, abductively the focus could be that patient X's stimulation parameters are different from patient Y's and patient Z's. In contrast, when grouped as subjects who obtained personalized stimulation, patients X, Y, and Z demonstrate an inductive aspect to this approach's safety and/or efficacy. The OCD case holds plenty of abductive characteristics in line with precision medicine's approach to treating cancer, and as more individuals try the method, there will be additional data. With the gradual integration of AI into brain-computer interfaces in the name of efficacy, this reliance on abduction will continue, if not grow, over time. Moving forward, if a responsive deep brain stimulation treatment is novel and individualized (like the dose of nivolumab) and there is some other suggestion of efficacy (like clinical similarities to other patients in the literature), then it may justify insurance coverage for the investigative intervention, absent other unrelated reasons to deny it. III. Ethical Implications and Next Steps While AI's use in oncology and neurology is not yet as prominent as its use in other fields (e.g., radiology), it appears to be on the horizon for both.[xxxi] AI can be found in both the functioning of the neurotechnologies as well as the implementation of precision medicine. The increasing use of AI may serve to further individualize both oncologic and neurological therapies. Given these implications and the handful of publications cited in this article, it is important to have a nuanced evaluation of how these treatments, which heavily rely on abductive justification, ought to be managed. The just use of an abductive approach may be difficult as AI-infused precision medicine is further pursued.
At baseline, such technology relies on a level of advanced technology literacy among the general public and could exclude populations who lack access to basic technological infrastructure or know-how from participation.[xxxii] Even among nations with adequate infrastructure, as more patients seek out implantable neurotechnologies, which require robust healthcare resources, the market will favor patient populations that can afford this complex care.[xxxiii] If patients already have the means to pay for an initial dose/use of a precision medicine product out of pocket, should insurance providers be required to cover subsequent treatments?[xxxiv] That is, if a first dose of a cancer drug or a deep brain stimulator over its initial battery life is successful, patients may feel justified in having the costs of further treatments covered. The Norwegian patient's experience implies there is a precedent for the idea that some public insurance companies ought to cover successful cancer therapies; however, insurance companies may not all see themselves as obligated to cover neurotechnologies that rely on personalized settings or that are based on precision/abductive research more than on clinical trials. CONCLUSION The fact that the cases outlined above rely on an abductive style of reasoning implies that there may not be as strong a justification for coverage by insurance, as they are both experimental and individualized, when compared to the more traditional large clinical trials in which groups have the same or a standardized protocol (settings/doses). If a study is examining the efficacy of a treatment with a large cohort of patients or with different experimental groups/phases, insurance companies may conclude that the resulting symptom improvements are more likely to be coming from the devices themselves. A preference for inductive justification may take priority when ruling in favor of funding someone's continued use of an implantable neurostimulator. There are further nuances to this discussion surrounding the classifications of these interventions as research versus clinical care that warrant future exploration, since such a distinction is more of a scale[xxxv] than a binary and could have significant impacts on the "right-to-try" approach to experimental therapies in the United States.[xxxvi] Namely, given the inherent limitations of conducting large cohort trials for deep brain stimulation interventions on patients with neuropsychiatric disorders, surgically innovative frameworks that blend abductive and inductive methodologies, like with sham stimulation phases, have traditionally been used.[xxxvii] Similarly, for adaptive brain-computer interface systems, if there are no large clinical trials and instead only publications that demonstrate that something similar worked for someone else, then, in addition to the evidence that the first treatment/dose worked for the patient in question, the balance of reasoning would be valid and arguably justify insurance coverage. As precision approaches to neurotechnology become more common, frameworks for evaluating efficacy will be crucial both for insurance coverage and for clinical decision making. ACKNOWLEDGEMENT This article was originally written as an assignment for Dr. Francis Shen's "Bioethics & AI" course at Harvard's Center for Bioethics. I would like to thank Dr. Shen for his comments as well as my colleagues in the Lázaro-Muñoz Lab fo - [i] Jonathan Kimmelman and Ian Tannock, "The Paradox of Precision Medicine," Nature Reviews.
Clinical Oncology 15, no. 6 (June 2018): 341–42, https://doi.org/10.1038/s41571-018-0016-0. [ii] Henrik Vogt and Bjørn Hofmann, “How Precision Medicine Changes Medical Epistemology: A Formative Case from Norway,” Journal of Evaluation in Clinical Practice 28, no. 6 (December 2022): 1205–12, https://doi.org/10.1111/jep.13649. [iii] David Barrett and Ahtisham Younas, “Induction, Deduction and Abduction,” Evidence-Based Nursing 27, no. 1 (January 1, 2024): 6–7, https://doi.org/10.1136/ebnurs-2023-103873. [iv] Vogt and Hofmann, “How Precision Medicine Changes Medical Epistemology,” 1208. [v] Wireko Andrew Awuah et al., “Bridging Minds and Machines: The Recent Advances of Brain-Computer Interfaces in Neurological and Neurosurgical Applications,” World Neurosurgery, May 22, 2024, S1878-8750(24)00867-2, https://doi.org/10.1016/j.wneu.2024.05.104. [vi] Mark Riedl, “A Very Gentle Introduction to Large Language Models without the Hype,” Medium (blog), May 25, 2023, https://mark-riedl.medium.com/a-very-gentle-introduction-to-large-language-models-without-the-hype-5f67941fa59e. [vii] David E. Burdette and Barbara E. Swartz, “Chapter 4 - Responsive Neurostimulation,” in Neurostimulation for Epilepsy, ed. Vikram R. Rao (Academic Press, 2023), 97–132, https://doi.org/10.1016/B978-0-323-91702-5.00002-5. [viii] Kimmelman and Tannock, 2018. [ix] Kimmelman and Tannock, 2018. [x] Simon Lohse, “Mapping Uncertainty in Precision Medicine: A Systematic Scoping Review,” Journal of Evaluation in Clinical Practice 29, no. 3 (April 2023): 554–64, https://doi.org/10.1111/jep.13789. [xi] Kimmelman and Tannock, “The Paradox of Precision Medicine.” [xii] Vogt and Hofmann, 1206. [xiii] Vogt and Hofmann, 1206. [xiv] Vogt and Hofmann, 1206. [xv] Vogt and Hofmann, 1207. [xvi] Vogt and Hofmann, 1207. [xvii] Vogt and Hofmann, 1207. [xviii] Vogt and Hofmann, 1210. [xix] Mehar Sahu et al., “Chapter Three - Artificial Intelligence and Machine Learning in Precision Medicine: A Paradigm Shift in Big Data Analysis,” in Progress in Molecular Biology and Translational Science, ed. David B. Teplow, vol. 190, 1 vols., Precision Medicine (Academic Press, 2022), 57–100, https://doi.org/10.1016/bs.pmbts.2022.03.002. [xx] Stefan Feuerriegel et al., “Causal Machine Learning for Predicting Treatment Outcomes,” Nature Medicine 30, no. 4 (April 2024): 958–68, https://doi.org/10.1038/s41591-024-02902-1. [xxi] Sunderland Baker et al., “Ethical Considerations in Closed Loop Deep Brain Stimulation,” Deep Brain Stimulation 3 (October 1, 2023): 8–15, https://doi.org/10.1016/j.jdbs.2023.11.001. [xxii] David Haslacher et al., “AI for Brain-Computer Interfaces,” 2024, 7, https://doi.org/10.1016/bs.dnb.2024.02.003. [xxiii] Burdette and Swartz, “Chapter 4 - Responsive Neurostimulation,” 103–4; “Premarket Approval (PMA),” https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpma/pma.cfm?id=P100026. [xxiv] Burdette and Swartz, “Chapter 4 - Responsive Neurostimulation,” 104. [xxv] Burdette and Swartz, 126. [xxvi] Sean L. Metzger et al., “A High-Performance Neuroprosthesis for Speech Decoding and Avatar Control,” Nature 620, no. 7976 (August 2023): 1037–46, https://doi.org/10.1038/s41586-023-06443-4. 
[xxvii] Hao Fang and Yuxiao Yang, “Predictive Neuromodulation of Cingulo-Frontal Neural Dynamics in Major Depressive Disorder Using a Brain-Computer Interface System: A Simulation Study,” Frontiers in Computational Neuroscience 17 (March 6, 2023), https://doi.org/10.3389/fncom.2023.1119685; Mahsa Malekmohammadi et al., “Kinematic Adaptive Deep Brain Stimulation for Resting Tremor in Parkinson’s Disease,” Movement Disorders 31, no. 3 (2016): 426–28, https://doi.org/10.1002/mds.26482. [xxviii] Young-Hoon Nho et al., “Responsive Deep Brain Stimulation Guided by Ventral Striatal Electrophysiology of Obsession Durably Ameliorates Compulsion,” Neuron 0, no. 0 (October 20, 2023), https://doi.org/10.1016/j.neuron.2023.09.034. [xxix] Nho et al. [xxx] Nho et al.; Erik Robinson, “Brain Implant at OHSU Successfully Controls Both Seizures and OCD,” OHSU News, accessed March 3, 2024, https://news.ohsu.edu/2023/10/25/brain-implant-at-ohsu-successfully-controls-both-seizures-and-ocd. [xxxi] Awuah et al., “Bridging Minds and Machines”; Haslacher et al., “AI for Brain-Computer Interfaces.” [xxxii] Awuah et al., “Bridging Minds and Machines.” [xxxiii] Sara Green, Barbara Prainsack, and Maya Sabatello, “The Roots of (in)Equity in Precision Medicine: Gaps in the Discourse,” Personalized Medicine 21, no. 1 (January 2024): 5–9, https://doi.org/10.2217/pme-2023-0097. [xxxiv] Green, Prainsack, and Sabatello, 7. [xxxv] Robyn Bluhm and Kirstin Borgerson, “An Epistemic Argument for Research-Practice Integration in Medicine,” The Journal of Medicine and Philosophy: A Forum for Bioethics and Philosophy of Medicine 43, no. 4 (July 9, 2018): 469–84, https://doi.org/10.1093/jmp/jhy009. [xxxvi] Vijay Mahant, “‘Right-to-Try’ Experimental Drugs: An Overview,” Journal of Translational Medicine 18 (June 23, 2020): 253, https://doi.org/10.1186/s12967-020-02427-4. [xxxvii] Michael S. Okun et al., “Deep Brain Stimulation in the Internal Capsule and Nucleus Accumbens Region: Responses Observed during Active and Sham Programming,” Journal of Neurology, Neurosurgery & Psychiatry 78, no. 3 (March 1, 2007): 310–14, https://doi.org/10.1136/jnnp.2006.095315.
APA, Harvard, Vancouver, ISO, and other styles
19

Edmundson, Anna. "Curating in the Postdigital Age". M/C Journal 18, no. 4 (10 August 2015). http://dx.doi.org/10.5204/mcj.1016.

Full text
Abstract (summary):
It seems nowadays that any aspect of collecting and displaying tangible or intangible material culture is labeled as curating: shopkeepers curate their wares; DJs curate their musical selections; magazine editors curate media stories; and hipsters curate their coffee tables. Given the increasing ubiquity and complexity of 21st-century notions of curatorship, the current issue of M/C Journal, 'curate', provides an excellent opportunity to consider some of the changes that have occurred in professional practice since the emergence of the 'digital turn'. There is no doubt that the internet and interactive media have transformed the way we live our daily lives—and for many cultural commentators it only makes sense that they should also transform our cultural experiences. In this paper, I want to examine the issue of curatorial practice in the postdigital age, looking at some of the ways that curating has changed over the last twenty years—and some of the ways it has not. The term postdigital comes from the work of Ross Parry, and is used to reference the 'tipping point' where the use of digital technologies became normative practice in museums (24). Overall, I contend that although new technologies have substantially facilitated the way that curators do their jobs, core business and values have not changed as a result of the digital turn. While major paradigm shifts have occurred in the field of professional curatorship over the last twenty years, these shifts have been issue-driven rather than a result of new technologies. Everyone's a Curator In a 2009 article in the New York Times, journalist Alex Williams commented on the growing trend in American consumer culture of labeling oneself a curator. "The word 'curate'," he observed, "has become a fashionable code word among the aesthetically minded, who seem to paste it onto any activity that involves culling and selecting" (1). Williams dated the origins of the popular adoption of the term 'curating' to a decade earlier, noting the strong association between the uptake and the rise of the internet (2). This association is not surprising. The development of increasingly interactive software such as Web 2.0 has led to a rapid rise in new technologies aimed at connecting people and information in ways that were previously unimaginable. In particular, the internet has become a space in which people can collect, store and, most importantly, share vast quantities of information. This information is often about objects. According to sociologist Jyri Engeström, the most successful social network sites on the internet (such as Pinterest, Flickr, Houzz, etc.) use discrete objects, rather than educational content or interpersonal relationships, as the basis for social interaction. So objects become the node for interpersonal communication. In these and other sites, internet users can find, collate and display multiple images of objects on the same page, which can in turn be connected at the press of a button to other related sources of information in the form of text, commentary or more images. These sites are often seen as opportunities to virtually curate mini-exhibitions, as well as to create mood boards or sites of virtual consumption. The idea of curating as selective aesthetic editing is also popular in online marketplaces such as Etsy, where numerous sellers offer 'curated' selections from home wares, to prints, to (my personal favorite) a curated selection of cat toys.
In all of these exercises there is an emphasis on the idea of connoisseurship. As part of his article on the new breed of 'curators', for example, Alex Williams interviewed Tom Kalendrain, the Fashion Director of a leading American department store, which had engaged in a collaboration with Scott Schuman of the fashion blog, the Sartorialist. According to Kalendrain, the store had asked Schuman to 'curate' a collection of clothes for them to sell. He justified calling Schuman a curator by explaining: "It was precisely his eye that made the store want to work with him; it was about the right shade of blue, about the cut, about the width of a lapel" (cited in Williams 2). The interview reveals much about current popular notions of what it means to be a curator. The central emphasis of Kalendrain's distinction was on connoisseurship: exerting a privileged authoritative voice based on intimate knowledge of the subject matter and the ability to discern the very best examples from a plethora of choices. Ironically, in terms of contemporary museum practice, this is a model of curating that museums have consciously been trying to move away from for at least the last three decades. We are now witnessing an interesting disconnect in which the extra-museum community (represented in particular by a postdigital generation of cultural bloggers, commentators and entrepreneurs) is re-vivifying an archaic model of curating, based on object-centric connoisseurship, just at the point where professional curators had thought they had successfully moved on. From Being about Something to Being for Somebody The rejection of the object-expert model of curating has been so persuasive that it has transformed the way museums conduct core business across all sectors of the institution. Over the last thirty to forty years museums have witnessed a major pedagogical shift in how curators approach their work and how museums conceptualise their core values. These paradigmatic and pedagogical shifts were best characterised by the museologist Stephen Weil in his seminal article "From being about something to being for somebody." Weil, writing in the late 1990s, noted that museums had turned away from traditional models in which individual curators (by way of scholarship and connoisseurship) dictated how the rest of the world (the audience) apprehended and understood significant objects of art, science and history—towards an audience-centered approach where curators worked collaboratively with a variety of interested communities to create a pluralist forum for social change. In museum parlance these changes are referred to under the general rubric of the 'new museology': a paradigm shift which had its origins in the 1970s, its gestation in the 1980s, and began to substantially manifest by the 1990s. Although no longer 'new', these shifts continue to influence museum practices in the 2000s. In her article "Curatorship as Social Practice", museologist Christina Kreps outlined some of the developments over recent decades that have challenged the object-centric model. According to Kreps, the 'new museology' was a paradigm shift that emerged from a widespread dissatisfaction with conventional interpretations of the museum and its functions and sought to re-orient itself away from strongly method- and technique-driven, object-focused approaches. "The 'new museum' was to be people-centered, action-oriented, and devoted to social change and development" (315).
An integral contributor to the developing new museology was the subjection of the western museum in the 1980s and '90s to representational critique from academics and activists. Such a critique entailed, in the words of Sharon Macdonald, questioning and drawing attention to "how meanings come to be inscribed and by whom, and how some come to be regarded as 'right' or taken as given" (3). Macdonald notes that postcolonial and feminist academics were especially engaged in this critique and the growing "identity politics" of the era. There was also a growing engagement with the concept that museological/curatorial work is what Kreps (2003b) calls a 'social process', a recognition that "people's relationships to objects are primarily social and cultural ones" (154). This shift has particularly impacted on the practice of museum curatorship. By way of illustration we can compare two scholarly definitions of what constitutes a curator: one written in 1994 and one from 2001. The Handbook for Museums, written in 1994 by Gary Edson and David Dean, defines a curator as: "a staff member or consultant who is a specialist in a particular field of study and who provides information, does research and oversees the maintenance, use, and enhancement of collections" (290). Cash Cash, writing in 2001, defines curatorship instead as "a social practice predicated on the principle of a fixed relation between material objects and the human environment" (140). The shift has been towards increased self-reflexivity and a focus on greater plurality–acknowledging the needs of their diverse audiences and community stakeholders. As part of this internal reflection the role of curator has shifted from sole authority to cultural mediator—from connoisseur to community facilitator as a conduit for greater community-based conversation and audience engagement, resulting in new interpretations of what museums are, and what their purpose is. This shift—away from objects and towards audiences—has been so great that it has led some scholars to question the need for museums to have standing collections at all. Do Museums Need Objects? In his provocatively titled work Do Museums Still Need Objects? historian Steven Conn observes that many contemporary museums are turning away from the authority of the object and towards mass entertainment (1). Conn notes that there has been an increasing retreat from object-based research in the fields of art, science and ethnography; that less object-based research seems to be occurring in museums; and that fewer objects are being put on display (2). The success of science centers with no standing collections, the reduction in the number of objects put on display in modern museums (23), the increasing phalanx of 'starchitect'-designed museums where the building is more important than the objects in it (11), and the increase of virtual museums and collections online all seem to indicate that conventional museum objects have had their day (1-2). Or have they? At the same time that all of the above is occurring, ongoing research suggests that in the digital age, more than ever, people are seeking the authenticity of the real. For example, a 2008 survey of 5,000 visitors to living history sites in the USA found that those surveyed expressed a strong desire to commune with historically authentic objects: respondents felt that their lives had become so crazy, so complicated, so unreal that they were seeking something real and authentic in their lives by visiting these museums.
(Wilkening and Donnis 1) A subsequent research survey aimed specifically at young audiences (in their early twenties) reported that: seeing stuff online only made them want to see the real objects in person even more, [and that] they felt that museums were inherently authentic, largely because they have authentic objects that are unique and wonderful. (Wilkening 2) Adding to the question 'do museums need objects?', Rainey Tisdale argues that in the current digital age we need real museum objects more than ever. "Many museum professionals," she reports, "have come to believe that the increase in digital versions of objects actually enhances the value of in-person encounters with tangible, real things" (20). Museums still need objects. Indeed, in any kind of corporate planning, one of the first things business managers look for in a company is what is unique about it. What can it provide that the competition can't? Despite the popularity of all sorts of info-tainments, the one thing that museums have (and other institutions don't) is significant collections. Collections are a museum's niche resource – in business speak, they are the asset that gives them the advantage over their competitors. Despite the increasing importance of technology in delivering information, including collections online, there is still overwhelming evidence to suggest that we should not be too quick to dismiss the traditional preserve of museums – the numinous object. And in fact, this is precisely the final argument that Steven Conn reaches in his above-mentioned publication. Curating in the Postdigital Age While it is reassuring (but not particularly surprising) that generations Y and Z can still differentiate between virtual and real objects, this doesn't mean that museum curators can bury their heads in the collection room hoping that the digital age will simply go away. The reality is that while digitally savvy audiences continue to feel the need to see and commune with authentic, materially-present objects, the ways in which they access information about these objects (prior to, during, and after a museum visit) have changed substantially due to technological advances. In turn, the ways in which curators research and present these objects – and stories about them – have also changed. So what are some of the changes that have occurred in museum operations and visitor behavior due to technological advances over the last twenty years? The most obvious technological advances over the last twenty years have actually been in data management. Since the 1990s a number of specialist data management systems have been developed for use in the museum sector. In theory at least, a curator can now access the entire collections of an institution without leaving her desk. Moreover, the same database that tells the curator how many objects the institution holds from the Torres Strait Islands can also tell her what they look like (through high quality images); which objects were exhibited in past exhibitions; what their prior labels were; what in-house research has been conducted on them; what the conservation requirements are; where they are stored; and who to contact for copyright clearance for display—to name just a few functions. In addition, a curator can get on the internet to search the online collection databases of other museums to find what objects they have from the Torres Strait Islands.
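As a concrete (and entirely invented) illustration of the desk-side lookup just described, the following Python sketch uses SQLite as a stand-in for a collection management system; the schema, field names, and records are hypothetical, not those of any real museum database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE objects (
    id INTEGER PRIMARY KEY,
    name TEXT, provenance TEXT, image_path TEXT,
    storage_location TEXT, conservation_notes TEXT
);
CREATE TABLE exhibition_history (
    object_id INTEGER REFERENCES objects(id),
    exhibition TEXT, label_text TEXT, year INTEGER
);
INSERT INTO objects VALUES
  (1, 'dari (headdress)', 'Torres Strait Islands', 'img/dari.jpg',
   'store B, bay 4', 'light-sensitive feathers'),
  (2, 'warup (drum)', 'Torres Strait Islands', 'img/warup.jpg',
   'store A, bay 1', 'stable');
INSERT INTO exhibition_history VALUES (1, 'Island Encounters', 'A dari...', 2001);
""")

-- = None  # (comment marker below explains the query)
# One query answers several of the curator's questions at once:
# images, storage, conservation requirements, and prior exhibition labels.
rows = conn.execute("""
    SELECT o.name, o.image_path, o.storage_location,
           o.conservation_notes, e.exhibition, e.label_text
    FROM objects o
    LEFT JOIN exhibition_history e ON e.object_id = o.id
    WHERE o.provenance = 'Torres Strait Islands'
""").fetchall()
for row in rows:
    print(row)
```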
Thus, while our curator is at this point conducting the same type of exhibition research that she would have done twenty years ago, the ease with which she can access information is substantially greater. The major difference of course is that today, rather than in the past, the curator would be collaborating with members of the original source community to undertake this project. Despite the rise of the internet, this type of liaison still usually occurs face to face. The development of accessible digital databases through the Internet and the capacity to download images and information at a rapid rate have also changed the way non-museum staff can access collections. Audiences can now visit museum websites through which they can easily access information about current and past exhibitions, public programs, and online collections. In many cases visitors can also contribute to general discussion forums and collections provenance data through various means such as 'tagging'; commenting on blogs; message boards; and virtual 'talk back' walls. Again, however, this represents a change in how visitors access museums but not a fundamental shift in what they can access. In the past, museum visitors were still encouraged to access and comment upon the collections; it's just that doing so took a lot more time and effort. The rise of interactivity and the internet—in particular through Web 2.0—has led many commentators to call for a radical change in the ways museums operate. Museum analyst Lynda Kelly (2009) has commented on the issue that: the demands of the 'information age' have raised new questions for museums. It has been argued that museums need to move from being suppliers of information to providing usable knowledge and tools for visitors to explore their own ideas and reach their own conclusions because of increasing access to technologies, such as the internet. Gordon Freedman, for example, argues that internet technologies such as computers, the World Wide Web, mobile phones and email "… have put the power of communication, information gathering, and analysis in the hands of the individuals of the world" (299). Freedman argued that museums need to "evolve into a new kind of beast" (300) in order to keep up with these changes, opening up the possibility of audiences becoming mediators of information and knowledge. Although we often hear about the potential of new technologies to allow multiple authors for exhibitions, I have yet to hear of an example of this successfully taking place. This doesn't mean, however, that it will never happen. At present most museums seem to be merely dipping their toes in the waters. A recent example from the Art Gallery of South Australia illustrates this point. In 2013, the Gallery mounted an exhibition that was, in theory at least, curated by the public. Labeled as "the ultimate people's choice exhibition", the project was hosted in conjunction with ABC Radio Adelaide. The public was encouraged to go online to the gallery website and select from a range of artworks in different categories by voting for their favorites. The 'winning' works were to form the basis of the exhibition. While the media spin on the exhibition gave the illusion of a mass-curated show, in reality very little actual control was given over to the audience-curators.
The public was presented with a range of artworks which had already been pre-selected from the standing collections; the themes for the exhibition had also already been determined, as they informed the 120 artworks that were offered up for voting. Thus, in the end the pre-selection of objects and themes, as well as the timing and execution of the exhibition, remained entirely in the hands of the professional curators. Another recent innovation did not attempt to harness public authorship, but rather enhanced individual visitors’ connections to museum collections by harnessing new GPS technologies. The Streetmuseum was a free app created by the Museum of London to bring geotagged historical street views to handheld or portable mobile devices. The program allowed users to undertake a self-guided tour of London. After programming in their route, users could then point their device at various significant sites along the way. Looking through their viewfinder they would see a 3D historic photograph overlaid on the live site – allowing users not only to see what the area looked like in the past but also to capture an image of the overlay. While many of the available tagging apps simply add more white noise, allowing viewers to attach commentary, pictures, and links to a particular geotagged site with no particular focus, the Streetmuseum had a well-defined purpose: to encourage its audience to get out and explore London; to share the museum’s archival photograph collection with a broader audience; and to teach people more about London’s unique history. A Second Golden Age? A few years ago Steven Conn suggested that museums are experiencing an international ‘golden age’, with more museums being built, visited and talked about than ever before (1). In the United States, where Conn is based, there are more than 17,500 accredited museums, and more than two million people visit some sort of museum per day, averaging around 865 million museum visits per year (2). However, at the same time that museums are proliferating, the traditional areas of academic research and theory that feed into museums, such as history, cultural studies, anthropology and art history, are experiencing a period of intense self-reflexivity. Conn writes: At the turn of the twenty-first century, more people are going to more museums than at any time in the past, and simultaneously more scholars, critics, and others are writing and talking about museums. The two phenomena are most certainly related but it does not seem to be a happy relationship. Even as museums enjoy more and more success…many who write about them express varying degrees of foreboding. (1) There is no doubt that the internet and increasingly interactive media have transformed the way we live our daily lives—it only makes sense that they should also transform our cultural experiences. At the same time, museums need to learn to ride the wave without getting dumped by it. The best new media acts as a bridge—connecting people to places and ideas—allowing them to learn more about museum objects and historical spaces, value-adding to museum visits rather than replacing them altogether. 
As museologist Elaine Gurian has recently concluded, the core business of museums seems unchanged thus far by the adoption of internet-based technology: “the museum field generally, its curators, and those academic departments focused on training curators remain at the core philosophically unchanged despite their new websites and shiny new technological reference centres” (97). Virtual life has not replaced real life, and online collections and exhibitions have not replaced real-life visits. Visitors want access to credible information about museum objects and museum exhibitions; they are not looking for Wiki-Museums. Or if they are, they are looking to the Internet community to provide that service rather than the employees of state and federally funded museums. Both provide legitimate services, but they don’t necessarily need to provide the same service. In the same vein, extra-museum ‘curating’ of objects and ideas through social media sites such as Pinterest, Flickr, Instagram and Tumblr provides a valuable source of inspiration and a highly enjoyable form of virtual consumption. But the popular uptake of the term ‘curating’ remains as easily separable from professional practice as the prior uptake of the terms ‘doctor’ and ‘architect’. An individual who doctors an image, or is the architect of their destiny, is still not going to operate on a patient or construct a building. While major ontological shifts have occurred within museum curatorship over the last thirty years, these changes have resulted from wider social shifts, not directly from technology. This is not to say that technology will not change the museum’s ‘way of being’ in my professional lifetime—it’s just to say it hasn’t happened yet. References Cash Cash, Phillip. “Medicine Bundles: An Indigenous Approach.” Ed. T. Bray. The Future of the Past: Archaeologists, Native Americans and Repatriation. New York and London: Garland Publishing (2001): 139-145. Conn, Steven. Do Museums Still Need Objects? Philadelphia: University of Pennsylvania Press, 2011. Edson, Gary, and David Dean. The Handbook for Museums. New York and London: Routledge, 1994. Engeström, Jyri. “Why Some Social Network Services Work and Others Don’t — Or: The Case for Object-Centered Sociality.” Zengestrom Apr. 2005. 17 June 2015 ‹http://www.zengestrom.com/blog/2005/04/why-some-social-network-services-work-and-others-dont-or-the-case-for-object-centered-sociality.html›. Freedman, Gordon. “The Changing Nature of Museums.” Curator 43.4 (2000): 295-306. Gurian, Elaine Heumann. “Curator: From Soloist to Impresario.” Eds. Fiona Cameron and Lynda Kelly. Hot Topics, Public Culture, Museums. Newcastle: Cambridge Scholars Publishing, 2010. 95-111. Kelly, Lynda. “Museum Authority.” Blog 12 Nov. 2009. 25 June 2015 ‹http://australianmuseum.net.au/blogpost/museullaneous/museum-authority›. Kreps, Christina. “Curatorship as Social Practice.” Curator: The Museum Journal 46.3 (2003): 311-323. ———. Liberating Culture: Cross-Cultural Perspectives on Museums, Curation, and Heritage Preservation. London and New York: Routledge, 2003. Macdonald, Sharon. “Expanding Museum Studies: An Introduction.” Ed. Sharon Macdonald. A Companion to Museum Studies. Oxford: Blackwell Publishing, 2011. Parry, Ross. “The End of the Beginning: Normativity in the Postdigital Museum.” Museum Worlds: Advances in Research 1 (2013): 24-39. Tisdale, Rainey. “Do History Museums Still Need Objects?” History News (2011): 19-24. 
18 June 2015 ‹http://aaslhcommunity.org/historynews/files/2011/08/RaineySmr11Links.pdf›. Suchy, Sherene. Leading with Passion: Change Management in the Twenty-First Century Museum. Lanham: AltaMira Press, 2004. Weil, Stephen E. “From Being about Something to Being for Somebody: The Ongoing Transformation of the American Museum.” Daedalus, Journal of the American Academy of Arts and Sciences 128.3 (1999): 229–258. Wilkening, Susie. “Community Engagement and Objects—Mutually Exclusive?” Museum Audience Insight 27 July 2009. 14 June 2015 ‹http://reachadvisors.typepad.com/museum_audience_insight/2009/07/community-engagement-and-objects-mutually-exclusive.html›. ———, and Erica Donnis. “Authenticity? It Means Everything.” History News 63.4 (2008). Williams, Alex. “On the Tip of Creative Tongues.” New York Times 4 Oct. 2009. 4 June 2015 ‹http://www.nytimes.com/2009/10/04/fashion/04curate.html›.
APA, Harvard, Vancouver, ISO and other styles
20

Gerhard, David. "Three Degrees of “G”s: How an Airbag Deployment Sensor Transformed Video Games, Exercise, and Dance". M/C Journal 16, no. 6 (7 November 2013). http://dx.doi.org/10.5204/mcj.742.

Full text
Abstract (summary):
Introduction The accelerometer seems, at first, both advanced and dated, both too complex and not complex enough. It sits in our video game controllers and our smartphones allowing us to move beyond mere button presses into immersive experiences where the motion of the hand is directly translated into the motion on the screen, where our flesh is transformed into the flesh of a superhero. Or at least that was the promise in 2005. Since then, motion control has moved from a promised revitalization of the video game industry to a not-quite-good-enough gimmick that all games use but none use well. Rogers describes the diffusion of innovation, as an invention or technology comes to market, in five phases: First, innovators will take risks with a new invention. Second, early adopters will establish a market and lead opinion. Third, the early majority shows that the product has wide appeal and application. Fourth, the late majority adopt the technology only after their skepticism has been allayed. Finally, the laggards adopt the technology only when no other options are present (62). Not every technology makes it through the diffusion, however, and there are many who have never warmed to the accelerometer-controlled video game. Once an innovation has moved into the mainstream, additional waves of innovation may take place, when innovators or early adopters may find new uses for existing technology, and bring these uses into the majority. This is the case with the accelerometer, which began as an airbag trigger and today is used for measuring and augmenting human motion, from dance to health (Walter 84). In many ways, gestural control of video games, an augmentation technology, was an interlude in the advancement of motion control. History In the early 1920s, bulky proofs-of-concept were produced that manipulated electrical voltage levels based on the movement of a probe, many related to early pressure or force sensors. The relationships between pressure, force, velocity and acceleration are well understood, but development of a tool that could measure one and infer the others was a many-fronted activity. Each of these individual sensors has its own specific application and many are still in use today, as pressure triggers, reaction devices, or other sensor-based interactivity, such as video games (Latulipe et al. 2995) and dance (Chu et al. 184). Over the years, the probes and devices became smaller and more accurate, and eventually migrated to the semiconductor, allowing the measurement of acceleration to take place within an almost inconsequential form factor. Today, accelerometer chips are in many consumer devices, and athletes wear battery-powered wireless accelerometer bracelets that report their every movement in real time, a concept unimaginable only 20 years ago. One of the significant initial uses for accelerometers was as a sensor for the deployment of airbags in automobiles (Varat and Husher 1). The sensor was placed in the front bumper, detecting quick changes in speed that would indicate a crash. The system was a significant advance in the safety of automobiles, and followed Rogers’ diffusion through to the point where all new cars have airbags as a standard component. Airbags, and the accelerometers which allow them to function fast enough to save lives, are a ubiquitous, commoditized technology that most people take for granted, and served as the primary motivating factor for the mass-production of silicon-based accelerometer chips. 
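The crash-detection logic described in the abstract amounts, at its core, to a threshold test on sudden changes in measured acceleration. The following Python sketch illustrates the idea; the 3g threshold, the sample format, and the function names are assumed purely for illustration and are not taken from any actual airbag controller:

```python
import math

CRASH_THRESHOLD_G = 3.0  # assumed deployment threshold, in g

def magnitude(ax, ay, az):
    """Total acceleration magnitude across the three axes, in g."""
    return math.sqrt(ax ** 2 + ay ** 2 + az ** 2)

def should_deploy(samples):
    """Return True if any reading exceeds the crash threshold.

    `samples` is an iterable of (ax, ay, az) tuples in g. A real
    deployment algorithm would also filter noise and require the
    spike to persist across several consecutive samples.
    """
    return any(magnitude(ax, ay, az) > CRASH_THRESHOLD_G
               for ax, ay, az in samples)

# A stationary car reads about 1g (gravity alone); a hard frontal
# impact shows up as a large spike on the longitudinal axis.
readings = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.0), (4.2, 0.3, 1.1)]
print(should_deploy(readings))  # True
```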
On 14 September 2005, a device was introduced which would fundamentally alter the principal market for accelerometer microchips. The accelerometer was the ADXL335, a small, low-power, 3-axis device capable of measuring up to 3g (1g is the acceleration due to gravity), and the device that used this accelerometer was the Wii remote, also called the Wiimote. Developed by Nintendo and its holding companies, the Wii remote was to be a defining feature of Nintendo’s 7th-generation video game console, in direct competition with the Xbox 360 and the Playstation 3. The Wii remote was so successful that both Microsoft and Sony added motion control to their platforms, in the form of the accelerometer-based “dual shock” controller for the Playstation, and later the Playstation Move controller; as well as an integrated accelerometer in the Xbox 360 controller and the later release of the Microsoft Kinect 3D motion sensing camera. Simultaneously, computer manufacturing companies saw a different, more pedantic use of the accelerometer. The primary storage medium in most computers today is the Hard Disk Drive (HDD), a set of spinning platters of electro-magnetically stored information. Much like a record player, the HDD contains a “head” which sweeps back and forth across the platter, reading and writing data. As computers changed from desktops to laptops, people moved their computers more often, and a problem arose. If the HDD inside a laptop was active when the laptop was moved, the read head might touch the surface of the disk, damaging the HDD and destroying information. Two solutions were implemented: vibration dampening in the manufacturing process, and the use of an accelerometer to detect motion. When the laptop is bumped, or dropped, the hard disk will sense the motion and immediately park the head, saving the disk and the valuable data inside. As a consequence of laptop computers and Wii remotes using accelerometers, the market for these devices began to swing from their use within car airbag systems toward their use in computer systems. And with an accelerometer in every computer, it wasn’t long before clever programmers began to make use of the information coming from the accelerometer for more than just protecting the hard drive. Programs began to appear that would use the accelerometer within a laptop to “lock” it when the user was away, invoking a loud noise like a car alarm to alert passers-by to any potential theft. Other programmers began to use the accelerometer as a gaming input, and this was the beginning of gesture control and the augmentation of human motion. Like laptops, most smartphones and tablets today have accelerometers included among their sensor suite (Brezmes et al. 796). These accelerometers are strictly a user-interface tool, allowing the phone to re-orient its interface based on how the user is holding it, and allowing the user to play games and track health information using the phone. Many other consumer electronic devices use accelerometers, such as digital cameras for image stabilization and landscape/portrait orientation. Allowing a device to know its relative orientation and motion provides a wide range of augmentation possibilities. The Language of Measuring Motion When studying accelerometers, their function, and applications, a critical first step is to examine the language used to describe these devices. As the name implies, the accelerometer is a device which measures acceleration; however, our everyday connotation of this term is problematic at best. 
In colloquial language, we say “accelerate” when we mean “speed up”, but this is, in fact, two connotations removed from the physical property being measured by the device, and we must unwrap these layers of meaning before we can understand what is being measured. Physicists use the term “accelerate” to mean any change in velocity. It is worth reminding ourselves that velocity (to the physicists) is actually a pair of quantities: a speed coupled with a direction. Given this definition, when an object changes velocity (accelerates), it can be changing its speed, its direction, or both. So a car can be said to be accelerating when speeding up, slowing down, or even turning while maintaining a speed. This is why the accelerometer could be used as an airbag sensor in the first place. The airbags should deploy when a car suddenly changes velocity in any direction, including getting faster (due to being hit from behind), getting slower (from a front impact crash) or changing direction (being hit from the side). It is because of this ability to measure changes in velocity that accelerometers have come into common usage for laptop drop sensors and video game motion controllers. But even this understanding of accelerometers is incomplete. Because of the way that accelerometers are constructed, they actually measure “proper acceleration” within the context of a relativistic frame of reference. Discussing general relativity is beyond the scope of this paper, but it is sufficient to describe a relativistic frame of reference as one in which no forces are felt. A familiar example is being in orbit around the planet, when astronauts (and their equipment) float freely in space. A state of “free-fall” is one in which no forces are felt, and this is the only situation in which an accelerometer reads 0 acceleration. Since most of us are not in free-fall most of the time, any accelerometers in devices in normal use do not experience 0 proper acceleration, even when apparently sitting still. This is, of course, because of the force due to gravity. An accelerometer sitting on a table experiences 1g of force from the table, acting against the gravitational acceleration. This non-zero reading for a stationary object is the reason that accelerometers can serve a second (and, today, much more common) use: measuring orientation with respect to gravity. Gravity and Tilt Accelerometers typically measure forces with respect to three linear dimensions, labeled x, y, and z. These three directions orient along the axes of the accelerometer chip itself, with x and y normally orienting along the long faces of the device, and the z direction often pointing through the face of the device. Relative motion within a gravity field can easily be inferred assuming that the only force acting on the device is gravity. In this case, the single force is distributed among the three axes depending on the orientation of the device. This is how personal smartphones and video game controllers are able to use “tilt” control. When the device is held in a natural position, the software extracts the relative value on all three axes and uses that as a reference point. When the user tilts the device, the new direction of the gravitational acceleration is then compared to the reference value and used to infer the tilt. This can be done hundreds of times a second and can be used to control and augment any aspect of the user experience. If, however, gravity is not the only force present, it becomes more difficult to infer orientation. 
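To make the tilt inference concrete, here is a minimal Python sketch that recovers pitch and roll angles from a single static three-axis reading, assuming gravity is the only force acting on the device; the function and variable names are illustrative, not drawn from any particular SDK:

```python
import math

def tilt_from_gravity(ax, ay, az):
    """Infer pitch and roll (in degrees) from a static accelerometer
    reading (ax, ay, az) expressed in g.

    Valid only while gravity is the sole force on the device; during
    a footfall or a shake the same maths yields meaningless angles.
    """
    pitch = math.atan2(-ax, math.sqrt(ay ** 2 + az ** 2))
    roll = math.atan2(ay, az)
    return math.degrees(pitch), math.degrees(roll)

# Flat on a table: the z axis carries the full 1g, so pitch = roll = 0.
print(tilt_from_gravity(0.0, 0.0, 1.0))   # (0.0, 0.0)

# Tilted 45 degrees about the y axis: gravity splits between x and z.
g = math.sqrt(2) / 2
print(tilt_from_gravity(-g, 0.0, g))      # (~45.0, 0.0)
```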
Another common use for accelerometers is to measure physical activity like walking steps. In this case, it is the forces on the accelerometer from each footfall that are interpreted to measure fitness features. Tilt is unreliable in this circumstance because both gravity and the forces from the footfall are measured by the accelerometer, and it is impossible to separate the two forces from a single measurement. Velocity and Position A second common assumption with accelerometers is that since they can measure acceleration (rate of change of velocity), it should be possible to infer the velocity. If the device begins at rest, then any measured acceleration can be interpreted as a change to the velocity in some direction, thus inferring the new velocity. Although this is theoretically possible, real-world factors come into play which prevent this from being realized. First, the assumption of beginning from a state of rest is not always reasonable. Further, if we don’t know whether the device is moving or not, knowing its acceleration at any moment will not help us to determine its new speed or position. The most important real-world problem, however, is that accelerometers typically show small variations even when the object is at rest, because of inaccuracies in the way the accelerometer’s readings are interpreted. In normal operation, these small changes are ignored, but when trying to infer velocity or position, these little errors quickly add up to the point where any inferred velocity or position would be unreliable. A common solution to these problems is the combination of devices. Many new smartphones combine an accelerometer and a gyroscope (a device which measures rotational motion) to provide a sensing system known as an IMU (inertial measurement unit), which makes the readings from each more reliable. In this case, the gyroscope can be used to directly measure tilt (instead of inferring it from gravity) and this tilt information can be subtracted from the accelerometer reading to separate out the motion of the device from the force of gravity (a minimal sketch of this fusion idea appears after the reference list below). Augmentation Applications in Health, Gaming, and Art Accelerometer-based devices have been used extensively in healthcare (Ward et al. 582), either using the accelerometer within a smartphone worn in the pocket (Yoshioka et al. 502) or using a standalone accelerometer device such as a wristband or shoe tab (Paradiso and Hu 165). In many cases, these devices have been used to measure specific activity such as swimming, gait (Henriksen et al. 288), and muscular activity (Thompson and Bemben 897), as well as general activity for tracking health (Troiano et al. 181), both in children (Stone et al. 136) and the elderly (Davis and Fox 581). These simple measurements are the first step in allowing athletes to modify their performance based on past activity. In the past, athletes would pore over recorded video to analyze and improve their performance, but with accelerometer devices, they can receive feedback in real time and modify their own behaviour based on these measurements. This augmentation is a competitive advantage but could be seen as unfair considering the current unequal access to computer and electronic technology, i.e. the digital divide (Buente and Robbin 1743). When video games were augmented with motion controls, many assumed that this would have a positive impact on health. Physical activity in children is a common concern (Treuth et al. 
1259), and there was a hope that if children had to move to play games, an activity that used to be considered a problem for health could be turned into an opportunity (Mellecker et al. 343). Unfortunately, the impact of children playing motion-controlled video games has been less than encouraging. Although fitness games have been created, it is relatively easy to figure out how to activate controls with the least possible motion, thereby nullifying any potential benefit. One of the most interesting applications of accelerometers, in the context of this paper, is their application to dance-based video games (Brezmes et al. 796). In these systems, participants wear devices originally intended for health tracking in order to increase the sensitivity and control options for dance. This has evolved both from the use of accelerometers for gestural control in video games and for measuring and augmenting sport. Researchers and artists have also recently used accelerometers to augment dance systems in many ways (Latulipe et al. 2995), including combining multiple sensors (Yang et al. 121), as discussed above. Conclusions Although more and more people are using accelerometers in their research and art practice, it is significant that there is a lack of widespread knowledge about how the devices actually work. This can be seen in the many art installations and sports research studies that do not take full advantage of the capabilities of the accelerometer, or that infer information or data that is unreliable because of the way accelerometers behave. This lack of understanding of accelerometers also serves to limit the increased utilization of this powerful device, specifically in the context of augmentation tools. Being able to detect, analyze and interpret the motion of a body part has significant applications in augmentation that are only starting to be realized. The history of accelerometers is interesting and varied, and it is worthwhile, when exploring new ideas for applications of accelerometers, to be fully aware of previous uses, current trends and technical limitations. It is clear that applications of accelerometers to the measurement of human motion are increasing, and that many new opportunities exist, especially in the application of combinations of sensors and new software techniques. The real novelty, however, will come from researchers and artists using accelerometers and sensors in novel and unusual ways. References Brezmes, Tomas, Juan-Luis Gorricho, and Josep Cotrina. “Activity Recognition from Accelerometer Data on a Mobile Phone.” In Distributed Computing, Artificial Intelligence, Bioinformatics, Soft Computing, and Ambient Assisted Living. Springer, 2009. Buente, Wayne, and Alice Robbin. “Trends in Internet Information Behavior, 2000-2004.” Journal of the American Society for Information Science and Technology 59.11 (2008). Chu, Narisa N.Y., Chang-Ming Yang, and Chih-Chung Wu. “Game Interface Using Digital Textile Sensors, Accelerometer and Gyroscope.” IEEE Transactions on Consumer Electronics 58.2 (2012): 184-189. Davis, Mark G., and Kenneth R. Fox. “Physical Activity Patterns Assessed by Accelerometry in Older People.” European Journal of Applied Physiology 100.5 (2007): 581-589. Hagstromer, Maria, Pekka Oja, and Michael Sjostrom. “Physical Activity and Inactivity in an Adult Population Assessed by Accelerometry.” Medical Science and Sports Exercise 39.9 (2007): 1502-08. Henriksen, Marius, H. Lund, R. Moe-Nilssen, H. Bliddal, and B. Danneskiod-Samsøe. 
“Test–Retest Reliability of Trunk Accelerometric Gait Analysis.” Gait & Posture 19.3 (2004): 288-297. Latulipe, Celine, David Wilson, Sybil Huskey, Melissa Word, Arthur Carroll, Erin Carroll, Berto Gonzalez, Vikash Singh, Mike Wirth, and Danielle Lottridge. “Exploring the Design Space in Technology-Augmented Dance.” In CHI’10 Extended Abstracts on Human Factors in Computing Systems. ACM, 2010. Mellecker, Robin R., Lorraine Lanningham-Foster, James A. Levine, and Alison M. McManus. “Energy Intake during Activity Enhanced Video Game Play.” Appetite 55.2 (2010): 343-347. Paradiso, Joseph A., and Eric Hu. “Expressive Footwear for Computer-Augmented Dance Performance.” In First International Symposium on Wearable Computers. IEEE, 1997. Rogers, Everett M. Diffusion of Innovations. New York: Free Press of Glencoe, 1962. Stone, Michelle R., Ann V. Rowlands, and Roger G. Eston. “Relationships between Accelerometer-Assessed Physical Activity and Health in Children: Impact of the Activity-Intensity Classification Method.” The Free Library 1 Mar. 2009. Thompson, Christian J., and Michael G. Bemben. “Reliability and Comparability of the Accelerometer as a Measure of Muscular Power.” Medicine and Science in Sports and Exercise 31.6 (1999): 897-902. Treuth, Margarita S., Kathryn Schmitz, Diane J. Catellier, Robert G. McMurray, David M. Murray, M. Joao Almeida, Scott Going, James E. Norman, and Russell Pate. “Defining Accelerometer Thresholds for Activity Intensities in Adolescent Girls.” Medicine and Science in Sports and Exercise 36.7 (2004): 1259-1266. Troiano, Richard P., David Berrigan, Kevin W. Dodd, Louise C. Masse, Timothy Tilert, Margaret McDowell, et al. “Physical Activity in the United States Measured by Accelerometer.” Medicine and Science in Sports and Exercise 40.1 (2008): 181-88. Varat, Michael S., and Stein E. Husher. “Vehicle Impact Response Analysis through the Use of Accelerometer Data.” In SAE World Congress, 2000. Walter, Patrick L. “The History of the Accelerometer.” Sound and Vibration (Mar. 1997): 16-22. Ward, Dianne S., Kelly R. Evenson, Amber Vaughn, Anne Brown Rodgers, Richard P. Troiano, et al. “Accelerometer Use in Physical Activity: Best Practices and Research Recommendations.” Medicine and Science in Sports and Exercise 37.11 (2005): S582-8. Yang, Chang-Ming, Jwu-Sheng Hu, Ching-Wen Yang, Chih-Chung Wu, and Narisa Chu. “Dancing Game by Digital Textile Sensor, Accelerometer and Gyroscope.” In IEEE International Games Innovation Conference. IEEE, 2011. Yoshioka, M., M. Ayabe, T. Yahiro, H. Higuchi, Y. Higaki, J. St-Amand, H. Miyazaki, Y. Yoshitake, M. Shindo, and H. Tanaka. “Long-Period Accelerometer Monitoring Shows the Role of Physical Activity in Overweight and Obesity.” International Journal of Obesity 29.5 (2005): 502-508.
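As flagged in the abstract above, the following is a minimal sketch of the accelerometer-plus-gyroscope fusion performed inside an IMU. It uses a simple complementary filter rather than the Kalman-style estimators found in production devices, and the sampling rate, input values, and blending coefficient are assumed purely for illustration:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse a gyroscope rate with an accelerometer-derived angle.

    The gyroscope integrates smoothly but drifts over time; the
    accelerometer angle is drift-free but noisy whenever forces other
    than gravity act on the device. Blending the two keeps the
    short-term smoothness of the gyro and the long-term stability of
    the accelerometer. `alpha` (assumed 0.98 here) sets the blend.
    """
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Toy trace: device held steady at 10 degrees while being jostled,
# sampled at an assumed 100 Hz (dt = 0.01 s).
angle, dt = 0.0, 0.01
for step in range(500):
    gyro_rate = 0.0      # deg/s; a real gyro would drift slightly
    accel_angle = 10.0   # gravity-derived tilt estimate
    angle = complementary_filter(angle, gyro_rate, accel_angle, dt)
print(round(angle, 1))   # settles at 10.0 degrees
```

The design choice is the blend itself: trust the gyroscope over short timescales, where it is smooth, and the accelerometer over long ones, where its gravity anchor corrects the gyroscope's accumulated drift.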
APA, Harvard, Vancouver, ISO and other styles
21

Losh, Elizabeth. "Artificial Intelligence". M/C Journal 10, no. 5 (1 October 2007). http://dx.doi.org/10.5204/mcj.2710.

Full text
Abstract (summary):
On the morning of Thursday, 4 May 2006, the United States House Permanent Select Committee on Intelligence held an open hearing entitled “Terrorist Use of the Internet.” The Intelligence committee meeting was scheduled to take place in Room 1302 of the Longworth Office Building, a Depression-era structure with a neoclassical façade. Because of a dysfunctional elevator, some of the congressional representatives were late to the meeting. During the testimony about the newest political applications for cutting-edge digital technology, the microphones periodically malfunctioned, and witnesses complained of “technical problems” several times. By the end of the day it seemed that what was to be remembered about the hearing was the shocking revelation that terrorists were using videogames to recruit young jihadists. The Associated Press wrote a short, restrained article about the hearing that only mentioned “computer games and recruitment videos” in passing. Eager to have their version of the news item picked up, Reuters made videogames the focus of their coverage with a headline that announced, “Islamists Using US Videogames in Youth Appeal.” Like a game of telephone, as the Reuters videogame story was quickly re-run by several Internet news services, each iteration of the title seemed less true to the exact language of the original. One Internet news service changed the headline to “Islamic militants recruit using U.S. video games.” Fox News re-titled the story again to emphasise that this alert about technological manipulation was coming from recognised specialists in the anti-terrorism surveillance field: “Experts: Islamic Militants Customizing Violent Video Games.” As the story circulated, the body of the article remained largely unchanged, in which the Reuters reporter described the digital materials from Islamic extremists that were shown at the congressional hearing. During the segment that apparently most captured the attention of the wire service reporters, eerie music played as an English-speaking narrator condemned the “infidel” and declared that he had “put a jihad” on them, as aerial shots moved over 3D computer-generated images of flaming oil facilities and mosques covered with geometric designs. Suddenly, this menacing voice-over was interrupted by an explosion, as a virtual rocket was launched into a simulated military helicopter. The Reuters reporter shared this dystopian vision from cyberspace with Western audiences by quoting directly from the chilling commentary and describing a dissonant montage of images and remixed sound. “I was just a boy when the infidels came to my village in Blackhawk helicopters,” a narrator’s voice said as the screen flashed between images of street-level gunfights, explosions and helicopter assaults. Then came a recording of President George W. Bush’s September 16, 2001, statement: “This crusade, this war on terrorism, is going to take a while.” It was edited to repeat the word “crusade,” which Muslims often define as an attack on Islam by Christianity. According to the news reports, the key piece of evidence before Congress seemed to be a film by “SonicJihad” of recorded videogame play, which – according to the experts – was widely distributed online. 
Much of the clip takes place from the point of view of a first-person shooter, seen as if through the eyes of an armed insurgent, but the viewer also periodically sees third-person action in which the player appears as a running figure wearing a red-and-white checked keffiyeh, who dashes toward the screen with a rocket launcher balanced on his shoulder. Significantly, another of the player’s hand-held weapons is a detonator that triggers remote blasts. As jaunty music plays, helicopters, tanks, and armoured vehicles burst into smoke and flame. Finally, at the triumphant ending of the video, a green and white flag bearing a crescent is hoisted aloft into the sky to signify victory by Islamic forces. To explain the existence of this digital alternative history in which jihadists could be conquerors, the Reuters story described the deviousness of the country’s terrorist opponents, who were now apparently modifying popular videogames through their wizardry and inserting anti-American, pro-insurgency content into U.S.-made consumer technology. One of the latest video games modified by militants is the popular “Battlefield 2” from leading video game publisher, Electronic Arts Inc of Redwood City, California. Jeff Brown, a spokesman for Electronic Arts, said enthusiasts often write software modifications, known as “mods,” to video games. “Millions of people create mods on games around the world,” he said. “We have absolutely no control over them. It’s like drawing a mustache on a picture.” Although the Electronic Arts executive dismissed the activities of modders as a “mustache on a picture” that could only be considered little more than childish vandalism of their off-the-shelf corporate product, others saw a more serious form of criminality at work. Testifying experts and the legislators listening on the committee used the video to call for greater Internet surveillance efforts and electronic counter-measures. Within twenty-four hours of the sensationalistic news breaking, however, a group of Battlefield 2 fans was crowing about the idiocy of reporters. The game play footage wasn’t from a high-tech modification of the software by Islamic extremists; it had been posted on a Planet Battlefield forum in December of 2005 by a game fan who had cut together regular game play with a Bush remix and a parody snippet of the soundtrack from the 2004 hit comedy film Team America. The voice describing the Black Hawk helicopters was the voice of Trey Parker of South Park cartoon fame, and – much to Parker’s amusement – even the mention of “goats screaming” did not clue spectators in to the fact of a comic source. Ironically, the moment in the movie from which the sound clip is excerpted is one about intelligence gathering. As an agent of Team America, a fictional elite U.S. commando squad, the hero of the film’s all-puppet cast, Gary Johnston, is impersonating a jihadist radical inside a hostile Egyptian tavern that is modelled on the cantina scene from Star Wars. Additional laughs come from the fact that agent Johnston is accepted by the menacing terrorist cell as “Hakmed,” despite the fact that he utters a series of improbable clichés made up of incoherent stereotypes about life in the Middle East while dressed up in a disguise made up of shoe polish and a turban from a bathroom towel. 
The man behind the “SonicJihad” pseudonym turned out to be a twenty-five-year-old hospital administrator named Samir, and what reporters and representatives saw was nothing more exotic than game play from an add-on expansion pack of Battlefield 2, which – like other versions of the game – allows first-person shooter play from the position of the opponent as a standard feature. While SonicJihad initially joined his fellow gamers in ridiculing the mainstream media, he also expressed astonishment and outrage about a larger politics of reception. In one interview he argued that the media illiteracy of Reuters potentially enabled a whole series of category errors, in which harmless gamers could be demonised as terrorists. It wasn’t intended for the purpose what it was portrayed to be by the media. So no I don’t regret making a funny video . . . why should I? The only thing I regret is thinking that news from Reuters was objective and always right. The least they could do is some online research before publishing this. If they label me al-Qaeda just for making this silly video, that makes you think, what is this al-Qaeda? And is everything al-Qaeda? Although SonicJihad dismissed his own work as “silly” or “funny,” he expected considerably more from a credible news agency like Reuters: “objective” reporting, “online research,” and fact-checking before “publishing.” Within the week, almost all of the salient details in the Reuters story were revealed to be incorrect. SonicJihad’s film was not made by terrorists or for terrorists: it was not created by “Islamic militants” for “Muslim youths.” The videogame it depicted had not been modified by a “tech-savvy militant” with advanced programming skills. Of course, what is most extraordinary about this story isn’t just that Reuters merely got its facts wrong; it is that a self-identified “parody” video was shown to the august House Intelligence Committee by a team of well-paid “experts” from the Science Applications International Corporation (SAIC), a major contractor with the federal government, as key evidence of terrorist recruitment techniques and abuse of digital networks. Moreover, this story of media illiteracy unfolded in the context of a fundamental Constitutional debate about domestic surveillance via communications technology and the further regulation of digital content by lawmakers. Furthermore, the transcripts of the actual hearing showed that much more than simple gullibility or technological ignorance was in play. Based on their exchanges in the public record, elected representatives and government experts appear to be keenly aware that the digital discourses of an emerging information culture might be challenging their authority and that of the longstanding institutions of knowledge and power with which they are affiliated. These hearings can be seen as representative of a larger historical moment in which emphatic declarations about prohibiting specific practices in digital culture have come to occupy a prominent place at the podium, news desk, or official Web portal. This environment of cultural reaction can be used to explain why policy makers’ reaction to terrorists’ use of networked communication and digital media actually tells us more about our own American ideologies about technology and rhetoric in a contemporary information environment. 
When the experts come forward at the Sonic Jihad hearing to “walk us through the media and some of the products,” they present digital artefacts of an information economy that mirrors many of the features of our own consumption of objects of electronic discourse, which seem dangerously easy to copy and distribute and thus also create confusion about their intended meanings, audiences, and purposes. From this one hearing we can see how the reception of many new digital genres plays out in the public sphere of legislative discourse. Web pages, videogames, and Weblogs are mentioned specifically in the transcript. The main architecture of the witnesses’ presentation to the committee is organised according to the rhetorical conventions of a PowerPoint presentation. Moreover, the arguments made by expert witnesses about the relationship of orality to literacy or of public to private communications in new media are highly relevant to how we might understand other important digital genres, such as electronic mail or text messaging. The hearing also invites consideration of privacy, intellectual property, and digital “rights,” because moral values about freedom and ownership are alluded to by many of the elected representatives present, albeit often through the looking glass of user behaviours imagined as radically Other. For example, terrorists are described as “modders” and “hackers” who subvert those who properly create, own, legitimate, and regulate intellectual property. To explain embarrassing leaks of infinitely replicable digital files, witness Ron Roughead says, “We’re not even sure that they don’t even hack into the kinds of spaces that hold photographs in order to get pictures that our forces have taken.” Another witness, Undersecretary of Defense for Policy and International Affairs, Peter Rodman claims that “any video game that comes out, as soon as the code is released, they will modify it and change the game for their needs.” Thus, the implication of these witnesses’ testimony is that the release of code into the public domain can contribute to political subversion, much as covert intrusion into computer networks by stealthy hackers can. However, the witnesses from the Pentagon and from the government contractor SAIC often present a contradictory image of the supposed terrorists in the hearing transcripts. Sometimes the enemy is depicted as an organisation of technological masterminds, capable of manipulating the computer code of unwitting Americans and snatching their rightful intellectual property away; sometimes those from the opposing forces are depicted as pre-modern and even sub-literate political innocents. In contrast, the congressional representatives seem to focus on similarities when comparing the work of “terrorists” to the everyday digital practices of their constituents and even of themselves. According to the transcripts of this open hearing, legislators on both sides of the aisle express anxiety about domestic patterns of Internet reception. Even the legislators’ own Web pages are potentially disruptive electronic artefacts, particularly when the demands of digital labour interfere with their duties as lawmakers. Although the subject of the hearing is ostensibly terrorist Websites, Representative Anna Eshoo (D-California) bemoans the difficulty of maintaining her own official congressional site. 
As she observes, “So we are – as members, I think we’re very sensitive about what’s on our Website, and if I retained what I had on my Website three years ago, I’d be out of business. So we know that they have to be renewed. They go up, they go down, they’re rebuilt, they’re – you know, the message is targeted to the future.” In their questions, lawmakers identify Weblogs (blogs) as a particular area of concern as a destabilising alternative to authoritative print sources of information from established institutions. Representative Alcee Hastings (D-Florida) compares the polluting power of insurgent bloggers to that of influential online muckrakers from the American political Right. Hastings complains of “garbage on our regular mainstream news that comes from blog sites.” Representative Heather Wilson (R-New Mexico) attempts to project a media-savvy persona by bringing up the “phenomenon of blogging” in conjunction with her questions about jihadist Websites in which she notes how Internet traffic can be magnified by cooperative ventures among groups of ideologically like-minded content-providers: “These Websites, and particularly the most active ones, are they cross-linked? And do they have kind of hot links to your other favorite sites on them?” At one point Representative Wilson asks witness Rodman if he knows “of your 100 hottest sites where the Webmasters are educated? What nationality they are? Where they’re getting their money from?” In her questions, Wilson implicitly acknowledges that Web work reflects influences from pedagogical communities, economic networks of the exchange of capital, and even potentially the specific ideologies of nation-states. It is perhaps indicative of the government contractors’ anachronistic worldview that the witness is unable to answer Wilson’s question. He explains that his agency focuses on the physical location of the server or ISP rather than the social backgrounds of the individuals who might be manufacturing objectionable digital texts. The premise behind the contractors’ working method – surveilling the technical apparatus not the social network – may be related to other beliefs expressed by government witnesses, such as the supposition that jihadist Websites are collectively produced and spontaneously emerge from the indigenous, traditional, tribal culture, instead of assuming that Iraqi insurgents have analogous beliefs, practices, and technological awareness to those in first-world countries. The residual subtexts in the witnesses’ conjectures about competing cultures of orality and literacy may tell us something about a reactionary rhetoric around videogames and digital culture more generally. According to the experts before Congress, the Middle Eastern audience for these videogames and Websites is limited by its membership in a pre-literate society that is only capable of abortive cultural production without access to knowledge that is archived in printed codices. Sometimes the witnesses before Congress seem to be unintentionally channelling the ideas of the late literacy theorist Walter Ong about the “secondary orality” associated with talky electronic media such as television, radio, audio recording, or telephone communication. Later followers of Ong extend this concept of secondary orality to hypertext, hypermedia, e-mail, and blogs, because they similarly share features of both speech and written discourse. 
Although Ong’s disciples celebrate this vibrant reconnection to a mythic, communal past of what Kathleen Welch calls “electric rhetoric,” the defence industry consultants express their profound state of alarm at the potentially dangerous and subversive character of this hybrid form of communication. The concept of an “oral tradition” is first introduced by the expert witnesses in the context of modern marketing and product distribution: “The Internet is used for a variety of things – command and control,” one witness states. “One of the things that’s missed frequently is how and – how effective the adversary is at using the Internet to distribute product. They’re using that distribution network as a modern form of oral tradition, if you will.” Thus, although the Internet can be deployed for hierarchical “command and control” activities, it also functions as a highly efficient peer-to-peer distributed network for disseminating the commodity of information. Throughout the hearings, the witnesses imply that unregulated lateral communication among social actors who are not authorised to speak for nation-states or to produce legitimated expert discourses is potentially destabilising to political order. Witness Eric Michael describes the “oral tradition” and the conventions of communal life in the Middle East to emphasise the primacy of speech in the collective discursive practices of this alien population: “I’d like to point your attention to the media types and the fact that the oral tradition is listed as most important. The other media listed support that. And the significance of the oral tradition is more than just – it’s the medium by which, once it comes off the Internet, it is transferred.” The experts go on to claim that this “oral tradition” can contaminate other media because it functions as “rumor,” the traditional bane of the stately discourse of military leaders since the classical era. The oral tradition now also has an aspect of rumor. A[n] event takes place. There is an explosion in a city. Rumor is that the United States Air Force dropped a bomb and is doing indiscriminate killing. This ends up being discussed on the street. It ends up showing up in a Friday sermon in a mosque or in another religious institution. It then gets recycled into written materials. Media picks up the story and broadcasts it, at which point it’s now a fact. In this particular case that we were telling you about, it showed up on a network television, and their propaganda continues to go back to this false initial report on network television and continue to reiterate that it’s a fact, even though the United States government has proven that it was not a fact, even though the network has since recanted the broadcast. In this example, many-to-many discussion on the “street” is formalised into a one-to-many “sermon” and then further stylised using technology in a one-to-many broadcast on “network television” in which “propaganda” that is “false” can no longer be disputed. This “oral tradition” is like digital media, because elements of discourse can be infinitely copied or “recycled,” and it is designed to “reiterate” content. In this hearing, the word “rhetoric” is associated with destructive counter-cultural forces by the witnesses who reiterate cultural truisms dating back to Plato and the Gorgias. 
For example, witness Eric Michael initially presents “rhetoric” as the use of culturally specific and hence untranslatable figures of speech, but he quickly moves to an outright castigation of the entire communicative mode. “Rhetoric,” he tells us, is designed to “distort the truth,” because it is a “selective” assembly or a “distortion.” Rhetoric is also at odds with reason, because it appeals to “emotion” and a romanticised Weltanschauung oriented around discourses of “struggle.” The film by SonicJihad is chosen as the final clip by the witnesses before Congress, because it allegedly combines many different types of emotional appeal, and thus it conveniently ties together all of the themes that the witnesses present to the legislators about unreliable oral or rhetorical sources in the Middle East: And there you see how all these products are linked together. And you can see where the games are set to psychologically condition you to go kill coalition forces. You can see how they use humor. You can see how the entire campaign is carefully crafted to first evoke an emotion and then to evoke a response and to direct that response in the direction that they want. Jihadist digital products, especially videogames, are effective means of manipulation, the witnesses argue, because they employ multiple channels of persuasion and carefully sequenced and integrated subliminal messages. To understand the larger cultural conversation of the hearing, it is important to keep in mind that the related argument that “games” can “psychologically condition” players to be predisposed to violence is one that was important in other congressional hearings of the period, as well as one that played a role in bills and resolutions that were passed by the full body of the legislative branch. In the witness’s testimony an appeal to anti-game sympathies at home is combined with a critique of a closed anti-democratic system abroad in which the circuits of rhetorical production and their composite metonymic chains are described as those that command specific, unvarying, robotic responses. This sharp criticism of the artful use of a presentation style that is “crafted” is ironic, given that the witnesses’ “compilation” of jihadist digital material is staged in the form of a carefully structured PowerPoint presentation, one that is paced to a well-rehearsed rhythm of “slide, please” or “next slide” in the transcript. The transcript also reveals that the members of the House Intelligence Committee were not the original audience for the witnesses’ PowerPoint presentation. Rather, when it was first created by SAIC, this “expert” presentation was designed for training purposes for the troops on the ground, who would be facing the challenges of deployment in hostile terrain. According to the witnesses, having the slide show showcased before Congress was something of an afterthought. Nonetheless, Congressman Tiahrt (R-KS) is so impressed with the rhetorical mastery of the consultants that he tries to appropriate it. As Tiahrt puts it, “I’d like to get a copy of that slide sometime.” From the hearing we also learn that the terrorists’ Websites are threatening precisely because they manifest a polymorphously perverse geometry of expansion. 
For example, one SAIC witness before the House Committee compares the replication and elaboration of digital material online to a “spiderweb.” Like Representative Eshoo’s site, he also notes that the terrorists’ sites go “up” and “down,” but the consultant is left to speculate about whether or not there is any “central coordination” to serve as an organising principle and to explain the persistence and consistency of messages despite the apparent lack of a single authorial ethos to offer a stable, humanised, point of reference. In the hearing, the oft-cited solution to the problem created by the hybridity and iterability of digital rhetoric appears to be “public diplomacy.” Both consultants and lawmakers seem to agree that the damaging messages of the insurgents must be countered with U.S. sanctioned information, and thus the phrase “public diplomacy” appears in the hearing seven times. However, witness Roughead complains that the protean “oral tradition” and what Henry Jenkins has called the “transmedia” character of digital culture, which often crosses several platforms of traditional print, projection, or broadcast media, stymies their best rhetorical efforts: “I think the point that we’ve tried to make in the briefing is that wherever there’s Internet availability at all, they can then download these – these programs and put them onto compact discs, DVDs, or post them into posters, and provide them to a greater range of people in the oral tradition that they’ve grown up in. And so they only need a few Internet sites in order to distribute and disseminate the message.” Of course, to maintain their share of the government market, the Science Applications International Corporation also employs practices of publicity and promotion through the Internet and digital media. They use HTML Web pages for these purposes, as well as PowerPoint presentations and online video. The rhetoric of the Website of SAIC emphasises their motto “From Science to Solutions.” After a short Flash film about how SAIC scientists and engineers solve “complex technical problems,” the visitor is taken to the home page of the firm that re-emphasises their central message about expertise. The maps, uniforms, and specialised tools and equipment that are depicted in these opening Web pages reinforce an ethos of professional specialisation that is able to respond to multiple threats posed by the “global war on terror.” By 26 June 2006, the incident finally was being described as a “Pentagon Snafu” by ABC News. From the opening of reporter Jake Tapper’s investigative Webcast, established government institutions were put on the spot: “So, how much does the Pentagon know about videogames? Well, when it came to a recent appearance before Congress, apparently not enough.” Indeed, the very language about “experts” that was highlighted in the earlier coverage is repeated by Tapper in mockery, with the significant exception of “independent expert” Ian Bogost of the Georgia Institute of Technology. If the Pentagon and SAIC deride the legitimacy of rhetoric as a cultural practice, Bogost occupies himself with its defence. In his recent book Persuasive Games: The Expressive Power of Videogames, Bogost draws upon the authority of the “2,500 year history of rhetoric” to argue that videogames represent a significant development in that cultural narrative. 
Given that Bogost and his Watercooler Games Weblog co-editor Gonzalo Frasca were actively involved in the detective work that exposed the depth of professional incompetence involved in the government’s line-up of witnesses, it is appropriate that Bogost is given the final words in the ABC exposé. As Bogost says, “We should be deeply bothered by this. We should really be questioning the kind of advice that Congress is getting.” Bogost may be right that Congress received terrible counsel on that day, but a close reading of the transcript reveals that elected officials were much more than passive listeners: in fact they were lively participants in a cultural conversation about regulating digital media. After looking at the actual language of these exchanges, it seems that the persuasiveness of the misinformation from the Pentagon and SAIC had as much to do with lawmakers’ preconceived anxieties about practices of computer-mediated communication close to home as it did with the contradictory stereotypes that were presented to them about Internet practices abroad. In other words, lawmakers found themselves looking into a fun house mirror that distorted what should have been familiar artefacts of American popular culture because it was precisely what they wanted to see. References ABC News. “Terrorist Videogame?” Nightline Online. 21 June 2006. 22 June 2006 ‹http://abcnews.go.com/Video/playerIndex?id=2105341›. Bogost, Ian. Persuasive Games: The Expressive Power of Videogames. Cambridge, MA: MIT Press, 2007. Game Politics. “Was Congress Misled by ‘Terrorist’ Game Video? We Talk to Gamer Who Created the Footage.” 11 May 2006. ‹http://gamepolitics.livejournal.com/285129.html#cutid1›. Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New York UP, 2006. julieb. “David Morgan Is a Horrible Writer and Should Be Fired.” Online posting. 5 May 2006. Dvorak Uncensored Cage Match Forums. ‹http://cagematch.dvorak.org/index.php/topic,130.0.html›. Mahmood. “Terrorists Don’t Recruit with Battlefield 2.” GGL Global Gaming. 16 May 2006 ‹http://www.ggl.com/news.php?NewsId=3090›. Morgan, David. “Islamists Using U.S. Video Games in Youth Appeal.” Reuters online news service. 4 May 2006 ‹http://today.reuters.com/news/ArticleNews.aspx?type=topNews&storyID=2006-05-04T215543Z_01_N04305973_RTRUKOC_0_US-SECURITY-VIDEOGAMES.xml&pageNumber=0&imageid=&cap=&sz=13&WTModLoc=NewsArt-C1-ArticlePage2›. Ong, Walter J. Orality and Literacy: The Technologizing of the Word. London/New York: Methuen, 1982. Parker, Trey. Online posting. 7 May 2006. 9 May 2006 ‹http://www.treyparker.com›. Plato. “Gorgias.” Plato: Collected Dialogues. Princeton: Princeton UP, 1961. Shrader, Katherine. “Pentagon Surfing Thousands of Jihad Sites.” Associated Press 4 May 2006. SonicJihad. “SonicJihad: A Day in the Life of a Resistance Fighter.” Online posting. 26 Dec. 2005. Planet Battlefield Forums. 9 May 2006 ‹http://www.forumplanet.com/planetbattlefield/topic.asp?fid=13670&tid=1806909&p=1›. Tapper, Jake, and Audrey Taylor. “Terrorist Video Game or Pentagon Snafu?” ABC News Nightline 21 June 2006. 30 June 2006 ‹http://abcnews.go.com/Nightline/Technology/story?id=2105128&page=1›. U.S. Congressional Record. Panel I of the Hearing of the House Select Intelligence Committee, Subject: “Terrorist Use of the Internet for Communications.” Federal News Service. 4 May 2006. Welch, Kathleen E. Electric Rhetoric: Classical Rhetoric, Oralism, and the New Literacy. Cambridge, MA: MIT Press, 1999. 
Citation reference for this article MLA Style Losh, Elizabeth. "Artificial Intelligence: Media Illiteracy and the SonicJihad Debacle in Congress." M/C Journal 10.5 (2007). <http://journal.media-culture.org.au/0710/08-losh.php>. APA Style Losh, E. (Oct. 2007) "Artificial Intelligence: Media Illiteracy and the SonicJihad Debacle in Congress," M/C Journal, 10(5). Retrieved from <http://journal.media-culture.org.au/0710/08-losh.php>.
APA, Harvard, Vancouver, ISO, and other styles
22

Poutoglidou, Frideriki, Marios Stavrakas, Nikolaos Tsetsos, Alexandros Poutoglidis, Aikaterini Tsentemeidou, Georgios Fyrmpas, and Petros D. Karkos. "Fraud and Deceit in Medical Research". Voices in Bioethics 8 (26 January 2022). http://dx.doi.org/10.52214/vib.v8i.8940.

Full text
Abstract (summary):
ABSTRACT The number of scientific articles published per year has been steadily increasing; so have the instances of misconduct in medical research. While increasing scientific knowledge is beneficial, it is imperative that research be authentic and bias-free. This article explores why fraud and other misconduct occur, presents the consequences of the phenomenon, and proposes measures to eliminate unethical practices in medical research. The main reason scientists engage in unethical practices is the pressure to publish, which is directly related to their academic advancement and career development. Additional factors include the pressure to get research funds, the pressure from funding sources on researchers to deliver results, how scientific publishing has evolved over the years, and the over-publication of research in general. Fraud in medical research damages trust and reliability in science and potentially harms individuals. INTRODUCTION Since the introduction of Evidence-Based Medicine (EBM) in the early 1990s, the number of scientific articles published per year has increased steadily. No one knows the exact number of scientific articles published per year, but several estimates point to around 2,000,000.[1] EBM aims to integrate the clinical experience and the best available scientific knowledge in managing individual patients.[2] The EBM model is based on the accumulation of as much clinical and research data as possible, which has propelled a significant rise in research. Unfortunately, its incentive structure has also led to a rise in research misconduct. “Fraud in science has a long history.”[3] Cases of misconduct began to surface in the late 1980s and increased during the 1990s. Experts suggest that today fraud is “endemic in many scientific disciplines and in most countries.”[4] In recent reporting, the majority of cases of scientific fraud involved falsification and fabrication of the data, while plagiarism was much less frequent. In a Dutch study of 6,813 researchers, 8 percent of scientists and 10 percent of medical and life-sciences researchers admitted to falsifying data at least once between 2017 and 2021, while more than half engaged in at least one questionable research practice.[5] Questionable research practices include research design flaws or unfairness in decisions surrounding publication or grants.[6] In one study, closer to 2 percent of those surveyed reported having engaged in falsification or fabrication,[7] while in a survey of 3,000 scientists with NIH grants in the United States, 0.3 percent of the scientists responding admitted fabricating research data and 1.4 percent of them admitted plagiarizing.[8] These numbers are almost certainly not reflective of the true incidence of fraud, as many scientists admitted that they engaged in a range of behaviors beyond fabrication, falsification, and plagiarism that undermine the integrity of science, such as changing the results of a study under pressure from a funding source or failing to present data that contradicts one’s previous research. It is also unclear whether surveys are the best method to investigate misconduct, because a scientist answering the survey may be unsure of anonymity and may not be truthful. This article explores why misconduct occurs, presents the consequences, and proposes measures to eliminate unethical practices in medical research. 
At the 1999 Joint Consensus Conference on Misconduct in Biomedical Research, “scientific fraud” was defined as any “behavior by a researcher, intentional or not, that falls short of good ethical and scientific standards.”[9] ANALYSIS I. The Scientific Publishing Landscape There are several reasons scientists may commit misconduct and engage in unethical practices. There is an increasing pressure to publish, which the motto “publish or perish” reflects.[10] The number of scientific papers published by a researcher is directly related to their academic advancement and career development. Similarly, academic institutions rely on scientific publications to gain prestige and access research grants. Pressure to get research grants may create environments that make it challenging to maintain research integrity. Researchers are often tempted to alter their data to fulfill the desired results, separately report the results of one study in multiple end publications, commonly referred to as “salami publication,” or even simultaneously submit their scientific articles to more than one journal. This creates a vicious cycle in which the need for funding leads to scientific misconduct, which in turn secures more research funding. Meanwhile, the pressure from the funding sources cannot be overlooked either. Although researchers must report the role of the funding sources, selection and publication bias may often advantage articles that support the interests of the financial sponsor. Disclosure does not alter the conflict of interest. The growing number of scientific articles published per year has practically overwhelmed the peer-review system. Manuscript submissions are often reviewed superficially or assigned to inexperienced reviewers; therefore, misconduct cases may go unnoticed. The rise of “predatory” journals that charge authors publication fees and do not review work for authenticity, together with the dissemination of information through preprints, has worsened the situation. The way that profits influence scientific publishing has very likely contributed to the phenomenon of misconduct. The publishing industry is a highly profitable business.[11] The increased reliance on funding from sources that expect the research to appear in prestigious, open-access journals often creates conflicts of interest and funding bias. On the other hand, high-impact journals have not given space to negative results and previous failures. Nonsignificant findings commonly remain unpublished, a phenomenon known as “the file drawer problem.” Scientists often manipulate their data to fit their initial hypothesis or change their hypothesis to fit their results, leading to outcome-reporting bias. II. Misconduct Concerning the Reporting and Publishing of Data The types of misconduct vary and have different implications for the scientist’s career and those relying on the research. For example, plagiarism is currently not punished by law unless it violates the original author’s copyright. Nevertheless, publishers who detect plagiarism implement penalties such as rejection of the submitted article and expulsion of the author. 
While plagiarism can be either accidental or deliberate, in either case it is a serious violation of academic integrity, as it involves passing off someone else’s “work or ideas” as one’s own.[12] Plagiarism can be “verbatim” (copying sentences or paragraphs from previously published work without using quotation marks or referencing the source) or rephrasing someone’s work or ideas without citing them. In “mosaic” plagiarism, the work plagiarized comes from various sources. “Self-plagiarism” is defined as an author’s reproduction of their previous publications or ideas in the same or altered words. According to most scientific journals, all authors of an article must have contributed in part to the conception and design of the study, drafted the article, revised it critically, or approved its final version.[13] The use of a ghost author (usually a professional writer who is not named as an author) is generally not ethical, as it undermines the requirement that the listed authors created the article. Moreover, wasteful publication is another practice that contributes to misconduct. Wasteful publication includes dividing the results of one single study into multiple end publications (“salami slicing”), republishing the same results in one or more articles, or extending a previously published article by adding new data without reaching new conclusions. Wasteful publication not only skews the scientific databases, but also wastes the time of the readers, the editors, and the reviewers. It is considered unethical because it unreasonably inflates the authors’ citation records. Authors caught engaging in such behaviors may be banned from submitting articles for years, while the submitted article is automatically rejected. Wasteful publication is an example of how the pressure to publish more articles leads to dishonest behavior, making it look like a researcher has conducted more studies and has more experience. Conflicts of interest are not strictly prohibited in medicine but require disclosure. Although disclosure of financial interests is a critical step, it does not guarantee the absence of bias. Researchers with financial ties to a pharmaceutical company funding their research are more likely to report results that favor the sponsor, which eventually undermines the integrity of research.[14] Financial sponsors should not be allowed to influence publication; rather, authors need to publish their results based on their own decisions and findings. III. Misconduct in Carrying Out Scientific Research Studies Common forms of fabrication include concealing negative results, changing the results to fit the initial hypothesis, or selectively reporting the outcomes. Falsification is the manipulation of experimental data that leads to inaccurate presentation of the research results. Falsification includes deliberately manipulating images, omitting or adding data points, and removing outliers in a dataset for the sake of manipulating the outcome. In contrast to plagiarism, this type of misconduct is very difficult to detect. Scientists who fabricate or falsify their data may be banned from receiving funding grants or terminated from their institutions. Falsification and fabrication are dangerous to the public, as they can result in people giving and receiving incorrect medical advice. Relying on falsified data can lead to death or injury, or lead patients to take a drug or treatment, or use a medical device, that is less effective than perceived. 
Thus, some members of the scientific community support the criminalization of this type of misconduct.[15] Research involving human participants requires respect for persons, beneficence, justice, voluntary consent, respect for autonomy, and confidentiality. Violating those principles constitutes unethical human experimentation. The Declaration of Helsinki is a statement of ethical principles for biomedical research involving human subjects, including research on identifiable human material and data. Similarly, research in which animals are subjects is also regulated. The first set of limits on the practice of animal experimentation was the Cruelty to Animals Act, passed in 1876 by the Parliament of the United Kingdom. Currently, all animal experiments in the EU must be carried out in accordance with the European Directive (2010/63/EU),[16] and in the US there are many state and federal laws governing research involving animals. The incentives to compromise the ethical responsibilities surrounding human and animal research may differ from the pressure to publish, yet some are in the same vein. They may include taking shortcuts, rushing to get necessary approvals, or using duress to recruit more research subjects, all actions that reflect a sense of urgency. IV. Consequences of Scientific Misconduct Fraud in medical research damages science by creating data that other researchers will be urged to follow or reproduce, wasting time, effort, and funds. Scientific misconduct undermines the trust among researchers and the public’s trust in science. Meanwhile, fraud in medical trials may lead to the release of ineffective or unsafe drugs or processes that could potentially harm individuals. Most recently, a study based on data provided by Surgisphere Corporation examined the use of hydroxychloroquine for the treatment of COVID-19.[17] The scientific article that presented the results of the study was retracted shortly after its release due to concerns raised over the validity of the data. Scientific misconduct is associated with reputational and financial costs, including wasted funds for research that is practically useless, costs of an investigation into the fraudulent research, and costs to settle litigation connected with the misconduct. The retraction of scientific articles for misconduct between 1992 and 2002 accounted for $58 million in lost funding by the NIH (which is the primary source of public funds for biomedical research in the US).[18] Of retracted articles, over half are retracted due to “fabrication, falsification, and plagiarism.”[19] Yet it is likely that many articles that contain falsified research are never retracted. A study revealed that of 12,000 journals reviewed, most had never retracted an article. The same study suggests that some journals have improved oversight, but many have not.[20] V. Oversight and Public Interest Organizations The Committee on Publication Ethics (COPE) was founded in 1997 and established practices and policies for journals and publishers to achieve the highest standards in publication ethics.[21] The Office of Research Integrity (ORI) is an organization created in the US to do the same. 
In 1996, the International Conference on Harmonization (ICH) adopted the international Good Clinical Practice (GCP) guidelines.[22] Finally, in 2017 the Parliamentary Office of Science and Technology (POST) initiated a formal inquiry into the trends and developments on fraud and misconduct in research and the publication of research results.[23] Despite the increasing efforts of regulatory organizations, scientific misconduct remains a major issue. To eliminate unethical practices in medical research, we must get to the root of the problem: the pressures put on scientists to increase output at the expense of quality. In the absence of altered incentives, criminalization is a possibility. However, several less severe remedies for reducing the prevalence of scientific misconduct exist. Institutions first need to foster open and frank discussion and promote collegiality. Reducing high-stakes competition for career advancement would also help remove incentives to compromise research ethics. In career advancement, emphasis should be given to the quality rather than the quantity of scientific publications. Mentorship of junior researchers and lab assistants by senior, experienced researchers can bolster ethical training. Adopting certain codes of conduct and close supervision of research practices in the lab and beyond should also be formalized. The publication system plays a critical role in preserving research integrity. Computer-assisted tools that detect plagiarism and other types of misconduct need to be developed or upgraded. To improve transparency, scientific journals should establish clear authorship criteria and require that the data supporting the findings of a study be made available, a movement that is underway. Preprint repositories also might help with transparency, but they could lead to people acting on data that has not been peer-reviewed. Finally, publishing negative results is necessary so that the totality of research is not skewed; studies that are informative but do not produce the results researchers hoped for still have value. Consistently publishing negative results may create a new industry standard and help researchers see that all data is important. CONCLUSION Any medical trial, research project, or scientific publication must be conducted to develop science and improve medicine and public health. However, the pressures from the pharmaceutical industry and academic competition pose significant threats to the trustworthiness of science. Thus, it is up to every scientist to respect and follow ethical rules, while responsible organizations, regulatory bodies, and scientific journals should make every effort to prevent research misconduct. - [1] World Bank. “Scientific and technical journal articles”. World Development Indicators, The World Bank Group. https://data.worldbank.org/indicator/IP.JRN.ARTC.SC?year_low_desc=true. [2] Masic I, Miokovic M, Muhamedagic B. “Evidence Based Medicine - New Approaches and Challenges.” Acta Inform Med. 2008;16(4):219-25. https://www.bibliomed.org/mnsfulltext/6/6-1300616203.pdf?1643160950 [3] Dickenson, D. “The Medical Profession and Human Rights: Handbook for a Changing Agenda.” Zed Books. 2002;28(5):332. doi: 10.1136/jme.28.5.332. [4] Ranstam J, Buyse M, George SL, Evans S, Geller NL, Scherrer B, et al. “Fraud in Medical Research: An International Survey of Biostatisticians, ISCB Subcommittee on Fraud.” Control Clin Trials. 2000;21(5):415-27. doi: 10.1016/s0197-2456(00)00069-6. [5] Gopalakrishna, G., Riet, G. T., Vink, G., Stoop, I., Wicherts, J. 
M., & Bouter, L. “Prevalence of questionable research practices, research misconduct and their potential explanatory factors: a survey among academic researchers in The Netherlands.” MetaArXiv, 6 July 2021. doi:10.31222/osf.io/vk9yt; Chawla, Dalmeet Singh. “8% of researchers in Dutch survey have falsified or fabricated data.” Nature. 2021. https://www.nature.com/articles/d41586-021-02035-2 (The Dutch study’s author suggests the results could be an underestimate; she also notes an older similar study that found 4.5 percent.) [6] Chawla. [7] Fanelli D. “How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data.” PLoS One. 2009;4(5):e5738. doi:10.1371/journal.pone.0005738 [8] Martinson BC, Anderson MS, de Vries R. “Scientists behaving badly.” Nature. 2005;435(7043):737-8. https://www.nature.com/articles/435737a. [9] Munby J, Weetman DF. “Joint Consensus Conference on Misconduct in Biomedical Research: The Royal College of Physicians of Edinburgh.” Indoor Built Environ. 1999;8:336–338. doi: 10.1177/1420326X9900800511. [10] Beale, Stephen. “Large Dutch Survey Shines Light on Fraud and Questionable Research Practices in Medical Studies Published in Scientific Journals.” The Dark Daily, 30 Aug. 2021. https://www.darkdaily.com/2021/08/30/large-dutch-survey-shines-light-on-fraud-and-questionable-research-practices-in-medical-studies-published-in-scientific-journals/ [11] Buranyi S. “Is the staggeringly profitable business of scientific publishing bad for science?” The Guardian. 27 June 2017. https://www.theguardian.com/science/2017/jun/27/profitable-business-scientific-publishing-bad-for-science [12] Cambridge English Dictionary. https://dictionary.cambridge.org/us/dictionary/english/plagiarism [13] International Committee of Medical Journal Editors. “Defining the Role of Authors and Contributors.” http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html [14] Resnik DB, Elliott KC. “Taking Financial Relationships into Account When Assessing Research.” Accountability in Research. 2013;20(3):184-205. doi: 10.1080/08989621.2013.788383. [15] Bülow W, Helgesson G. “Criminalization of scientific misconduct.” Med Health Care and Philos. 2019;22:245–252. [16] Directive 2010/63/EU of the European Parliament and of the Council of 22 September 2010 on the protection of animals used for scientific purposes. Official Journal of the European Union. https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2010:276:0033:0079:en:PDF [17] Mehra MR, Desai SS, Ruschitzka F, Patel AN. “RETRACTED: Hydroxychloroquine or Chloroquine with or without a Macrolide for Treatment of COVID-19: a Multinational Registry Analysis.” The Lancet. 2020. doi: 10.1016/S0140-6736(20)31180-6. [18] Stern AM, Casadevall A, Steen RG, Fang FC. “Financial Costs and Personal Consequences of Research Misconduct Resulting in Retracted Publications.” eLife. 2014;3:e02956. doi: 10.7554/eLife.02956. [19] Brainard, Jeffrey, and Jia You. “What a massive database of retracted papers reveals about science publishing's ‘death penalty’: Better editorial oversight, not more flawed papers, might explain flood of retractions.” Science, 25 Oct. 2018. https://www.science.org/content/article/what-massive-database-retracted-papers-reveals-about-science-publishing-s-death-penalty [20] Brainard and You. [21] Doherty M, van de Putte LBA. “COPE Guidelines on Good Publication Practice.” Annals of the Rheumatic Diseases 2000;59:403-404. [22] Dixon JR Jr. 
The International Conference on Harmonization Good Clinical Practice Guideline. Qual Assur. 1998 Apr-Jun;6(2):65-74. doi: 10.1080/105294199277860. PMID: 10386329. [23] “Research Integrity Terms of Reference.” Science and Technology Committee, 14 Sept. 2017, committees.parliament.uk/committee/135/science-and-technology-committee/news/100920/research-integrity-terms-of-reference/.
APA, Harvard, Vancouver, ISO, and other styles
23

Beyer, Sue. "Metamodern Spell Casting". M/C Journal 26, no. 5 (2 October 2023). http://dx.doi.org/10.5204/mcj.2999.

Full text
Abstract (summary):
There are spells in the world: incantations that can transform reality through the power of procedural utterances. The marriage vow, the courtroom sentence, the shaman’s curse: these words are codes that change reality. (Finn 90) Introduction To a child, stories of magic are “opportunities to escape from reality” (Brugué and Llompart 1), or what Rosengren and Hickling describe as being part of a set of “causal belief systems” (77). To an adult, magic is typically seen as “pure fantasy” (Rosengren and Hickling 75), while Bever argues that magic is something lost to time and materialism, and Yeats believed it a skill that anyone could develop with practice. The word magic originates from magein, a Greek word used to describe “the science and religion of the priests of Zoroaster”, or, according to philologist Skeat, from Greek megas (great), thus signifying “the great science” (Melton 956). Not to be confused with sleight of hand or illusion, magic is traditionally associated with learned people, held in high esteem, who use supernatural or unseen forces to cause change in people and affect events. To use magic, these people perform rituals and ceremonies associated with religion and spirituality and include people who may identify as Priests, Witches, Magicians, Wiccans, and Druids (Otto and Stausberg). Magic as Technology and Technology as Magic Although written accounts of the rituals and ceremonies performed by the Druids are rare, because they followed an oral tradition and didn’t record knowledge in a written form (Aldhouse-Green 19), they are believed to have considered magic a practical technology to be used for such purposes as repelling enemies and divining lost items. They curse and blight humans and districts, raise storms and fogs, cause glamour and delusion, confer invisibility, inflict thirst and confusion on enemy warriors, transform people into animal shape or into stone, subdue and bind them with incantations, and raise magical barriers to halt attackers. (Hutton 33) Similarly, a common theme in The History of Magic by Chris Gosden is that magic is akin to science or mathematics—something to be utilised as a tool when there is a need, as well as being used to perform important rituals and ceremonies. In TechGnosis: Myth, Magic & Mysticism in the Age of Information, Davis discusses ideas on Technomysticism, and Thacker says that “the history of technology—from hieroglyphics to computer code—is itself inseparable from the often ambiguous exchanges with something nonhuman, something otherworldly, something divine. Technology, it seems, is religion by other means, then as now” (159). Written language, communication, speech, and instruction have always been used to transform the ordinary in people’s lives. In TechGnosis, Davis (32) cites Couliano (104): historians have been wrong in concluding that magic disappeared with the advent of 'quantitative science.’ The latter has simply substituted itself for a part of magic while extending its dreams and its goals by means of technology. Electricity, rapid transport, radio and television, the airplane, and the computer have merely carried into effect the promises first formulated by magic, resulting from the supernatural processes of the magician: to produce light, to move instantaneously from one point in space to another, to communicate with faraway regions of space, to fly through the air, and to have an infallible memory at one’s disposal. 
Non-Fungible Tokens (NFTs) In early 2021, at the height of the pandemic meta-crisis, blockchain and NFTs became well known (Umar et al. 1) and Crypto Art became the hot new money-making scheme for a small percentage of ‘artists’ and tech-bros alike. The popularity of Crypto Art continued until initial interest waned and Ether (ETH) started disappearing in the manner of a classic disappearing coin magic trick. In short, ETH is a type of cryptocurrency similar to Bitcoin. NFT is an acronym for Non-Fungible Token. An NFT is “a cryptographic digital asset that can be uniquely identified within its smart contract” (Myers, Proof of Work 316). The word Non-Fungible indicates that this token is unique and therefore cannot be substituted for a similar token. An example of something being fungible is being able to swap coins of the same denomination. The coins are different tokens but can be easily swapped and are worth the same as each other. Hackl, Lueth, and Bartolo define an NFT as “a digital asset that is unique and singular, backed by blockchain technology to ensure authenticity and ownership. An NFT can be bought, sold, traded, or collected” (7). Blockchain For the newcomer, blockchain can seem impenetrable and based on a type of esoterica or secret knowledge known only to an initiate of a certain type of programming (Cassino 22). The origins of blockchain can be found in the research article “How to Time-Stamp a Digital Document”, published by the Journal of Cryptology in 1991 by Haber, a cryptographer, and Stornetta, a physicist. They were attempting to answer “epistemological problems of how we trust what we believe to be true in a digital age” (Franceschet 310). Subsequently, in 2008, Satoshi Nakamoto wrote The White Paper, a document that describes the radical idea of Bitcoin or “Magic Internet Money” (Droitcour). As defined by Myers (Proof of Work 314), a blockchain is “a series of blocks of validated transactions, each linked to its predecessor by its cryptographic hash”. They go on to say that “Bitcoin’s innovation was not to produce a blockchain, which is essentially just a Merkle list, it was to produce a blockchain in a securely decentralised way”. In other words, blockchain is essentially a permanent record and secure database of information. The secure and permanent nature of blockchain is comparable to a chapter of the Akashic records: a metaphysical idea described as an infinite database where information on everything that has ever happened is stored. It is a mental plane where information is recorded and immutable for all time (Nash). The information stored in this infinite database is available to people who are familiar with the correct rituals and spells to access this knowledge. Blockchain Smart Contracts Blockchain smart contracts are written by a developer and stored on the blockchain. They contain the metadata required to set out the terms of the contract. IBM describes a smart contract as “programs stored on a blockchain that run when predetermined conditions are met”. There are several advantages of using a smart contract. Blockchain is a permanent and transparent record, archived using decentralised peer-to-peer Distributed Ledger Technology (DLT). This technology safeguards the security of a decentralised digital database because it eliminates the intermediary and reduces the chance of fraud, gives hackers fewer opportunities to access the information, and increases the stability of the system (Srivastava). 
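To make the definitions above concrete, the following is a minimal sketch, in Python, of a hash-linked chain of the kind Myers describes: each block carries the cryptographic hash of its predecessor, so altering any earlier record invalidates every record that follows it, which is also the core of Haber and Stornetta's time-stamping scheme. The function names and sample payload here are hypothetical illustrations, not the Bitcoin or Ethereum implementation.

```python
import hashlib
import json
import time


def make_block(payload: dict, previous_hash: str) -> dict:
    """Bundle a payload with a timestamp and the hash of the previous block."""
    block = {
        "timestamp": time.time(),
        "payload": payload,
        "previous_hash": previous_hash,
    }
    # The block's hash covers its contents, previous_hash included, which is
    # what links each block irreversibly to its predecessor.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block


def block_hash(block: dict) -> str:
    """Recompute a block's hash from everything except the stored hash."""
    body = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()


# A toy chain: a genesis block followed by one "transaction".
chain = [make_block({"note": "genesis"}, previous_hash="0" * 64)]
chain.append(
    make_block({"from": "0xA", "to": "0xB", "amount": 1},
               previous_hash=chain[-1]["hash"])
)

# Verification re-derives every hash and every link; tampering with an
# earlier block breaks all the hashes that follow it.
for prev, curr in zip(chain, chain[1:]):
    assert curr["hash"] == block_hash(curr)
    assert curr["previous_hash"] == prev["hash"]
print("chain intact:", len(chain), "blocks")
```

As Myers notes, the chain structure itself is simple ("essentially just a Merkle list"); what the sketch necessarily leaves out is Bitcoin's real innovation, the decentralised consensus that secures such a chain without a trusted intermediary.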
Srivastava goes on to say that “it is an emerging and revolutionary technology that is attracting a lot of public attention due to its capability to reduce risks and fraud in a scalable manner”. Despite being a dry subject, blockchain is frequently associated with magic. One example is Faustino, Faria, and Marques describing a “quasi-religious romanticism of the crypto-community towards blockchain technologies” (67), with Satoshi represented as King Arthur. The set of instructions that make up blockchain smart contracts and NFTs tell the program, database, or computer what needs to happen. These instructions are similar to a recipe or spell. This “sourcery” is what Chun (19) describes when talking about the technological magic that mere mortals are unable to comprehend. “We believe in the power of code as a set of magical symbols linking the invisible and visible, echoing our long cultural tradition of logos, or language as an underlying system of order and reason, and its power as a kind of sourcery” (Finn 714). NFTs as a Conceptual Medium In a “massively distributed electronic ritual” (Myers, Proof of Work 100), NFTs became better known with the sale of Beeple’s Everydays: The First 5000 Days by Christie’s for US$69,346,250. Because of the “thousandfold return” (Wang et al. 1) on the rapidly expanding market in October 2021, most people at that time viewed NFTs and cryptocurrencies as the latest cash cow; some artists saw them as a method to become financially independent, cut out the gallery intermediary, and be compensated on resales (Belk 5). In addition to the financial considerations, a small number of artists saw the conceptual potential of NFTs. Rhea Myers, a conceptual artist, has been using the blockchain as a conceptual medium for over 10 years. Myers describes themselves as “an artist, hacker and writer” (Myers, Bio). A recent work by Myers, titled Is Art (Token), created in 2023 as an Ethereum ERC-721 token (NFT), consists of a digital image with text that says “this token is art”. The word ‘is’ is emphasised in a maroon colour that differentiates it from the rest of the text in dark grey. The following is the didactic for the artwork. Own the creative power of a crypto artist. Is Art (Token) takes the artist’s power of nomination, of naming something as art, and delegates it to the artwork’s owner. Their assertion of its art or non-art status is secured and guaranteed by the power of the blockchain. Based on a common and understandable misunderstanding of how Is Art (2014) works, this is the first in a series of editions that inscribe ongoing and contemporary concerns onto this exemplar of a past or perhaps not yet realized blockchain artworld. (Myers, is art editions). This is a simple example of their work. A lot of Myers’s work appears to be uncomplicated but hides subtle levels of sophistication that use all the tools available to conceptual artists by questioning the notion of what art is—a hallmark of conceptual art (Goldie and Schellekens 22). Sol LeWitt, in Paragraphs on Conceptual Art, was the first to use the term, and described it by saying “the idea itself, even if not made visual, is as much a work of art as any finished product”. According to Bailey, the most influential American conceptual artists of the 1960s were Lucy Lippard, Sol LeWitt, and Joseph Kosuth, “despite deriving from radically diverse insights about the reason for calling it ‘Conceptual Art’” (8). 
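Because the metadata in the smart contract is doing the conceptual work here, a sketch of that layer may be useful. The following Python fragment is a loose illustration in the spirit of Is Art (Token), not Myers's actual contract and not the Solidity code of a real ERC-721 token: it shows the two guarantees such a contract provides, that each token id exists exactly once (non-fungibility) and that only the current owner can transfer it, alongside the kind of off-chain metadata record the token points to. All names, addresses, and fields are hypothetical.

```python
# token_id -> owner address; a real contract keeps this mapping on-chain.
owners: dict[int, str] = {}


def mint(token_id: int, to: str) -> None:
    """Create a token; each id can exist exactly once (non-fungibility)."""
    if token_id in owners:
        raise ValueError(f"token {token_id} already exists")
    owners[token_id] = to


def transfer(token_id: int, sender: str, to: str) -> None:
    """Move a token; the 'predetermined condition' is simply ownership."""
    if owners.get(token_id) != sender:
        raise PermissionError("sender does not own this token")
    owners[token_id] = to


# Off-chain metadata in the spirit of the work discussed above; the owner
# inherits the artist's power of nomination.
metadata = {
    "name": "Is Art (Token) -- illustrative only",
    "description": "this token is art",
    "image": "ipfs://example",  # hypothetical URI
}

mint(1, "0xArtist")
transfer(1, "0xArtist", "0xCollector")
print(owners[1], "asserts:", metadata["description"])
```

Seen this way, the “procedural utterance” of the epigraph is quite literal: the owner’s assertion that the token is art amounts to a guarded write to a mapping, secured by the chain that stores it.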
Instruction-Based Art Artist Claudia Hart employs the instructions used to create an NFT as a medium and artwork in Digital Combines, a new genre the artist has proposed that joins physical, digital, and virtual media together. The NFT, in a digital combine, functions as a type of glue that holds different elements of the work together. New media rely on digital technology to communicate with the viewer. Digital combines take this one step further—the media are held together by an invisible instruction linked to the object or installation with a QR code that magically takes the viewer to the NFT via a “portal to the cloud” (Hart, Digital Combine Paintings). QR codes are something we all became familiar with during the on-and-off lockdown phase of the pandemic (Morrison et al. 1). Denso Wave Inc., the inventor of the Quick Response Code or QR Code, describes it as a scannable graphic that is “capable of handling several dozen to several hundred times more information than a conventional bar code that can only store up to 20 digits”. QR Codes were made available to the public in 1994, are easily detected by readers at nearly any size, and can be reconfigured to fit a variety of different shapes. A “QR Code is capable of handling all types of data, such as numeric and alphabetic characters, Kanji, Kana, Hiragana, symbols, binary, and control codes. Up to 7,089 characters can be encoded in one symbol” (Denso Wave). Similar to ideas used by the American conceptual artists of the 1960s, QR codes and NFTs are used in digital combines as conceptual tools. Analogous to Sol LeWitt’s wall drawings, the instruction is the medium and part of the artwork. An example of a Wall Drawing made by Sol LeWitt is as follows: Wall Drawing 11: A wall divided horizontally and vertically into four equal parts. Within each part, three of the four kinds of lines are superimposed. (Sol LeWitt, May 1969; MASS MoCA, 2023) The act or intention of using an NFT as a medium in art-making transforms it from being solely a financial contract, which NFTs are widely known for, to an artistic medium or a standalone artwork. The interdisciplinary artist Sue Beyer uses machine learning and NFTs as conceptual media in her digital combines. Beyer’s use of machine learning corresponds to the automatic writing that André Breton and Philippe Soupault of the Surrealists were exploring from 1918 to 1924 when they wrote Les Champs Magnétiques (Magnetic Fields) (Bohn 7). Automatic writing was popular amongst the spiritualist movement that evolved from the 1840s to the early 1900s in Europe and the United States (Gosden 399). Michael Riffaterre (221; in Bohn 8) talks about how automatic writing differs from ordinary texts. Automatic writing takes a “total departure from logic, temporality, and referentiality”, in addition to violating “the rules of verisimilitude and the representation of the real”. Bohn adds that although “normal syntax is respected, they make only limited sense”. An artificial intelligence (AI) hallucination, or what Chintapali (1) describes as “distorted reality”, can be seen in the following paragraph that Deep Story provided after entering the prompt ‘Sue Beyer’ in March 2022. None of these sentences have any basis in truth about the person Sue Beyer from Melbourne, Australia. Suddenly runs to Jen from the bedroom window, her face smoking, her glasses shattering. Michaels (30) stands on the bed, pale and irritated. Dear Mister Shut Up! Sue’s loft – later – Sue is on the phone, looking upset. 
There is a new bruise on her face. There is a distinction between AI and machine learning. According to ChatGPT 3.5, “Machine Learning is a subset of AI that focuses on enabling computers to learn and make predictions or decisions without being explicitly programmed. It involves the development of algorithms and statistical models that allow machines to automatically learn from data, identify patterns, and make informed decisions or predictions”. With the story generator Deep Story, Beyer uses the element of chance inherent in machine learning to create a biography of herself written by the alien other of AI. The paragraphs that Deep Story produces are nonsensical statements and made-up fantasies of what Beyer suspects AI wants the artist to hear. Like a psychic medium or oracle providing wisdom and advice to a petitioner, the words tumble out of the story generator like a chaotic prediction meant to be deciphered at a later time. This element of chance might be a short-lived occurrence, as machine learning is evolving and getting smarter exponentially; its potential is becoming evident from empirical observation alone. Something that originated in early modernist science fiction is quickly becoming a reality in our time. A Metamodern Spell Casting Metamodernism is an evolving term that emerged from a series of global catastrophes that occurred from the mid-1990s onwards. The term tolerates the concurrent use of ideas that arise in modernism and postmodernism without discord. It uses oppositional aspects or concepts in art-making and other cultural production that form what Dember calls a “complicated feeling” (Dember). These ideas in oscillation allow metamodernism to move beyond these fixed terms and encompass a wide range of cultural tendencies that reflect what is known collectively as a structure of feeling (van den Akker et al.). The oppositional media used in a digital combine oscillate with each other and also form meaning between each other, relating to material and immaterial concepts. These amalgamations place “technology and culture in mutual interrogation to produce new ways of seeing the world as it unfolds around us” (Myers Studio Ltd.). The use of the oppositional aspects of technology and culture indicates that Myers’s work can also be firmly placed within the domain of metamodernism. Advancements in AI in the years since the pandemic began are overwhelming. In episode 23 of the MIT podcast Business Lab, Justice stated that “Covid-19 has accelerated the pace of digital in many ways, across many types of technologies.” They go on to say that “this is where we are starting to experience such a rapid pace of exponential change that it’s very difficult for most people to understand the progress” (MIT Technology Review Insights). Similarly, in 2021 NFTs burst forth in popularity in reaction to various conditions arising from the pandemic meta-crisis. A similar effect was seen around cryptocurrencies after the Global Financial Crisis (GFC) in 2007-2008 (Aliber and Zoega). “The popularity of cryptocurrencies represents in no small part a reaction to the financial crisis and austerity. That reaction takes the form of a retreat from conventional economic and political action and represents at least an economic occult” (Myers, Proof of Work 100). When a traumatic event occurs, like a pandemic, people turn to God, spirituality (Tumminio Hansen), or possibly the occult to look for answers. 
NFTs took on the role of precursor, promising access to untold riches, esoteric knowledge, and the comforting feeling of being part of the NFT cult. Similar to the effect of what Sutcliffe (15) calls spiritual “occultures” like “long-standing occult societies or New Age healers”, people can be lured by “the promise of secret knowledge”, which “can assist the deceptions of false gurus and create opportunities for cultic exploitation”. Conclusion NFTs are a metamodern spell casting, their popularity borne by the meta-crisis of the pandemic; they are made using magical instruction that oscillates between finance and conceptual abstraction, materialism and socialist idealism, financial ledger, and artistic medium. The metadata in the smart contract of the NFT provide instruction that combines the tangible and intangible. This oscillation, present in metamodern artmaking, creates and maintains a liminal space between these ideas, objects, and media. The in-between space allows for the perpetual transmutation of one thing to another. These ideas are a work in progress and additional exploration is necessary. An NFT is a new medium available to artists that does not physically exist but can be used to create meaning or to glue or hold objects together in a digital combine. Further investigation into the ontological aspects of this medium is required. The smart contract can be viewed as a recipe for the spell or incantation that, like instruction-based art, transforms an object from one thing to another. The blockchain that the NFT is housed in is a liminal space. The contract is stored on the threshold waiting for someone to view or purchase the NFT and turn the objects displayed in the gallery space into a digital combine. Alternatively, the intention of the artist is enough to complete this alchemical process. References Aldhouse-Green, Miranda. Caesar’s Druids: Story of an Ancient Priesthood. New Haven: Yale UP, 2010. Aliber, Robert Z., and Gylfi Zoega. “A Retrospective on the 2008 Global Financial Crisis.” The 2008 Global Financial Crisis in Retrospect: Causes of the Crisis and National Regulatory Responses. Eds. Robert Z. Aliber and Gylfi Zoega. Cham: Springer International Publishing, 2019. 1–15. 9 June 2023 <https://doi.org/10.1007/978-3-030-12395-6_1>. Bailey, Robert. “Introduction: A Theory of Conceptualism.” Durham: Duke UP, 2017. 1–36. 28 July 2023 <https://read.dukeupress.edu/books/book/1938/chapter/234969/IntroductionA-Theory-of-Conceptualism>. Belk, Russell. “The Digital Frontier as a Liminal Space.” Journal of Consumer Psychology (2023): 1–7. Bever, Edward. “Witchcraft Prosecutions and the Decline of Magic.” Journal of Interdisciplinary History 40.2 (2009): 263–293. 27 July 2023 <https://muse.jhu.edu/pub/6/article/315257>. Beyer, S. “Digital Combines.” Sue Beyer | Visual artist, 2023. 22 July 2023 <https://www.suebeyer.com.au/combines.html>. ———. “Digital Combines: A Metamodern Oscillation of Oppositional Objects and Concepts in Contemporary Interdisciplinary Art Practice.” International Journal of Contemporary Humanities 6 (2022). Bohn, Willard. One Hundred Years of Surrealist Poetry: Theory and Practice. New York: Bloomsbury Academic, 2022. Brugué, Lydia, and Auba Llompart. Contemporary Fairy-Tale Magic: Subverting Gender and Genre. Boston: Brill, 2020. Cassino, Dan. “Crypto, Meme Stocks, and Threatened Masculinity.” Contexts 22.2 (2023): 18–23. ChatGPT 3.5. “ML vs AI.” Chat.openai.com, 18 June 2023. <https://chat.openai.com/share/4e425ff8-8610-4960-99d1-16e0451d517a>. 
Chintapali, Rohit. “Simplest Guardrail for AI Hallucinations? Be Skeptical, Double Check Outcomes & Don’t Anthropomorphise AI.” Business World 24 Mar. 2023. 28 June 2023 <https://www.proquest.com/docview/2790033991/abstract/9FD03495815D4956PQ/1>. Christie’s. “Beeple (b. 1981), EVERYDAYS: THE FIRST 5000 DAYS.” Christie’s, 2023. 7 June 2023 <https://onlineonly.christies.com/s/beeple-first-5000-days/beeple-b-1981-1/112924>. Chun, Wendy Hui Kyong. Programmed Visions: Software and Memory. Cambridge: MIT P, 2011. Davis, E. “TechGnosis: Magic, Memory and the Angels of Information.” Magic. Ed. J. Sutcliffe. Cambridge, Mass.: MIT P, 2021. 114–121. ———. TechGnosis: Myth, Magic & Mysticism in the Age of Information. 2nd ed. Berkeley: North Atlantic Books, 2015. DeepStory.ai. “DeepStory.” DeepStory, 2023. 18 June 2023 <https://www.deepstory.ai/#!/>. Dember, G. “What Is Metamodernism and Why Does It Matter?” The Side View, 2020. 22 July 2023 <https://thesideview.co/journal/what-is-metamodernism-and-why-does-it-matter/>. Denso Wave Inc. “History of QR Code.” QRcode.com, n.d. 27 June 2023 <https://www.qrcode.com/en/history/>. Droitcour, Brian. “The Outland Review, Vol. 01.” Outland, 4 May 2023. 20 July 2023 <https://outland.art/the-outland-review-vol-01-julian-opie-taproot-wizards/>. Faustino, Sandra, Inês Faria, and Rafael Marques. “The Myths and Legends of King Satoshi and the Knights of Blockchain.” Journal of Cultural Economy 15.1 (2022): 67–80. 23 July 2023 <https://www.tandfonline.com/doi/full/10.1080/17530350.2021.1921830>. Finn, Ed. What Algorithms Want: Imagination in the Age of Computing. EPub Version 1.0. Cambridge, Mass.: MIT P, 2017. Franceschet, Massimo. “The Sentiment of Crypto Art.” CEUR Workshop Proceedings. Aachen: RWTH Aachen, n.d. 310–318. 25 Nov. 2022 <https://ceur-ws.org/Vol-2989/long_paper10.pdf>. Goldie, Peter, and Elisabeth Schellekens. Who’s Afraid of Conceptual Art? Florence: Taylor and Francis, 2009. Gosden, Chris. The History of Magic: From Alchemy to Witchcraft, from the Ice Age to the Present. London: Penguin, 2020. Haber, S., and W.S. Stornetta. “How to Time-Stamp a Digital Document.” Journal of Cryptology 3.2 (1991): 99–111. Hackl, Cathy, Dirk Lueth, and Tommaso di Bartolo. Navigating the Metaverse: A Guide to Limitless Possibilities in a Web 3.0 World. Ed. John Arkontaky. Hoboken, NJ: John Wiley & Sons, 2022. Hart, Claudia. “Digital Combine Paintings – Claudia Hart.” Claudia Hart, 2021. 15 Nov. 2022 <https://claudiahart.com/Digital-Combine-Paintings>. ———. “The Ruins Timeline – Claudia Hart.” Claudia Hart, 2020. 3 June 2023 <https://claudiahart.com/The-Ruins-timeline>. IBM. “What Are Smart Contracts on Blockchain?” IBM, n.d. 5 June 2023 <https://www.ibm.com/topics/smart-contracts>. LeWitt, Sol. “Paragraphs on Conceptual Art.” Artforum (1967): n.p. 28 July 2023 <http://arteducation.sfu-kras.ru/files/documents/lewitt-paragraphs-on-conceptual-art1.pdf>. ———. “Wall Drawing 11.” Massachusetts Museum of Contemporary Art. MASS MoCA, 2023. 16 June 2023 <https://massmoca.org/event/walldrawing11/>. Melton, J. Gordon. “Magic.” Encyclopedia of Occultism and Parapsychology. Ed. J. Gordon Melton. 5th ed. Detroit, MI: Gale, 2001. 956–960. 21 July 2023 <https://link.gale.com/apps/doc/CX3403802897/GVRL?sid=bookmark-GVRL&xid=00061628>. MIT Technology Review Insights. “Embracing the Rapid Pace of AI.” Business Lab, 20 May 2021. 28 July 2023 <https://www.technologyreview.com/2021/05/19/1025016/embracing-the-rapid-pace-of-ai/>. Morrison, Benjamin A., et al. 
“Life after Lockdown: The Experiences of Older Adults in a Contactless Digital World.” Frontiers in Psychology 13 (2023): 1–14. 28 July 2023 <https://www.frontiersin.org/articles/10.3389/fpsyg.2022.1100521>. Myers, Rhea. “Bio.” rhea.art, n.d. 1 July 2023 <https://rhea.art/bio>. ———. “is art editions.” rhea.art, 2023. 22 July 2023 <https://rhea.art/is-art-editions>. ——— [@rheaplex]. “My Little Penny: Bitcoin is Magic.” Tweet. Twitter, 2014. 8 June 2023 <https://twitter.com/rheaplex/status/439534733534298112>. ———. Proof of Work: Blockchain Provocations 2011-2021. UK: Urbanomic Media, 2023. Myers Studio Ltd. “The Home Base of Rhea and Seryna Myers.” Myers Studio, 2021. 17 Nov. 2022 <http://myers.studio/>. Nash, Alex. “The Akashic Records: Origins and Relation to Western Concepts.” Central European Journal for Contemporary Religion 3 (2020): 109–124. Otto, Bernd-Christian, and Michael Stausberg. Defining Magic: A Reader. London: Taylor & Francis Group, 2014. Riffaterre, Michael. Text Production. Trans. Terese Lyons. New York: Columbia UP, 1983. Rosengren, Karl S., and Anne K. Hickling. “Metamorphosis and Magic: The Development of Children’s Thinking about Possible Events and Plausible Mechanisms.” Imagining the Impossible. Eds. Karl S. Rosengren, Carl N. Johnson, and Paul L. Harris. Cambridge UP, 2000. 75–98. Srivastava, N. “What Is Blockchain Technology, and How Does It Work?” Blockchain Council, 23 Oct. 2020. 17 Nov. 2022 <https://www.blockchain-council.org/blockchain/what-is-blockchain-technology-and-how-does-it-work/>. Sutcliffe, J., ed. Magic. Cambridge, Mass.: MIT P, 2021. Thacker, Eugene. “Foreword (2015): ‘We Cartographers of Old…’” TechGnosis: Myth, Magic & Mysticism in the Age of Information. Kindle Edition. Berkeley: North Atlantic Books, 2015. Location 111-169. Tumminio Hansen, D. “Do People Become More Religious in Times of Crisis?” The Conversation, 2021. 9 June 2023 <http://theconversation.com/do-people-become-more-religious-in-times-of-crisis-158849>. Umar, Zaghum, et al. “Covid-19 Impact on NFTs and Major Asset Classes Interrelations: Insights from the Wavelet Coherence Analysis.” Finance Research Letters 47 (2022). 27 July 2023 <https://linkinghub.elsevier.com/retrieve/pii/S1544612322000496>. Van den Akker, R., A. Gibbons, and T. Vermeulen. Metamodernism: Historicity, Affect, and Depth after Postmodernism. London: Rowman & Littlefield, 2019. Wang, Qin, et al. “Non-Fungible Token (NFT): Overview, Evaluation, Opportunities and Challenges.” arXiv, 24 Oct. 2021. 28 July 2023 <http://arxiv.org/abs/2105.07447>. Yeats, W.B. “Magic.” Essays and Introductions. Ed. W.B. Yeats. London: Palgrave Macmillan, 1961. 28–52. 27 July 2023 <https://doi.org/10.1007/978-1-349-00618-2_3>.
Gli stili APA, Harvard, Vancouver, ISO e altri
24

Brien, Donna Lee, Leonie Rutherford, and Rosemary Williamson. "Hearth and Hotmail". M/C Journal 10, no. 4 (1 August 2007). http://dx.doi.org/10.5204/mcj.2696.

Full text
Abstract (summary):
Introduction It has frequently been noted that ICTs and social networking applications have blurred the once-clear boundary between work, leisure and entertainment, just as they have collapsed the distinction between public and private space. While each individual has a sense of what “home” means, both in terms of personal experience and more conceptually, the following three examples of online interaction (based on participants’ interest, or involvement, in activities traditionally associated with the home: pet care, craft and cooking) suggest that the utilisation of online communication technologies can lead to refined and extended definitions of what “home” is. These examples show how online communication can assist in meeting the basic human needs for love, companionship, shelter and food – needs traditionally supplied by the home environment. They also provide individuals with a considerably wider range of opportunities for personal expression and emotional connection, as well as creative and commercial production, than that provided by the purely physical (and, no doubt, sometimes isolated and isolating) domestic environment. In this way, these case studies demonstrate the interplay and melding of physical and virtual “home” as domestic practices leach from the most private spaces of the physical home into the public space of the Internet (for discussion, see Gorman-Murray, Moss, and Rose). At the same time, online interaction can assert an influence on activity within the physical space of the home, through the sharing of advice about, and modeling of, domestic practices and processes. A Dog’s (Virtual) Life The first case study primarily explores the role of online communities in the formation and expression of affective values and personal identity – as traditionally happens in the domestic environment. Garber described the 1990s as “the decade of the dog” (20), citing a spate of “new anthropomorphic” (22) dog books, Internet “dog chat” sites, remakes of popular classics such as Lassie Come Home, dog friendly urban amenities, and the meteoric rise of services for pampered pets (28-9). Loving pets has become a lifestyle and culture, witnessed and commodified in Pet Superstores as well as in dog collectables and antiques boutiques, and in publications like The Bark (“the New Yorker of Dog Magazines”) and Clean Run, the international agility magazine, Website, online book store and information gateway for agility products and services. Available online resources for dog lovers have similarly increased rapidly during the decade since Garber’s book was published, with the virtual world now catering for serious hobby trainers, exhibitors and professionals as well as the home-based pet lover. At a recent count, Yahoo Groups – a personal communication portal that facilitates social networking, in this case enabling users to set up electronic mailing lists and Internet forums – boasted just over 9,600 groups servicing dog fanciers and enthusiasts. The list Dogtalk is now an announcement-only mailing list, but was a vigorous discussion forum until mid-2006. Members of Dogtalk were Australian-based “clicker-trainers”, serious hobbyist dog trainers, many of whom operated micro-businesses providing dog training or other pet-related services. They shared an online community, but could also engage in “flesh-meets” at seminars, conferences and competitive dog sport meets. An author of this paper (Rutherford) joined this group two years ago because of her interest in clicker training. 
Clicker training is based on an application of animal learning theory, particularly psychologist B. F. Skinner’s operant conditioning, so called because of the trademark use of a distinctive “click” sound to mark a desired behaviour that is then rewarded. Clicker trainers tend to dismiss anthropomorphic pack theory that positions the human animal as fundamentally opposed to non-human animals and, thus, foster a partnership (rather than a dominator) mode of social and learning relationships. Partnership and nurturance are common themes within the clicker community (as well as in more traditional “home” locations); as is recognising and valuing the specific otherness of other species. Typically, members regard their pets as affective equals or near-equals to the human animals that are recognised members of their kinship networks. A significant function of the episodic biographical narratives and responses posted to this list was thus to affirm and legitimate this inter-specific kinship as part of normative social relationship – a perspective that is not usually validated in the general population. One of the more interesting nexuses that evolved within Dogtalk links the narrativisation of the pet in the domestic sphere with the pictorial genre of the family album. Emergent technologies, such as digital cameras together with Web-based image manipulation software and hosting (as provided by portals like Photobucket and Flickr) democratise high-quality image creation and facilitate the sharing of these images. Increasingly, the Dogtalk list linked to images uploaded to free online galleries, discussed digital image composition and aesthetics, and shared technical information about cameras and online image distribution. Much of this cultural production and circulation was concerned with digitally inscribing particular relationships with individual animals into cultural memory: a form of family group biography (for a discussion of the family photograph as a display of extended domestic space, see Rose). The other major non-training thread of the community involves the sharing and witnessing of the trauma suffered due to the illness and loss of pets. While mourning for human family members is supported in the off-line world – with social infrastructure, such as compassionate leave and/or bereavement counselling, part of professional entitlements – public mourning for pets is not similarly supported. Yet, both cultural studies (in its emphasis on cultural memory) and trauma theory have highlighted the importance of social witnessing, whereby traumatic memories must be narratively integrated into memory and legitimised by the presence of a witness in order to loosen their debilitating hold (Felman and Laub 57). Postings on the progress of a beloved animal’s illness or other misfortune and death were thus witnessed and affirmed by other Dogtalk list members – the sick or deceased pet becoming, in the process, a feature of community memory, not simply an individual loss. In terms of such biographical narratives, memory and history are not identical: “Any memories capable of being formed, retained or articulated by an individual are always a function of socially constituted forms, narratives and relations … Memory is always subject to active social manipulation and revision” (Halbwachs qtd. in Crewe 75). In this way, emergent technologies and social software provide sites, akin to that of physical homes, for family members to process individual memories into cultural memory. 
Dogzonline, the Australian Gateway site for purebred dog enthusiasts, has a forum entitled “Rainbow Bridge” devoted to textual and pictorial memorialisation of deceased pet dogs. Dogster hosts the For the Love of Dogs Weblog, in which images and tributes can be posted, and also provides links to other dog oriented Weblogs and Websites. An interesting combination of both therapeutic narrative and the commodification of affect is found in Lightning Strike Pet Loss Support which, while a memorial and support site, also provides links to the emerging profession of pet bereavement counselling and to suppliers of monuments and tributary urns for home or other use. loobylu and Narratives of Everyday Life The second case study focuses on online interactions between craft enthusiasts who are committed to the production of distinctive objects to decorate and provide comfort in the home, often using traditional methods. In the case of some popular craft Weblogs, online conversations about craft are interspersed with, or become secondary to, the narration of details of family life, the exploration of important life events or the recording of personal histories. As in the previous examples, the offering of advice and encouragement, and expressions of empathy and support, often characterise these interactions. The loobylu Weblog was launched in 2001 by illustrator and domestic crafts enthusiast Claire Robertson. Robertson is a toy maker and illustrator based in Melbourne, Australia, whose clients have included prominent publishing houses, magazines and the New York Public Library (Robertson “Recent Client List” online). She has achieved a measure of public recognition: her loobylu Weblog has won awards and been favourably commented upon in the Australian press (see Robertson “Press for loobylu” online). In 2005, an article in The Age placed Robertson in the context of a contemporary “craft revolution”, reporting her view that this “revolution” is in “reaction to mass consumerism” (Atkinson online). The hand-made craft objects featured in Robertson’s Weblogs certainly do suggest engagement with labour-intensive pursuits and the construction of unique objects that reject processes of mass production and consumption. In this context, loobylu is a vehicle for the display and promotion of Robertson’s work as an illustrator and as a craft practitioner. While skills-based, it also, however, promotes a family-centred lifestyle; it advocates the construction by hand of objects designed to enhance the appearance of the family home and the comfort of its inhabitants. Its specific subject matter extends to related aspects of home and family as, in addition to instructions, ideas and patterns for craft, the Weblog features information on commercially available products for home and family, recipes, child rearing advice and links to 27 other craft and other sites (including Nigella Lawson’s, discussed below). The primary member of its target community is clearly the traditional homemaker – the mother – as well as those who may aspire to this role. Robertson does not have the “celebrity” status of Lawson and Jamie Oliver (discussed below), nor has she achieved their market saturation. Indeed, Robertson’s online presence suggests a modest level of engagement that is placed firmly behind other commitments: in February 2007, she announced an indefinite suspension of her blog postings so that she could spend more time with her family (Robertson loobylu 17 February 2007). 
Yet, like Lawson and Oliver, Robertson has exploited forms of domestic competence traditionally associated with women and the home, and the non-traditional medium of the Internet has been central to her endeavours. The content of the loobylu blog is, unsurprisingly, embedded in, or an accessory to, a unifying running commentary on Robertson’s domestic life as a parent. Miles, who has described Weblogs as “distributed documentaries of the everyday” (66), sums this up neatly: “the weblogs’ governing discursive quality is the manner in which it is embodied within the life world of its author” (67).

Landmark family events are narrated on loobylu and some attract deluges of responses: the 19 June 2006 posting announcing the birth of Robertson’s daughter Lily, for example, drew 478 responses; five days later, one describing the difficult circumstances of her birth drew 232 comments. All of these comments are pithy, with many being simple empathetic expressions or brief autobiographically based commentaries on these events. Robertson’s news of her temporary retirement from her blog elicited 176 comments that both supported her decision and expressed a sense of loss. Frequent exclamation marks attest visually to the emotional intensity of the responses. By narrating aspects of major life events to which the target audience can relate, the postings represent a form of affective mass production and consumption: they are triggers for a collective outpouring of largely homogeneous emotional reaction (joy, in the case of Lily’s birth). As collections of texts, they can be read as auto/biographic records, arranged thematically, that operate at both the individual and the community levels. Readers of the family narratives and the affirming responses to them engage in a form of mass affirmation and consumption of domestic experience that is easy, immediate, attractive and free of charge.

These personal discourses blend fluidly with those of a commercial nature. Some three weeks after announcing the birth of her daughter on loobylu, Robertson shared news of her mastitis, Lily’s first smile and the family’s favourite television programs at the time, information that many of us would consider to be quite private details of family life. Three days later, she posted a photograph of a sleeping baby with a caption that skilfully (and negatively) links it to her daughter: “Firstly – I should mention that this is not a photo of Lily”. The accompanying text points out that it is a photo of a baby with the “Zaky Infant Sleeping Pillow” and provides a link to the online store pregnancystore.com, from which it can be purchased. A quotation from the manufacturer describing the merits of the pillow follows. Robertson then makes a light-hearted comment on her experiences of baby-induced sleep deprivation, and the possible consequences of possessing the pillow. Comments from readers similarly alternate between the personal (sharing of experiences) and the commercial (comments on the product itself).

One offshoot of loobylu suggests that the original community grew to an extent that it could support specialised groups within its boundaries. A Month of Softies began in November 2004, describing itself as “a group craft project which takes place every month” and an activity that “might give you a sense of community and kinship with other similar minded crafty types across the Internet and around the world” (Robertson A Month of Softies online).
Robertson gave each month a particular theme, and readers were invited to upload a photograph of a craft object they had made that fitted the theme, together with a caption. These were then included in the site’s gallery, in the order in which they were received. The majority of captions also included a link to the site (often a business) of the object’s creator; another linking of the personal and the commercial in the home-based “cottage industry” sense. From July 2005, A Month of Softies operated through a Flickr site. Participants continued to submit photos of their craft objects (with captions), but also had access to a group photograph pool and a public discussion board. This extension simulates (albeit in an entirely visual way) the often home-based physical meetings of craft enthusiasts that in contemporary Australia take the form of knitting, quilting, weaving or other groups.

Chatting with, and about, Celebrity Chefs

The previous studies have shown how the Internet has broken down many barriers between what could be understood as the separate spheres of emotional (that is, home-based private) and commercial (public) life. The online environment similarly enables the formation and development of fan communities by facilitating communication between those fans and, sometimes, between fans and the objects of their admiration. The term “fan” is used here in the broadest sense, referring to “a person with enduring involvement with some subject or object, often a celebrity, a sport, TV show, etc.” (Thorne and Bruner 52), rather than focusing on the more obsessive and, indeed, more “fanatical” aspects of such involvement, behaviour which is increasingly understood as a subculture of more variously constituted fandoms (Jenson 9-29). Our specific interest in fandom in relation to this discussion is that, while marketers and consumer behaviourists study online fan communities for clues on how to market consumer goods and services to these groups more successfully (see, for example, Kozinets, “I Want to Believe” 470-5; “Utopian Enterprise” 67-88; Algesheimer et al. 19-34), fans regularly subvert these commercial efforts, utilising even the most profit-driven Websites for non-commercial, home-based and personal activities.

While it is obvious that celebrities use the media to promote themselves, a number of contemporary celebrity chefs employ the media to construct and market widely recognisable personas based on their own, often domestically based, life stories. Jamie Oliver’s and Nigella Lawson’s printed books and mass periodical articles, television series and other performances across a range of media, for example, continuously draw on, elaborate upon, and ultimately construct their own lives as the major theme of these works. In this, these celebrity chefs, like many others, draw upon this revelation of their private lives to lend authenticity to their cooking, to the point where their work (whether cookbook, television show, advertisement or live chat room session with their fans) could be described as “memoir-illustrated-with-recipes” (Brien and Williamson). This generic tendency extends to these celebrities’ communities: a number of Websites devoted to marketing celebrity chefs as product brands also enable their fans to share their own life stories with large readerships. Oliver’s and Lawson’s official Websites confirm the privileging of autobiographical and biographical information, but vary in tone and approach.
Each is, for instance, deliberately gendered (see Hollows’ articles for a rich exploration of gender in relation to Oliver and Lawson). Oliver’s hip, boyish, friendly, almost frantic site includes what are purported to be self-revelatory “Diary” and “About me” sections, a selection of captioned photographs of the chef, his family, friends, co-workers and sponsors, and his Weblog, as well as footage streamed “live from Jamie’s phone”. This self-revelation – which includes significant details about Oliver’s childhood and his domestic life with his “lovely girls, Jools [wife Juliette Norton], Poppy and Daisy” – completely blurs the line between private life and the “Jamie Oliver” brand. While such revelation has been normalised in contemporary culture, this practice stands in great contrast to that of renowned chefs and food writers such as Elizabeth David, Julia Child, James Beard and Margaret Fulton, whose work across various media has largely concentrated on food, cooking and writing about cooking. The difference is that Oliver’s (supposedly private) life is the brand, used to sell “Jamie Oliver restaurant owner and chef”, “Jamie Oliver cookbook author and TV star”, “Jamie Oliver advertising spokesperson for Sainsbury’s supermarket” (from which he earns an estimated £1.2 million annually) (Meller online) and “Jamie Oliver social activist” (made an MBE in 2003 following his first Fifteen restaurant initiative, Oliver was named “Most inspiring political figure” in the 2006 Channel 4 Political Awards for his intervention into the provision of nutritious British school lunches) (see biographies by Hildred and Ewbank, and Smith).

Lawson’s site has a more refined, feminine appearance and layout and is more mature in presentation and tone, featuring updates on her (private and public) “News” and forthcoming public appearances, a glamorous selection of photographs of herself from the past 20 years, and a series of print and audio interviews. Although Lawson’s children have featured in some of her television programs, and although her personal misfortunes are well known and regularly commented upon by both herself and journalists (her mother, sister and first husband died of cancer), discussions of these tragedies, and of other widely known aspects of her private life such as her second marriage to advertising mogul Charles Saatchi, are not as overt as on Oliver’s site, and the user must delve to find them. The use of Lawson’s personal memoir as a sales tool is thus both present and controlled. This is in keeping with Lawson’s professional experience prior to becoming the “domestic goddess” (Lawson 2000): she is an Oxford graduate who worked as a journalist on the Spectator and as deputy literary editor of the Sunday Times.

Both Lawson’s and Oliver’s Websites offer readers various ways to interact with them “personally”. Visitors to Oliver’s site can ask him questions and can access a frequently-asked-questions area, while Lawson holds a question-and-answer forum (once monthly, now irregularly). In contrast to this information about, and access to, Oliver’s and Lawson’s lives, neither of their Websites includes many recipes or other food- and cooking-focussed information – although there is detailed information profiling their significant number of bestselling cookbooks (Oliver has published 8 cookbooks since 1998, Lawson 5 since 1999), DVDs and videos of their television series and one-off programs, and their name-branded product lines of domestic kitchenware (Oliver and Lawson) and foodstuffs (Oliver).
Instruction on how to purchase these items is also featured. Both these sites, like Robertson’s, provide various online discussion fora, allowing members to comment upon these chefs’ lives and work, and also to connect with each other through posted texts and images. Oliver’s discussion forum section notes that “this is the place for you all to chat to each other, exchange recipe ideas and maybe even help each other out with any problems you might have in the kitchen area”. Lawson’s front page listing states: “You will also find a moderated discussion forum, called Your Page, where our registered members can swap ideas and interact with each other”.

The community participants around these celebrity chefs can be, as is the case with loobylu, divided into two groups. The first is “foodie (in Robertson’s case, craft) fans”, who appear to engage with these Websites largely to gain, and to share, food-, cooking- and craft-related information. Such fans on Oliver’s and Lawson’s discussion lists most frequently discuss these chefs’ television programs and books and the recipes presented therein. They test recipes at home and discuss the results achieved, any problems encountered and possible changes. They also post queries and share information about other recipes, ingredients, utensils, techniques, menus and a wide range of food and cookery-related matters. The second group consists of “celebrity fans”, who are attracted to the chefs (as to Robertson as craft maker) as personalities. These fans seek and share biographical information about Oliver and Lawson, their activities and their families. These two areas of fan interest (food/cooking/craft and the personal) are not necessarily or always separated, and individuals can be active members of both types of fandoms.

Less foodie-orientated users, however (like users of Dogtalk and loobylu), also frequently post their own auto/biographical narratives to these lists. These narratives, albeit often fragmented, may begin with recipes and cooking queries or issues, but veer off into personal stories that possess only minimal or no relationship to culinary matters. These members also return to the boards to discuss their own revealed life stories with others who have commented on these narratives. Although research into this aspect is in its early stages, it appears that the amount of public personal revelation either encouraged, or allowed, is in direct proportion to the “open” friendliness of these sites. More such postings are thus located on Oliver’s site and fewer on Lawson’s, and – as a kind of “control” in this case study, but not otherwise discussed – none on that of Australian chef Neil Perry, whose coolly sophisticated Website perfectly complements Perry’s professional persona as the epitome of the refined, sophisticated and, importantly in this case, unapproachable high-end restaurant chef. Moreover, non-cuisine-related postings are made despite clear directions to the contrary – Lawson’s site stating: “We ask that postings are restricted to topics relating to food, cooking, the kitchen and, of course, Nigella!” and Oliver making the plea, noted above, for participants to keep their discussions “in the kitchen area”. Of course, all such contemporary celebrity chefs are supported by teams of media specialists who selectively construct both the lives that these celebrities share with the public and the postings about others’ lives that are allowed to remain on their discussion lists.
The intersection of the findings reported above with the earlier case studies suggests, however, that even these most commercially oriented sites can provide fruitful data regarding their function as home-like spaces where domestic practices and processes can be refined, and emotional relationships formed and fostered.

In Summary

As convergence results in what Turow and Kavanaugh call “the wired homestead”, our case studies show that physically home-based domestic interests and practices – what could be called “home truths” – are also contributing to a refiguration of the private/public interplay of domestic activities through online dialogue. In the case of Dogtalk, domestic space is reconstituted through virtual spaces to include new definitions of family and memory. In the case of loobylu, the virtual interaction facilitates a development of craft-based domestic practices within the physical space of the home, thus transforming domestic routines. Jamie Oliver’s and Nigella Lawson’s sites facilitate the development of both skills and gendered identities by means of a bi-directional nexus between domestic practices, sites of home labour/identity production and public media spaces. As participants modify and redefine these online communities to best suit their own needs and desires, even if this is contrary to the stated purposes for which the community was instituted, online communities can be seen to be domesticated; equally, these modifications demonstrate that the activities and relationships that have traditionally defined the home are not limited to the physical space of the house. While virtual communities are “passage points for collections of common beliefs and practices that united people who were physically separated” (Stone qtd. in Jones 19), these interactions can lead to shared beliefs, for example, through advice about pet-keeping, craft and cooking, that can significantly modify practices and routines in the physical home.

Acknowledgments

An earlier version of this paper was presented at the Association of Internet Researchers’ International Conference, Brisbane, 27-30 September 2006. The authors would like to thank the referees of this article for their comments and input. Any errors are, of course, our own.

References

Algesheimer, R., U. Dholakia, and A. Herrmann. “The Social Influence of Brand Community: Evidence from European Car Clubs.” Journal of Marketing 69 (2005): 19-34.

Atkinson, Frances. “A New World of Craft.” The Age (11 July 2005). 28 May 2007 <http://www.theage.com.au/articles/2005/07/10/1120934123262.html>.

Brien, Donna Lee, and Rosemary Williamson. “‘Angels of the Home’ in Cyberspace: New Technologies and Biographies of Domestic Production.” Paper. Biography and New Technologies conference. Humanities Research Centre, Australian National University, Canberra, ACT. 12-14 Sep. 2006.

Crewe, Jonathan. “Recalling Adamastor: Literature as Cultural Memory in ‘White’ South Africa.” In Acts of Memory: Cultural Recall in the Present, eds. Mieke Bal, Jonathan Crewe, and Leo Spitzer. Hanover, NH: Dartmouth College, 1999. 75-86.

Felman, Shoshana, and Dori Laub. Testimony: Crises of Witnessing in Literature, Psychoanalysis, and History. New York: Routledge, 1992.

Garber, Marjorie. Dog Love. New York: Touchstone/Simon and Schuster, 1996.

Gorman-Murray, Andrew. “Homeboys: Uses of Home by Gay Australian Men.” Social and Cultural Geography 7.1 (2006): 53-69.

Halbwachs, Maurice. On Collective Memory. Trans. Lewis A. Coser. Chicago: U of Chicago P, 1992.
Hildred, Stafford, and Tim Ewbank. Jamie Oliver: The Biography. London: Blake, 2001.

Hollows, Joanne. “Feeling like a Domestic Goddess: Post-Feminism and Cooking.” European Journal of Cultural Studies 6.2 (2003): 179-202.

———. “Oliver’s Twist: Leisure, Labour and Domestic Masculinity in The Naked Chef.” International Journal of Cultural Studies 6.2 (2003): 229-248.

Jenson, J. “Fandom as Pathology: The Consequences of Characterization.” The Adoring Audience: Fan Culture and Popular Media. Ed. L. A. Lewis. New York, NY: Routledge, 1992. 9-29.

Jones, Steven G., ed. Cybersociety: Computer-Mediated Communication and Community. Thousand Oaks, CA: Sage, 1995.

Kozinets, R. V. “‘I Want to Believe’: A Netnography of the X-Philes’ Subculture of Consumption.” Advances in Consumer Research 24 (1997): 470-5.

———. “Utopian Enterprise: Articulating the Meanings of Star Trek’s Culture of Consumption.” Journal of Consumer Research 28 (2001): 67-88.

Lawson, Nigella. How to Be a Domestic Goddess: Baking and the Art of Comfort Cooking. London: Chatto and Windus, 2000.

Meller, Henry. “Jamie’s Tips Spark Asparagus Shortages.” Daily Mail (17 June 2005). 21 Aug. 2007 <http://www.dailymail.co.uk/pages/live/articles/health/dietfitness.html?in_article_id=352584&in_page_id=1798>.

Miles, Adrian. “Weblogs: Distributed Documentaries of the Everyday.” Metro 143: 66-70.

Moss, Pamela. “Negotiating Space in Home Environments: Older Women Living with Arthritis.” Social Science and Medicine 45.1 (1997): 23-33.

Robertson, Claire. Claire Robertson Illustration. 2000-2004. 28 May 2007.

Robertson, Claire. loobylu. 16 Feb. 2007. 28 May 2007 <http://www.loobylu.com>.

Robertson, Claire. “Press for loobylu.” Claire Robertson Illustration. 2000-2004. 28 May 2007 <http://www.clairetown.com/press.html>.

Robertson, Claire. A Month of Softies. 28 May 2007. 21 Aug. 2007.

Robertson, Claire. “Recent Client List.” Claire Robertson Illustration. 2000-2004. 28 May 2007 <http://www.clairetown.com/clients.html>.

Rose, Gillian. “Family Photographs and Domestic Spacings: A Case Study.” Transactions of the Institute of British Geographers NS 28.1 (2003): 5-18.

Smith, Gilly. Jamie Oliver: Turning Up the Heat. Sydney: Macmillan, 2006.

Thorne, Scott, and Gordon C. Bruner. “An Exploratory Investigation of the Characteristics of Consumer Fanaticism.” Qualitative Market Research: An International Journal 9.1 (2006): 51-72.

Turow, Joseph, and Andrea Kavanaugh, eds. The Wired Homestead: An MIT Press Sourcebook on the Internet and the Family. Cambridge, MA: MIT Press, 2003.

Citation reference for this article

MLA Style: Brien, Donna Lee, Leonie Rutherford, and Rosemary Williamson. “Hearth and Hotmail: The Domestic Sphere as Commodity and Community in Cyberspace.” M/C Journal 10.4 (2007). <http://journal.media-culture.org.au/0708/10-brien.php>.

APA Style: Brien, D., L. Rutherford, and R. Williamson. (Aug. 2007). “Hearth and Hotmail: The Domestic Sphere as Commodity and Community in Cyberspace.” M/C Journal, 10(4). Retrieved from <http://journal.media-culture.org.au/0708/10-brien.php>.