Journal articles on the topic 'Degree Name: Master of Biomedical Science'

Consult the top 25 journal articles for your research on the topic 'Degree Name: Master of Biomedical Science.'

1

Dudina, Oksana. "PECULIARITIES OF TRAINING MASTERS IN MEDICINE IN CHINISE UNIVERSITIES." Academic Notes Series Pedagogical Science 1, no. 192 (March 2021): 63–66. http://dx.doi.org/10.36550/2415-7988-2021-1-192-63-66.

Abstract:
The article investigates and theoretically summarizes the peculiarities of training doctors at the master's level at Chinese universities. Higher education in China is characterized by numerous changes driven by the accumulation and adaptation of successful experience in training specialists from different countries of the world. In this context, the achievements of Chinese scientists and educators in organizing the professional training of masters in medicine are of particular interest for Ukraine. Scientists are constantly searching for solutions to improve higher medical education in China. In Chinese universities, depending on the field of study, the master's degree in medicine can be obtained as either a professional degree or a research degree. After completing a professional master's program, the graduate may work in positions such as senior physician, senior physician in health care, senior dentist, or senior pharmacist, while a graduate of a research master's program may work as a physician-scientist whose main professional activity is medical research. The names of the medical degrees also differ: clinical medicine for the professional track and preclinical medicine for the research track. Clinical medicine includes such areas of master's programs in medicine as health care, dentistry, and pharmacological science; preclinical medicine includes clinical medicine, preventive medicine, dentistry, the science of human progress, the history of science and technology, biomedical engineering, social medicine, and health management. The article examines the experience of implementing master's programs in medicine at higher educational institutions in China and determines the competence-based approach, forms, and specializations of training used in organizing the education and practice of students in Chinese master's programs in medicine.
2

Hovorka, Christopher F., Donald G. Shurr, and Daniel S. Bozik. "The Concept of an Entry-Level Interdisciplinary Graduate Degree Preparing Orthotists for the New Millennium Part 2: Master of Orthotic Science." JPO Journal of Prosthetics and Orthotics 14, no. 2 (June 2002): 59–70. http://dx.doi.org/10.1097/00008526-200206000-00007.

3

Alcalay, Myriam, Barbara Alicja Jereczek-Fossa, Matteo Pepa, Stefania Volpe, Mattia Zaffaroni, Francesca Fiore, Giulia Marvaso, et al. "Biomedical omics: first insights of a new MSc degree of the University of Milan." Tumori Journal 108, no. 1 (September 29, 2021): 6–11. http://dx.doi.org/10.1177/03008916211047268.

Abstract:
The advent of technologies allowing the global analysis of biological phenomena, referred to as "omics" (genomics, epigenomics, proteomics, metabolomics, microbiomics, radiomics, and radiogenomics), has revolutionized the study of human diseases and traced the path for quantitative personalized medicine. The newly inaugurated Master of Science Program in Biomedical Omics of the University of Milan, Italy, aims at addressing the unmet need to create professionals with a broad understanding of omics disciplines. The course is structured over 2 years and admits students with a bachelor's degree in biotechnology, biology, chemistry, or pharmaceutical sciences. All teaching activities are held entirely in English. A total of nine students enrolled in the first academic year and attended the courses on radiomics, genomics and epigenomics, proteomics, and high-throughput screenings, and their feedback was evaluated by means of an online questionnaire. Faculty with different backgrounds were recruited according to the subject. Due to restrictions imposed by the coronavirus disease 2019 (COVID-19) pandemic, laboratory activities were temporarily suspended, while lectures, journal clubs, and examinations were mainly held online. After the end of the first semester, despite the difficulties brought on by the COVID-19 pandemic, the course overall met the expectations of the students, specifically regarding teaching effectiveness, interpersonal interactions with the lecturers, and course organization. Future efforts will be undertaken to better calibrate the overall workload of the course and to implement the most relevant suggestions from the students, together with the evolution of omics science, in order to guarantee state-of-the-art omics teaching and to prepare future omics specialists.
4

Khalil, Muhammad Saqib, Muhammad Shakeel, Naila Gulfam, Syed Umair Ahmad, Aamir Aziz, Junaid Ahmad, Shabana Bibi, et al. "Fabrication of Silver Nanoparticles from Ziziphus nummularia Fruit Extract: Effect on Hair Growth Rate and Activity against Selected Bacterial and Fungal Strains." Journal of Nanomaterials 2022 (June 25, 2022): 1–14. http://dx.doi.org/10.1155/2022/3164951.

Abstract:
Nanoparticles are extensively used in biomedical and biotechnological research. Their large surface area, excellent physical properties, and high permeability and retention effect make them ideal for biomedical applications, including diagnosis and treatment. Silver nanoparticles have proved to be among the safest for therapeutic uses. In the present study, silver nanoparticles (AgNPs) were prepared using various ratios of Ziziphus nummularia fruit extract and silver nitrate solution. The nanoparticles were investigated for hair growth, antibacterial, and antifungal activities. The AgNPs were characterized by UV spectrophotometry, scanning electron microscopy (SEM), X-ray diffraction (XRD), thermogravimetry (TG), energy-dispersive X-ray spectroscopy (EDX), Fourier transform infrared spectroscopy (FTIR), and a Mastersizer particle size analyzer. UV spectrophotometry showed that a 10 : 10 ratio of Z. nummularia fruit aqueous extract to silver solution was best for nanoparticle production, with absorbance at 400 to 430 nm. The size of the AgNPs was 40 nm as measured by SEM. EDX characterization showed a silver peak at 3 keV, while differential scanning calorimetry (DSC) spectra showed that the AgNPs are stable up to 160°C. The XRD spectra indicated a crystallite size of 12 nm from the 2-theta diffraction angles. FTIR bands for the metal oxides were recorded at 665 cm-1. Weight loss of the prepared nanoparticles due to moisture loss was observed under TGA, and a particle size distribution of 0.1 μm to 0.17 μm was recorded by the Mastersizer. The Z. nummularia fruit aqueous extract-mediated AgNPs were highly effective against Gram-positive bacteria compared with the ethanolic, methanolic, chloroform, and ethyl acetate extracts of Z. nummularia fruit, whereas Gram-negative bacteria and fungal species were less sensitive to the AgNPs. Hair growth activity was highest for the AgNPs, followed by minoxidil, and then the ethanolic and methanolic extracts of Z. nummularia fruit. These findings indicate that Z. nummularia-mediated AgNPs have effective hair growth activity and potential applications in various biomedical and pharmaceutical industries.
5

Troy, Jesse D., Josh Granek, Gregory P. Samsa, Gina-Maria Pomann, Sharon Updike, Steven C. Grambow, and Megan L. Neely. "A Course in Biology and Communication Skills for Master of Biostatistics Students." Journal of Curriculum and Teaching 11, no. 4 (April 21, 2022): 120. http://dx.doi.org/10.5430/jct.v11n4p120.

Abstract:
We describe an innovative, semester-long course in biology and communication skills for master’s degree students in biostatistics. The primary goal of the course is to make the connection between biological science and statistics more explicit. The secondary goals are to teach oral and written communication skills in an appropriate context for applied biostatisticians, and to teach a structured approach to thinking that enables students to become lifelong learners in biology, study design, and the application of statistics to biomedical research. Critical evaluation of medical literature is the method used to teach biology and communication. Exercises are constructivist in nature, designed to be hands-on and encourage reflection through writing and oral communication. A single disease area (cancer) provides a motivating example to: 1) introduce students to the most commonly used study designs in medical and public health research, 2) illustrate how study design is used to address questions about human biology and disease, 3) teach basic biological concepts necessary for a successful career in biostatistics, and 4) train students to read and critically evaluate publications in peer-reviewed journals. We describe the design and features of the course, the intended audience, and provide detailed examples for instructors interested in designing similar courses.
6

Hart, Jack, and Caleb C. McKinney. "An institutional analysis of graduate outcomes reveals a contemporary workforce footprint for biomedical master’s degrees." PLOS ONE 15, no. 12 (December 7, 2020): e0243153. http://dx.doi.org/10.1371/journal.pone.0243153.

Abstract:
There is continued growth in the number of master's degrees awarded in the life sciences to address the evolving needs of the biomedical workforce. Academic medical centers leverage the expertise of their faculty and industry partners to develop one- to two-year intensive, multidisciplinary master's programs that equip students with advanced scientific skills and practical training experiences. However, little data has been published on the outcomes of these graduates with which to evaluate the effectiveness of these programs and to inform students' return on investment. Here, the authors show the first five-year career outlook for master of science graduates from programs housed at an academic medical center. Georgetown University Biomedical Graduate Education researchers analyzed the placement outcomes of 1,204 graduates from 2014–2018, and the two-year outcomes of 412 graduates from 2016 and 2017. From the 15 M.S. programs analyzed, they found that 69% of graduates entered the workforce, while 28% entered an advanced degree program such as a Ph.D., allopathic or osteopathic medicine (M.D. or D.O.), or a health professions degree. International students who pursued advanced degrees largely pursued Ph.D. degrees, while domestic students represented the majority of those entering medical programs. The researchers found that a majority of the alumni who entered the workforce pursued research-based work, with 59% of graduates performing research-based job functions across industries. Forty-nine percent of the employed graduates analyzed from 2016 and 2017 changed employment positions, while 15% entered advanced degree programs. Alumni who changed positions moved to the same job function at a different company, to a position of increasing responsibility in the same or a different organization, or to a different job function in the same or a different company. Overall, standalone master's programs equip graduates with research skills, analytical prowess, and content expertise, strengthening the talent pipeline of the biomedical workforce.
7

Ammenwerth, E., G. Demiris, A. Hasman, R. Haux, W. Hersh, E. Hovenga, K. C. Lun, et al. "Recommendations of the International Medical Informatics Association (IMIA) on Education in Biomedical and Health Informatics." Methods of Information in Medicine 49, no. 02 (2010): 105–20. http://dx.doi.org/10.3414/me5119.

Abstract:
Objective: The International Medical Informatics Association (IMIA) agreed on revising the existing international recommendations in health informatics/medical informatics education. These should help to establish courses, course tracks or even complete programs in this field, to further develop existing educational activities in the various nations and to support international initiatives concerning education in biomedical and health informatics (BMHI), particularly international activities in educating BMHI specialists and the sharing of courseware. Method: An IMIA task force, nominated in 2006, worked on updating the recommendations' first version. These updates have been broadly discussed and refined by members of IMIA's National Member Societies, IMIA's Academic Institutional Members and by members of IMIA's Working Group on Health and Medical Informatics Education. Results and Conclusions: The IMIA recommendations center on educational needs for health care professionals to acquire knowledge and skills in information processing and information and communication technology. The educational needs are described as a three-dimensional framework. The dimensions are: 1) professionals in health care (e.g. physicians, nurses, BMHI professionals), 2) type of specialization in BMHI (IT users, BMHI specialists), and 3) stage of career progression (bachelor, master, doctorate). Learning outcomes are defined in terms of knowledge and practical skills for health care professionals in their role a) as IT user and b) as BMHI specialist. Recommendations are given for courses/course tracks in BMHI as part of educational programs in medicine, nursing, health care management, dentistry, pharmacy, public health, health record administration, and informatics/computer science as well as for dedicated programs in BMHI (with bachelor, master or doctor degree). To support education in BMHI, IMIA offers to award a certificate for high-quality BMHI education. It supports information exchange on programs and courses in BMHI through its Working Group on Health and Medical Informatics Education.
8

Bumgardner, Joel D., Linda C. Lucas, and Arabella B. Tilden. "Student research award in the undergraduate, master candidate, or health science degree candidate category, 15th annual meeting of the society for biomaterials, Lake Buena Vista, Florida, April 28-may 2, 1989. Toxicity of copper-based dental alloys in cell culture." Journal of Biomedical Materials Research 23, no. 10 (October 1989): 1103–14. http://dx.doi.org/10.1002/jbm.820231002.

9

Filiaggi, M. J., N. A. Coombs, and R. M. Pilliar. "Student research award in the undergraduate, Master candidate category, or health science degree candidate category, 17th annual meeting of the society for biomaterials, scottsdale, AZ may 1-5,1991. Characterization of the interface in the plasma-sprayed HA coating/Ti-6Al-4V implant system." Journal of Biomedical Materials Research 25, no. 10 (October 1991): 1211–29. http://dx.doi.org/10.1002/jbm.820251004.

10

Deng, Jing, Quan Ming Ding, Wen Li, Jian Hui Wang, Dong Min Liu, Xiao Xi Zeng, Xue Ying Liu, et al. "Preparation of Nano-Silver-Containing Polyethylene Composite Film and Ag Ion Migration into Food-Simulants." Journal of Nanoscience and Nanotechnology 20, no. 3 (March 1, 2020): 1613–21. http://dx.doi.org/10.1166/jnn.2020.17346.

Abstract:
Nanocomposite films of silver nanoparticles (Ag NPs) and low-density polyethylene (LDPE) were developed from master batches by melt extrusion and melt compounding. The Ag/PE composite film showed decreased gas permeability, moisture permeability coefficient, tear strength, and longitudinal and transverse elongation compared with commercial LDPE. Although stiffness increased at a high Ag concentration (40 ppm), the longitudinal and transverse tensile strength was enhanced compared with commercial PE. Light transmittance and haze were comparable. Both the nano-silver and the composite films are effective against Escherichia coli (E. coli). The antibacterial activity of nano-silver against E. coli was determined from the diameter of the inhibition zone, and its minimum inhibitory concentration, measured by the tube double-dilution method, was 15.63 ppm. The composite films effectively inhibited E. coli at a concentration of 40 ppm Ag nanoparticles. Moreover, nano-silver migration occurs from the composite film. One-side migration tests were conducted in three food simulants (3% acetic acid, 50% ethanol, and distilled water) at three temperatures (25 °C, 40 °C, and 70 °C) over different periods of time (2, 4, 6, 8, 10, and 12 hours). The results indicated that the highest migration was obtained with 3% acetic acid, followed by distilled water and finally 50% ethanol under the same conditions. The migration level depends on time and temperature, with longer times and higher temperatures enhancing migration. These findings demonstrate that nano-silver-containing polyethylene composite film has great potential for developing antibacterial and acidic food packaging systems.
11

Long, Shengxiang, Yongmin Peng, Jing Lu, Tong Zhu, Chuanxiang Sun, and Jun Luo. "Identification and Applications of Micro to Macroscale Shale Lithofacies." Journal of Nanoscience and Nanotechnology 21, no. 1 (January 1, 2021): 659–69. http://dx.doi.org/10.1166/jnn.2021.18477.

Abstract:
Systematic research and evaluation of shale gas reservoirs is critical in shale gas exploration and development. Previous studies in this field have mainly depended on experimental analyses that are often overly simplified. Here, we report an integrated geological and engineering method that combines a lithofacies division system with an identification and evaluation technology spanning the micro to macroscale. This method has been successfully applied in field sites, and its key achievements include the following. (1) Lithofacies refers to the rock or rock assemblage formed in a specific depositional environment that has experienced a certain degree of diagenesis; it is a comprehensive term that contains information about lithology, physical properties, gas content, and fracturability. With the main rock type or several combined rock types as the basic name, shale lithofacies are classified and named by highlighting particular characteristics such as total organic carbon (TOC) and the content of brittle minerals. (2) Based on the lithofacies classification and constrained by equivalent sequence interfaces or thin-layer interfaces, the organic-matter-rich shales of the Upper Ordovician Wufeng Formation to the Lower Silurian Longmaxi Formation in the Fuling gas field are longitudinally divided into 7 shale lithofacies; among them, the organic carbon-rich, high-silicon lithofacies is identified as the most favorable type for shale gas reservoirs. (3) A set of marine shale lithofacies identification technologies was created using cross-referenced scales of conventional logging and image logging, as well as logging summarization. (4) Lastly, the lithofacies technologies reported here have been successfully applied in multiple areas, including the south and southeast of the Sichuan Basin and the Pengshui area. Applications in these areas include shale formation comparison and analysis, monitoring and analysis of horizontal well drilling, and assessment of fracturing stages in horizontal sections. The lithofacies technology has proven efficient and applicable for comprehensive shale gas reservoir evaluation.
12

Tregellas, Jason R., Jason Smucny, Donald C. Rojas, and Kristina T. Legget. "Predicting academic career outcomes by predoctoral publication record." PeerJ 6 (October 4, 2018): e5707. http://dx.doi.org/10.7717/peerj.5707.

Abstract:
Background: For students entering a science PhD program, a tenure-track faculty research position is often perceived as the ideal long-term goal. A relatively small percentage of individuals ultimately achieve this goal, however, with the vast majority of PhD recipients ultimately finding employment in industry or government positions. Given the disparity between academic career ambitions and outcomes, it is useful to understand factors that may predict those outcomes. Toward this goal, the current study examined employment status of PhD graduates from biomedical sciences programs at the University of Colorado Anschutz Medical Campus (CU AMC) and related this to metrics of predoctoral publication records, as well as to other potentially important factors, such as sex and time-since-degree, to determine if these measures could predict career outcomes. Methods: Demographic information (name, PhD program, graduation date, sex) of CU AMC biomedical sciences PhD graduates between 2000 and 2015 was obtained from University records. Career outcomes (academic faculty vs. non-faculty) and predoctoral publication records (number and impact factors of first-author and non-first-author publications) were obtained via publicly available information. Relationships between predoctoral publication record and career outcomes were investigated by (a) comparing faculty vs. non-faculty publication metrics, using t-tests, and (b) investigating the ability of predoctoral publication record, sex, and time-since-degree to predict career outcomes, using logistic regression. Results: Significant faculty vs. non-faculty differences were observed in months since graduation (p < 0.001), first-author publication number (p = 0.001), average first-author impact factor (p = 0.006), and highest first-author impact factor (p = 0.004). With sex and months since graduation as predictors of career outcome, the logistic regression model was significant (p < 0.001), with both being male and having more months since graduation predicting career status. First-author publication metrics (number of publications, average impact factor, highest impact factor) all significantly improved model fit (p < 0.05 for all) and were all significant predictors of faculty status (p < 0.05 for all). Non-first-author publication metrics did not significantly improve model fit or predict faculty status. Discussion: Results suggest that while sex and months since graduation also predict career outcomes, a strong predoctoral first-author publication record may increase the likelihood of obtaining an academic faculty research position. Compared to non-faculty, individuals employed in faculty positions produced more predoctoral first-author publications, with these being in journals with higher impact factors. Furthermore, first-author publication record, sex, and months since graduation were significant predictors of faculty status.
13

Pransky, Joanne. "The Pransky interview: Dr Howard Chizeck, founder, Olis Robotics; Professor, Electrical and Computer Engineering, University of Washington." Industrial Robot: the international journal of robotics research and application 46, no. 4 (June 17, 2019): 467–70. http://dx.doi.org/10.1108/ir-05-2019-0102.

Abstract:
Purpose: The following paper is a "Q&A interview" conducted by Joanne Pransky of Industrial Robot Journal as a method to impart the combined technological, business and personal experience of a prominent, robotic industry PhD and innovator regarding his pioneering efforts and his personal journey of bringing a technological invention to market. This paper aims to discuss these issues. Design/methodology/approach: The interviewee is Dr Howard Chizeck, Professor of Electrical and Computer Engineering and Adjunct Professor of Bioengineering at the University of Washington (UW). Professor Chizeck is a research testbed leader for the Center for Neurotechnology (a National Science Foundation Engineering Research Center) and also co-director of the UW BioRobotics Laboratory. In this interview, Chizeck shares the details on his latest startup, Olis Robotics. Findings: Howard Jay Chizeck received his BS and MS degrees from Case Western Reserve University and the ScD degree in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology. He served as Chair of the Department of Systems, Control and Industrial Engineering at Case Western Reserve University and was also the Chair of the Electrical and Computer Engineering Department at the University of Washington. His telerobotic research includes haptic navigation and control for telerobotic devices, including robotic surgery and underwater systems. His neural engineering work involves the design and security of brain-machine interfaces and the development of devices to control symptoms of essential tremor and Parkinson's disease. Originality/value: Professor Chizeck was elected as a Fellow of the IEEE in 1999 "for contributions to the use of control system theory in biomedical engineering" and he was elected to the American Institute for Medical and Biological Engineering (AIMBE) College of Fellows in 2011 for "contributions to the use of control system theory in functional electrical stimulation assisted walking." From 2008 to 2012, he was a member of the Science Technology Advisory Panel of the Johns Hopkins Applied Physics Laboratory. Professor Chizeck currently serves on the Visiting Committee of the Case School of Engineering (Case Western Reserve University). He is a founder and advisor of Controlsoft Inc (Ohio) and also is a founder and Chair of the Board of Directors of Olis Robotics, Inc., which was established in 2013 (under the name of BluHaptics) to commercialize haptic rendering, haptic navigation and other UW telerobotic technologies. He holds approximately 20 patents, and he has published more than 250 scholarly papers.
14

Nazari, Elham, Mehran Aghemiri, Seyed Mohammad Tabatabaei, Sayyed Mostafa Mostafavi, Shokoufeh Aalaei, Saeed Eslami HasanAbady, and Hamed Tabesh. "Thesis Conducted by Student in Medical Informatics Field Based on Health Informatics Competencies Framework: A Study in Iran." Frontiers in Health Informatics 9, no. 1 (September 2, 2020): 42. http://dx.doi.org/10.30699/fhi.v9i1.230.

Abstract:
Introduction: One of the challenges of multidisciplinary fields such as medical informatics and health information technology, especially for those who have just started research in them, is a lack of familiarity with the available research areas. Medical informatics has various research fields related to the use of technology in health care, with the aims of reducing costs, improving the quality of health care, and reducing possible medical errors. In Iran, medical informatics is a fledgling field, and few universities admit students in it. Because of the specializations and clinical facilities concentrated at each university, research is conducted differently and with variety, and some important fields related to the discipline may be missed. Therefore, in order to identify the most-researched areas in this field and the neglected areas of research, this study reviewed the dissertations completed in medical informatics at Iranian universities against the Health Informatics Competencies Framework. Materials and methods: Dissertations defended by master's and doctoral students in medical informatics between 2001 and 2018 at the universities of Tehran, Iran, Tarbiat Modares, Shahid Beheshti, Shiraz, Tabriz, and Mashhad were collected. Three medical informatics graduates from different universities assigned each dissertation title to a competency and an area of skill based on the Health Informatics Competencies Framework. The second stage of the study was performed by two other experts (different from the first three). At this stage, the experts' opinions on each thesis were compared, and each dissertation title was assigned to a specific competency and a specific area of skill by majority opinion. Results: Most of the master's and doctoral dissertations in medical informatics fell within information science and methods, in which the skill areas of data analysis and visualization, decision support systems, and informatics for participatory health were more common than others. The skill areas of decision support systems and architecture of health information systems were more popular with postgraduate students, while doctoral students at the universities of Mashhad, Tehran, and Shahid Beheshti worked mostly on methods and basic principles; information and communication technology, biomedical science, and health were not addressed. Shahid Beheshti University was most active in the skill of realization of benefits from information systems at the master's level and in the application of mathematical concepts at the doctoral level. Overall, the competencies in which most dissertations were completed nationwide were information science and core principles and methods: at the universities of Iran, Tehran, and Mashhad, dissertations were most frequent in the competency of core principles and methods, and at the other universities in the competency of information science. Most doctoral research in 2017 was conducted at Mashhad University, and most master's research in 2014 at Iran University. Conclusion: The results of this research can help researchers in the field conduct new studies and can support the useful, scientific, and effective design of research projects by researchers, university professors, educational planners, and health care service providers. Also, given the interdisciplinary nature of medical informatics, drawing on professors from various fields and specialties at Iranian universities can be effective in defining research topics for theses.
15

Sayekti, Retno, and Usiono Usiono. "Trend Pemilihan Pendidikan Ilmu Perpustakaan." LIBRARIA: Jurnal Perpustakaan 6, no. 2 (December 18, 2018): 281. http://dx.doi.org/10.21043/libraria.v6i2.3927.

Abstract:
The School of Library and Information Science is one of the growing fields of study in Indonesia, and its growth is in line with the recent development of information technology. Although this field of education has long existed, its name and its placement within a particular faculty differ from one university to another. Although studies show that the librarian profession in Indonesia attracts less interest from society than other professions, student interest in the LIS program at UIN Sumatera Utara has increased year by year since its opening. This study aims to investigate the factors that lead students to choose the Library and Information Science Department at UIN Sumatera Utara Medan. Using survey and Focused Group Discussion (FGD) techniques, the study found four factors encouraging students to study in the LIS program: self-motivation, parents, other family members, and friends. In terms of career, most students want to pursue a master's degree to obtain a position as a lecturer in library science rather than working as information specialists or librarians. Regarding the obstacles faced during their studies, students point to limited facilities, inadequate resources, and limited foreign-language skills, which have made their learning difficult. They therefore expect the program to update its curriculum to keep pace with new trends in LIS and the implementation of information technology, and to provide more training in hard and soft skills, especially in foreign languages and IT.
APA, Harvard, Vancouver, ISO, and other styles
16

Espinosa-Mirabet, Silvia, and Lola Costa-Galvez. "Identidad gráfica para un equipo de biomedicina universitario: consiguiendo notoriedad." Relaciones Públicas en tiempos del confinamiento 10, no. 19 (June 26, 2020): 201–22. http://dx.doi.org/10.5783/rirp-19-2020-11-201-222.

Full text
Abstract:
The aim of this article is to present the work carried out by communication researchers within a biomedical engineering group that works against breast cancer at the University of Girona (Spain). It details, step by step, how the identity of the group was developed, how the storytelling was built, how we constructed a Strategic Communication Plan, and the concrete and specific actions taken to achieve two objectives: to be visible and to obtain extra funding. It was assumed that one objective should be linked to the other. This paper is a real-world application. It is based on the theoretical perspective of Institutional Theory as understood by Macagnan (2013), and the philosophy put forward by Abratt and Kleyn (2011). They state that corporate identity and corporate branding are inseparable elements that shape the reputation of an organisation, as does the role played by the public. Thus, the strategic communication plan that was developed followed the guidelines outlined by Scheinsohn (2009), the flexibility outlined by Matilla (2009) and Capriotti (2009) in relation to the phases of strategic management and its relationship with the public. First, we focused on seeking notoriety. The assumption was that by achieving visibility and awareness it would become easier to secure additional funding. In other words, we tried to ensure that the biomedical contributions based on 3D engineering, developed within the team, did not go unnoticed. The mission of the project, called ONCOen3D, is to test new therapeutic targets against Triple Negative breast cancer, one of the most aggressive cancers, with a high degree of recurrence, which mostly affects young patients. The drugs, the testing of which is one of the team's tasks, are tested, in our case, on matrices printed with 3D systems that, at low cost, allow many tests.
This creative idea, the basis of the project, was absolutely unknown outside academic walls until this multidisciplinary research team incorporated experts from the area of communication. Thus, communicative actions were devised, scheduled and focused on different audiences to achieve notoriety. To begin with, the name and visual identity of ONCOen3D were created: a logo, a graphic identity embodied in communicative elements of dissemination (flyers, roll-ups) and institutional videos hosted on the corporate website. The videos explain and present the work to both a more specialized, scientific public and a more generalist public: patients, families and organizations supporting breast cancer patients. Other actions were aimed at achieving the second objective: to seek funding outside the usual channels of the university. Visits to the laboratories were scheduled to introduce the institutions and organizations with which we normally collaborate, and to engage new potential sponsors. Another planned event was the organization of a solidarity market for emerging talented designers. In addition, an informative story was constructed to present ONCOen3D to society through a classic technique of Public Relations: the press conference. The results were more than 50 impacts in national and international media; remarkable publicity was obtained. This international presence prompted ONCOen3D to compete in the 3D Printing Academy Awards 2019 and win the Best Healthcare Application 2019 award. These initial responses therefore show that, given the multidisciplinary nature of the teams, dissemination is more effective in terms of both positioning and reputation. This project, conceived and implemented at a Spanish public university, is serving to give visibility to science.
APA, Harvard, Vancouver, ISO, and other styles
17

Xing, Fei, Yi Ping Yao, Zhi Wen Jiang, and Bing Wang. "Fine-Grained Parallel and Distributed Spatial Stochastic Simulation of Biological Reactions." Advanced Materials Research 345 (September 2011): 104–12. http://dx.doi.org/10.4028/www.scientific.net/amr.345.104.

Full text
Abstract:
To date, discrete event stochastic simulations of large-scale biological reaction systems are extremely compute-intensive and time-consuming. Moreover, it is widely accepted that spatial factors play a critical role in the dynamics of most biological reaction systems. The Next Sub-Volume Method (NSM), a spatial variation of Gillespie's stochastic simulation algorithm (SSA), has been proposed for spatially resolved stochastic simulation of such systems. While it exposes a high degree of parallelism, the NSM is inherently sequential and still suffers from low simulation speed. Fine-grained parallel execution is an elegant way to speed up such sequential simulations. Thus, based on the discrete event simulation framework JAMES II, we design and implement a parallel discrete event simulation (PDES) time warp (TW) simulator that enables fine-grained parallel execution of spatial stochastic simulations of biological reaction systems using the Abstract NSM (ANSM), a parallel variation of the NSM. Simulation results for the classical Lotka-Volterra reaction system show that our time warp simulator obtains remarkable parallel speed-up over sequential execution of the NSM.

I. Introduction

The goal of systems biology is to obtain system-level investigations of the structure and behavior of biological reaction systems by integrating biology with systems theory, mathematics and computer science [1][3], since isolated knowledge of the parts cannot explain the dynamics of the whole system. As a complement to "wet-lab" experiments, stochastic simulation, often called the "dry-computational" experiment, plays an increasingly important role in computational systems biology [2].
Among the many methods explored in systems biology, discrete event stochastic simulation is of great importance [4][5][6], since a large number of studies have shown that stochasticity, or "noise", has a crucial effect on the dynamics of small-population biological reaction systems [4][7]. Furthermore, recent research shows that stochasticity is important not only in biological reaction systems with small populations but also in some moderate/large-population systems [7]. To date, Gillespie's SSA [8] is widely considered the most accurate way to capture the dynamics of biological reaction systems, in contrast to traditional mathematical methods [5][9]. However, SSA-based stochastic simulation faces two main challenges. First, this type of simulation is extremely time-consuming: when the number of species and reactions in the biological system is large, the SSA requires a huge number of steps to sample these reactions. Second, the assumption that the system is spatially homogeneous, or well-stirred, is rarely met in real biological systems, and spatial factors play a key role in the behavior of most of them [19][20][21][22][23][24]. The Next Sub-Volume Method (NSM) [18] offers an elegant way to address the spatial problem via domain partitioning. Unfortunately, sequential stochastic simulation with the NSM is still very time-consuming, and the additionally introduced diffusion among neighboring sub-volumes makes things worse. However, the NSM exposes a very high degree of parallelism among sub-volumes, and parallelization is widely accepted as the most meaningful way to tackle the performance bottleneck of sequential simulations [26][27]. Thus, adapting parallel discrete event simulation (PDES) techniques to discrete event stochastic simulation is particularly promising.
Although a few attempts have been made [29][30][31], research in this field is still in its infancy and many issues need further discussion. The next section of the paper presents the background and related work in this domain. In Section III, we give the details of the design and implementation of the model interface of the LP paradigm and the time warp simulator based on the discrete event simulation framework JAMES II; the benchmark model and experiment results are shown in Section IV; in the last section, we conclude the paper with some future work.

II. Background and Related Work

A. Parallel Discrete Event Simulation (PDES)

The notion of a logical process (LP) is introduced in PDES as an abstraction of a physical process [26]: a system consisting of many physical processes is usually modeled as a set of LPs. An LP is the smallest unit that can be executed in PDES, and each LP holds a sub-partition of the whole system's state variables as its private ones. When an LP processes an event, it can only modify its own state variables. If an LP needs to modify one of its neighbors' state variables, it has to schedule an event to the target neighbor. That is to say, event message exchange is the only way LPs interact with each other. Because of the data dependences and interactions among LPs, synchronization protocols have to be introduced to PDES to guarantee the so-called local causality constraint (LCC) [26]. By now, a large number of synchronization algorithms have been proposed, e.g. null-message [26], time warp (TW) [32], breathing time warp (BTW) [33], etc. According to whether events of LPs can be processed optimistically, they are generally divided into two types: conservative algorithms and optimistic algorithms. However, Dematté and Mazza have theoretically pointed out the disadvantages of purely conservative parallel simulation for biochemical reaction systems [31].

B. NSM and ANSM

The NSM is a spatial variation of Gillespie's SSA, which integrates the direct method (DM) [8] with the next reaction method (NRM) [25]. The NSM offers a good way to tackle the spatial aspect of biological systems by partitioning a spatially inhomogeneous system into many much smaller "homogeneous" ones, which can be simulated by the SSA separately. However, the NSM is inherently bound to sequential semantics, and all sub-volumes share one common data structure for events or messages. Thus, direct parallelization of the NSM runs into the so-called boundary problem and the high cost of synchronized access to the common data structure [29]. To obtain higher parallel simulation efficiency, a parallelization of the NSM has to first free it from the sequential semantics and then partition the shared data structure into many "parallel" ones. One such approach is the abstract next sub-volume method (ANSM) [30]. In the ANSM, each sub-volume is modeled as a logical process (LP) following the LP paradigm of PDES, where each LP holds its own event queue and state variables (see Fig. 1). In addition, a so-called retraction mechanism was introduced in the ANSM (see Algorithm 1). Based on the ANSM, Wang et al. [30] have experimentally tested the performance of several PDES algorithms on the platform YH-SUPE [27]. However, their platform is designed for general simulation applications and thus sacrifices some performance by not taking the characteristics of biological reaction systems into account. Using ideas similar to the ANSM, Dematté and Mazza designed and realized an optimistic simulator. However, they process events in a time-stepped manner, which loses a certain degree of precision compared with the discrete event manner, and it is very hard to transform a time-stepped simulation into a discrete event one.
In addition, Jeschke et al. [29] designed and implemented a dynamic time-window simulator to execute the NSM in parallel in a grid computing environment; however, their main focus was on analyzing communication costs and determining a better size for the time window.

Fig. 1: the variations from SSA to NSM and from NSM to ANSM

C. JAMES II

JAMES II is an open-source discrete event simulation experiment framework developed at the University of Rostock in Germany. It focuses on high flexibility and scalability [11][13]. Based on a plug-in scheme [12], each function of JAMES II is defined as a specific plug-in type, and all plug-in types and plug-ins are declared in XML files [13]. Combined with the factory method pattern, JAMES II innovatively splits up model and simulator, which makes it very flexible to add and reuse both models and simulators. In addition, JAMES II supports various modelling formalisms, e.g. cellular automata, discrete event system specification (DEVS), SpacePi, StochasticPi, etc. [14]. Moreover, a well-defined simulator selection mechanism is built into JAMES II, which can not only automatically choose suitable simulators according to the modeling formalism but also pick out a specific simulator from a series of simulators supporting the same modeling formalism according to user settings [15].

III. The Model Interface and Simulator

As mentioned in Section II (part C), model and simulator are split into two separate parts. Thus, in this section, we introduce the design and implementation of the model interface of the LP paradigm and, more importantly, the time warp simulator.

A. The Model Interface of the LP Paradigm

JAMES II provides abstract model interfaces for different modeling formalisms, based on which Wang et al. designed and implemented a model interface for the LP paradigm [16]. However, this interface does not scale well for parallel and distributed simulation of larger-scale systems.
In our implementation, we adapt the interface to the parallel and distributed setting. First, the neighbor LP's reference is replaced by its name in the LP's neighbor queue, because it is improper, even dangerous, for a local LP to hold references to other LPs in remote memory spaces. In addition, (pseudo-)random numbers play a crucial role in obtaining valid and meaningful results in stochastic simulations. However, finding a good random number generator (RNG) is still a very challenging task [34]. Thus, in order to focus on our problems, we introduce one of the uniform RNGs of JAMES II into this model interface, where each LP holds a private RNG so that the random number streams of different LPs are stochastically independent.

B. The Time Warp Simulator

Based on the simulator interface provided by JAMES II, we design and implement the time warp simulator, which consists of the (master-)simulator and the (LP-)simulator. The simulator works strictly in a master/worker(s) paradigm for fine-grained parallel and distributed stochastic simulations. Communication costs are crucial to the performance of a fine-grained parallel and distributed simulation. Based on the Java remote method invocation (RMI) mechanism, P2P (peer-to-peer) communication is implemented among all (master- and LP-)simulators, where each simulator holds proxies of all target simulators that work on remote workers. One advantage of this communication approach is that PDES code can be ported to various hardware environments, such as clusters, grids and distributed computing environments, with only little modification; another is that the RMI mechanism is easy to realize and independent of any non-Java libraries. Because of the straggler event problem, states have to be saved so that optimistically pre-processed events can be rolled back. Each time it is modified, the state is cloned into a queue via the Java clone mechanism.
The problem with this copy-based state saving approach is that it consumes a lot of memory. However, the problem can be compensated by a suitable GVT calculation mechanism. The GVT reduction scheme also has a significant impact on the performance of parallel simulators, since it marks the highest time boundary of events that can be committed, so that memory for fossils (processed events and states) older than GVT can be reclaimed. GVT calculation is very knotty because of the notorious simultaneous reporting problem and the transient message problem. For our problem, another GVT algorithm, called Twice Notification (TN-GVT) (see Algorithm 2), is contributed to this already rich repository instead of implementing one of the GVT algorithms in references [26] and [28]. This algorithm resembles the synchronous algorithm described in reference [26] (p. 114); however, they are essentially different from each other. Our algorithm never stops the simulators from processing events during GVT reduction, while the algorithm in reference [26] blocks all simulators for the GVT calculation. As for the transient message problem, it can be neglected in our implementation, because the RMI-based remote communication approach is synchronous, which means a simulator will not continue its processing until the message reaches its destination. Because of this, the high-cost message acknowledgements prevalent in many classical asynchronous GVT algorithms are not needed anymore either, which should benefit the overall performance of the time warp simulator.

IV. Benchmark Model and Experiment Results

A. The Lotka-Volterra Predator-Prey System

In our experiments, the spatial version of the Lotka-Volterra predator-prey system is introduced as the benchmark model (see Fig. 2).
We chose this system for two reasons: 1) it is a classical experimental model that has been used in many related studies [8][30][31], so it is credible and the simulation results are comparable; 2) it is simple yet sufficient to test the issues we are interested in. The space of the predator-prey system is partitioned into a 2D N×N grid, where N denotes the edge size of the grid. Initially, the populations of grass, prey and predators are set to 1000 in each sub-volume (LP). In Fig. 2, r1, r2 and r3 stand for the reaction constants of reactions 1, 2 and 3, respectively. We use dGrass, dPrey and dPredator to stand for the diffusion rates of grass, prey and predators, respectively. Similar to reference [8], we also assume that the population of grass remains stable, and thus dGrass is set to zero.

R1: Grass + Prey -> 2 Prey (1)
R2: Predator + Prey -> 2 Predator (2)
R3: Predator -> NULL (3)
r1 = 0.01; r2 = 0.01; r3 = 10 (4)
dGrass = 0.0; dPrey = 2.5; dPredator = 5.0 (5)

Fig. 2: predator-prey system

B. Experiment Results

The simulation runs were executed on a Linux cluster with 40 computing nodes. Each computing node is equipped with two 64-bit 2.53 GHz Intel Xeon quad-core processors and 24 GB RAM, and nodes are interconnected via Gigabit Ethernet. The operating system is Kylin Server 3.5, with kernel 2.6.18. Experiments were conducted on benchmark models of different sizes to investigate the execution time and speedup of the time warp simulator. As shown in Fig. 3, the execution times of simulations on a single processor with 8 cores are compared. The results show that it takes more wall-clock time to simulate larger-scale systems for the same simulation time, which confirms that larger-scale systems lead to more events in the same time interval. More importantly, the blue line shows that sequential simulation performance declines very fast as the model scale becomes large.
The bottleneck of the sequential simulator is the cost of accessing a long event queue to choose the next events. Moreover, from the comparison between group 1 and group 2 in this experiment, we can also conclude that a high diffusion rate greatly increases the simulation time in both sequential and parallel simulations. This is because the LP paradigm has to split diffusion into two events (a diffusion-out and a diffusion-in event) for the two interacting LPs involved, and a high diffusion rate leads to a high proportion of diffusion relative to reaction. In the second step, shown in Fig. 4, the relationship between the speedups of time warp for two different model sizes and the number of worker cores involved is demonstrated. The speedup is calculated against the sequential execution of the spatial reaction-diffusion model with the same model size and parameters using the NSM. Fig. 4 compares the speedup of time warp on a 64×64 grid and a 100×100 grid. In the case of the 64×64 grid, with only one node used, the lowest speedup (a little above 1) is achieved with two cores, and the highest speedup (about 6) with 8 cores. The influence of the number of cores used in parallel simulation is investigated. In most cases, a larger number of cores brings considerable improvements in parallel simulation performance. Also, comparing the two results in Fig. 4, the simulation of the larger model achieves better speedup. Combined with the time tests (Fig. 3), we find that the sequential simulator's performance declines sharply when the model scale becomes very large, which correspondingly lets the time warp simulator obtain better speed-up.

Fig. 3: Execution time (wall-clock time) of sequential and time warp simulation with respect to different model sizes (N = 32, 64, 100 and 128) and model parameters, based on a single computing node with 8 cores.
Results of the test are grouped by diffusion rates (Group 1: Sequential 1 and Time Warp 1, dPrey = 2.5, dPredator = 5.0; Group 2: Sequential 2 and Time Warp 2, dPrey = 0.25, dPredator = 0.5).

Fig. 4: Speedup of time warp with respect to the number of worker cores and the model size (N = 64 and 100). Worker cores are chosen from one computing node. Diffusion rates are dPrey = 2.5, dPredator = 5.0 and dGrass = 0.0.

V. Conclusion and Future Work

In this paper, a time warp simulator based on the discrete event simulation framework JAMES II is designed and implemented for fine-grained parallel and distributed discrete event spatial stochastic simulation of biological reaction systems. Several challenges have been overcome, such as state saving, rollback and, especially, GVT reduction in the parallel execution of simulations. The Lotka-Volterra predator-prey system was chosen as the benchmark model to test the performance of our time warp simulator, and the best experimental results show that it obtains about a 6-fold speed-up over the sequential simulation. The domain this paper addresses is in its infancy, and many interesting issues are worth further investigation; e.g., there are many other excellent PDES optimistic synchronization algorithms (such as the BTW) as well. As a next step, we would like to add some of them to JAMES II. In addition, Gillespie approximation methods (tau-leaping [10], etc.) sacrifice some degree of precision for higher simulation speed, but still do not address the spatial aspect of biological reaction systems. The combination of the spatial element with approximation methods would be very interesting and promising; however, the parallel execution of tau-leaping methods will have to overcome many obstacles.

Acknowledgment

This work is supported by the National Natural Science Foundation of China (NSF) Grant (No. 60773019) and the Ph.D. Programs Foundation of the Ministry of Education of China (No. 200899980004).
The authors would like to express their great gratitude to Dr. Jan Himmelspach and Dr. Roland Ewald at the University of Rostock, Germany, for their invaluable advice and kind help with JAMES II.

References

[1] H. Kitano, "Computational systems biology," Nature, vol. 420, no. 6912, pp. 206-210, November 2002.
[2] H. Kitano, "Systems biology: a brief overview," Science, vol. 295, no. 5560, pp. 1662-1664, March 2002.
[3] A. Aderem, "Systems biology: its practice and challenges," Cell, vol. 121, no. 4, pp. 511-513, May 2005. Available: http://dx.doi.org/10.1016/j.cell.2005.04.020
[4] H. de Jong, "Modeling and simulation of genetic regulatory systems: a literature review," Journal of Computational Biology, vol. 9, no. 1, pp. 67-103, January 2002.
[5] C. W. Gardiner, Handbook of Stochastic Methods: for Physics, Chemistry and the Natural Sciences (Springer Series in Synergetics), 3rd ed. Springer, April 2004.
[6] D. T. Gillespie, "Simulation methods in systems biology," in Formal Methods for Computational Systems Biology, ser. Lecture Notes in Computer Science, M. Bernardo, P. Degano, and G. Zavattaro, Eds. Berlin, Heidelberg: Springer, 2008, vol. 5016, ch. 5, pp. 125-167.
[7] Y. Tao, Y. Jia, and G. T. Dewey, "Stochastic fluctuations in gene expression far from equilibrium: Omega expansion and linear noise approximation," The Journal of Chemical Physics, vol. 122, no. 12, 2005.
[8] D. T. Gillespie, "Exact stochastic simulation of coupled chemical reactions," Journal of Physical Chemistry, vol. 81, no. 25, pp. 2340-2361, December 1977.
[9] D. T. Gillespie, "Stochastic simulation of chemical kinetics," Annual Review of Physical Chemistry, vol. 58, no. 1, pp. 35-55, 2007.
[10] D. T. Gillespie, "Approximate accelerated stochastic simulation of chemically reacting systems," The Journal of Chemical Physics, vol. 115, no. 4, pp. 1716-1733, 2001.
[11] J. Himmelspach, R. Ewald, and A. M. Uhrmacher, "A flexible and scalable experimentation layer," in WSC '08: Proceedings of the 40th Conference on Winter Simulation, 2008, pp. 827-835.
[12] J. Himmelspach and A. M. Uhrmacher, "Plug'n simulate," in 40th Annual Simulation Symposium (ANSS'07). Washington, DC, USA: IEEE, March 2007, pp. 137-143.
[13] R. Ewald, J. Himmelspach, M. Jeschke, S. Leye, and A. M. Uhrmacher, "Flexible experimentation in the modeling and simulation framework JAMES II - implications for computational systems biology," Briefings in Bioinformatics, vol. 11, no. 3, pp. bbp067-300, January 2010.
[14] A. Uhrmacher, J. Himmelspach, M. Jeschke, M. John, S. Leye, C. Maus, M. Röhl, and R. Ewald, "One modelling formalism & simulator is not enough! A perspective for computational biology based on JAMES II," in Formal Methods in Systems Biology, ser. Lecture Notes in Computer Science, J. Fisher, Ed. Berlin, Heidelberg: Springer, 2008, vol. 5054, ch. 9, pp. 123-138. Available: http://dx.doi.org/10.1007/978-3-540-68413-8_9
[15] R. Ewald, J. Himmelspach, and A. M. Uhrmacher, "An algorithm selection approach for simulation systems," in PADS, pp. 91-98, 2008.
[16] B. Wang, J. Himmelspach, R. Ewald, Y. Yao, and A. M. Uhrmacher, "Experimental analysis of logical process simulation algorithms in JAMES II," in Proceedings of the Winter Simulation Conference, M. D. Rossetti, R. R. Hill, B. Johansson, A. Dunkin, and R. G. Ingalls, Eds. IEEE, 2009, pp. 1167-1179.
[17] R. Ewald, J. Rössel, J. Himmelspach, and A. M. Uhrmacher, "A plug-in-based architecture for random number generation in simulation systems," in WSC '08: Proceedings of the 40th Conference on Winter Simulation, 2008, pp. 836-844.
[18] J. Elf and M. Ehrenberg, "Spontaneous separation of bi-stable biochemical systems into spatial domains of opposite phases," Systems Biology, vol. 1, no. 2, pp. 230-236, December 2004.
[19] K. Takahashi, S. Arjunan, and M. Tomita, "Space in systems biology of signaling pathways? Towards intracellular molecular crowding in silico," FEBS Letters, vol. 579, no. 8, pp. 1783-1788, March 2005.
[20] J. V. Rodriguez, J. A. Kaandorp, M. Dobrzynski, and J. G. Blom, "Spatial stochastic modelling of the phosphoenolpyruvate-dependent phosphotransferase (PTS) pathway in Escherichia coli," Bioinformatics, vol. 22, no. 15, pp. 1895-1901, August 2006.
[21] D. Ridgway, G. Broderick, and M. Ellison, "Accommodating space, time and randomness in network simulation," Current Opinion in Biotechnology, vol. 17, no. 5, pp. 493-498, October 2006.
[22] J. V. Rodriguez, J. A. Kaandorp, M. Dobrzynski, and J. G. Blom, "Spatial stochastic modelling of the phosphoenolpyruvate-dependent phosphotransferase (PTS) pathway in Escherichia coli," Bioinformatics, vol. 22, no. 15, pp. 1895-1901, August 2006.
[23] W. G. Wilson, A. M. Deroos, and E. McCauley, "Spatial instabilities within the diffusive Lotka-Volterra system: individual-based simulation results," Theoretical Population Biology, vol. 43, no. 1, pp. 91-127, February 1993.
[24] K. Kruse and J. Elf, "Kinetics in spatially extended systems," in System Modeling in Cellular Biology: From Concepts to Nuts and Bolts, Z. Szallasi, J. Stelling, and V. Periwal, Eds. Cambridge, MA: MIT Press, 2006, pp. 177-198.
[25] M. A. Gibson and J. Bruck, "Efficient exact stochastic simulation of chemical systems with many species and many channels," The Journal of Physical Chemistry A, vol. 104, no. 9, pp. 1876-1889, March 2000.
[26] R. M. Fujimoto, Parallel and Distributed Simulation Systems (Wiley Series on Parallel and Distributed Computing). Wiley-Interscience, January 2000.
[27] Y. Yao and Y. Zhang, "Solution for analytic simulation based on parallel processing," Journal of System Simulation, vol. 20, no. 24, pp. 6617-6621, 2008.
[28] G. Chen and B. K. Szymanski, "DSIM: scaling time warp to 1,033 processors," in WSC '05: Proceedings of the 37th Conference on Winter Simulation, 2005, pp. 346-355.
[29] M. Jeschke, A. Park, R. Ewald, R. Fujimoto, and A. M. Uhrmacher, "Parallel and distributed spatial simulation of chemical reactions," in 2008 22nd Workshop on Principles of Advanced and Distributed Simulation. Washington, DC, USA: IEEE, June 2008, pp. 51-59.
[30] B. Wang, Y. Yao, Y. Zhao, B. Hou, and S. Peng, "Experimental analysis of optimistic synchronization algorithms for parallel simulation of reaction-diffusion systems," in International Workshop on High Performance Computational Systems Biology, pp. 91-100, October 2009.
[31] L. Dematté and T. Mazza, "On parallel stochastic simulation of diffusive systems," in Computational Methods in Systems Biology, M. Heiner and A. M. Uhrmacher, Eds. Berlin, Heidelberg: Springer, 2008, vol. 5307, ch. 16, pp. 191-210.
[32] D. R. Jefferson, "Virtual time," ACM Transactions on Programming Languages and Systems, vol. 7, no. 3, pp. 404-425, July 1985.
[33] J. S. Steinman, "Breathing time warp," SIGSIM Simulation Digest, vol. 23, no. 1, pp. 109-118, July 1993. Available: http://dx.doi.org/10.1145/174134.158473
[34] S. K. Park and K. W. Miller, "Random number generators: good ones are hard to find," Communications of the ACM, vol. 31, no. 10, pp. 1192-1201, October 1988.
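The benchmark reactions and rate constants stated in this abstract (R1: Grass + Prey -> 2 Prey; R2: Predator + Prey -> 2 Predator; R3: Predator -> NULL; r1 = 0.01, r2 = 0.01, r3 = 10, with grass held constant) can be illustrated with a minimal, well-mixed Gillespie SSA sketch: a single sub-volume without diffusion, i.e., the sequential building block that the NSM applies per sub-volume. This is our own illustrative sketch in Python, not the authors' JAMES II code; the function name and default parameters are assumptions chosen for the example.

```python
import math
import random

def ssa_lotka_volterra(grass=1000, prey=1000, predator=1000,
                       r1=0.01, r2=0.01, r3=10.0, t_end=0.5, seed=42):
    """Well-mixed Gillespie SSA for the Lotka-Volterra reactions R1-R3.

    Grass is held constant, matching the assumption in the benchmark
    (dGrass = 0 and a stable grass population).
    """
    rng = random.Random(seed)
    t = 0.0
    while t < t_end and (prey > 0 or predator > 0):
        # Propensities of the three reaction channels.
        a1 = r1 * grass * prey        # R1: Grass + Prey -> 2 Prey
        a2 = r2 * predator * prey     # R2: Predator + Prey -> 2 Predator
        a3 = r3 * predator            # R3: Predator -> NULL
        a0 = a1 + a2 + a3
        if a0 == 0.0:
            break
        # Time to the next reaction: exponential with rate a0.
        # (1 - random() lies in (0, 1], so log() is always defined.)
        t += -math.log(1.0 - rng.random()) / a0
        # Choose which reaction fires, proportionally to its propensity.
        u = rng.random() * a0
        if u < a1:
            prey += 1                     # prey reproduces
        elif u < a1 + a2:
            prey -= 1
            predator += 1                 # predation
        else:
            predator -= 1                 # predator dies
    return t, prey, predator
```

In the NSM, one such SSA instance runs per sub-volume of the N×N grid, with extra event channels for diffusion between neighboring sub-volumes; the ANSM then maps each sub-volume onto an LP with its own event queue and private RNG, which is what the time warp simulator executes in parallel.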
APA, Harvard, Vancouver, ISO, and other styles
18

Simeoni, Ricardo. "Chronic Fatigue Syndrome: A Quantum Mechanical Perspective." UNET JOSS: Journal of Science and Society 2, no. 1 (May 9, 2022): 20–46. http://dx.doi.org/10.52042/unetjoss020103.

Full text
Abstract:
Chronic fatigue syndrome (CFS), also known as myalgic encephalomyelitis (ME) or systemic exertion intolerance disease (SEID), is an illness dominated by long-term fatigue persisting for more than six months, incapacitating to the point of sufferers being bedridden or housebound in some cases, and not explained by any other underlying medical condition. CFS is also often characterised by unrefreshing sleep, post-exertional discomfort ranging from malaise to extreme exhaustion, orthostatic (upright posture) intolerance, muscle pain, cognitive impairment (including the commonly described symptom of "brain fog"), and deterioration in cellular bioenergetics [1-3]. Scientific estimates of the world-wide population percentage that suffers from CFS naturally vary, but a conservative estimate based on several studies is at least 0.4%, equating to millions world-wide [1-4]. Thankfully, after decades of dismissal by some quarters, leading to despair and exasperation among sufferers, CFS is now widely accepted as a legitimate illness. However, while disparaging labels such as "yuppy flu" have subsequently been banished to recent history, this new-found acceptance provides comfort for sufferers only up to a certain point. Viz., CFS is still far from fully understood and is often described as a complex, multisystem illness with no clear pathological mechanisms or diagnostic biomarkers [1-3], from which treatment uncertainty ensues [1,2]. Sadly, due in no small part to this uncertainty and the illness characteristics of the opening paragraph, the suicide rate of CFS sufferers has been reported as approximately seven times that of their healthy counterparts [1,5].
The economic and other social impacts of CFS are difficult to determine because of the arbitrariness of case definitions, lack of evidence including prevalence data, diagnostic inability of some physicians due to factors such as disbelief and lack of understanding (one major survey [4] reveals that 62% of sufferers are not confident in their general physician's understanding), and difficulty many sufferers have in explaining the symptoms of their illness (another survey [2] shows that a majority or substantial proportion, depending on factors such as country of origin, have difficulty explaining their illness to not only physicians but also family and friends). Societal impacts of CFS have nonetheless been assessed by various committees (e.g., associated with the United States' Institute of Medicine) and working/action groups (e.g., associated with the European Union). As expected, the economic impact of CFS is formally declared to be significant, with the net income of a CFS household in Europe being substantially lower than general population households (i.e., individual productivity effect), and the total annual cost burden being tens of billions of dollars in the United States alone [1-4]. The World Health Organization generally classifies CFS as a neurological illness involving the central nervous system. Some notable and more specific examples of proposed CFS aetiology components are summarised below, with these examples reflecting the complex multisystem nature of CFS and not necessarily being mutually exclusive:
• Recent studies suggest that CFS arises from functional changes in the brain, with spectroscopic and inflammatory brain changes (e.g., following repeated exercise) also demonstrated. However, uncertainty over the character, location and propensity of such changes remains, and the need for further functional neuroimaging studies is recognised [2,3,6,7].
• A significant increase in red blood cell (RBC) stiffness is reported in CFS, suggesting that compromised RBC transport through microcapillaries may contribute to CFS aetiology and that this diminished deformability could form the basis of a first-pass diagnostic test [8]. Further to this point, the previously identified CFS characteristic of orthostatic intolerance (estimated to occur in up to 97% of cases) is linked to under-oxygenation of the brain, to which diminished RBC deformability is thought to be a contributing factor [9].
• Unusual RBC shape, leading to reduced blood flow and changes in molecular docking on the RBC surface, is reported in CFS [10]. The subsequent increase in the number of stomatocytes (RBCs that have lost their typical concave shape, due, for example, to a membrane defect) adds to the previous point of diminished RBC deformability to support poor microcirculation as a contributor to CFS aetiology.
• Dysfunction of mitochondria (subcellular organelles within the cytoplasm of aerobic cells) is found in CFS, with interference with adenosine triphosphate (ATP) production being one of several consequences within the explanatory pathological pathway [11] (ATP is fundamentally essential for cellular-level metabolic energy requirements, as outlined in Section 3).
• CFS is largely resolved as not being attributable to some ongoing infection, endocrine disorder, or psychiatric condition [3,6]. While some similarly do not assign an immunological disorder attribution, expressions such as over-stimulation or over-reaction of the immune system (hyperimmune response), impaired immune system response, immuno-inflammatory processes, and oxidative damage to the immune system are all associated with CFS [3,6,8,11-13], which in several research circles is described as a neuroimmune disease [1,11,14]. This immunological quandary again highlights the complexity of the ongoing medical challenge at hand.
One clear aspect of CFS is that the underlying pathophysiology implicates a range of different acute infections as onset triggers in a significant minority of cases (i.e., infections such as the Epstein-Barr, Ross River, and 2003-outbreak-variant Severe Acute Respiratory Syndrome, or SARS, viruses). No other medical or psychological factors are definitively implicated in CFS [7]. For many observers such triggerings are reminiscent of, if not directly related to, the crippling fatigue that is widely reported within contemporary media and recent studies as a lasting symptom of COVID-19. Such COVID-19-triggered CFS has led to the coined phrases "COVID-19 long-haulers" and "long COVID", and has returned CFS to the public awareness spotlight [12]. However, all too familiarly, the lack of definitive CFS biomarkers is again confirmed by long COVID research, and sadly the dismissive attitudes of some in the medical profession are also a point of exasperation for long COVID sufferers [12], contributing for example to the establishment, in desperation, of a "long COVID kids" Facebook site in the United Kingdom. Established treatments, such as cognitive behaviour therapy (CBT) and graded exercise therapy (GET), primarily aim to manage symptoms and improve the overall function of sufferers. The confounding nature of CFS extends to these treatments, since there is wide ongoing debate over their effectiveness [1,15]. For example, while GET is shown to benefit some, for others it is essentially considered just "cruel". A host of alternative treatments, some of which may be described as holistic or naturopathic or similar, naturally also exist, such as cryogenic, floatation and oxygen therapies, to name just a few. It is not the intention or place of the present article to compare, critique or scientifically review such treatments.
It will simply be stated that, at least anecdotally, some such treatments seem to bring relief to some individuals (a positive outcome for those lucky enough to find any relief), but certainly most do not consider these treatments to be CFS cures or major long-term alleviators for the majority. Contemporary scientific scrutiny into how COVID-19 can damage the brain [13,16,17], which suggests that the virus' fatigue and adverse neurological effects (such as loss of smell and taste, altered mental states that can lead to the development of psychoses, and brain shrinkage in regions essential for processing memory, cognition and emotion) are indeed due to some hyperimmune response with neuroinflammation, does however offer many CFS sufferers new hope. Viz., hope that as a result of such scrutiny highly effective treatments (e.g., neural rewiring therapies [16]) and an eventual cure await, even with the caveat of caution around some uncertain degree of overlap between COVID and non-COVID CFS. The present article's title, with its cartoon of a fatigued physicist, likely appears incongruous at first glance. However, while some delight was taken in choosing this "humorous-to-a-physicist" title, the article is journalistically serious and does not make light of CFS. Rather, in addition to the above CFS overview, the article reflects upon a presented clinical Case Study of a seemingly recovering CFS sufferer to form a justified CFS hypothesis for future testing. The to-be-formed hypothesis follows from the unique neuro-perspectives of [18], which explore central nervous system impulse encoding revelations via a new approach to high-order electroencephalogram (EEG) phase analysis. Given that CFS has a neurological component, can these new perspectives be applied to the area of CFS, and in particular to the to-be-presented Case Study of recovery? While this tangent might seem a long bow to draw, perhaps a fresh CFS perspective is just what is currently needed.
Although the quantum mechanical aspects to come make references [18] and [19] (the latter on a discrete oscillator phase noise effect applied within phase-shift keying radiofrequency (RF) digital signal modulation) recommended prior reading for those with a biomedical engineering or similar background, no such specialist background is assumed of readers. In brief, the present article represents academic (science and medicine) journalism that is hopefully considered high-interest, and shares via Case Study the clinical/medical results, collated over several years, for a scientifically dependent individual. The eventually formed hypothesis is intended for testing within a future formalised study, and so presently may be countered by alternative explanatory hypotheses, such as placebo effect and simple recovery coincidence, which are also identified.
APA, Harvard, Vancouver, ISO, and other styles
19

"A standardized educational system for radiology programs worldwide: An opinion." Open Journal of Radiology and Medical Imaging, September 22, 2021, 53–54. http://dx.doi.org/10.36811/ojrmi.2021.110018.

Full text
Abstract:
Educational systems in radiology programs differ worldwide. The American system offers a certificate program (Cert), then an associate degree (AAS) at some colleges, then a diploma (Dip), and after that a bachelor's degree (B.S.). A radiographer, a.k.a. radiologic technologist, can continue to a post-baccalaureate certificate or a master's degree (M.S.) and, rarely in America due to the shortage of Ph.D. programs, a doctorate of philosophy in radiology. The British system in radiology programs is more advanced than the American system: it offers a bachelor's degree (B.Sc.), then a postgraduate certificate (PGCert), postgraduate diploma (PGDip) or master's degree (M.Sc.), and eventually a Ph.D. Other countries have educational systems in radiology that vary from the previous two examples. None of these systems standardizes a single educational pathway that can work for everyone. The aim of this paper is to propose an educational system that is easy and effective. First, standardize the name of all radiology programs. Radiology is a good name that will prevent confusion with other fields. For example, radiation science can be confused with physics. Medical imaging is a broad name that could include any imaging: a picture of human skin with pathology or a picture of a microscopic slide of a human specimen could both be called medical imaging. Radiologic technology or biomedical imaging can be confused with IT and engineering. Radiology is a name that no one will confuse with anything else; when someone says biology, no one thinks it means mathematics. Second, offer no degrees below the bachelor's degree except high-school-level (i.e., secondary) education. The admission requirement is a general educational diploma or high school certificate.
The bachelor's degree in radiology must be a 4-year program that teaches all of the modalities (i.e., X-ray, fluoroscopy, computed tomography (CT), magnetic resonance imaging (MRI), nuclear medicine (NM), ultrasound (US), Doppler, catheterization lab and angiography, etc.). This will prevent the track system, which creates problems in the job market by oversaturating one track. Third, offer a master's degree in radiology with a sub-specialty in one modality, such as a master's degree in CT alone, ultrasound alone, radiation protection alone, or picture archiving and communication systems (PACS) alone. The master's program can be 1 or 2 years. Fourth, the Ph.D. program should focus on one system of the human body or one category of imaging, e.g., a Ph.D. in neuroimaging, gynecology and obstetrics ultrasound, forensic imaging, pediatric imaging, or cardiovascular imaging. The Ph.D. is the most subspecialized program, covering a very narrow area, and can be 3 to 5 years long. This is a very simple and applicable system that will allow more consistency in radiology educational systems worldwide.
APA, Harvard, Vancouver, ISO, and other styles
20

Papesch, Te Rita. "A Māori Model of Leadership Practice." Te Kaharoa 17, no. 1 (November 3, 2021). http://dx.doi.org/10.24135/tekaharoa.v17i1.373.

Full text
Abstract:
He Waka Hiringa (HWH) is a Master of Applied Indigenous Knowledge offered as a two-year programme of study by Te Wānanga o Aotearoa. The main pre-requisite for enrolment into this graduate degree is that the student is already a master of their own practice, whatever that practice may be. In other words, they are already leaders in their own field of practice. My task is to help them clarify how they indigenise their practice, introduce them to academic processes to achieve the rangahau (research) around this, and encourage them to create their own Models of Practice (MsOP) to guide them as they work with students or clients. In six years, three cohorts of students have successfully graduated through my encouragement in the development and approval of about 100 different new MsOP, each unique in its own way. These add to HWH graduates' use of models such as Whare Tapatoru (Wi Te Tau Huata Snr. 1967, personal communication), Whare Tapawhā (Durie, M. 1984), Te Wheke (Pere, R. 1997) and Poutama Pōwhiri (Huata, P. 2011), to name a few well-known MsOP. In terms of a leadership MOP, I have not seen a better model than that created by Te Wairere Te Pūāwaitanga o te Whakaaro Ngaia (my youngest child and daughter) to fulfil the requirements of her Masters in Management Communications and Te Reo Māori (Māori Language) graduate degree at The University of Waikato. I am going to use her MOP for leadership in competitive Kapa Haka[1] (Māori performing arts) as my model in this delivery, with her permission. The title comes from a waiata-ā-ringa (action song) composed by one of her tuākana (older sisters), Te Ingo Karangaroa Ngaia, entitled 'He Rākau Taumatua!'[2], for their whānau (family) kapa haka, Te Haona Kaha. [1] I use capital letters when talking about the art form and small letters when talking about a group that does the art form.
[2] “He rākau taumatua” was first performed as a whakawātea by Te Haona Kaha kapa haka at the Tainui Waka Cultural Trust Regional Kapa Haka competitions in 2016.
APA, Harvard, Vancouver, ISO, and other styles
21

Ellson, Lindsay, Nicole Wong, Jessica Harper, Gage Williamson, Isain Zapata, Kristin Putnam, and Joel Roberts. "Understanding and preference toward DOs and OMT before and after an osteopathic principles and practice fellow lecture series." Journal of Osteopathic Medicine, November 29, 2022. http://dx.doi.org/10.1515/jom-2022-0139.

Full text
Abstract:
Abstract Context One of the two major pathways to become a physician in the United States is the Doctor of Osteopathic Medicine (DO) degree. A major distinctive feature is often perceived to be the addition of manual training in osteopathic manipulative treatment (OMT) in DO education. However, the profession also has a distinct philosophy embedded in the curriculum of all osteopathic medical schools. Many medical schools offer professional degrees whose graduates may choose to continue their education in medicine, such as the Master of Science in Biomedical Sciences (MSBS). At our institution, there is no formal exposure to the differences between osteopathic and allopathic medicine in the MSBS curriculum, and most of this understanding is gained through out-of-classroom conversations. During the SARS-CoV-2 pandemic, virtual learning prohibited the usual gathering and discourse that occur when students are learning on campus. Objectives The objective of this study is to create a curriculum in the form of a seminar series to assist premedical students in making an informed choice about which profession is the best fit for their own education and to gain an appreciation for osteopathic medicine. This appreciation could also aid in the future collaboration of premedical students with osteopathic providers, recommendations to patients, and potentially their own medical care. Questionnaires were utilized to determine whether our osteopathic seminar series was effective at changing the preferences and understanding of MSBS students. We also sought to determine the effectiveness of virtual vs. in-person delivery of our curriculum. Methods A seminar series with pre-established objectives was developed and presented to MSBS students at an osteopathic institution during the Fall of 2020 and 2021. The 2020 seminar was delivered through a virtual conference platform, and the 2021 seminar was delivered in person.
An eight-question pre- and post-questionnaire was given to participants to evaluate their preferences and understanding. Internal validity and differences between delivery formats were assessed. Results Both seminar series produced equally effective, significant changes in the preferences and perceptions of osteopathic medicine in both virtual and in-person delivery formats. Differences in pre- vs. post-seminar understanding across both seminar series were not consistently significant and were smaller than those observed in preferences and perceptions. Positive changes included an increased willingness to see a DO and to recommend that a loved one see a DO as their personal physician. Preference changes between the in-person vs. virtual delivery platforms did not show significant differences; however, understanding did show some inconsistent differences. Conclusions This study demonstrates the utility of a virtual or in-person seminar to improve the preferences and perceptions of the osteopathic profession in MSBS students. The seminar series was successful in its goal of offering formal exposure to the osteopathic profession. The improved preferences and perceptions will have potentially substantial benefits for the field of osteopathic medicine in the future. Further research is warranted to determine the most effective way to increase understanding of the osteopathic profession.
APA, Harvard, Vancouver, ISO, and other styles
22

Goodman, Max, and Connor Pedersen. "Should the Food and Drug Administration Limit Placebo-Controlled Trials?" Voices in Bioethics 8 (July 8, 2022). http://dx.doi.org/10.52214/vib.v8i.9639.

Full text
Abstract:
Photo by Diana Polekhina on Unsplash ABSTRACT Randomized placebo-controlled trials are often used in clinical research, though there are ethical concerns regarding their use. The Food and Drug Administration (FDA) has rejected international stances on placebo-controlled trial use in favor of the bioethical principles of autonomy, beneficence, nonmaleficence, and justice. The FDA permits placebo-controlled trials in three circumstances: when there are no established treatments available, when their use would be of negligible harm to the patient, and when there are compelling reasons for their use. However, in some cases, the FDA's approval of placebo-controlled trials violates bioethical principles. Ultimately, the FDA should overhaul its practices regarding the use of placebo-controlled trials. INTRODUCTION Randomized placebo-controlled clinical trials (PCTs) are considered the most rigorous method of understanding the efficacy of an intervention and, as a result, are widely used in clinical research.[1] However, there are ethical concerns regarding placebo controls, including their use in the study of deadly diseases or when effective treatments already exist, though poor oversight and lax rules have largely permitted PCT research, even under those conditions.[2] The FDA prefers PCTs for most interventional research and considers them essential to test the efficacy of drugs. Between 2006 and 2011, 40 percent of FDA-approved clinical trials used a placebo alone for comparison. The FDA has been slow to alter its policies regarding PCTs, advising against PCT research only in select oncological cases for the first time in 2019, in a nonbinding guidance. It is our belief that the FDA should change its approach and prohibit the use of placebo controls in clinical trials where effective treatments already exist. I.
Brief History of PCTs and the FDA In contemporary research practices, PCTs are used to evaluate whether an intervention is effective by comparing it to a control group that received a treatment designed to have no real effect (a placebo). Throughout the 20th century there were numerous bioethical tragedies, including but not limited to the Holocaust and the Tuskegee Syphilis Study.[3] These and other transgressions became an impetus for establishing ethical research standards preventing human exploitation in the name of science. The Declaration of Helsinki, a nonbinding instrument adopted in 1964, restricts the use of PCTs. Clause 33 of the Declaration of Helsinki states that new medical interventions should be tested against previously demonstrated interventions, and that placebos should be used only if there is no existing intervention, with narrow exceptions. Clause 33 says the effectiveness of a new intervention must be tested against that of the best current proven intervention(s), except in the following circumstances: where no proven intervention exists, the use of placebo, or no intervention, is acceptable; or where, for compelling and scientifically sound methodological reasons, the use of any intervention less effective than the best proven one, the use of placebo, or no intervention is necessary to determine the efficacy or safety of an intervention, and the patients who receive any intervention less effective than the best proven one, placebo, or no intervention will not be subject to additional risks of serious or irreversible harm as a result of not receiving the best proven intervention.
Extreme care must be taken to avoid abuse of this option.[4] The FDA has largely ignored this and deemed placebo controls the gold standard, stating that "PCTs are necessary to control for placebo effect of investigational medicinal product."[5] The FDA has even refused to approve drugs that are tested against established treatments instead of against placebos, notably atenolol.[6] By stretching the "methodological" exception and failing to define harm reasonably, the FDA does not meet the spirit behind Helsinki's conditions for allowing PCTs. When the Declaration of Helsinki was revised in 2000 to increase restrictions, the Director of Medical Policy for the FDA's Center for Drug Evaluation and Research considered the change "unpardonable", and the agency abandoned any compliance with the Declaration in 2008.[7] The FDA's past statements and actions have supported its belief that drug approval hinges on the use of placebos. While the FDA has rejected the Declaration of Helsinki's stance on placebos, it has remained faithful to the guidelines of other bioethical codes, such as the International Ethical Guidelines for Biomedical Research Involving Human Subjects and the Council for International Organizations of Medical Sciences' guidelines for biomedical research involving human subjects.
The International Ethical Guidelines for Biomedical Research Involving Human Subjects permit PCTs if the consequences are negligible, when methodologically advantageous, and when responses have been historically erratic.[8] The Council for International Organizations of Medical Sciences' guidelines for biomedical research involving human subjects echoed the Declaration of Helsinki in guideline 11, stating that a "placebo may be used: when there is no effective intervention; when withholding an established, effective intervention would expose subjects to, at most, temporary discomfort or delay in relief of symptoms; when use of an established, effective intervention as comparator would not yield scientifically reliable results and the use of the placebo would not add risk of serious or irreversible harm to subjects."[9] The Belmont Report notes three ethical principles: beneficence, respect for persons (autonomy), and justice. The Common Rule requires IRBs for human research and reflects principles noted in the Belmont Report. The Belmont Report covers three applications of its principles: informed consent, selection of research subjects, and risk-benefit assessments.[10] In 1979, Beauchamp and Childress established the four-principles approach to bioethics, comprising autonomy, beneficence, nonmaleficence, and justice. While PCTs were not mentioned in these reports, the principles in them permit placebo controls as long as subjects are informed of the risks of participating and risks are minimized. The FDA has since followed that approach. These guidelines have made PCTs ethically ambiguous, and there are moral counterpoints to be made. II. FDA-PCT Conditions The FDA has permitted PCT use under three conditions. The first condition is when there is no proven intervention for the medical condition under study. This means a treatment has either not been found for a disease or has not yet been translated into clinical practice, and this condition is not controversial.
The second condition is when there is negligible harm to the patient from delaying or forgoing an available treatment. In this scenario, a placebo is not suspected to cause damage, and the available treatment is meant for mild conditions that pose low-risk adverse effects, which is said to justify its use. The final condition is when there are compelling methodologic reasons for the use of the placebo. This scenario covers situations where outcomes fluctuate for complex reasons, making other research methods likely to be unreliable. This condition for PCT use is also invoked when it is not possible to administer the intervention to the experimental group because of economic, social, or administrative factors, in which case it is believed to be better to have results of some kind than none at all.[11] We will argue that each condition is unethical to the degree it is currently practiced. III. Condition One: Lack of Established Treatment Placebo use in cases where no established treatment exists would not typically be considered unethical. However, placebos continue to be used in numerous clinical trials approved by the FDA, many of which already have standard interventions.[12] In addition, the lack of head-to-head drug trials, in favor of placebo, has had no benefit on clinical guidelines and practices. The direct comparison of drugs in head-to-head trials gives physicians and buyers a better understanding of the effectiveness of a drug and allows for the creation of more robust clinical guidelines. Instead, under the PCT model, the market is saturated with a plethora of drugs to choose from. While each one may be better than placebo, it can be difficult to understand how each treatment compares to another, which may be harmful to patients.
A recent study has shown that nearly 90 percent of new drugs do not perform better than existing options.[13] There is an ethical cost to be considered in devoting financial resources and effort to creating new drugs that are inferior to existing treatments and have not led to changes in clinical practice. While the FDA claims to follow the bioethical principles of beneficence and nonmaleficence, its choice to approve treatments through placebo controls, despite the existence of standard interventions, counters these guidelines. IV. Condition Two: Negligible Harm from Delayed Treatment The International Ethical Guidelines for Biomedical Research Involving Human Subjects argue that placebos are acceptable if there is only "temporary discomfort or a delay in relief of symptoms," a stipulation that the FDA follows. However, what constitutes temporary is arbitrary, as there is no absolute reference of time prescribed, nor is there a defined proportion relative to total life expectancy. For example, many patients in trials for terminal illnesses have a limited therapeutic window and a reduced life expectancy, so they value time differently from someone with a non-terminal illness. Additionally, there is no consensus on what constitutes harm when withholding treatment; placebos are often used in trials for major depressive disorder, yet this population has statistically higher rates of self-harm and suicide without treatment compared to the general population.[14] Serious risks can be incurred through a placebo intervention by not offering the experimental treatment, to say nothing of the psychological harm withholding a treatment may have on a patient should the trial be unblinded. Nevertheless, the FDA has used the umbrella term "temporary discomfort" to justify the widespread use of PCTs, and the vagueness of this language results in human suffering. V.
Condition Three: Compelling Methodological Reasoning Finally, the FDA authorizes placebo use in cases where, for compelling scientifically sound methodological reasons, the use of placebo is necessary to determine the efficacy or safety of an intervention, and the parties who receive placebo or no treatment will not be subject to any risk of serious or irreversible harm. The condition includes cases where a PCT is believed to be necessary to demonstrate efficacy, such as in trials of psychoactive drugs where evidence is inconsistent due to disease heterogeneity and demonstrating equivalence to an established treatment is insufficient. There are also arguments that PCTs, while not necessary, may be beneficial in generating socially valuable knowledge. However, that a placebo control demonstrates efficacy is not sufficient to justify its use. When considering the ethical use of PCTs, investigators must weigh the social value gained against the risks of no treatment in the control. Unfortunately, the risk-benefit analysis is often controversial. For example, in 2001, the FDA initially responded positively to a placebo-controlled trial of Surfaxin in infants with acute respiratory distress syndrome in Latin America. However, the trial was deemed exploitative by a public watchdog group when it was revealed that the drug was already FDA-approved in the United States, and the manufacturer of that drug was undertaking another study with the same drug in Europe without any placebos. To justify withholding treatment from a vulnerable population in a developing country, the manufacturer stated that it would be providing a drug that would otherwise be unavailable to many participants, and that the risks would be compensated by upgrades to the host country's medical infrastructure. Despite the FDA's initial approval and the manufacturer's attempt to quell public outcry, objections by the public led to the removal of the placebo arm from the trial.
While the FDA believes there may be methodologically compelling reasons to utilize PCTs, it has demonstrated a lack of the judgment necessary to balance the gains against their inherent losses, requiring the public to step in.

CONCLUSION

Based on the ambiguous bioethical guidelines that the FDA follows, and the moral justifications described in this paper, its preference for PCTs is unethical. We suspect the overreliance on PCTs has resulted in harm to research participants and the general population, which is why the FDA should change its policy. We propose that PCTs be used only for diseases that lack an established treatment, as decreed by Clause 33 of the Declaration of Helsinki. Another measure that would satisfy Clause 33, the Belmont Report, and the Common Rule is the use of large retrospective observational studies for comparison rather than a prospective placebo group. Ultimately, it is ethically necessary that the FDA modify its practices regarding drug approval, scrutinize PCTs more stringently, and adopt more favorable approaches to other comparative models.

Acknowledgments

We sincerely thank Dr. Gregory James Smith, JD, DBE for his patience and guidance in both the research and writing of this paper.

[1] Simmonds A. Ethics of placebo-controlled trials in developing countries: The Search for Standards and Solutions. The Morningside Review. https://journals.library.columbia.edu/index.php/TMR/article/view/5507. Published May 1, 2011. Accessed April 21, 2022; Millum J, Grady C. The ethics of placebo-controlled trials: Methodological Justifications. Contemporary Clinical Trials. 2013;36(2):510-514. doi:10.1016/j.cct.2013.09.003; Center for Drug Evaluation and Research. Institutional Review Boards (IRBs) and Protection of Human Subjects in Clinical Trials. U.S. Food and Drug Administration. https://www.fda.gov/about-fda/center-drug-evaluation-and-research-cder/institutional-review-boards-irbs-and-protection-human-subjects-clinical-trials.
Published September 11, 2019. Accessed April 21, 2022. [2] Keränen T, Halkoaho A, Itkonen E, Pietilä A-M. Placebo-controlled clinical trials: How trial documents justify the use of randomisation and placebo. BMC Medical Ethics. 2015;16(1). doi:10.1186/1472-6939-16-2; Feifel D. The use of placebo-controlled clinical trials for the approval of psychiatric drugs: part I-statistics and the case for the “greater good.” Psychiatry (Edgmont). 2009;6(3):41-43; van der Graaf R, Rid A. Placebo-controlled trials, ethics of. International Encyclopedia of the Social & Behavioral Sciences. 2015:164-173. doi:10.1016/b978-0-08-097086-8.11011-6; Ibrahim MS, Ovosi JO, Bello-Ovosi BO. Randomized controlled trials: Ethical and scientific issues in the choice of placebo or active control. Annals of African Medicine. 2017;16(3):97-100. doi:10.4103/aam.aam_211_16; Sorscher S, AbuDagga A, Almashat S, Carome M, Wolfe S. Placebo-only-controlled versus active-controlled trials of new drugs for nine common life-threatening diseases. Open Access Journal of Clinical Trials. 2018;10:19-28. doi:10.2147/oajct.s156054; Mezher M. FDA finalizes guidance on placebos and blinding for cancer trials. Regulatory Affairs Professionals Society (RAPS). http://www.raps.org/news-and-articles/news-articles/2019/8/fda-finalizes-guidance-on-placebos-and-blinding-fo. Published August 28, 2019. Accessed April 21, 2022. [3] WMA Declaration of Helsinki – ethical principles for medical research involving human subjects. The World Medical Association. http://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/. Published July 9, 2018. Accessed April 21, 2022. [4] WMA Declaration of Helsinki, Clause 33. [5] Ovosi JO, Ibrahim MS, Bello-Ovosi BO. Randomized controlled trials: Ethical and scientific issues in the choice of placebo or active control. Ann Afr Med. 2017;16(3):97-100. doi:10.4103/aam.aam_211_16; Rothman KJ, Michels KB.
The continuing unethical use of placebo controls. New England Journal of Medicine. 1994;331(6):394-398. doi:10.1056/nejm199408113310611 [6] Rothman KJ, Michels KB. The Continuing Unethical Use of Placebo Controls. New England Journal of Medicine.1994;331(6):394-98. doi:10.1056/nejm199408113310611 [7] Hollon T. FDA uneasy about placebo revision. Nature Medicine. 2001;7(1):7-7. doi:10.1038/83389 [8] International Ethical Guidelines for Biomedical Research Involving Human Subjects. Geneva: CIOMS; 1993. https://cioms.ch/wp-content/uploads/2017/01/WEB-CIOMS-EthicalGuidelines.pdf. Accessed April 21, 2022. [9] Ovosi JO, Ibrahim MS, Bello-Ovosi BO. Randomized controlled trials: Ethical and scientific issues in the choice of placebo or active control. Ann Afr Med. 2017;16(3):97-100. doi:10.4103/aam.aam_211_16 [10] The Belmont Report Ethical Principles and Guidelines for the Protection of Human Subjects of Research. Washington, D.C: U.S. Government Print. Off; 1978. Accessed April 21, 2022. Office for Human Research Protections (OHRP); Federal Policy for the Protection of Human Subjects ('Common Rule'). HHS.gov. https://www.hhs.gov/ohrp/regulations-and-policy/regulations/common-rule/index.html. Published June 16, 2021. Accessed April 21, 2022. [11] Millum J, Grady C. The ethics of placebo-controlled trials: Methodological justifications. Contemporary Clinical Trials. 2013;36(2):510-514. doi:10.1016/j.cct.2013.09.003; Center for Drug Evaluation and Research. Institutional Review Boards (IRBs) and Protection of Human Subjects in Clinical Trials. U.S. Food and Drug Administration. https://www.fda.gov/about-fda/center-drug-evaluation-and-research-cder/institutional-review-boards-irbs-and-protection-human-subjects-clinical-trials. Published September 11, 2019. Accessed April 21, 2022. [12] Center for Drug Evaluation and Research. New drug therapy approvals 2020. U.S. Food and Drug Administration. 
https://www.fda.gov/drugs/new-drugs-fda-cders-new-molecular-entities-and-new-therapeutic-biological-products/new-drug-therapy-approvals-2020#first-in-class. Published January 8, 2021. Accessed April 21, 2022. [13] Light DW, Lexchin J, Darrow JJ. Institutional corruption of pharmaceuticals and the myth of safe and effective drugs. Journal of Law, Medicine & Ethics. 2013;41(3):590-600. doi:10.1111/jlme.12068 [14] Lahey T. The ethics of clinical research in low- and middle-income countries. Ethical and Legal Issues in Neurology. 2013:301-313. doi:10.1016/b978-0-444-53501-6.00025-1
23

Danaher, Pauline. "From Escoffier to Adria: Tracking Culinary Textbooks at the Dublin Institute of Technology 1941–2013." M/C Journal 16, no. 3 (June 23, 2013). http://dx.doi.org/10.5204/mcj.642.

Full text
Abstract:
Introduction

Culinary education in Ireland has long been influenced by the culinary education delivered in catering colleges in the United Kingdom (UK). Institutionalised culinary education started in Britain through the sponsorship of guild conglomerates (Lawson and Silver). The City & Guilds of London Institute for the Advancement of Technical Education opened its central institution in 1884. Culinary education in Ireland began in Kevin Street Technical School in the late 1880s, with evening courses in plain cookery. Dublin’s leading chefs and waiters of the time participated in developing courses in French culinary classics, and these courses ran in Parnell Square Vocational School from 1926 (Mac Con Iomaire “The Changing”). St Mary’s College of Domestic Science was purpose built and opened in 1941 in Cathal Brugha Street; it was renamed the Dublin College of Catering in the 1950s. The Council for Education, Recruitment and Training for the Hotel Industry (CERT) was set up in 1963 and ran cookery courses using the City & Guilds of London examinations as its benchmark. In 1982, when the National Craft Curriculum Certification Board (NCCCB) was established, CERT began carrying out its own examinations. This allowed Irish catering education to set its own standards, establish its own criteria and award its own certificates, roles previously carried out by City & Guilds of London (Corr). CERT awarded its first certificates in professional cookery in 1989. The training role of CERT was taken over by Fáilte Ireland, the State tourism board, in 2003.

Changing Trends in Cookery and Culinary Textbooks at DIT

The Dublin College of Catering, which became part of the Dublin Institute of Technology (DIT), is the flagship of catering education in Ireland (Mac Con Iomaire “The Changing”). The first DIT culinary award, the Certificate in Diet Cookery, was introduced in 1984 and later renamed the Higher Certificate in Health and Nutrition for the Culinary Arts.
On 19 July 1992 the Dublin Institute of Technology Act was enacted into law. This Act enabled DIT to provide vocational and technical education and training for the economic, technological, scientific, commercial, industrial, social and cultural development of the State (Ireland 1992). In 1998, DIT was granted degree-awarding powers by the Irish state, enabling it to make major awards at Higher Certificate, Ordinary Bachelor Degree, Honours Bachelor Degree, Masters and PhD levels (Levels six to ten in the National Framework of Qualifications), as well as a range of minor, special purpose and supplemental awards (NQAI). It was not until 1999, when a primary degree in Culinary Arts was sanctioned by the Department of Education in Ireland (Duff, The Story), that a more diverse range of textbooks was recommended, based on a new liberal/vocational educational philosophy. DIT’s School of Culinary Arts currently offers: Higher Certificate in Health and Nutrition for the Culinary Arts; Higher Certificate in Culinary Arts (Professional Culinary Practice); BSc (Ord) in Baking and Pastry Arts Management; BA (Hons) in Culinary Arts; BSc (Hons) in Bar Management and Entrepreneurship; BSc (Hons) in Culinary Entrepreneurship; and MSc in Culinary Innovation and Food Product Development. From 1942 to 1970, haute cuisine, or classical French cuisine, was the most influential cooking trend in Irish cuisine, and this is reflected in the culinary textbooks of that era. Haute cuisine has been shaped by influential writers/chefs such as François La Varenne, Antoine Carême, Auguste Escoffier, Fernand Point, Paul Bocuse, Anton Mosimann, and Albert and Michel Roux, to name but a few. The period from 1947 to 1974 can be viewed as a “golden age” of haute cuisine in Ireland, as more award-winning world-class restaurants traded in Dublin during this period than at any other time in history (Mac Con Iomaire “The Changing”).
Hotels and restaurants were run in the Escoffier partie system style, a hierarchy among kitchen staff in which areas of the kitchen specialise in cooking particular parts of the menu, i.e. sauces (saucier), fish (poissonnier), larder (garde manger), vegetables (legumier) and pastry (patissier). In the late 1960s, Escoffier-styled restaurants were considered overstaffed and were no longer financially viable. Restaurants began to be run by chef-proprietors, using plate rather than silver service. Nouvelle cuisine began in the 1970s and became a modern form of haute cuisine (Gillespie). The rise in chef-proprietor-run restaurants in Ireland reflected the characteristics of the nouvelle cuisine movement. Culinary textbooks such as Practical Professional Cookery, La Technique, The Complete Guide to the Art of Modern Cookery, The Art of the Garde Manger and Patisserie interpreted nouvelle cuisine techniques and plated dishes. In 1977, the DIT began delivering courses in City & Guilds Advanced Kitchen & Larder 706/3 and Pastry 706/3, the only college in Ireland to do so at the time. Many graduates from these courses became the future Irish culinary lecturers, chef-proprietors, and culinary leaders. The next two decades saw a rise in fusion cooking, nouvelle cuisine, and a return to French classical cooking. Numerous Irish chefs were returning to Ireland having worked with Michelin-starred chefs and opening new restaurants in the vein of classical French cooking, such as Kevin Thornton (Wine Epergne & Thorntons). These chefs were, in turn, influencing culinary training in DIT with a return to classical French cooking. New classical French culinary textbooks such as New Classic Cuisine, The Modern Patissier, the Professional French Pastry Series and Advanced Practical Cookery were being used in DIT. In the last 15 years, science in cooking has become the current trend in culinary education in DIT.
This is acknowledged by the increased number of culinary science textbooks and modules in molecular gastronomy offered in DIT. This also coincided with the launch of the BA (Hons) in Culinary Arts in DIT, moving culinary education from a technical to a liberal education. Books such as The Science of Cooking, On Food and Cooking, The Fat Duck Cookbook and Modern Gastronomy now appear on recommended textbook lists for culinary students. For the purpose of this article, practical classes held at DIT are broken down into three areas, each with its own recommended textbooks: hot kitchen classes, larder classes, and pastry classes. Table 1 identifies that the textbooks used in culinary education at DIT reflected the trends in cookery at the time they were being used.

| Hot Kitchen | Larder | Pastry |
| --- | --- | --- |
| Le Guide Culinaire. 1921. | Le Guide Culinaire. 1921. | The International Confectioner. 1968. |
| Le Repertoire De La Cuisine. 1914. | The Larder Chef, Classical Food Preparation and Presentation. 1969. | Patisserie. 1971. |
| All in the Cooking, Books 1 & 2. 1943. | The Art of the Garde Manger. 1973. | The Modern Patissier. 1986. |
| Larousse Gastronomique. 1961. | New Classic Cuisine. 1989. | Professional French Pastry Series. 1987. |
| Practical Cookery. 1962. | The Curious Cook. 1990. | Complete Pastrywork Techniques. 1991. |
| Practical Professional Cookery. 1972. | On Food and Cooking: The Science and Lore of the Kitchen. 1991. | On Food and Cooking: The Science and Lore of the Kitchen. 1991. |
| La Technique. 1976. | Advanced Practical Cookery. 1995. | Desserts: A Lifelong Passion. 1994. |
| Escoffier: The Complete Guide to the Art of Modern Cookery. 1979. | The Science of Cooking. 2000. | Culinary Artistry. Dornenburg, 1996. |
| Professional Cookery: The Process Approach. 1985. | Garde Manger, The Art and Craft of the Cold Kitchen. 2004. | Grand Finales: The Art of the Plated Dessert. 1997. |
| On Food and Cooking: The Science and Lore of the Kitchen. 1991. | The Science of Cooking. 2000. | Fat Duck Cookbook. 2009. |
| Modern Gastronomy. 2010. | | |

Table 1. DIT Culinary Textbooks.

1942–1960

During the first half of the 20th century, senior staff working in Dublin hotels, restaurants and clubs were predominately foreign born and trained. The two decades following World War II could be viewed as the “golden age” of haute cuisine in Dublin, as many award-winning restaurants traded in the city at this time (Mac Con Iomaire “The Emergence”). Culinary education in DIT in 1942 saw the use of Escoffier’s Le Guide Culinaire as the defining textbook (Bowe). This was first published in 1903 and translated into English in 1907. In 1979, Cracknell and Kaufmann published a more comprehensive, updated edition under the title The Complete Guide to the Art of Modern Cookery by Escoffier for use in culinary colleges. This demonstrated that Escoffier’s work had withstood the test of the decades and was still relevant. Le Repertoire de La Cuisine by Louis Saulnier, a student of Escoffier, presented the fundamentals of French classical cookery. Le Repertoire was inspired by the work of Escoffier and contains thousands of classical recipes presented in a brief format that can be clearly understood by chefs and cooks. Le Repertoire remains an important part of any DIT culinary student’s textbook list. All in the Cooking by Josephine Marnell, Nora Breathnach, Ann Mairtin and Mor Murnaghan (1946) was one of the first cookbooks to be published in Ireland (Cashman). This book was a domestic science cookbook written by lecturers in the Cathal Brugha Street College, combining classical French recipes and Irish recipes throughout.

1960s

It was not until the 1960s that the reference book Larousse Gastronomique and new textbooks such as Practical Cookery, The Larder Chef and The International Confectioner made their way into DIT culinary education.
These books still focused on classical French cooking but used lighter sauces and reflected more modern cooking equipment and techniques. This period was also the first time that specific books for larder and pastry work were introduced into the DIT culinary education system (Bowe). Larousse Gastronomique, which used Le Guide Culinaire as a basis (James), was first published in 1938 and translated into English in 1961. Practical Cookery, which is still used in DIT culinary education, is now in its 12th edition. Each edition has built on the previous one; however, there is now criticism that some of the content is dated (Richards). Practical Cookery has established itself as a key textbook in culinary education both in Ireland and England. Its recipes were laid out in easy-to-follow steps, and food commodities were discussed briefly. The Larder Chef was first published in 1969 and is currently in its 4th edition. This book focuses on classical French larder techniques, butchery and fishmongery but recognises current trends and fashions in food presentation. The International Confectioner is no longer in print but is still used as a reference for basic recipes in pastry classes (Campbell). The Modern Patissier demonstrated more updated techniques and methods than The International Confectioner and is still used as a reference book in DIT.

1970s

The 1970s saw the decline of haute cuisine in Ireland, as it was gradually replaced by nouvelle cuisine. Irish chefs were influenced by the works of chefs such as Paul Bocuse, Roger Vergé, Michel Guérard, Raymond Olivier, Jean and Pierre Troisgros, Alain Senderens, Jacques Manière and Jean Delaveyne, who advanced uncomplicated, natural presentation in food. Henri Gault claims that it was his manifesto, published in October 1973 in Gault-Millau magazine, which unleashed the movement called La Nouvelle Cuisine Française (Gault).
In nouvelle cuisine, dishes in Carême and Escoffier’s style were rejected as over-rich and complicated. The principles underpinning this new movement focused on the freshness of ingredients, lightness and harmony in all components and accompaniments, and basic, simple cooking methods and types of presentation. This was not, however, a complete overthrowing of the past, but a moving forward in the long-term process of cuisine development, utilising the very best from each evolution (Cousins). Books such as Practical Professional Cookery, The Art of the Garde Manger and Patisserie reflected this new lighter approach to cookery. Patisserie was first published in 1971, is now in its second edition, and continues to be used in DIT culinary education. This book became an essential textbook in pastrywork and covers the entire syllabus of City & Guilds and CERT (now Fáilte Ireland). Patisserie covered all basic pastry recipes and techniques, while the second edition (1993) included new modern recipes, modern pastry equipment, commodities, and food hygiene regulations, reflecting the changing catering environment. The Art of the Garde Manger is an American book highlighting the artistry, creativity, and cooking sensitivity needed to be a successful garde manger (the larder chef who prepares cold preparations in a partie system kitchen). It reflected the dynamic changes occurring in the culinary world but recognised the importance of understanding basic French culinary principles. It is no longer used in DIT culinary education. La Technique is a guide to classical French preparation (Escoffier’s methods and techniques) using detailed pictures and notes. This book remains a very useful guide and reference for culinary students. Practical Professional Cookery also became an important textbook, as it was written with both the student and the chef/lecturer in mind and provides a wider range of recipes and detailed information to assist in understanding the tasks at hand.
It is based on classical French cooking and complements Practical Cookery as a textbook; however, its recipes are for ten portions as opposed to four portions in Practical Cookery. Again, this book was written with the City & Guilds examinations in mind.

1980s

During the mid-1980s, many young Irish chefs and waiters emigrated. They returned in the late 1980s and early 1990s having gained vast experience of nouvelle and fusion cuisine in London, Paris, New York, California and elsewhere (Mac Con Iomaire, “The Changing”). These energetic, well-trained professionals began opening chef-proprietor restaurants around Dublin, providing invaluable training and positions for up-and-coming young chefs, waiters and culinary college graduates. The 1980s saw a return to French classical cookery textbooks such as Professional Cookery: The Process Approach, New Classic Cuisine and the Professional French Pastry Series, because educators saw the need for students to learn the basics of French cookery. Professional Cookery: The Process Approach was written by Daniel Stevenson, who was at the time a senior lecturer in Food and Beverage Operations at Oxford Polytechnic in England. Again, this book was written for students, with an emphasis on cookery techniques and the practices of professional cookery. The Complete Guide to the Art of Modern Cookery by Escoffier continued to be used. Cooks and chefs use this book as a reference for the ingredients in dishes rather than as a recipe book: it does not detail methods, as it is assumed the cook/chef has the experience required to know the method of production. Le Guide Culinaire was only used on advanced City & Guilds courses in DIT during this decade (Bowe).
New Classic Cuisine by the classically French-trained chefs Albert and Michel Roux (Gayot) is a classical French cuisine cookbook that DIT culinary educators used as a reference at the time because of the influence the Roux brothers were having on the English fine dining scene. The Professional French Pastry Series is a range of four volumes of pastry books: Vol. 1, Doughs, Batters and Meringues; Vol. 2, Creams, Confections and Finished Desserts; Vol. 3, Petit Fours, Chocolate, Frozen Desserts and Sugar Work; and Vol. 4, Decorations, Borders and Letters, Marzipan, Modern Desserts. These books on classical French pastry making were used on the advanced pastry courses at DIT, as learners needed a basic knowledge of pastry making to use them.

1990s

Ireland in the late 1990s became a very prosperous and thriving European nation; the phenomenon that became known as the “Celtic Tiger” was in full swing (Mac Con Iomaire “The Changing”). The Irish dining public were treated to a resurgence of traditional Irish cuisine using fresh, wholesome food (Hughes). The Irish population was considered better educated and more widely travelled than previous generations, and culinary students were now becoming interested in the science of cooking. In 1996, the BA (Hons) in Culinary Arts program at DIT was first mooted (Hegarty). Finally, in 1999, a primary degree in Culinary Arts was sanctioned by the Department of Education, underpinned by a new liberal/vocational philosophy in education (Duff). Teaching culinary arts in the past had been through a vocational education focus, whereby students were taught narrow, restrictive and constraining skills for industry without the necessary knowledge to articulate the acquired skill. The reading list for culinary students reflected this new liberal education in culinary arts, as Harold McGee’s books The Curious Cook and On Food and Cooking: The Science and Lore of the Kitchen explored and explained the science of cooking.
On Food and Cooking: The Science and Lore of the Kitchen proposed that “science can make cooking more interesting by connecting it with the basic workings of the natural world” (Vega 373). Advanced Practical Cookery was written for City & Guilds students. In DIT this book was used by advanced culinary students sitting Fáilte Ireland examinations, and in the second year of the new BA (Hons) in Culinary Arts. Culinary Artistry encouraged chefs to explore the creative process of culinary composition as it explored the intersection of food, imagination, and taste (Dornenburg). This book encouraged chefs to develop their own style of cuisine using fresh seasonal ingredients; it was used for advanced students but is no longer a set text. Chefs were being encouraged to show their artistic traits, and none more so than pastry chefs. Grand Finales: The Art of the Plated Dessert encouraged advanced students to identify different “schools” of pastry in relation to the world of art and design. The concept behind the recipes used in this book was built on the original spectacular pièces montées created by Antoine Carême.

2000–2013

After nouvelle cuisine, recent developments have included interest in various fusion cuisines, such as Asia-Pacific, and in molecular gastronomy. Molecular gastronomists strive to find perfect recipes using scientific methods of investigation (Blanck). Hervé This’s experimentation with recipes, and his introduction to Nicholas Kurti, led the pair to create a food discipline they called “molecular gastronomy”. In 1998, a number of creative chefs began experimenting with the incorporation of ingredients and techniques normally used in mass food production in order to arrive at previously unattainable culinary creations. This “new cooking” (Vega 373) required a knowledge of chemical reactions and physico-chemical phenomena in relation to food, as well as specialist tools, which were created by these early explorers.
It has been suggested that molecular gastronomy is “science-based cooking” (Vega 375), a concept referring to the conscious application of the principles and tools of food science and other disciplines to the development of new dishes, particularly in the context of classical cuisine (Vega). The Science of Cooking assists students in understanding the chemistry and physics of cooking. This book takes traditional French techniques and recipes and challenges some of the claims and methods used in traditional recipes. Garde Manger: The Art and Craft of the Cold Kitchen is used for the advanced larder modules at DIT; it builds on the basic skills of The Larder Chef. Molecular gastronomy as a subject area was developed in 2009 in DIT, the first of its kind in Ireland. The Fat Duck Cookbook and Modern Gastronomy underpin the theoretical aspects of the module. This module is taught to 4th-year BA (Hons) in Culinary Arts students, who already have three years’ experience in culinary education and the culinary industry, and also to MSc Culinary Innovation and Food Product Development students.

Conclusion

Escoffier, the master of French classical cuisine, still influences culinary textbooks to this day. His basic approach to cooking is considered essential to teaching culinary students, allowing them to embrace the core skills and competencies required to work in the professional environment. The teaching of culinary arts at DIT has moved from a vocational to a more liberal basis, and it is imperative that the chosen textbooks reflect this development. This liberal education gives students a broader understanding of cooking, hospitality management, food science, gastronomy, health and safety, oenology, and food product development. To date there is no practical culinary textbook written specifically for Irish culinary education, particularly within this new liberal/vocational paradigm.
There is clearly a need for a new textbook which combines the best of Escoffier’s classical French techniques with the more modern molecular gastronomy techniques popularised by Ferran Adria.

References

Adria, Ferran. Modern Gastronomy A to Z: A Scientific and Gastronomic Lexicon. London: CRC P, 2010.
Barker, William. The Modern Patissier. London: Hutchinson, 1974.
Barham, Peter. The Science of Cooking. Berlin: Springer-Verlag, 2000.
Bilheux, Roland, Alain Escoffier, Daniel Herve, and Jean-Marie Pouradier. Special and Decorative Breads. New York: Van Nostrand Reinhold, 1987.
Blanck, J. "Molecular Gastronomy: Overview of a Controversial Food Science Discipline." Journal of Agricultural and Food Information 8.3 (2007): 77-85.
Blumenthal, Heston. The Fat Duck Cookbook. London: Bloomsbury, 2001.
Bode, Willi, and M.J. Leto. The Larder Chef. Oxford: Butterworth-Heinemann, 1969.
Bowe, James. Personal Communication with Author. Dublin. 7 Apr. 2013.
Boyle, Tish, and Timothy Moriarty. Grand Finales: The Art of the Plated Dessert. New York: John Wiley, 1997.
Campbell, Anthony. Personal Communication with Author. Dublin, 10 Apr. 2013.
Cashman, Dorothy. "An Exploratory Study of Irish Cookbooks." Unpublished M.Sc Thesis. Dublin: Dublin Institute of Technology, 2009.
Ceserani, Victor, Ronald Kinton, and David Foskett. Practical Cookery. London: Hodder & Stoughton Educational, 1962.
Ceserani, Victor, and David Foskett. Advanced Practical Cookery. London: Hodder & Stoughton Educational, 1995.
Corr, Frank. Hotels in Ireland. Dublin: Jemma, 1987.
Cousins, John, Kevin Gorman, and Marc Stierand. "Molecular Gastronomy: Cuisine Innovation or Modern Day Alchemy?" International Journal of Hospitality Management 22.3 (2009): 399–415.
Cracknell, Harry Louis, and Ronald Kaufmann. Practical Professional Cookery. London: MacMillan, 1972.
Cracknell, Harry Louis, and Ronald Kaufmann. Escoffier: The Complete Guide to the Art of Modern Cookery. New York: John Wiley, 1979.
Dornenburg, Andrew, and Karen Page. Culinary Artistry. New York: John Wiley, 1996.
Duff, Tom, Joseph Hegarty, and Matt Hussey. The Story of the Dublin Institute of Technology. Dublin: Blackhall, 2000.
Escoffier, Auguste. Le Guide Culinaire. France: Flammarion, 1921.
Escoffier, Auguste. The Complete Guide to the Art of Modern Cookery. Ed. Cracknell, Harry Louis, and Ronald Kaufmann. New York: John Wiley, 1986.
Gault, Henri. Nouvelle Cuisine, Cooks and Other People: Proceedings of the Oxford Symposium on Food and Cookery 1995. Devon: Prospect, 1996. 123-7.
Gayot, Andre, and Mary Evans. "The Best of London." Gault Millau (1996): 379.
Gillespie, Cailein. "Gastrosophy and Nouvelle Cuisine: Entrepreneurial Fashion and Fiction." British Food Journal 96.10 (1994): 19-23.
Gisslen, Wayne. Professional Cooking. Hoboken: John Wiley, 2011.
Hanneman, Leonard. Patisserie. Oxford: Butterworth-Heinemann, 1971.
Hegarty, Joseph. Standing the Heat. New York: Haworth P, 2004.
Hsu, Kathy. "Global Tourism Higher Education Past, Present and Future." Journal of Teaching in Travel and Tourism 5.1/2/3 (2006): 251-267.
Hughes, Mairtin. Ireland. Victoria: Lonely Planet, 2000.
Ireland. Irish Statute Book: Dublin Institute of Technology Act 1992. Dublin: Stationery Office, 1992.
James, Ken. Escoffier: The King of Chefs. Hambledon: Cambridge UP, 2002.
Lawson, John, and Harold Silver. Social History of Education in England. London: Methuen, 1973.
Lehmann, Gilly. "English Cookery Books in the 18th Century." The Oxford Companion to Food. Oxford: Oxford UP, 1999. 227-9.
Marnell, Josephine, Nora Breathnach, Ann Martin, and Mor Murnaghan. All in the Cooking Book 1 & 2. Dublin: Educational Company of Ireland, 1946.
Mac Con Iomaire, Máirtín. "The Changing Geography and Fortunes of Dublin's Haute Cuisine Restaurants, 1958-2008." Food, Culture and Society: An International Journal of Multidisciplinary Research 14.4 (2011): 525-45.
---. "Chef Liam Kavanagh (1926-2011)."
Gastronomica: The Journal of Food and Culture 12.2 (2012): 4-6. ---. "The Emergence, Development and Influence of French Haute Cuisine on Public Dining in Dublin Restaurants 1900-2000: An Oral History". PhD. Thesis. Dublin: Dublin Institute of Technology, 2009. McGee, Harold. The Curious Cook: More Kitchen Science and Lore. New York: Hungry Minds, 1990. ---. On Food and Cooking the Science and Lore of the Kitchen. London: Harper Collins, 1991. Montague, Prosper. Larousse Gastronomique. New York: Crown, 1961. National Qualification Authority of Ireland. "Review by the National Qualifications Authority of Ireland (NQAI) of the Effectiveness of the Quality Assurance Procedures of the Dublin Institute of Technology." 2010. 18 Feb. 2012 ‹http://www.dit.ie/media/documents/services/qualityassurance/terms_of_ref.doc› Nicolello, Ildo. Complete Pastrywork Techniques. London: Hodder & Stoughton, 1991. Pepin, Jacques. La Technique. New York: Black Dog & Leventhal, 1976. Richards, Peter. "Practical Cookery." 9th Ed. Caterer and Hotelkeeper (2001). 18 Feb. 2012 ‹http://www.catererandhotelkeeper.co.uk/Articles/30/7/2001/31923/practical-cookery-ninth-edition-victor-ceserani-ronald-kinton-and-david-foskett.htm›. Roux, Albert, and Michel Roux. New Classic Cuisine. New York: Little, Brown, 1989. Roux, Michel. Desserts: A Lifelong Passion. London: Conran Octopus, 1994. Saulnier, Louis. Le Repertoire De La Cuisine. London: Leon Jaeggi, 1914. Sonnenschmidt, Fredric, and John Nicholas. The Art of the Garde Manger. New York: Van Nostrand Reinhold, 1973. Spang, Rebecca. The Invention of the Restaurant: Paris and Modern Gastronomic Culture. Cambridge: Harvard UP, 2000. Stevenson, Daniel. Professional Cookery the Process Approach. London: Hutchinson, 1985. The Culinary Institute of America. Garde Manger: The Art and Craft of the Cold Kitchen. Hoboken: New Jersey, 2004. Vega, Cesar, and Job, Ubbink. "Molecular Gastronomy: A Food Fad or Science Supporting Innovation Cuisine?". 
Trends in Food Science & Technology 19 (2008): 372-82. Wilfred, Fance, and Michael Small. The New International Confectioner: Confectionary, Cakes, Pastries, Desserts, Ices and Savouries. 1968.
24

Goodall, Jane. "Looking Glass Worlds: The Queen and the Mirror." M/C Journal 19, no. 4 (August 31, 2016). http://dx.doi.org/10.5204/mcj.1141.

Full text
Abstract:
As Lewis Carroll’s Alice comes to the end of her journey through the looking glass world, she has also come to the end of her patience with its strange power games and arbitrations. At every stage of the adventure, she has encountered someone who wants to dictate rules and protocols, and a lesson on table manners from the Red Queen finally triggers rebellion. “I can’t stand this any more,” Alice cries, as she seizes the tablecloth and hurls the entire setting into chaos (279). Then, catching hold of the Red Queen, she gives her a good shaking, until the rigid contours of the imperious figure become fuzzy and soft. At this point, the hold of the dream dissolves and Alice, awakening on the other side of the mirror, realises she is shaking the kitten. Queens have long been associated with ideas of transformation. As Alice is duly advised when she first looks out across the chequered landscape of the looking glass world, the rules of chess decree that a pawn may become a queen if she makes it to the other side. The transformation of pawn to queen is in accord with the fairy tale convention of the unspoiled country girl who wins the heart of a prince and is crowned as his bride. This works in a dual register: on one level, it is a story of social elevation, from the lowest to the highest rank; on another, it is a magical transition, as some agent of fortune intervenes to alter the determinations of the social world. But fairy tales also present us with the antithesis and adversary of the fortune-blessed princess, in the figure of the tyrant queen who works magic to shape destiny to her own ends. The Queen and the mirror converge in the cultural imaginary, working transformations that disrupt the order of nature, invert socio-political hierarchies, and flout the laws of destiny. 
In “Snow White,” the powers of the wicked queen are mediated by the looking glass, which reflects and affirms her own image while also serving as a panopticon, keeping the entire realm under surveillance to pick up any signs of threat to her pre-eminence. All this turbulence in the order of things lets loose a chaotic phantasmagoria that is prime material for film and animation. Two major film versions of “Snow White” have been released in the past few years—Mirror Mirror (2012) and Snow White and the Huntsman (2012)—while Tim Burton’s animated 3D rendition of Alice in Wonderland was released in 2010. Alice through the Looking Glass (2016) and The Huntsman: Winter’s War, the 2016 prequel to Snow White and the Huntsman, continue the experiment with state-of-the-art techniques in 3D animation and computer-generated imaging to push the visual boundaries of fantasy. Perhaps this escalating extravagance in the creation of fantasy worlds is another manifestation of the ancient lore and law of sorcery: that the magic of transformation always runs out of control, because it disrupts the all-encompassing design of an ordered world. This principle is expressed with poetic succinctness in Ursula Le Guin’s classic story A Wizard of Earthsea, when the Master Changer issues a warning to his most gifted student: “But you must not change one thing, one pebble, one grain of sand, until you know what good and evil will follow on that act. The world is in balance, in Equilibrium. A wizard's power of Changing and Summoning can shake the balance of the world. It is dangerous, that power” (48). In Le Guin’s story, transformation is only dangerous if it involves material change; illusions of all kinds are ultimately harmless because they are impermanent. Illusions mediated by the mirror, however, blur the distinction Le Guin is making, for the mirror image supposedly reflects a real world. And it holds the seductive power of a projected narcissism.
Seeing what we wish for is an experience that can hold us captive in a way that changes human nature, and so leads to dangerous acts with material consequences. The queen in the mirror becomes the wicked queen because she converts the world into her image, and in traditions of animation going back to Disney’s original Snow White (1937) the mirror is itself an animate being, with a spirit whose own determinations become paramount. Though there are exceptions in the annals of fairy story, powers of transformation are typically dark powers, turbulent and radically illicit. When they are mediated through the agency of the mirror, they are also the powers of narcissism and autocracy.
Through a Glass Darkly
In her classic cultural history of the mirror, Sabine Melchior-Bonnet tracks a duality in the traditions of symbolism associated with it. This duality is already evident in Biblical allusions to the mirror, with references to the Bible itself as “the unstained mirror” (Proverbs 7.27) counterpointed by images of the mortal condition as one of seeing “through a glass darkly” (1 Corinthians 13.12). The first of these metaphoric conventions celebrates the crystalline purity of a reflecting surface that reveals the spiritual identity beneath the outward form of the human image. The church fathers drew on Plotinus to evoke “a whole metaphysics of light and reflection in which the visible world is the image of the invisible,” and taught that “humans become mirrors when they cleanse their souls” (Melchior-Bonnet 109–10). Against such invocations of the mirror as an intermediary for the radiating presence of the divine in the mortal world, there arises an antithetical narrative, in which it is portrayed as distorting, stained, and clouded, and therefore an instrument of delusion. Narcissus becomes the prototype of the human subject led astray by the image itself, divorced from material reality. What was the mirror if not a trickster?
Jean Delumeau poses this question in a preface to Melchior-Bonnet’s book (xi). Through the centuries, as Melchior-Bonnet’s study shows, these two strands are interwoven in the cultural imaginary, sometimes fused, and sometimes torn asunder. With Venetian advances in the techniques and technologies of mirror production in the late Renaissance, the mirror gained special status as a possession of pre-eminent beauty and craftsmanship, a means by which the rich and powerful could reflect back to themselves both the self-image they wanted to see, and the world in the background as a shimmering personal aura. This was an attempt to harness the numinous influence of the divinely radiant mirror in order to enhance the superiority of leading aristocrats. By the mid-seventeenth century, the mirror had become an essential accessory to the royal presence. Queen Anne of Austria staged a Queen’s Ball in 1633, in a hall surrounded by mirrors and tapestries. The large, finely polished mirror panels required for this kind of display were made exclusively by craftsmen at Murano, in a process that, with its huge furnaces, its alternating phases of melting and solidifying, and its mysterious applications of mercury and silver, seemed to belong to the transformational arts of alchemy. In 1664, Louis XIV began to lure craftsmen away from Murano and bring them to France, to set up the Royal Glass and Mirror Company, whose culminating achievement was the Hall of Mirrors at Versailles. The looking glass world of the palace was an arena in which courtiers and visitors engaged in the high-stakes challenge of self-fashioning. Costume, attitude, and manners were the passport to advancement. To cut a figure at court was to create an identity with national and sometimes international currency.
It was through the art of self-fashioning that the many princesses of Europe, and many more young women of title and hereditary distinction, competed for the very few positions as consort to the heir of a royal house. A man might be born to be king, but a woman had to become a queen. So the girl who would be queen looks in the mirror to assess her chances. If her face is her fortune, what might she be? A deep relationship with the mirror may serve to enhance her beauty and enable her to realise her wish, but like all magical agents, the mirror also betrays anyone with the hubris to believe they are in control of it. In the Grimms’ story of “Snow White,” the Queen practises the ancient art of scrying, looking into a reflective surface to conjure images of things distant in time and place. But although the mirror affords her the seer’s visionary capacity to tell what will be, it does not give her the power to control the patterns of destiny. Driven to attempt such control, she must find other magic in order to work the changes she desires, and so she experiments with spells of self-transformation. Here the doubleness of the mirror plays out across every plane of human perception: visual, ethical, metaphysical, psychological. A dynamic of inherent contradiction betrays the figure who tries to engage the mirror as a servant. Disney’s original 1937 cartoon shows the vain Queen brewing an alchemical potion that changes her into the very opposite of all she has sought to become: an ugly, ill-dressed, and impoverished old woman. This is the figure who can win and betray trust from the unspoiled princess to whom the arts of self-fashioning are unknown. In Tarsem Singh’s film Mirror Mirror, the Queen actually has two mirrors. One is a large crystal egg that reflects back a phantasmagoria of palace scenes; the other, installed in a primitive hut on an island across the lake, is a simple looking glass that shows her as she really is.
Snow White and the Huntsman portrays the mirror as a golden apparition, cloaked and faceless, that materialises from within the frame to stand before her. This is not her reflection, but with every encounter, she takes on more of its dark energies, until, in another kind of reversal, she becomes its image and agent in the wider world. As Ursula Le Guin’s sage teaches the young magician, magic has its secret economies. You pay for what you get, and the changes wrought will come back at you in ways you would never have foreseen. The practice of scrying inevitably leads the would-be clairvoyant into deeper levels of obscurity, until the whole world turns against the seer in a sequence of manifestations entirely contrary to his or her framework of expectation. Ultimately, the lesson of the mirror is that living in obscurity is a defining aspect of the human condition. Jorge Luis Borges, the blind writer whose work exhibits a life-long obsession with mirrors, surveys a range of interpretations and speculations surrounding the phrase “through a glass darkly,” and quotes this statement from Leon Bloy: “There is no human being on earth capable of declaring with certitude who he is. No one knows what he has come into this world to do . . . or what his real name is, his enduring Name in the register of Light” (212). The mirror will never really tell you who you are. Indeed, its effects may be quite the contrary, as Alice discovers when, within a couple of moves on the looking glass chessboard, she finds herself entering the wood of no names. Throughout her adventures she is repeatedly interrogated about who or what she is, and can give no satisfactory answer. The looking glass has turned her into an estranged creature, as bizarre a species as any of those she encounters in its landscapes.
Furies
“The furies are at home in the mirror,” wrote R. S. Thomas in his poem “Reflections” (265).
They are the human image gone haywire, the frightening other of what we hope to see in our reflection. As the mirror is joined by technologies of the moving image in twentieth-century evolutions of the myth, the furies have been given a new lease of life on the cinema screen. In Disney’s 1937 cartoon of Snow White, the mirror itself has the face of a fury, which emerges from a pool of blackness like a death’s head before bringing the Queen’s own face into focus. As its vision comes into conflict with hers, threatening the dissolution of the world over which she presides, the mirror’s face erupts into fire. Computer-generated imaging enables an expansive response to the challenges of visualisation associated with the original furies of classical mythology. The Erinyes are unstable forms, arising from liquid (blood) to become semi-materialised in human guise, always ready to disintegrate again. They are the original undead, hovering between mortal embodiment and cadaverous decay. Tearing across the landscape as a flock of birds, a swarm of insects, or a mass of storm clouds, they gather into themselves tremendous energies of speed and motion. The 2012 film Snow White and the Huntsman, directed by Rupert Sanders, gives us the strongest contemporary realisation of the archaic fury. Queen Ravenna, played by Charlize Theron, is a virtuoso of the macabre, costumed in a range of metallic exoskeletons and a cloak of raven’s feathers, with a raised collar that forms two great black wings either side of her head. Powers of dematerialisation and rematerialisation are central to her repertoire. She undergoes spectacular metamorphosis into a mass of shrieking birds; from the walls around her she conjures phantom soldiers that splinter into shards of black crystal when struck by enemy swords.
As she dies at the foot of the steps leading up to the great golden disc of her mirror, her face rapidly takes on the great age she has disguised by vampiric practices. Helena Bonham Carter as the Red Queen in Burton’s Alice in Wonderland is a figure midway between Disney’s fairy tale spectre and the fully cinematic register of Theron’s Ravenna. Bonham Carter’s Queen, with her accentuated head and pantomime mask of a face, retains the boundaries of form. She also presides over a court whose visual structures express the rigidities of a tyrannical regime. Thus she is no shape-shifter, but energies of the fury are expressed in her voice, which rings out across the presence chamber of the palace and reverberates throughout the kingdom with its calls for blood. Alice through the Looking Glass, James Bobin’s 2016 sequel, puts her at the centre of a vast destructive force field. Alice passes through the mirror to encounter the Lord of Time, whose eternal rule must be broken in order to break the power of the murdering Queen; Alice then opens a door and tumbles in free-fall out into nothingness. The place where she lands is a world not of daydream but of nightmare, where everything will soon be on fire, as the two sides in the chess game advance towards each other for the last battle. This inflation of the Red Queen’s macabre aura and impact is quite contrary to what Lewis Carroll had in mind for his own sequel. In some notes about the stage adaptation of the Alice stories, he makes a painstaking distinction between the characters of the queen in his two stories: I pictured to myself the Queen of Hearts as a sort of embodiment of ungovernable passion—a blind and aimless Fury. The Red Queen I pictured as a Fury, but of another type; her passion must be cold and calm—she must be formal and strict, yet not unkindly; pedantic to the 10th degree, the concentrated essence of governesses.
(86) Yet there is clearly a temptation to erase this distinction in dramatisations of Alice’s adventures. Perhaps the Red Queen as a ‘not unkindly’ governess is too restrained a persona for the psychodynamic mythos surrounding the queen in the mirror. The image itself demands more than Carroll wants to accord, and the original Tenniel illustrations give a distinctly sinister look to the stern chess queen. In their very first encounter, the Red Queen contradicts every observation Alice makes, confounds the child’s sensory orientation by inverting the rules of time and motion, and assigns her the role of pawn in the game. Kafka or Orwell would not have been at all relaxed about an authority figure who practises mind control, language management, and identity reassignment. But here Carroll offers a brilliant modernisation of the fairy story tradition. Under the governance of the autocratic queen, wonderland and the looking glass world are places in which the laws of science, logic, and language are overturned, to be replaced by the rules of the queen’s games: cards and croquet in the wonderland, and chess in the looking glass world. Alice, as a well-schooled Victorian child, knows something of these games. She has enough common sense to be aware of how the laws of gravity and time and motion are supposed to work, and if she boasts of being able to believe six impossible things before breakfast, this signifies that she has enough logic to understand the limits of possibility. She would also have been taught about species and varieties and encouraged to make her own collections of natural forms. But the anarchy of the queen’s world extends into the domain of biology: species of all kinds can talk, bodies dissolve or change size, and transmutations occur instantaneously.
Thus the world-warping energies of the Erinyes are re-imagined in an absurdist’s challenge to the scientist’s universe and the logician’s mentality. Carroll’s instinct to tame the furies is in accord with the overall tone and milieu of his stories, which are works of quirky charm rather than tales of terror, but his two queens are threatening enough to enable him to build the narrative to a dramatic climax. For film-makers and animators, though, it is the queen who provides the dramatic energy and presence. There is an over-riding temptation to let loose the pandemonium of the original Erinyes, exploiting their visual terror and their classical association with metamorphosis.
Fashioning
There is some sociological background to the coupling of the queen and the mirror in fairy story. In reality, the mirror might assist an aspiring princess to become queen by enchanting the prince who was heir to the throne, but what was the role of the looking glass once she was crowned? Historically, the self-imaging of the queen has intense and nervous resonances, and these can be traced back to Elizabeth I, whose elaborate persona was fraught with newly interpreted symbolism. Her portraits were her mirrors, and they reflect a figure in whom the qualities of radiance associated with divinity were transferred to the human monarch. Elizabeth developed the art of dressing herself in wearable light. If she lacked for a halo, she made up for it with the extravagant radiata of her ruffs and the wreaths of pearls around her head. Pearls in mediaeval poetry carried the mystique of a luminous microcosm, but they were also mirrors in themselves, each one a miniature reflecting globe. The Ditchley portrait of 1592 shows her standing as a colossus between heaven and earth, with the changing planetary light cycle as background. This is a queen who rules the world through the mediation of her own created image.
It is an inevitable step from here to a corresponding intervention in the arrangement of the world at large, which involves the armies and armadas that form the backdrop to her other great portraits. And on the home front, a regime of terror focused on regular public decapitations and other grisly executions completes the strategy of remaking the world according to her will. Renowned costume designer Eiko Ishioka created an aesthetic for Mirror Mirror that combines elements of court fashion from the Elizabethan era and the French ancien régime, with allusions to Versailles. Formality and mannerism are the keynotes for the palace scenes. Julia Roberts as the Queen wears a succession of vast dresses that are in defiance of human scale and proportion. Their width at the hem is twice her height, and 100,000 Swarovski crystals were used for their embellishment. For the masked ball scene, she makes her entry as a scarlet peacock with a high arching ruff of pure white feathers. She amuses herself by arranging her courtiers as pieces on a chess-board. So stiffly attired that they can barely move more than a square at a time, and with hats surmounted by precariously balanced ships, they are a mock armada from which the Queen may sink individual vessels on a whim, by ordering a fatal move. Snow White and the Huntsman takes a very different approach to extreme fashioning. Designer Colleen Atwood suggests the shape-shifter in the Queen’s costumes, incorporating materials evoking a range of species: reptile scales, fluorescent beetle wings from Thailand, and miniature bird skulls. There is an obvious homage here to the great fashion designer Alexander McQueen, whose hallmark was a fascination with the organic costuming of creatures in feathers, fur, wool, scales, shells, and fronds. Birds were everywhere in McQueen’s work. His 2006 show Widows of Culloden featured a range of headdresses that made the models look as if they had just walked through a flock of birds in full flight.
The creatures were perched on their heads with outstretched wings askance across the models’ faces, obscuring their field of vision. As avatars from the spirit realm, birds are emblems of otherness, and associated with metempsychosis, the transmigration of souls. These resonances give a potent mythological aura to Theron’s Queen of the dark arts. Mirror Mirror and Snow White and the Huntsman accordingly present strikingly contrasted versions of self-fashioning. In Mirror Mirror we have an approach driven by traditions of aristocratic narcissism and courtly persona, in which form is both rigid and extreme. The Queen herself, far from being a shape-shifter, is a prisoner of the massive and rigid architecture that is her costume. Snow White and the Huntsman gives us a more profoundly magical interpretation, where form is radically unstable, infused with strange energies that may at any moment manifest themselves through violent transformation. Atwood was also costume designer for Burton’s Alice in Wonderland, where an invented framing story foregrounds the issue of fashioning as social control. Alice in this version is a young woman, being led by her mother to a garden party where a staged marriage proposal is to take place. Alice, as the social underling in the match, is simply expected to accept the honour. Instead, she escapes the scene and disappears down a rabbit hole to return to the wonderland of her childhood. In a nice comedic touch, her episodes of shrinking and growing involve an embarrassing separation from her clothes, so divesting her also of the demure image of the Victorian maiden. Atwood provides her with a range of fantasy party dresses that express the free spirit of a world that is her refuge from adult conformity. Alice gets to escape the straitjacket of social formation in Carroll’s original stories by overthrowing the queen’s game, and with it her micro-management of image and behaviour.
There are other respects, though, in which Alice’s adventures are a form of social and moral fashioning. Her opening reprimand to the kitten includes some telling details about her own propensities. She once frightened a deaf old nurse by shouting suddenly in her ear, “Do let’s pretend that I’m a hungry hyaena and you’re a bone!” (147). Playing kings and queens is one of little Alice’s favourite games, and there is more than a touch of the Red Queen in the way she bosses and manages the kitten. It is easy to laud her impertinence in the face of the tyrannical characters she meets in her fantasies, but does she risk becoming just like them? As a story of moral self-fashioning, Alice through the Looking Glass cuts both ways. It is at once a critique of the Victorian social straitjacket, and a child’s fable about self-improvement. To be accorded the status of queen and with it the freedom of the board is also to be invested with responsibilities. If the human girl is the queen of species, how will she measure up? The published version of the story excludes an episode known to editors as “The Wasp in a Wig,” an encounter that takes place as Alice reaches the last ditch before the square upon which she will be crowned. She is about to jump the stream when she hears a sigh from the woods behind her. Someone here is very unhappy, and she reasons with herself about whether there is any point in stopping to help. Once she has made the leap, there will be no going back, but she is reluctant to delay the move, as she is “very anxious to be a Queen” (309). The sigh comes from an aged creature in the shape of a wasp, who is sitting in the cold wind, grumbling to himself. Her kind enquiries are greeted with a succession of waspish retorts, but she persists and does not leave until she has cheered him up.
The few minutes devoted “to making the poor old creature comfortable,” she tells herself, have been well spent. Read in isolation, the episode is trite and interferes with the momentum of the story. Carroll abandoned it on the advice of his illustrator John Tenniel, who wrote to say it didn’t interest him in the least (297). There is interest of another kind in Carroll’s instinct to arrest Alice’s momentum at that critical stage, with what amounts to a small morality tale, but Tenniel’s instinct was surely right. The mirror as a social object is surrounded by traditions of self-fashioning that are governed by various modes of conformity: moral, aesthetic, political. Traditions of myth and fantasy allow wider imaginative scope for the role of the mirror, and by association, for inventive speculation about human transformation in a world prone to extraordinary upheavals.
References
Borges, Jorge Luis. “Mirrors of Enigma.” Labyrinths: Selected Stories and Other Writings. Eds. Donald A. Yates and James Irby. New York: New Directions, 2007. 209–12.
Carroll, Lewis. Alice through the Looking Glass. In The Annotated Alice. Ed. Martin Gardner. London: Penguin, 2000.
The King James Bible.
Le Guin, Ursula. The Earthsea Quartet. London: Penguin, 2012.
Melchior-Bonnet, Sabine. The Mirror: A History. Trans. Katherine H. Jewett. London: Routledge, 2014.
Thomas, R.S. “Reflections.” No Truce with the Furies: Collected Later Poems 1988–2000. Hexham, Northumberland: Bloodaxe, 2011.
25

DeCook, Julia Rose. "Trust Me, I’m Trolling: Irony and the Alt-Right’s Political Aesthetic." M/C Journal 23, no. 3 (July 7, 2020). http://dx.doi.org/10.5204/mcj.1655.

Full text
Abstract:
In August 2017, a white supremacist rally marketed as “Unite the Right” was held in Charlottesville, Virginia. Among the participants were members of the alt-right, including neo-nazis, white nationalists, neo-confederates, and other hate groups (Atkinson). The rally swiftly erupted in violence between white supremacists and counter-protesters, culminating in the death of a counter-protester named Heather Heyer, who was struck by a car driven by white supremacist James Alex Fields, and leaving dozens injured. Terry McAuliffe, the Governor of Virginia, declared a state of emergency on August 12, and the world watched while white supremacists boldly marched in clothing emblazoned with symbols ranging from swastikas to a cartoon frog (Pepe), with flags featuring the nation of “Kekistan”, and carrying tiki torches chanting, “You Will Not Replace Us... Jews Will Not Replace Us”. The purpose of this essay is not, however, to examine the Internet symbols that circulated during the Unite the Right rally but rather to home in on a specific moment that illustrates a key part of Internet culture often overlooked in analyses of the events that occurred during the riots: a documentary filmmaker, C. J. Hunt, was at the rally to record footage for a project on the removal of Confederate monuments. While there, he saw a rally-goer dressed in the white polo t-shirt and khaki pants uniform of the white nationalist group Vanguard America. The rally-goer, a young white man, was being chased by a counter-protester. He began to scream and beg for mercy, and even went as far as stripping off his clothing and denying that he really believed in any of the group’s ideology.
In the recording by Hunt, who asks why he was there and why he was undressing, the young white man responded that shouting white power is “fun”, and that he was participating in the event because he, quote, “likes to be offensive” (Hunt). As Hunt notes in a piece for GQ reflecting on his experience at the rally, as soon as the man was cut off from his group and confronted, the runaway racist’s demeanor immediately changed when he had to face the consequences of his actions. Trolls often rely on the safety and anonymity of online forums and digital spaces, where they are free from having to face the consequences of their actions, and for the runaway racist, things became real very quickly when he was forced to own up to his hateful actions. In a way, many members of these movements seem to want politics without consequence for themselves, but with significant repercussions for others. Milo Yiannopoulos, a self-professed “master troll”, built an entire empire worth millions of dollars off of what the far-right defends as ironic hate speech and a form of politics without consequences reserved only for the privileged white men that gleefully engage in it. The runaway racist and Yiannopoulos are borne out of an Internet culture that is built on being offensive, on trolling, and on “troll” itself being an aspirational label and identity, but also, more importantly, a political aesthetic. In this essay, I argue that trolling itself has become a kind of political aesthetic and identity, and provide evidence via examples like hoaxes, harassment campaigns, and the use of memes to signal to certain online populations and extremist groups in violent attacks. The term “aestheticization of politics” was first coined by Walter Benjamin in order to explain a fundamental component of using art to foster consent and compliance in fascist regimes, and it has since evolved to encompass far more than just works of art.
Benjamin’s original conception of the term concerns the creation of a spectacle that prevents the masses from recognizing their rights – in short, the aestheticization of politics is not just about the strategies of the fascist regimes themselves but says more about the subjects within them. In the time of Benjamin’s writing, the specific medium was mass propaganda through the newly emerging film industry and other forms of art (W. Benjamin). To Benjamin, these aesthetics served as tools of distraction that made fascism more palatable to the masses. Aesthetic tools of distraction serve an affective purpose, revealing the unhappy consciousness of neoreactionaries (Hui), and provide an outlet for their resentment.

Since political aesthetics are concerned with how cultural products like art, film, and even clothing reflect political ideologies and beliefs (Sartwell; McManus; Miller-Idriss), the objects of analysis in this essay are part of the larger visual culture of the alt-right (Bogerts and Fielitz; Stanovsky). Indeed, aesthetic aspects of political systems shift their meaning over time, or are changed and redeployed with transformed effect (Sartwell). In this essay, I apply the concept of the aestheticization of politics by analyzing how alt-right visual cultures deploy distraction and dissimulation to advance their political agenda through trolling campaigns and hoaxes.
By analyzing these events, their use of memes, trolling techniques, and their influence on mainstream culture, what is revealed is the influence of trolling on the political culture of the alt-right, and how the alt-right then distracts the rest of the public (McManus).

Who’s Afraid of the Big Bad Troll?

Large-scale analyses of disinformation and extremist content online tend to examine how certain actors are connected, what topics emerge and how these are connected across platforms, and the ways that disinformation campaigns operate in digital environments (Marwick and Lewis; Starbird; Benkler et al.). Masculine- and white-coded technology gave rise to male-dominated digital spaces (R. Benjamin), with trolling often being an issue faced by non-normative users of the Internet and their communities (R. Benjamin; Lumsden and Morgan; Nakamura; Phillips, Oxygen). Creating a kind of unreality in which it is difficult to parse truth from lies and fiction from non-fiction, the troll produces cultural products and, by hiding behind irony and humor, confuses onlookers and is removed from any kind of reasonable blame for their actions. Irony has long been a rhetorical strategy used in politics, and the alt-right has been no exception (Weatherby), but in our current sociopolitical landscape, trolling is a political strategy that infuses irony into politics and identity.

In the digital era, political memes and Internet culture are pervasive components of the spread of hate speech and extremist ideology on digital platforms. Trolling is not an issue that exists in a vacuum – rather, trolls are a product of a greater mainstream culture that encourages and allows their behaviors (Phillips, This Is Why; Fichman and Sanfilippo; Marwick and Lewis). Trolls, and meme culture in general, have often been pointed to as part of the reason for the rise of Trump and fascist politics across the world in recent years (Greene; Lamerichs et al.; Hodge and Hallgrimsdottir; Glitsos and Hall).
Although criticism has been expressed about how impactful memes actually were in the election of Donald Trump, political memes have shaped the ways that trolling went from anonymous jerks on forums to figures like Yiannopoulos, who built entire careers off of trolling, creating empires of hate (Lang). These memes, often absurd and incomprehensible to those who are not part of the community they come from, aim to cheapen, trivialize, and mock social justice movements like Black Lives Matter, feminism, LGBTQ+ rights, and others.

But the history of trolling online goes as far back as the Internet itself. “Trolling” is a catch-all term for online behaviors meant to antagonize, to disrupt online conversations, and to silence other users (Cole; Fichman and Sanfilippo). As more and more people moved online and engaged in participatory culture, trolling continued to evolve from seemingly harmless jokes like the “Rick Roll” to targeted campaigns meant to harass women off of social media platforms (Lumsden and Morgan; Graham). Trolling behaviors are more than just an ugly part of the online experience; they are also a way for users to maintain the borders of their online community – trolling is meant to drive away those perceived to be outsiders, not just from the specific forum but from the Internet itself (Graham). With the rise of modern social media platforms, trolling has become part of the political landscape, creating a “toxic counterpublic” that combines irony with a kind of earnestness to spread and inject its beliefs into mainstream political discourse (Greene).
As a mode of information warfare, these subversive rhetorical strategies, meant to contradict or reverse existing political and value systems, have been used throughout history as a political tactic (Blackstock).

The goal of trolling is not just to disrupt conversations but to sow chaos via confusion about the sincerity and meaning of messages and visuals; rather than functioning as a politics of outrage (on the part of the adherents), it is a politics of being as outrageous as possible. As part of larger meme culture, the aesthetics of trolls and their outrageous content manage to operate under the radar because trolls can excuse their behaviors and rhetoric as just “trolling” or “joking”. This ambiguity points to trolling on the far right as a political strategy and identity used to absolve them of blame or accusations about their real intentions. Calling them “trolls” hides the level of sophistication and the vast influence they have had on public opinion and discourse in the United States (Geltzer; Starks et al.; Marwick and Lewis). We no longer live in a world apart from the troll’s influence, immune from their toxic discourse – rather, we have long been under the bridge with them.

Co-Opted Symbols

One of the most well-known examples of trolling as a political aesthetic and tactic may be the OK hand sign used by the Christchurch shooter. The idea that the OK hand sign was a secretly white supremacist symbol started as a hoax on 4chan. The initial 2017 hoax purported that the hand sign stood for “White Power”, with the three extended fingers representing the W and the circle made with the index finger and thumb the P (Anti-Defamation League, “Okay Hand Gesture”). The purpose of perpetuating the hoax was to demonstrate (a) that they were being watched and (b) that the mainstream media was stupid and gullible enough to believe it.
Meant to incite confusion and to act as a subversive strategy, the OK hand sign was then actually adopted by the alt-right as a sort of meme, not just to perpetuate the hoax but to signal belonging to the larger group (Allyn). Even though the Anti-Defamation League initially declared it not to be a hate symbol and pointed out the origins of the hoax (Anti-Defamation League, “No, the ‘OK’ Gesture Is Not a Hate Symbol”), it reversed its position when the OK hand sign began being flashed by white supremacists and showing up in photographs at political events and in other social media content. In fact, the OK hand sign is also a common element in pictures of Pepe the Frog, a sort of “alt-right mascot” (Tait; Glitsos and Hall); but like the OK hand sign, Pepe the Frog did not start as an alt-right mascot and was co-opted by the alt-right as a mode of representation.

The confusion around the actual meaning behind the hand symbol points to how the alt-right uses these modes of representation in ways that are simultaneously an inside joke and a real expression of their beliefs. For instance, the Christchurch shooter referenced a number of memes and other rhetoric typical of 4chan and 8chan communities in his video and manifesto (Quek). In the shooter’s manifesto and video, the vast amount of content pointing to the trolling and visual culture of the alt-right is striking – demonstrating how alt-right memes not only make this violent ideology accessible, but are cultural products meant to be disseminated and, ultimately, to result in some kind of action (DeCook).

The creation and co-optation of symbols by the alt-right, like the OK hand sign, are not just memes but a form of language created by extremists for extremists (Greene; Hodge and Hallgrimsdottir).
The shooter’s choice to include this type of content in his manifesto, as well as certain phrases in his live-streamed video, indicates his knowledge of what needed to be done for his attack to get as much attention as possible – the 4chan troll is the modern-day bogeyman, and parts of the manifesto have been identified as intentional traps for the mainstream media (Lorenz).

Thus, the Christchurch shooter and trolling culture are linked, but referring to the symbols in the manifesto as merely part of “trolling” culture misses the deeper purpose: chaos, through the outrage spectacle, is the intended goal, particularly by creating arguments about the nature and utility of online trolling behavior. The shooter encouraged other 8chan users to disseminate his posted manifesto as well as to share the video of the attack – and users responded by immortalizing the event in meme format. The memes created celebrated the shooter as a hero, and although Facebook removed the initial livestream video, it was reuploaded to the platform 1.2 million times in the first 24 hours, as users attempted to saturate the platform with so many uploads that it would cause confusion and be difficult to remove (Gramenz). Some users even created gifs or set the video to music from the Doom video game soundtrack – a video game in which the player is a demon slayer in an apocalyptic world – further adding another layer of symbolism to the attack.

These political aesthetics – spread through memes, gifs, and “fan videos” – are the perfect vehicles for disseminating extremist ideology because of what they allow the alt-right to do with them: hide behind them, covering up their intentions, all the while adopting them as signifiers for their movement.
With the number of memes, symbols, and phrases posted in his manifesto and spoken aloud in his livestream, perhaps the Christchurch shooter wanted the onus of the blame to fall on these message board communities and the video games and celebrities referenced – in effect, it was “designed to troll” (Lorenz). But there is a kernel of truth in every meme, post, image, and comment – their memes are part of their political aesthetic, and thus implicit and explicit allusions to the inner workings of their ideology are present. Hiding behind hoaxes, irony, edginess, and trolling, members of the alt-right and other extremist Internet cultures engage in a kind of subversion that allows them to avoid taking any responsibility for real and violent attacks that occur as a result of their discourse. Antagonizing the left, being offensive, and participating in this outrage spectacle to garner a response from news outlets, activists, and outsiders are all part of the same package.

Trolls and the Outrage Spectacle

The confusion and the chaos left behind by these kinds of trolling campaigns and hoaxes leave many to ask: How disingenuous is it? Is it meant for mere shock value, or is it really reflective of the person’s beliefs? In terms of the theme of dissimulation for this special issue, what is the real intent, and under what pretenses should these kinds of trolling behaviors be understood? Returning to the protester who claimed “I just like to be offensive”, the skepticism from onlookers remains: why go so far as to join an alt-right rally, wearing the uniform of Identity Evropa (now the American Identity Movement), as a “joke”?

Extremists hide behind humor and irony to cloud the judgments of others, raising the question: can there be practice without belief?
But, ultimately, practice and belief are intertwined – the regret of the runaway racist came not because he suddenly realized he did not “believe”, but because he was forced to face the consequences of his belief, something that he, as a white man, perhaps never really had to confront before. The cultural reach of dissimulation, in particular hiding true intent behind the claim of “irony”, is vast: the YouTuber Pewdiepie claimed that his use of racial and anti-Semitic slurs, and his putting on an entire Ku Klux Klan uniform in the middle of a video, were “accidental” only after considerable backlash (Picheta). It has to be noted, however, that Pewdiepie was referenced by the Christchurch shooter – specifically, the shooter yelled “subscribe to Pewdiepie” during his livestream (Lorenz). Pewdiepie and many other trolls, once called out for their behavior, and regardless of their actual intent, double down on their claims of irony to distract from the reality of their behaviors and actions.

The normalization of this kind of content on mainstream platforms like Twitter, YouTube, Facebook, and even Instagram shows how 4chan and alt-right Internet culture has seeped out of its borders and exists everywhere online. This “coded irony” is enabled not only rhetorically, due to irony’s slippery definition, but also digitally via these online media (Weatherby). The aesthetics of the troll are present on every single platform and are disseminated everywhere – memes are small cultural units meant to be passed on (Shifman), and although one can argue that it was not memes alone that resulted in the rise of the alt-right and the election of Donald Trump, memes are part of the larger puzzle of the political radicalization process. The role of the Internet in radicalization is so powerful and insidious because of the presentation of its content – it is funny, edgy, ironic, offensive, and outrageous.
But these behaviors and attitudes are not just appeals to some kind of adolescent desire to push the boundaries of what is and is not socially acceptable and/or politically incorrect (Marwick and Lewis), and calling them such clouds people’s perceptions of trolls’ sophistication in shaping political discourse.

Memes and the alt-right are a noted phenomenon, and the visual cultures created by trolls on message boards have aided the rise of the current political situation worldwide (Hodge and Hallgrimsdottir). We are well into a type of warfare based not on weapons and bodies but on information and data – one in which memes and other elements of the far right’s political aesthetic play an important role (Molander et al.; Prier; Bogerts and Fielitz). The rise of the online troll as a political player, and of the alt-right, are merely the logical outcomes of these systems.

Conclusion

The alt-right’s spread was possible because of the trolling cultures and aesthetics of dissimulation created on message boards that predate 4chan (Kitada). The memes and inflammatory statements its members make serve multiple purposes, ranging from inciting outrage among non-members of the group to signaling group belonging and identity. In some odd way, if people do not understand the content, the content actually speaks louder, and in more volumes, than it would if its intent were more straightforward – in their confusion, people give these trolling techniques more attention and amplification in their attempt to make sense of them. Through creating confusion, distraction, and uncertainty around the legitimacy of messages, hand signs, and even memes, the alt-right has elevated the aestheticization of politics to a degree that Walter Benjamin could perhaps not have predicted in his initial lament about the distracted masses of fascist regimes (McManus). The political dimensions of trolling and the cognitive uncertainty it creates are part of its goal.
Dismissing trolls is no longer an option, but regarding them as sinister political operatives may overstate their significance. In the end, “ironic hate speech” is still hate speech, and by couching their extremist ideology in meme format, members of the alt-right make their extremist beliefs more palatable – and nobody is completely immune to their strategies.

References

Allyn, Bobby. “The ‘OK’ Hand Gesture Is Now Listed as a Symbol of Hate.” NPR, 2019. <https://www.npr.org/2019/09/26/764728163/the-ok-hand-gesture-is-now-listed-as-a-symbol-of-hate>.

Anti-Defamation League. “No, the ‘OK’ Gesture Is Not a Hate Symbol.” Anti-Defamation League, 10 Dec. 2017. <https://www.adl.org/blog/no-the-ok-gesture-is-not-a-hate-symbol>.

———. “Okay Hand Gesture.” Anti-Defamation League, 28 Feb. 2020. <https://www.adl.org/education/references/hate-symbols/okay-hand-gesture>.

Atkinson, David C. “Charlottesville and the Alt-Right: A Turning Point?” Politics, Groups, and Identities 6.2 (2018): 309–15.

Benjamin, Ruha. Race after Technology: Abolitionist Tools for the New Jim Code. Polity, 2019.

Benjamin, Walter. The Work of Art in the Age of Mechanical Reproduction. 1936.

Benkler, Yochai, et al. Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. Oxford: Oxford UP, 2018.

Blackstock, Paul W. The Strategy of Subversion: Manipulating the Politics of Other Nations. Chicago: Quadrangle Books, 1964.

Bogerts, Lisa, and Maik Fielitz. “Do You Want Meme War?”: Understanding the Visual Memes of the German Far Right. 2019.

Cole, Kirsti K. “‘It’s Like She’s Eager to Be Verbally Abused’: Twitter, Trolls, and (En)Gendering Disciplinary Rhetoric.” Feminist Media Studies 15.2 (2015): 356–58.

DeCook, Julia R. “Memes and Symbolic Violence: #Proudboys and the Use of Memes for Propaganda and the Construction of Collective Identity.” Learning, Media and Technology 43.4 (2018): 485–504.

Douglas, Nick. “It’s Supposed to Look Like Shit: The Internet Ugly Aesthetic.” Journal of Visual Culture 13.3 (2014): 314–39.

Fichman, Pnina, and Madelyn R. Sanfilippo. Online Trolling and Its Perpetrators: Under the Cyberbridge. Rowman & Littlefield, 2016.

Funke, Daniel. “When and How to Use 4chan to Cover Conspiracy Theories.” Poynter, 24 Sep. 2018. <https://www.poynter.org/fact-checking/2018/when-and-how-to-use-4chan-to-cover-conspiracy-theories/>.

Geltzer, Joshua A. “Stop Calling Them ‘Russian Troll Farms’.” CNN, 2018. <https://www.cnn.com/2018/08/17/opinions/stop-calling-russian-operatives-troll-farms-geltzer/index.html>.

Glitsos, Laura, and James Hall. “The Pepe the Frog Meme: An Examination of Social, Political, and Cultural Implications through the Tradition of the Darwinian Absurd.” Journal for Cultural Research 23.4 (2019): 381–95.

Graham, Elyse. “Boundary Maintenance and the Origins of Trolling.” New Media & Society (2019). doi:10.1177/1461444819837561.

Gramenz, Jack. “Christchurch Mosque Attack Livestream: Why Facebook Continues to Fail.” New Zealand Herald, 17 Feb. 2020. <https://www.nzherald.co.nz/business/news/article.cfm?c_id=3&objectid=12309116>.

Greene, Viveca S. “‘Deplorable’ Satire: Alt-Right Memes, White Genocide Tweets, and Redpilling Normies.” Studies in American Humor 5.1 (2019): 31–69.

Hodge, Edwin, and Helga Hallgrimsdottir. “Networks of Hate: The Alt-Right, ‘Troll Culture’, and the Cultural Geography of Social Movement Spaces Online.” Journal of Borderlands Studies (2019): 1–18.

Hui, Yuk. “On the Unhappy Consciousness of Neoreactionaries.” E-Flux 81 (2017). <https://www.e-flux.com/journal/81/125815/on-the-unhappy-consciousness-of-neoreactionaries/>.

Hunt, C. J. “A Charlottesville White Supremacist Stripped Down to Escape Protesters and We Got It on Video.” GQ, 2017. <https://www.gq.com/story/charlottesville-white-supremacist-strips-to-escape-protestors>.

Kitada, Akihiro. “Japan’s Cynical Nationalism.” Fandom Unbound: Otaku Culture in a Connected World. Eds. Mizuko Ito et al. Yale UP, 2012. 68–84.

Lamerichs, Nicolle, et al. “Elite Male Bodies: The Circulation of Alt-Right Memes and the Framing of Politicians on Social Media.” Participations 15.1 (2018): 180–206.

Lang, Nico. “Trolling in the Name of ‘Free Speech’: How Milo Yiannopoulos Built an Empire off Violent Harassment.” Salon, 2016. <http://www.salon.com/2016/12/19/trolling-in-the-name-of-free-speech-how-milo-yiannopoulos-built-an-empire-off-violent-harassment/>.

Lorenz, Taylor. “The Shooter’s Manifesto Was Designed to Troll.” The Atlantic, 15 Mar. 2019. <https://www.theatlantic.com/technology/archive/2019/03/the-shooters-manifesto-was-designed-to-troll/585058/>.

Lumsden, Karen, and Heather Morgan. “Media Framing of Trolling and Online Abuse: Silencing Strategies, Symbolic Violence, and Victim Blaming.” Feminist Media Studies 17.6 (2017): 926–40.

Marwick, Alice E., and Rebecca Lewis. “Media Manipulation and Disinformation Online.” Data & Society, 2017. <http://centerformediajustice.org/wp-content/uploads/2017/07/DataAndSociety_MediaManipulationAndDisinformationOnline.pdf>.

McManus, Matt. “Walter Benjamin and the Political Practices of the Alt-Right.” New Politics, 27 Dec. 2017. <https://newpol.org/walter-benjamin-and-political-practices-altright/>.

Miller-Idriss, Cynthia. The Extreme Gone Mainstream: Commercialization and Far Right Youth Culture in Germany. Princeton UP, 2018.

Molander, Roger C., et al. Strategic Information Warfare: A New Face of War. RAND Corporation, 1996. <https://www.rand.org/pubs/monograph_reports/MR661.html>.

Nakamura, Lisa. Cybertypes: Race, Ethnicity, and Identity on the Internet. Routledge, 2002.

Nissenbaum, Asaf, and Limor Shifman. “Internet Memes as Contested Cultural Capital: The Case of 4chan’s /b/ Board.” New Media & Society 19.4 (2017): 483–501.

Phillips, Whitney. The Oxygen of Amplification. Data & Society, 2018. <https://datasociety.net/output/oxygen-of-amplification>.

———. This Is Why We Can’t Have Nice Things: Mapping the Relationship between Online Trolling and Mainstream Culture. Cambridge, Mass.: MIT Press, 2015.

Picheta, Rob. “PewDiePie Will Take a Break from YouTube, Saying He’s ‘Very Tired.’” CNN, 2019. <https://www.cnn.com/2019/12/16/tech/pewdiepie-taking-break-youtube-scli-intl/index.html>.

Prier, Jarred. “Commanding the Trend: Social Media as Information Warfare.” Strategic Studies Quarterly 11.4 (2017): 50–85.

Quek, Natasha. Bloodbath in Christchurch: The Rise of Far-Right Terrorism. 2019.

Sartwell, Crispin. Political Aesthetics. Cornell UP, 2010.

Shifman, Limor. Memes in Digital Culture. Cambridge, Mass.: MIT Press, 2014.

Stanovsky, Derek. “Remix Racism: The Visual Politics of the ‘Alt-Right’.” Journal of Contemporary Rhetoric 7 (2017).

Starbird, Kate. “Examining the Alternative Media Ecosystem through the Production of Alternative Narratives of Mass Shooting Events on Twitter.” International AAAI Conference on Web and Social Media (2017): 230–39. <https://www.aaai.org/ocs/index.php/ICWSM/ICWSM17/paper/view/15603>.

Starks, Tim, Laurens Cerulus, and Mark Scott. “Russia’s Manipulation of Twitter Was Far Vaster than Believed.” Politico, 5 Jun. 2019. <https://politi.co/2HXDVQ2>.

Tait, Amelia. “First They Came for Pepe: How ‘Ironic’ Nazism Is Taking Over the Internet.” New Statesman, 16 Feb. 2017. <http://www.newstatesman.com/science-tech/internet/2017/02/first-they-came-pepe-how-ironic-nazism-taking-over-internet>.