To see other types of publications on this topic, follow this link: Site-oriented risk assessment.

Journal articles on the topic "Site-oriented risk assessment"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the 32 best journal articles for your research on the topic "Site-oriented risk assessment."

Next to each source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever these details are included in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Cherdantseva, Yulia, Pete Burnap, Simin Nadjm-Tehrani, and Kevin Jones. "A Configurable Dependency Model of a SCADA System for Goal-Oriented Risk Assessment." Applied Sciences 12, no. 10 (May 11, 2022): 4880. http://dx.doi.org/10.3390/app12104880.

Full text
Abstract:
A key purpose of a Supervisory Control and Data Acquisition (SCADA) system is to enable either an on-site or remote supervisory control and monitoring of physical processes of various natures. In order for a SCADA system to operate safely and securely, a wide range of experts with diverse backgrounds must work in close rapport. It is critical to have an overall view of an entire system at a high level of abstraction which is accessible to all experts involved, and which assists with gauging and assessing risks to the system. Furthermore, a SCADA system is composed of a large number of interconnected technical and non-technical sub-elements, and it is crucial to capture the dependencies between these sub-elements for a comprehensive and rigorous risk assessment. In this paper, we present a generic configurable dependency model of a SCADA system which captures complex dependencies within a system and facilitates goal-oriented risk assessment. The model was developed by collecting and analysing the understanding of the dependencies within a SCADA system from 36 domain experts. We describe a methodology followed for developing the dependency model, present an illustrative example where the generic dependency model is configured for a SCADA system controlling water distribution, and outline an exemplary risk assessment process based on it.
APA, Harvard, Vancouver, ISO, and other styles
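The dependency-model idea in the first entry can be illustrated with a minimal sketch: risk propagates from sub-elements up to a goal through a dependency graph. The node names, base-risk values, and the max-propagation rule below are invented assumptions for illustration, not the paper's actual method.

```python
# Hypothetical sketch of goal-oriented risk propagation over a dependency
# graph, loosely in the spirit of the SCADA model described above.

def propagate_risk(dependencies, base_risk, goal):
    """Risk of a goal = max of its own base risk and the risk
    inherited from every sub-element it depends on."""
    inherited = [propagate_risk(dependencies, base_risk, dep)
                 for dep in dependencies.get(goal, [])]
    return max([base_risk.get(goal, 0.0)] + inherited)

# Toy water-distribution configuration (all values illustrative).
dependencies = {
    "safe_water_supply": ["scada_master", "pumping_station"],
    "scada_master": ["network_link"],
    "pumping_station": ["plc"],
}
base_risk = {"network_link": 0.7, "plc": 0.4}

print(propagate_risk(dependencies, base_risk, "safe_water_supply"))  # 0.7
```

The max rule is a deliberately pessimistic aggregation choice; a real assessment would weight dependencies and combine likelihoods rather than take a single worst case.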
2

Aguirre-Ayerbe, Ignacio, Jara Martínez Sánchez, Íñigo Aniel-Quiroga, Pino González-Riancho, María Merino, Sultan Al-Yahyai, Mauricio González, and Raúl Medina. "From tsunami risk assessment to disaster risk reduction – the case of Oman." Natural Hazards and Earth System Sciences 18, no. 8 (August 24, 2018): 2241–60. http://dx.doi.org/10.5194/nhess-18-2241-2018.

Full text
Abstract:
Abstract. Oman is located in an area of high seismicity, facing the Makran Subduction Zone, which is the major source of earthquakes on the eastern border of the Arabian plate. These earthquakes, as evidenced by several past events, may trigger a tsunami. The aim of this work is to minimize the consequences that tsunami events may cause in coastal communities by integrating tsunami risk assessment and risk reduction measures as part of the risk-management preparedness strategy. An integrated risk assessment approach and the analysis of site-specific conditions made it possible to propose target-oriented risk reduction measures. The process included a participatory approach, involving a panel of local stakeholders and international experts. One of the main concerns of this work was to obtain a useful outcome for the actual improvement of tsunami risk management in Oman. This goal was achieved through the development of comprehensive and functional management tools such as the Tsunami Hazard, Vulnerability and Risk Atlas and the Risk Reduction Measures Handbook, which will help to design and plan a roadmap towards risk reduction. The integrated tsunami risk assessment performed showed that the northern area of Oman would be the most affected, considering both the hazard and vulnerability components. This area also concentrates nearly 50 % of the hot spots identified throughout the country; 70 % of these are located in areas with a very high risk class, for which risk reduction measures were selected and prioritized.
APA, Harvard, Vancouver, ISO, and other styles
3

Tabibzadeh, Maryam, and Gelareh Jahangiri. "A Proactive Risk Assessment Methodology to Enhance Patient Safety: Reducing Wrong Site Surgery as a Preventable Medical Error." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (December 2020): 664–68. http://dx.doi.org/10.1177/1071181320641152.

Full text
Abstract:
Patient safety has been a major area of concern over the last decades in the healthcare industry. The number of preventable medical errors in hospitals has been noticeably high. These errors are more likely to occur in intensive care units including Operating Rooms (ORs). Wrong site surgery is one of the critical sentinel events that occur in healthcare settings. This paper fills an important gap by proposing an integrated, system-oriented methodology for proactive risk assessment of operations in ORs, to specifically analyze the wrong site surgery issue, through the identification and monitoring of appropriate Leading Safety Indicators (LSIs) to evaluate the safety of those operations and generate warning/predicting signals for potential failures. These LSIs are identified across the layers of an introduced framework, which is built on the foundation of the Human-Organization-Technology (HOT) model originally developed by Meshkati (1992). This multi-layered framework captures the contributing causes of wrong site surgery.
APA, Harvard, Vancouver, ISO, and other styles
4

Kawall, Katharina. "The Generic Risks and the Potential of SDN-1 Applications in Crop Plants." Plants 10, no. 11 (October 22, 2021): 2259. http://dx.doi.org/10.3390/plants10112259.

Full text
Abstract:
The use of site-directed nucleases (SDNs) in crop plants to alter market-oriented traits is expanding rapidly. At the same time, there is an on-going debate around the safety and regulation of crops altered with the site-directed nuclease 1 (SDN-1) technology. SDN-1 applications can be used to induce a variety of genetic alterations ranging from fairly ‘simple’ genetic alterations to complex changes in plant genomes using, for example, multiplexing approaches. The resulting plants can contain modified alleles and associated traits, which are either known or unknown in conventionally bred plants. The European Commission recently published a study on new genomic techniques suggesting an adaption of the current GMO legislation by emphasizing that targeted mutagenesis techniques can produce genomic alterations that can also be obtained by natural mutations or conventional breeding techniques. This review highlights the need for a case-specific risk assessment of crop plants derived from SDN-1 applications considering both the characteristics of the product and the process to ensure a high level of protection of human and animal health and the environment. The published literature on so-called market-oriented traits in crop plants altered with SDN-1 applications is analyzed here to determine the types of SDN-1 application in plants, and to reflect upon the complexity and the naturalness of such products. Furthermore, it demonstrates the potential of SDN-1 applications to induce complex alterations in plant genomes that are relevant to generic SDN-associated risks. In summary, it was found that nearly half of plants with so-called market-oriented traits contain complex genomic alterations induced by SDN-1 applications, which may also pose new types of risks. It further underscores the need for data on both the process and the end-product for a case-by-case risk assessment of plants derived from SDN-1 applications.
APA, Harvard, Vancouver, ISO, and other styles
5

Sanuy, Marc, Enrico Duo, Wiebke S. Jäger, Paolo Ciavola, and José A. Jiménez. "Linking source with consequences of coastal storm impacts for climate change and risk reduction scenarios for Mediterranean sandy beaches." Natural Hazards and Earth System Sciences 18, no. 7 (July 3, 2018): 1825–47. http://dx.doi.org/10.5194/nhess-18-1825-2018.

Full text
Abstract:
Abstract. Integrated risk assessment approaches to support coastal managers' decisions when designing plans are increasingly becoming an urgent need. To enable efficient coastal management, possible present and future scenarios must be included, disaster risk reduction measures integrated, and multiple hazards dealt with. In this work, the Bayesian network-based approach to coastal risk assessment was applied and tested at two Mediterranean sandy coasts (Tordera Delta in Spain and Lido degli Estensi–Spina in Italy). Process-oriented models are used to predict hazards at the receptor scale which are converted into impacts through vulnerability relations. In each site, results from 96 simulations under different scenarios are integrated by using a Bayesian-based decision network to link forcing characteristics with expected impacts through conditional probabilities. Consultations with local stakeholders and experts have shown that the tool is valuable for communicating risks and the effects of risk reduction strategies. The tool can therefore be valuable support for coastal decision-making.
APA, Harvard, Vancouver, ISO, and other styles
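At its simplest, the Bayesian-network step described in this abstract estimates conditional probabilities of impact classes given forcing classes from simulated scenarios. The sketch below is a toy frequency estimate with invented categories and counts, not the paper's 96-simulation network.

```python
# Hypothetical sketch: P(impact | forcing) estimated from the frequencies
# of (forcing, impact) pairs, as a Bayesian decision network does at its
# most basic. All scenario data are illustrative.
from collections import Counter

scenarios = [
    ("high", "severe"), ("high", "severe"), ("high", "moderate"),
    ("low", "moderate"), ("low", "none"), ("low", "none"),
]

def conditional(scenarios, forcing):
    """Estimate P(impact | forcing) from scenario frequencies."""
    matching = [impact for f, impact in scenarios if f == forcing]
    counts = Counter(matching)
    return {impact: n / len(matching) for impact, n in counts.items()}

print(conditional(scenarios, "high"))
```

A real network would also condition on receptor location and risk-reduction scenario; this only shows the frequency-to-probability step.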
6

Nikitchenko, Vladimir E., Ekaterina O. Rystsova, and Anastasiya N. Chernysheva. "ANALYSIS AND PREVENTION OF RISKS IN THE MANUFACTURE OF LABORATORY MICROBIOLOGICAL CULTURE MEDIA BY FMEA METHOD." RUDN Journal of Agronomy and Animal Industries 14, no. 1 (December 15, 2019): 90–98. http://dx.doi.org/10.22363/2312-797x-2019-14-1-90-98.

Full text
Abstract:
At all stages of the production of microbiological nutrient media (MNM), the manufacturer, and in particular the microbiological laboratories that manufacture the media on site, face many operational risks. Such risks arise at almost every critical point of MNM production and subsequent use, owing to the common basic requirements that apply to all MNM and must be observed during their development and preparation, as well as to the complexity and laboriousness of manufacturing high-quality differential-diagnostic and other nutrient media, which requires all the necessary components, equipment, sterile conditions and qualified personnel. Hence there is a need for effective methods of identifying and preventing undesirable situations associated with the production and use of MNM. The aim of this work was to adapt a risk assessment methodology based on the expert method of failure mode and effects analysis (FMEA), set out in GOST R ISO 31010-2011, to the needs of microbiological laboratories, including those for veterinary and sanitary expertise, that produce and use microbiological nutrient media. As part of this work, a comparative analysis of risk assessment methods was carried out to select the optimal one; the QMS principle of risk-oriented thinking and the FMEA method were adapted for risk assessment of MNM manufacturing processes in a microbiological laboratory (using solid agar media as an example); risk assessment protocol forms were developed; and quantitative risk levels were calculated with the developed protocols to determine the need for preventive actions and to implement them so as to minimize the negative consequences should a risk materialize.
The results showed that this technique can be successfully implemented and used in the claimed area.
APA, Harvard, Vancouver, ISO, and other styles
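The quantitative step in an FMEA of this kind is usually the classic risk priority number, RPN = severity × occurrence × detectability, each rated on a 1–10 scale, with action required above a threshold. The failure modes, ratings, and threshold below are invented examples, not values from the study.

```python
# Illustrative FMEA-style calculation. The RPN formula is the standard
# one; the specific failure modes and the threshold are assumptions.

def rpn(severity, occurrence, detection):
    """Risk priority number: each factor rated 1 (best) to 10 (worst)."""
    return severity * occurrence * detection

failure_modes = {
    "non-sterile agar batch": (9, 3, 4),
    "wrong medium pH": (6, 4, 2),
}

THRESHOLD = 100  # assumed cut-off for mandatory preventive action
for mode, scores in failure_modes.items():
    value = rpn(*scores)
    flag = "preventive action required" if value > THRESHOLD else "acceptable"
    print(f"{mode}: RPN={value} ({flag})")
```

With these toy ratings, the non-sterile batch (RPN 108) crosses the assumed threshold while the pH error (RPN 48) does not, which is the kind of prioritization the protocol forms in the study support.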
7

Giuliani, Francesca, Anna De Falco, Valerio Cutini, and Michele Di Sivo. "A simplified methodology for risk analysis of historic centers: the world heritage site of San Gimignano, Italy." International Journal of Disaster Resilience in the Built Environment 12, no. 3 (February 11, 2021): 336–54. http://dx.doi.org/10.1108/ijdrbe-04-2020-0029.

Full text
Abstract:
Purpose Worldwide, natural hazards are affecting urban cultural heritage and World Heritage Sites, exacerbating other environmental and human-induced threats deriving from deterioration, uncontrolled urbanization and unsustainable tourism. This paper aims to develop a disaster risk analysis in Italian historic centers because they are complex large-scale systems that are cultural and economic resources for the country, as well as fragile areas. Design/methodology/approach A heritage-oriented qualitative methodology for risk assessment is proposed based upon the formalization of risk as a function of hazard, vulnerability and exposure, taking into account the values of cultural heritage assets. Findings This work provides a contribution to the body of knowledge in the Italian context of disaster risk mitigation on World Heritage Sites, opening for further research on the monitoring and maintenance of the tangible heritage assets. The application to the site of San Gimignano proves the effectiveness of the methodology for proposing preventive measures and actions that ensure the preservation of cultural values and a safer built environment. Originality/value The application of a value-based simplified approach to risk analysis is a novelty for historic centers that are listed as World Heritage Sites.
APA, Harvard, Vancouver, ISO, and other styles
8

Toğan, Vedat, Fatemeh Mostofi, Yunus Ayözen, and Onur Behzat Tokdemir. "Customized AutoML: An Automated Machine Learning System for Predicting Severity of Construction Accidents." Buildings 12, no. 11 (November 9, 2022): 1933. http://dx.doi.org/10.3390/buildings12111933.

Full text
Abstract:
Construction companies are under pressure to enhance their site safety conditions, being constantly challenged by rapid technological advancements, growing public concern, and fierce competition. To enhance construction site safety, the literature has investigated Machine Learning (ML) approaches as risk assessment (RA) tools. However, their deployment requires knowledge for selecting, training, testing, and employing the most appropriate ML predictor. While different ML approaches are recommended in the literature, their practicality at construction sites is constrained by the availability, knowledge, and experience of data scientists familiar with the construction sector. This study develops an automated ML system that automatically trains and evaluates different ML models to select the most accurate ML-based construction accident severity predictor for use by construction professionals with limited data science knowledge. A real-life accident dataset is evaluated through automated ML approaches: Auto-Sklearn, AutoKeras, and a customized AutoML. The investigated AutoML approaches offer higher scalability, accuracy, and result-oriented severity insight due to their simple input requirements and automated procedures.
APA, Harvard, Vancouver, ISO, and other styles
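The core AutoML loop this abstract describes, training several candidate predictors and keeping the most accurate on held-out data, can be shown in a stripped-down form. Real systems such as Auto-Sklearn and AutoKeras also search hyperparameters and architectures; the candidate "models", toy records, and field names below are purely illustrative stand-ins.

```python
# Toy sketch of model selection by held-out accuracy. The two candidate
# rules stand in for trained ML severity predictors; all data invented.

def accuracy(predict, X, y):
    """Fraction of held-out records the predictor labels correctly."""
    return sum(predict(x) == t for x, t in zip(X, y)) / len(y)

candidates = {
    "always_minor": lambda x: "minor",
    "height_rule": lambda x: "severe" if x["fall_height_m"] > 2 else "minor",
}

# Held-out toy accident records (hypothetical field name fall_height_m).
X_val = [{"fall_height_m": 0.5}, {"fall_height_m": 3.0}, {"fall_height_m": 4.0}]
y_val = ["minor", "severe", "severe"]

best_name = max(candidates, key=lambda n: accuracy(candidates[n], X_val, y_val))
print(best_name)  # height_rule
```

The point of the customized AutoML in the study is that this select-by-validation-score step runs without the user hand-tuning each candidate.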
9

Afzal, Muneeb, and Muhammad Tariq Shafiq. "Evaluating 4D-BIM and VR for Effective Safety Communication and Training: A Case Study of Multilingual Construction Job-Site Crew." Buildings 11, no. 8 (July 26, 2021): 319. http://dx.doi.org/10.3390/buildings11080319.

Full text
Abstract:
Effective safety management is a key aspect of managing construction projects. Current safety management practices are heavily document-oriented that rely on historical data to identify potential hazards at a construction job site. Such document-bound safety practices are prone to interpretative and communication errors in multilingual construction environments, such as in the United Arab Emirates (UAE). Applications of Building Information Models (BIM) and Virtual Reality (VR) are claimed to improve hazards identification and communication in comparison to 2-D static drawings by simulating job-site conditions and safety implications and thus can interactively educate the job-site crew to enhance their understanding of the on-site conditions and safety requirements. This paper presents findings of a case study conducted to evaluate the effectiveness of 4-Dimensional (4-D) BIM and VR in simulating job-site safety instructions for a multilingual construction crew at a project in the UAE. 4-D BIM-enabled VR simulations, in lieu of the Abu Dhabi Occupational Safety and Health Center (OSHAD) code of practice, were developed and tested through risk assessment and safety training exercises for the job-site crew. The results showed a significant improvement in the job-site crew’s ability to recognize a hazard, understand safety protocols, and incorporate proactive risk response in mitigating the hazards. This study concludes that 4-D BIM-enabled VR visualization can improve information flow and knowledge exchange in a multilingual environment where jobsite crew do not speak a common language and cannot understand written safety instructions, manuals, and documents in any common language due to linguistic diversity. The findings of this study are useful in communicating safety instructions, and safety training, in the UAE, as well as in international projects.
APA, Harvard, Vancouver, ISO, and other styles
10

Skrypnyk, O., M. Vorozhbiian, M. Ivashchenko, and V. Abrakitov. "INFORMATION SIMULATION OF OCCUPATIONAL SAFETY AT CONSTRUCTION SITE." Municipal economy of cities 1, no. 168 (March 25, 2022): 121–28. http://dx.doi.org/10.33042/2522-1809-2022-1-168-121-128.

Full text
Abstract:
One of the drivers of a state's economic development is the construction industry, in which issues of labor protection and safety improvement are extremely acute, since the industry's effective growth depends on solving them. With the progress of science and technology, and of digital technologies in particular, the possible application of this area to ensuring the safety of production processes becomes relevant. Today, BIM technologies are being actively used in the construction industry. BIM (Building Information Modeling) is the process of creating an integrated model of a future construction project that covers all stages of its life cycle, from design to dismantling. BIM technology is the tool that shows how to improve the interaction of all project participants. BIM is based on a three-dimensional information model: the information model of a building provides full information about the future construction site across the principal sections of the design documentation. This technology is a universal information platform that makes it possible to integrate various software modules into the BIM model of investment and construction projects. Thanks to this approach, it became possible to digitize construction production while monitoring work safety and labor protection using a risk-oriented approach. Considering existing applications of this technology and their results, it can be seen that research has been aimed mainly at the work of designers. However, if BIM technology is considered as an information platform (base) on which new software products (complexes) can be superimposed, a qualitatively different approach to applying this technology can be created. 
In particular, it becomes possible to revisit the approach to assessing industrial safety and labor protection, to approach construction scheduling from a different perspective, and to consider applying such programs to the assessment of construction and installation risks during the implementation of an investment and construction project. The article discusses 3D modeling of objects such as a construction site. Individual areas with assigned boundaries were analyzed to assess the degree of safety within them. To assess the state of labor protection, the studied object was divided into 100-square-meter cells. The stage of determining the most hazardous production factors, as well as the possible risks that construction work may involve, was investigated. Based on the ranking of hazardous and harmful production factors, hazard zones are determined regardless of the type of construction and installation work. Based on the distribution of hazard zones (boundaries), it is possible to rank safety levels that characterize the safety situation at the construction site. The Construction Safety Index identifies the processes and factors that most affect labor safety, enabling inspectors to adjust the selection of protective measures at the construction site most effectively.
APA, Harvard, Vancouver, ISO, and other styles
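The zone-ranking step this abstract describes, splitting the site into 100 m² cells, scoring each cell's hazards, and binning cells into safety levels, can be sketched briefly. The cell IDs, hazard scores, and level thresholds below are invented for illustration, not values from the study.

```python
# Hypothetical sketch of ranking construction-site cells by hazard score
# and assigning safety levels, in the spirit of the index above.

def safety_level(hazard_score):
    """Bin an aggregated hazard score into an assumed three-level scale."""
    if hazard_score >= 7:
        return "danger zone"
    if hazard_score >= 4:
        return "caution zone"
    return "safe zone"

# cell id -> aggregated hazard score for one 100 m^2 cell (illustrative)
cells = {"A1": 8, "A2": 5, "B1": 2}

ranking = sorted(cells.items(), key=lambda kv: kv[1], reverse=True)
for cell, score in ranking:
    print(cell, score, safety_level(score))
```

Ranking cells this way is what lets an inspector target protective measures at the worst cells first, which is the stated purpose of the index.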
11

Ward, Leanne M., David R. Weber, Craig F. Munns, Wolfgang Högler, and Babette S. Zemel. "A Contemporary View of the Definition and Diagnosis of Osteoporosis in Children and Adolescents." Journal of Clinical Endocrinology & Metabolism 105, no. 5 (December 22, 2019): e2088–e2097. http://dx.doi.org/10.1210/clinem/dgz294.

Full text
Abstract:
Abstract The last 2 decades have seen growing recognition of the need to appropriately identify and treat children with osteoporotic fractures. This focus stems from important advances in our understanding of the genetic basis of bone fragility, the natural history and predictors of fractures in chronic conditions, the use of bone-active medications in children, and the inclusion of bone health screening into clinical guidelines for high-risk populations. Given the historic focus on bone densitometry in this setting, the International Society for Clinical Densitometry published revised criteria in 2013 to define osteoporosis in the young, oriented towards prevention of overdiagnosis given the high frequency of extremity fractures during the growing years. This definition has been successful in avoiding an inappropriate diagnosis of osteoporosis in healthy children who sustain long bone fractures during play. However, its emphasis on the number of long bone fractures plus a concomitant bone mineral density (BMD) threshold ≤ −2.0, without consideration for long bone fracture characteristics (eg, skeletal site, radiographic features) or the clinical context (eg, known fracture risk in serious illnesses or physical-radiographic stigmata of osteoporosis), inappropriately misses clinically relevant bone fragility in some children. In this perspective, we propose a new approach to the definition and diagnosis of osteoporosis in children, one that balances the role of BMD in the pediatric fracture assessment with other important clinical features, including fracture characteristics, the clinical context and, where appropriate, the need to define the underlying genetic etiology as far as possible.
APA, Harvard, Vancouver, ISO, and other styles
12

Frantz, Inga, Heather M. Foran, Jamie M. Lachman, Elena Jansen, Judy Hutchings, Adriana Băban, Xiangming Fang, et al. "Prevention of child mental health problems in Southeastern Europe: a multicentre sequential study to adapt, optimise and test the parenting programme 'Parenting for Lifelong Health for Young Children', protocol for stage 1, the feasibility study." BMJ Open 9, no. 1 (January 2019): e026684. http://dx.doi.org/10.1136/bmjopen-2018-026684.

Full text
Abstract:
IntroductionFamilies in low-income and middle-income countries (LMICs) face multiple challenges (eg, poverty and adverse childhood experiences) that increase the risk for child mental health problems, while the context may provide them with few resources. Existing prevention-oriented parenting programmes have been shown to be effective in reducing child behaviour problems and associated risk factors. This project has the overall goal of adapting, implementing and testing a parenting intervention in three Southeastern European LMIC and uses the Multiphase Optimisation Strategy and dimensions of the Reach, Effectiveness, Adoption, Implementation and Maintenance framework. It is implemented over three phases: (1) preparation, (2) optimisation and (3) evaluation. The preparation phase, the subject of this paper, involves the adaptation and feasibility piloting of the parenting programme.Methods and analysisThis protocol describes the assessment of an evidence-informed indicated prevention programme for families with children aged 2–9 years (Parenting for Lifelong Health for Young Children) for implementation in FYR of Macedonia, Republic of Moldova and Romania. In this phase, officials, experts, parents and practitioners are interviewed to explore their views of suitability and needs for further adaptation. In addition, a small pre–post pilot study will test the feasibility of the programme and its implementation as well as the evaluation measures in the three countries with 40 families per country site (n=120). Quantitative data analysis will comprise a psychometric analysis of measures, testing pre–post differences using ANCOVA, χ2tests and regression analysis. For qualitative data analysis, a thematic approach within an experiential framework will be applied.Ethics and disseminationThe ethics review board of the Alpen-Adria University Klagenfurt and ethical review boards in the three LMIC sites have approved the study.Trial registration numberNCT03552250.
APA, Harvard, Vancouver, ISO, and other styles
13

Reynolds, Matthew J., Monica Bodd, Ashley Nelson, Richard Newcomb, P. Connor Johnson, Tejaswini Dhawale, Lauren Heuer, et al. "The Associations between Coping Strategy Use and Patient-Reported Outcomes in Patients with Acute Myeloid Leukemia." Blood 138, Supplement 1 (November 5, 2021): 4131. http://dx.doi.org/10.1182/blood-2021-151895.

Full text
Abstract:
Abstract Background: Patients with acute myeloid leukemia (AML) who receive intensive chemotherapy must cope with immense physical and psychological symptoms associated with a variety of patient-reported outcomes (PROs) such as quality of life (QOL). Although coping is critical to the management of an AML diagnosis and its treatment, data characterizing the use of coping strategies and its associations with PROs in the AML population are limited. Hence, we characterize coping strategy use among patients with AML and examine the associations between coping strategy use, psychological distress, and QOL. Methods: We used cross-sectional secondary data analyses to describe coping in 160 patients with newly diagnosed high-risk AML enrolled in a multi-site randomized supportive care trial. We used the Brief COPE, Hospital Anxiety and Depression Scale (HADS), PTSD Checklist-Civilian Version (PCL-C), and Functional Assessment of Cancer Therapy-Leukemia (FACT-Leu) within 72 hours of patient initiation of chemotherapy, to measure coping strategies, psychological distress and QOL, respectively. We grouped coping strategies into two higher-order domains of coping based on prior literature: approach-oriented coping (i.e., use of emotional support, active coping, positive reframing, acceptance) or avoidant coping (i.e., self-blame, denial, behavioral disengagement). We used the median split method for the distribution of coping domains. We used multivariate regression models adjusting for age, gender and diagnosis type (newly diagnosed vs. relapsed/refractory AML) to assess the relationship between coping and PROs. Results: Participants (median age of 64.4 years) were mostly non-Hispanic White (86.3%), male (60.0%), and married (73.8%). Most (51.9%) reported high utilization of approach-oriented coping strategies (e.g., emotional support) whereas 38.8% reported high utilization of avoidant coping strategies (e.g., denial) (Figure 1). 
At the time of AML diagnosis, use of approach-oriented coping was associated with less psychological distress and better QOL (Table 1). Use of avoidant coping was associated with more psychological distress and worse QOL. Additionally, patients who used multiple approach-oriented coping strategies had less psychological distress and better QOL (Table 2). In contrast, patients who used multiple avoidant coping strategies had more psychological distress, and worse QOL. Conclusions: Our study illustrates that most patients with high-risk AML utilize both approach-oriented and avoidant coping strategies. Our results also reveal links between approach-oriented coping strategies, less psychological distress, and better QOL. These findings underscore the need for early integration of supportive oncology interventions that help patients to cultivate approach-oriented coping strategies. Figure 1 Figure 1. Disclosures Brunner: Novartis: Consultancy, Research Funding; BMS/Celgene: Consultancy, Research Funding; Takeda: Consultancy, Research Funding; Acceleron: Consultancy; Agios: Consultancy; Keros Therapeutics: Consultancy; GSK: Research Funding; Aprea: Research Funding; AstraZeneca: Research Funding; Janssen: Research Funding. Fathi: AbbVie: Consultancy, Honoraria, Research Funding; Takeda: Consultancy, Honoraria; Pfizer: Consultancy, Honoraria; Blueprint: Consultancy, Honoraria; Seattle Genetics: Consultancy, Honoraria; Astellas: Consultancy, Honoraria; Daiichi Sankyo: Consultancy, Honoraria; Genentech: Consultancy, Honoraria; Trillium: Consultancy, Honoraria; Kura: Consultancy, Honoraria; Foghorn: Consultancy, Honoraria; Kite: Consultancy, Honoraria; Morphosys: Consultancy, Honoraria; Ipsen: Consultancy, Honoraria; Agios: Consultancy, Honoraria, Research Funding; Servier: Research Funding; Celgene/BMS: Consultancy, Honoraria, Research Funding. 
LeBlanc: Seattle Genetics: Consultancy, Other: Advisory board, Research Funding; Pfizer: Consultancy, Other: Advisory Board; AstraZeneca: Consultancy, Honoraria, Other: Advisory board, Research Funding; UpToDate: Patents & Royalties; American Cancer Society: Research Funding; Agios: Consultancy, Honoraria, Other: Advisory board; Travel fees, Speakers Bureau; BMS/Celgene: Consultancy, Honoraria, Other: Travel fees, Research Funding, Speakers Bureau; Daiichi-Sankyo: Consultancy, Honoraria, Other: Advisory board; Flatiron: Consultancy, Other: Advisory board; Astellas: Consultancy, Honoraria, Other: Advisory board; AbbVie: Consultancy, Honoraria, Other: Advisory board; Travel fees, Speakers Bureau; Otsuka: Consultancy, Honoraria, Other; Jazz Pharmaceuticals: Research Funding; Duke University: Research Funding; Helsinn: Consultancy, Research Funding; Heron: Consultancy, Honoraria, Other: advisory board; CareVive: Consultancy, Other, Research Funding; NINR/NIH: Research Funding; Amgen: Consultancy, Other: travel.
APA, Harvard, Vancouver, ISO, and other styles
14

Abdelwahab, Siddig Ibrahim, Mohammed Al-Mamary, Khaled Hassanein, Manal Mohamed Elhassan Taha, Abdullah Farasani, and Hassan Alhazmi. "Effects of anti-cyclooxygenases (COX-1 and COX-2), structure activity relationship, molecular docking and in silico ADMET of some synthesized chalcones." Tropical Journal of Pharmaceutical Research 21, no. 11 (February 16, 2023): 2419–27. http://dx.doi.org/10.4314/tjpr.v21i11.22.

Full text
Abstract:
Purpose: To develop effective cancer chemopreventive and anti-inflammatory agents, a series of chalcones were prepared by reacting suitable aromatic aldehyde with appropriate acetophenones. Methods: Twenty-four synthesized chalcones (namely, 1 - 24) were assessed for their in vitro anti-cyclooxygenase-1 (COX-1) and anti-cyclooxygenase-2 (COX-2) activity in a COX catalyzed prostaglandin synthesis bioassay. Molecular docking was done to investigate the ligand-protein interactions, and selectivity on both enzymes. ADMET (absorption, distribution, metabolism, excretion, toxicity) modeling and software were also used. Results: The compounds inhibited both COX-1 and COX-2. Two compounds (3 and 19) demonstrated more marked COX-2 inhibition than compound 1. Indomethacin as a standard anti-cyclooxygenase shows unselective inhibition of 81.44 ± 6.5 and 91 ± 9.5, respectively. The in silico data revealed that a chalcone skeleton with C=O at 4-position, C2–C3 double bond and OH at 5-position are necessary properties for anti-cyclooxygenase effects. It was also revealed that the propenone moiety comprises of an appropriate scaffold which proposes a new acyclic 1,3-diphenylprop-2-en-1-ones with selective anti-COX effects. A molecular modeling investigations where these chalcones 1, 3 and 19 were docked in the active site of COX-2 depicted that the p-CH3 substituent on the C-4- phenyl ring A are oriented in the vicinity of the COX-2 secondary pocket Phe381, Gly526, Tyr385 and Val349. Conclusion: Based on the screening for oral bioavailability, in silico ADMET, and toxicity risk assessment, this study shows that these compounds could be a cornerstone for the development of new pharmaceuticals in the battle against COX-associated inflammatory disorders.
15

Arian, Natt, Peter Tingate, Richard Hillis and Geoff O'Brien. « Petroleum systems of the Bass Basin: a 3D modelling perspective ». APPEA Journal 50, no 1 (2010): 511. http://dx.doi.org/10.1071/aj09030.

Abstract:
Petroleum generation, expulsion, migration and accumulation have been modelled in 3D at basin scale for the Bass Basin, Tasmania. The petroleum systems model shows that several source rocks of different ages have generated and expelled sufficient hydrocarbons to fill structures in the basin; however, the lithologies and fault properties in the model result in generally limited migration after hydrocarbon expulsion started. Impermeable faults, together with several fine-grained sealing facies in the Lower and Middle Eastern View Group (EVG), have resulted in minor vertical hydrocarbon migration in the lower parts of the EVG. An exception occurs in the northeastern part of the basin, where strike-slip movement of suitably oriented faults during Miocene reactivation resulted in breaching of deeper accumulations, migration to upper reservoir sands and, in several cases, leakage through the regional seal. The Middle Eastern View Group source rocks have produced most of the gas in the basin. Oil appears to be largely limited to the Yolla Trough, related to the relatively high thermal maturation of Narimba Sequence source rocks. In general, most of the hydrocarbon expulsion from the Otway Megasequence occurred before the regional seal was deposited; however, modelling predicts that it can contribute to the hydrocarbon inventory of the Cape Wickham Sub-basin. In particular, the modelling predicted an Otway-sourced accumulation at the site of the recently drilled Rockhopper–1. In the Durroon Sub-basin in the Bark Trough, the Otway Megasequence is predicted to be the main source of accumulations. The modelling has provided detailed insights into migration in the existing plays, has allowed assessment of the reasons for previous exploration failures (e.g., a migration shadow at Toolka–1) and has suggested new locations with viable migration histories.
Reservoir sands of the Upper EVG are only prospective in the Yolla and Cormorant troughs where charged by Early Eocene sources; however, Miocene reactivation is a major exploration risk in this area.
16

Chang, Yueming, Jeffrey S. Wasser, Adam Boruchov, Bruce J. Mayer and Kazuya Machida. « BCR Signalosome-Oriented Phosphotyrosine Profiling of CLL ». Blood 126, no 23 (3 December 2015): 4128. http://dx.doi.org/10.1182/blood.v126.23.4128.4128.

Abstract:
Introduction: There is strong evidence that B-cell receptor (BCR) signaling has a critical role in the pathogenesis of B-cell malignancies such as non-Hodgkin's lymphoma (NHL) and chronic lymphocytic leukemia (CLL). The BCR signalosome, the B-cell receptor signaling protein complex, is therefore a rational therapeutic target, as already proven by the success of clinical trials of B-cell signaling inhibitors such as ibrutinib and idelalisib. It is becoming more important to know which combinations of new and traditional agents are best for each patient. Although genetic profiles may help to predict sensitivity to treatment to some degree, it would be ideal to profile the BCR signaling state of each case in order to select the most effective B-cell signaling inhibitors. Our goal is to develop a BCR signalosome-oriented molecular marker and investigate its clinical value. We focus on protein-protein interactions in the B-cell signalosome that are regulated by tyrosine kinases, their substrates, and SH2 domains. We hypothesized that the global tyrosine phosphorylation state determined by SH2 profiling, an SH2 domain-based molecular diagnostic approach, may meaningfully represent the B-cell signaling state of B-cell malignancies. Here we conducted SH2 profiling of 1) BCR signalosome peptides, to determine the specificity of BCR SH2 domain probes, and 2) CLL patient samples, to determine the presence of patient-specific BCR SH2 profiles. Methods: For microarrays, phosphorylation site databases were extensively searched and 368 tyrosine phosphorylation sites from the 38 core proteins that make up the BCR signalosome were selected for peptide synthesis. Replicated peptide microarrays containing pairs of phosphorylated and unphosphorylated tyrosine motifs were separately probed with a set of BCR SH2 domains including BTK, BLNK, LYN, PI3K, SYK, and PLCg2.
For clinical sample experiments, PBMC samples were collected from 35 CLL patients who visited UConn Health (UCH) and Saint Francis Hospital (SFH) between 2008 and 2014. The median age at study enrollment was 67 years (range 46-98). Male patients constituted 63%. Binet stage A disease was present in 76% of the patients. A reverse-phase SH2 domain binding assay was performed as previously described using the BCR SH2 domains. Results: According to the microarray results, 94% of BCR signalosome phosphotyrosines were bound by at least one SH2 domain (median 5 domains). A group of proteins including CD22, CD79A, CD45, and PLCg2 harbour tyrosine sites that can bind more than 10 SH2 domains, suggesting that competition between these SH2 domains may exist in the cell. The specificity of the BCR SH2 domains could be grouped into three levels: very specific (BTK, BLNK), medium (Lyn, PLCg2, SHP-1, etc.) and broad (SHIP). We found a number of previously undocumented SH2-ligand interactions that may be involved in specific downstream signaling pathways. A clustering analysis of CLL samples revealed the presence of different patient groups, such as BLNK-dominant and PLC-dominant. Of those clusters, we observed that a cluster with low BLNK signal and high LYN signal was enriched with clinically progressive CLL cases. To test the prognostic impact of the BLNK/LYN profile, the CLL cases were divided into four groups by high (+) and low (-) BLNK and LYN SH2 binding and compared with respect to PFS. There was a significant difference between the groups in a log-rank test, indicating that patients with the BLNK-low & LYN-high profile progressed more rapidly. Of note, in the microarray experiments we identified a group of BCR signaling proteins/peptides, such as CD19, CD79A, and Dok1, that show a similar BLNK-low & LYN-high profile. Their involvement in the SH2 profile of CLL samples remains to be determined.
Conclusion Aiming to develop a new molecular marker based on the BCR signaling state of B-cell malignancies, we applied SH2 profiling to BCR peptide microarrays and CLL/NHL patient samples. We confirmed that BCR signalosome-oriented SH2 probes have sufficient specificity to distinguish various signalosome tyrosine sites. SH2 profiling of CLL indicated that the BCR SH2 probes are able to distinguish CLL subgroups, one of which was correlated with a poor PFS. Further efforts are underway to determine the clinical marker value of the BCR signalosome profile, such as its utility in risk stratification, early detection of disease progression, and prediction or assessment of response to B-cell signaling inhibitor therapy. Disclosures Wasser: Amgen, Inc: Consultancy, Membership on an entity's Board of Directors or advisory committees, Research Funding.
17

Rensink, Eelco. « Archaeological heritage management in the Meuse valley (Limburg, the Netherlands) from a national perspective: aims, methods and results ». Netherlands Journal of Geosciences 96, no 2 (14 February 2017): 197–209. http://dx.doi.org/10.1017/njg.2016.53.

Abstract:
Numerous archaeological investigations have been performed along the river Meuse in the Netherlands’ southeastern province of Limburg as part of the major ‘Maaswerken’ infrastructural project. To improve flood risk management and navigability, and for the purpose of gravel production and nature development, several areas of land covering a total of almost 2000 ha are being excavated to a great depth. In anticipation of this, archaeological research was performed for the purposes of recording and documenting archaeological remains in the most important areas and locations. From 1998 to 2015 the Cultural Heritage Agency of the Netherlands (Amersfoort) was in charge of the investigations, and acted as adviser to the national public works agency Rijkswaterstaat. The archaeological research connected with the Maaswerken project differed from regular, site-based investigations in terms of the landscape archaeology perspective on which it was based. The research themes and principles associated with this perspective were published in several documents, including a scientific policy plan published in 2004, and presented in further detail in area programmes and project briefs. The policy plan assigned each project area to one of five value assessment categories, based on the intactness of the landscape and the archaeological potential for addressing the research questions. In areas of high landscape intactness and great archaeological potential (category 1) the Agency selected zones to be surveyed and assessed, and for archaeological excavation. Though most of the fieldwork, including specialist analysis, was performed in these zones, project areas in other categories have also been the subject of archaeological fieldwork, including borehole surveys, site-oriented research and watching briefs, but on a more incidental basis.
Observations were also made in the river Meuse itself and in the river's winter bed. The archaeological investigations resulted in a large number of standard reports of desk studies and fieldwork, including reports of specialist analyses. A considerable proportion of these refer to the large-scale investigations at Borgharen and Itteren to the north of Maastricht, and at Lomm and Well–Aijen to the north of Venlo. The results of the investigations suggest that the archaeological record here is rich and varied, with a time depth of c. 11,500 years, and traces of occupation and land use ranging from the Early Mesolithic (Well–Aijen, Borgharen) to the Second World War (Lomm). This paper reflects on almost 20 years of archaeological research in the project areas of the Maaswerken and on the principles and methods used in the field research. The common thread is the results of landscape and archaeological studies and the relationship between them. Examples are used to illustrate results that can be regarded as important from a national perspective, and in terms of archaeological heritage management.
18

Tsanis, Ioannis K., Konstantinos D. Seiradakis, Sofia Sarchani, Ioanna S. Panagea, Dimitrios D. Alexakis and Aristeidis G. Koutroulis. « The Impact of Soil-Improving Cropping Practices on Erosion Rates: A Stakeholder-Oriented Field Experiment Assessment ». Land 10, no 9 (12 September 2021): 964. http://dx.doi.org/10.3390/land10090964.

Abstract:
The risk of erosion is particularly high in Mediterranean areas, especially in areas subject to ineffective or neglectful agricultural management, land abandonment or wildfires. Soils on Crete are under imminent threat of desertification, characterized by loss of vegetation, water erosion and, subsequently, loss of soil. Several large-scale studies have estimated average soil erosion on the island at between 6 and 8 Mg/ha/year, but more localized investigations assess soil losses an order of magnitude higher. An experiment initiated in 2017, under the framework of the SoilCare H2020 EU project, aimed to evaluate the effect of different management practices on soil erosion. The experiment was set up in a control-versus-treatment experimental design including different sets of treatments, targeting the most important cultivations on Crete (olive orchards, vineyards, fruit orchards). The minimum-to-no-tillage practice was adopted as an erosion mitigation practice for the olive orchard study site, while for the vineyard site the cover crop practice was used. For the fruit orchard field, the crop-type change procedure (orange to avocado) was used. The experiment demonstrated that soil-improving cropping techniques have an important impact on soil erosion and, as a result, on soil water conservation, which is of primary importance, especially for dry Mediterranean regions. The demonstration of the findings is of practical use to most stakeholders, especially those that live and work with the local land.
19

Romeiko, Xiaobo Xue. « Assessing Health Impacts of Conventional Centralized and Emerging Resource Recovery-Oriented Decentralized Water Systems ». International Journal of Environmental Research and Public Health 17, no 3 (4 February 2020): 973. http://dx.doi.org/10.3390/ijerph17030973.

Abstract:
Energy shortage and climate change call for sustainable water and wastewater infrastructure capable of simultaneously recovering energy, mitigating greenhouse gas emissions, and protecting public health. Although the energy use and greenhouse gas emissions of water and wastewater infrastructure are extensively studied, the human health impacts of innovative infrastructure designed under the principles of decentralization and resource recovery are not fully understood. In order to fill this knowledge gap, this study assesses and compares the health impacts of three representative systems by integrating life cycle and microbial risk assessment approaches. This study found that the decentralized system options, such as on-site septic tanks and composting or urine-diverting toilets, presented much lower life cycle cancer and noncancer impacts than the centralized system. The microbial risks of the decentralized system options were also lower than those of the centralized system. Moreover, life cycle cancer and noncancer impacts contributed approximately 95% of total health impacts, while microbial risks were associated with the remaining 5%. Additionally, the variability and sensitivity assessment indicated that reducing the energy use of wastewater treatment and water distribution is effective in mitigating the total health damages of the centralized system, while reducing the energy use of water treatment is effective in mitigating the total health damages of the decentralized systems.
20

Kanter, Julie, Amber L. Allison, Caitlin Henry, Sheryl Martin-Schild, Melody Benton, Ernest DeJean, Katarina Unger and Stacy Drury. « The Cambridge Automated Neuropsychological Testing Automated Battery (CANTAB) Is Feasible and Valuable for the Evaluation of Neurocognitive Deficits in Pediatric Patients with Sickle Cell Disease: Results of a Pilot Study ». Blood 118, no 21 (18 November 2011): 4839. http://dx.doi.org/10.1182/blood.v118.21.4839.4839.

Abstract:
Background: As children with sickle cell disease (SCD) are at significant risk for neurocognitive complications, an automated and objective measure of neurocognitive functioning would address several challenges facing both clinical and research progress in SCD, including longitudinal monitoring of deficits, cross-site comparability of neurocognitive tests in multicenter trials, and limited access to pediatric neuropsychologists. The Cambridge Automated Neuropsychological Testing Automated Battery (CANTAB) is a well-validated computerized test with significant normative data in individuals aged 4 to 80 that has been used to monitor disease progression and treatment response in children and adults with a range of disorders, but has not been used previously in SCD. Hypothesis: We hypothesize that the CANTAB system is a useful and viable tool for the neurocognitive evaluation of pediatric patients with SCD. We expect that CANTAB testing will be well tolerated by SCD patients and parents, easy to administer in our comprehensive clinic, and will generate valid results that correlate with both medical and psychological outcomes. Methods: Seven CANTAB tests assessing attention, executive function and memory were run on pediatric SCD patients during scheduled clinic visits. Parents completed the Child Behavior Checklist (CBCL), which generates t-scores for children on internalizing and externalizing scales as well as DSM-oriented affective, anxiety, pervasive developmental, attention and oppositional scales. Medical data including SCD genotype, average hemoglobin (hgb), hematocrit (hct), reticulocyte count (rct), lactate dehydrogenase and hospital utilization records (ER visits, number of hospital visits in the last year) were collected. Demographic information and a total pain burden assessment were also collected. Results: 11 children with HbSS SCD were enrolled in the pilot study (table 1).
All patients successfully completed the CANTAB testing without difficulty. Hgb and rct were associated with the strategy score on spatial working memory and the latency score on the motor screening task. Hgb and rct also correlated with internalizing, externalizing, and total symptom scores on the CBCL (table 2); specifically, lower hgb and higher rct were associated with increased CBCL scores. A regression model incorporating average hgb and total internalizing scores, with spatial working memory as the dependent variable, revealed a significant interaction between internalizing scores and hgb and a significant overall model (p = .01, r2 = 0.89), offering preliminary support for a multi-level model incorporating disease- and child-specific factors (table 2). The total pain burden score correlated with error making in several tests, including the delayed match to sample test (p = .01), the spatial working memory test (p = .06), and the Stockings of Cambridge task (p = .0038). The pain burden score was not associated with performance or latency on these tests, indicating that pain burden may have a specific association with error making. Pain burden also correlated with the somatic measure on the CBCL (p = .01), indicating cross-validation between the two measures. Conclusion: This pilot study demonstrates the feasibility and value of the CANTAB system in evaluating neurocognitive deficits in pediatric patients with SCD. These results can be assessed longitudinally following medical interventions. Furthermore, the results indicate that a multi-level model including medical factors, child-specific factors, and demographics may be a more appropriate model to use in determining the etiology of neurocognitive deficits in SCD. Ongoing studies with an increased sample size will examine the association of neurocognitive function with SCD genotype, MRI, transcranial doppler studies, and family stress.
SWM: spatial working memory; SOC: Stockings of Cambridge; MOT: Motor Screening Test. Disclosures: No relevant conflicts of interest to declare.
21

Mori, Daisuke. « Paper 90: Histopathology of Rotator Cuff Tendons in Elderly Patients with Glenohumeral Arthritis without Cuff Tears ». Orthopaedic Journal of Sports Medicine 10, no 7_suppl5 (1 July 2022): 2325967121S0065. http://dx.doi.org/10.1177/2325967121s00653.

Abstract:
Objectives: Some surgeons prefer reverse shoulder arthroplasty to total shoulder arthroplasty in elderly patients with osteoarthritis (OA) and without cuff tears, because these patients may subsequently develop cuff tears. However, other studies have shown that total shoulder arthroplasty provides good to excellent results in elderly patients with intact rotator cuffs, even in patients ≥ 80 years of age. We conducted the following studies to clarify potential rotator cuff degeneration in elderly OA patients, histologically analyzing the torn edges of ruptured rotator cuff tendons from patients with cuff tear arthropathy (CTA), cuff tendons from patients with a proximal humeral fracture and an intact cuff (control), and en bloc cuff tendon remaining on the greater tuberosity from patients with OA, after harvesting these tissues at the time of RSA. We then compared the clinical results of elderly patients undergoing total shoulder arthroplasty (TSA) and reverse shoulder arthroplasty (RSA) when these patients reached ≥ 80 years of age. We hypothesized that rotator cuff tendons are more severely degenerated microscopically in elderly patients with OA and intact cuff tendons than in elderly patients with proximal humeral fractures, and comparably to those with CTA. Methods: We histologically evaluated rotator cuff tendon samples harvested as 13 samples in 11 shoulders in 9 patients with OA without cuff tears, 18 samples in 14 shoulders in 14 patients with CTA, and 2 shoulders in 2 patients with proximal humeral fractures, using the Bonar score and electron microscopic analysis. In addition, we compared the clinical results of TSA in 7 shoulders in 6 patients and RSA in 19 shoulders in 15 patients when these patients reached ≥ 80 years of age. Bonar scores were compared between the OA and CTA patients (OA and CTA groups), and Constant scores and range of motion were compared between the two procedures (TSA and RSA groups).
Two patients with proximal humeral fractures served as controls. We identified patients with secondary rotator cuff dysfunction by the presence of either moderate or severe superior subluxation of the humeral head, based on radiographic assessment of humeral superior subluxation. Results: There were no significant differences in patients’ age, sex, BMI, heart disease, DM, hyperlipidemia, sample site, or preoperative Constant and ASES scores (except regarding the number of affected dominant arms) between the CTA and OA groups. There were no significant differences in the distribution of each category for tenocytes, ground substance, collagen, and vascularity (P = .227, .107, .509, and .848, respectively). In addition, there was no significant difference in the Bonar scores between the CTA and OA groups (P = .140). In both groups, irregularly oriented collagen fibers showing fiber separation, together with numerous blood vessels and inflammatory cells, were observed in the HE-stained sections (Fig. 1A-C; A, OA patient; B and C, CTA patients). In the AB/PAS-stained sections, an increase in alcianophilia, indicating glycosaminoglycans among the collagen fibers, was observed (Figure 1D, CTA patient). The control supraspinatus tendons from the patients with a four-part proximal humeral fracture demonstrated well-oriented collagen fibers with tightly cohesive, well-demarcated bundles. The two control shoulders had 0 points and 2 points for the Bonar score, respectively (Figure 2). The ultrastructural analysis showed that collagen fibrils were arranged irregularly, with a heterogeneous extracellular matrix, in the OA group. Similarly, in the CTA group, some collagen fibrils were oriented in different directions, and there were empty spaces between the fibrils, representing non-collagenous extracellular matrix (Figure 3; A and C, OA patient; B and D, CTA patient).
We found no significant difference in fibril diameter (nm) between the two groups (mean, 66.9 for the CTA group and 65.0 for the OA group; P = .219) (Figure 3C and D). There were significant improvements between preoperative and postoperative clinical scores in both groups. In addition, patients in the TSA group had significantly lower Constant scores, Constant ROM scores, and ROM in flexion and abduction at the final follow-up (P = .009, < .001, .003, and .009, respectively). Upward migration of the prosthetic humeral head was observed in 7 shoulders (100%) overall and was graded as mild in 3 shoulders (42.9%) and moderate in 4 shoulders (57.1%), as secondary cuff dysfunction. Conclusions: The most important finding of the present study was that rotator cuff tendons in elderly OA patients without cuff tears had relatively higher mean Bonar scores than the cuff tendons of the two patients with proximal humeral fractures (control shoulders), and scores comparable to those of the cuff tendons of the elderly CTA patients. In addition, our clinical results showed that the TSA group had significantly lower clinical variables than the RSA group regarding the Constant score, Constant ROM score, and ROM in flexion and abduction in our cohort of patients ≥ 80 years of age at the latest follow-up. Furthermore, 4 shoulders (57.1%) in the TSA group had moderate superior subluxation of the prosthetic humeral head, as possible secondary cuff tendon dysfunction, at the final follow-up. Considering these histologic and clinical results, severe histological degeneration of rotator cuff tendons in elderly OA patients without cuff tears may pose a risk of secondary rotator cuff dysfunction and poor clinical outcome after TSA.
22

Sare, Eric Bio Nikki, Armelle Sabine Yélignan Hounkpatin, Vidédji Naéssé Adjahossou, Abdel Fawaz Bagoudou and Anicette Bio Sourou. « Assessment of bacterial contamination of irrigation water and market gardening products at Parakou (A city in northern Benin) ». International Journal of Biological and Chemical Sciences 15, no 1 (21 April 2021): 241–50. http://dx.doi.org/10.4314/ijbcs.v15i1.21.

Abstract:
In many districts of Benin, the use of wastewater in urban agriculture is becoming more and more widespread. This activity around wastewater discharges potentially poses health risks to populations. As water is one of the main sources of food contamination in developing countries, the main objective of this study, oriented towards the assessment of the bacterial load, was to search for Salmonella, which are pathogenic to humans, in irrigation water as well as in some market gardening products consumed in Parakou district. The study was carried out on the market gardening perimeter of the slaughterhouse site located near the international market Arzèkè, where market gardeners exclusively use surface water from the mixture of groundwater and runoff from installed collectors. Observation of the different colonies, followed by biochemical tests for the detection and differentiation of Salmonella, revealed the presence of Salmonella in the different samples at rates ranging from 50% to 80%. The presence of fecal coliforms and Escherichia coli, not only in water but also in market garden products, was also confirmed. These results might partly explain the frequency of salmonellosis in the study area. Keywords: Salmonella, microbiological tests, biochemical tests.
23

Marceglia, S., F. Pinciroli and S. Bonacina. « A Pictorial Schema for a Comprehensive User-oriented Identification of Medical Apps ». Methods of Information in Medicine 53, no 03 (2014): 208–24. http://dx.doi.org/10.3414/me13-01-0093.

Abstract:
Summary. Objectives: The huge number of released medical apps prevents medical app users from believing that medical scientific societies, and other accreditation bodies as well, have the resources and the power to assign a quality score to every medical app. For the time being, any medical app user has to take the risks related to the frequently insufficient accreditation of that app. Providing clear user-oriented schemas, to be adopted both when putting a medical app on the market and when an app comes to be evaluated by a cohort of users or by a single user, becomes crucial. The aim of our research was to define a one-shot pictorial identification schema for a comprehensive user-oriented identification of medical apps. Methods: Adopting a pictorial approach is common in software design modeling. To build up our identification schema we started from the limited number of apps already available on a web site of app reviews (iMedicalApps.com), and we identified an appropriately large set of attributes for describing medical apps. We arranged the attributes in six main families and organized them in a one-shot comprehensive pictorial schema. We adopted a traffic-light color code for assessing each attribute, which was sufficient to provide simple elements of alerts and alarms regarding a single app. We then considered apps from the iMedicalApps.com web site belonging to three medical specialties: cardiology, oncology, and pharma, and analyzed them according to the proposed pictorial schema. Results: A pictorial schema was identified, with the attributes grouped in families related to “Responsible Promoters”, “Offered Services”, “Searching Methods”, “Application Domains”, “Envisaged Users”, and “Qualifiers and Quantifiers”.
Furthermore, we produced a one-shot pictorial schema for each considered app and, for each medical specialty, also in an aggregated form. Conclusions: The one-shot pictorial schema provides a useful perception of when and where to use a considered app. It positively fits the expectations of different potential user profiles, and can be a first step towards a systematic assessment of apps from the user viewpoint.
24

Pagh, Lars. « Tamdrup – Kongsgård og mindekirke i nyt lys » [Tamdrup – royal residence and memorial church in a new light]. Kuml 65, no 65 (25 November 2016): 81–129. http://dx.doi.org/10.7146/kuml.v65i65.24843.

Abstract:
Tamdrup: Royal residence and memorial church in a new light. Tamdrup has been shrouded in a degree of mystery in recent times. The solitary church located on a moraine hill west of Horsens is visible from afar and has attracted attention for centuries. On the face of it, it resembles an ordinary parish church, but on closer examination it is found to be unusually large, and on entering one discovers that hidden beneath one roof is a three-aisled construction, which was originally a Romanesque basilica. Why was such a large church built in this particular place? What were the prevailing circumstances in the Early Middle Ages when the foundation stone was laid? The mystery of Tamdrup has been addressed and discussed before. In the 1980s and 1990s, archaeological excavations were carried out which revealed traces of a magnate’s farm or a royal residence from the Late Viking Age or Early Middle Ages located on the field to the west of the church (fig. 4), and in 1991 the book Tamdrup – Kirke og gård was published. Now, by way of metal-detector finds, new information has been added. These new finds provide several answers, but also give rise to several new questions and problems. In recent years, a considerable number of metal finds recovered by metal detector at Tamdrup have been submitted to Horsens Museum. Since 2012, 207 artefacts have been recorded, primarily coins, brooches, weights and fittings (e.g. from harness), dating from the Late Viking Age and Early Middle Ages. Further to these, a coin hoard dating from the time of Svein Estridson was excavated in 2013. The museum has processed the submitted finds, which have been recorded and passed on for treasure trove evaluation. As resources were not available for a more detailed assessment of the artefacts, in 2014 the museum formulated a research project that received funding from the Danish Agency for Culture, enabling the finds to be examined in greater depth.
The aim of the research project was to study the metal-detector finds and the excavation findings, partly through an analysis of the total finds assemblage, partly by digitalisation of the earlier excavation plans so these could be compared with each other and with the new excavation data. This was intended to lead on to a new analysis, new interpretations and a new overall evaluation of Tamdrup’s function, role and significance in the Late Viking Age and Early Middle Ages. Old excavations – new interpretations. In 1983, on the eastern part of the field, a trial excavation trench was laid out running north-south (d). This resulted in two trenches (a, b) and a further three trial trenches being opened up in 1984 (fig. 6). In the northern trench, a longhouse, a fence and a pit-house were discovered (fig. 8). The interpretation of the longhouse (fig. 4) still stands, in so far as we are dealing with a longhouse with curved walls. The western end of the house appears unequivocal, but there could be some doubt about its eastern end. An alternative interpretation is a 17.5 m long building (fig. 8), from which the easternmost set of roof-bearing posts is excluded. Instead, another posthole is included as the northernmost post in the gable to the east. This gives a house with regularly curved walls, though with the eastern gable (4.3 m) narrower than the western (5.3 m). North of the trench (a) containing the longhouse, a trial trench (c) was also laid out, revealing a number of features. Similarly, there were also several features in the northern part of the middle trial trench (e). A pit in trial trench c was found to contain both a fragment of a bit branch and a bronze key. There was neither time nor resources to permit the excavation of these areas in 1984, but it seems very likely that there are traces of one or more houses here (fig. 9). Here we have a potential site for a possible main dwelling house or hall.
In August 1990, on the basis of an evaluation, an excavation trench (h) was opened up to the west of the 1984 excavation (fig. 7). Here, traces were found of two buildings, which lay parallel to each other, oriented east-west. These were interpreted as small auxiliary buildings associated with the same magnate’s farm as the longhouse found in the 1984 excavation. The northern building was 4 m wide and the southern building was 5.5 m. Both buildings were considered to be c. 7 m long and with an open eastern gable. The southern building had one set of internal roof-bearing posts. The excavation of the two buildings in 1990 represented the art of the possible, as no great resources were available. Aerial photos from the time show that the trial trench from the evaluation was back-filled when the excavation was completed. Today, we have a comprehensive understanding of the trial trenches and excavation trenches thanks to the digitalised plans. Here, it becomes apparent that some postholes recorded during the evaluation belong to the southernmost of the two buildings, but these were unfortunately not relocated during the actual excavation. As these postholes, accordingly, did not form part of the interpretation, it was assumed that the building was 7 m in length (fig. 10). When these postholes from the evaluation are included, a ground plan emerges that can be interpreted as the remains of a Trelleborg house (fig. 11). The original 7 m long building constitutes the western end of this characteristic house, while the remainder of the south wall was found in the trial trench. Part of the north wall is apparently missing, but the rest of the building appears so convincing that the missing postholes must be attributed to poor conditions for preservation and observation. The northeastern part of the house has not been uncovered, which means that it is not possible to say with certainty whether the house was 19 or 25 m in length, minus its buttress posts. 
On the basis of the excavations undertaken in 1984 and 1990, it was assumed that the site represented a magnate’s farm from the Late Viking Age. It was presumed that the excavated buildings stood furthest to the north on the toft and that the farm’s main dwelling – in the best-case scenario the royal residence – should be sought in the area to the south between the excavated buildings. Six north-south-oriented trial trenches were therefore laid out in this area (figs. 6, 7 and 13 – trial trenches o, p, q, r, s and t). The results were, according to the excavation report, disappointing: No trace was found of Harold Bluetooth’s hall. It was concluded that there were no structures and features that could be linked together to give a larger entity such as the presumed magnate’s farm. After digitalisation of the excavation plans from 1991, we now have an overview of the trial trenches to a degree that was not possible previously (fig. 13). It is clear that there is a remarkable concentration of structures in the central and northern parts of the two middle trial trenches (q, r) and in part also in the second (p) and fourth (s) trial trenches from the west, as well as in the northern parts of the two easternmost trial trenches (s, t). An actual archaeological excavation would definitely be recommended here if a corresponding intensity of structures were to be encountered in an evaluation today (anno 2016). Now that all the plans have been digitalised, it is obvious to look at the trial trenches from 1990 and 1991 together. Although some account has to be taken of uncertainties in the digitalisation, this nevertheless confirms the picture of a high density of structures, especially in the middle of the 1991 trial trenches. The collective interpretation from the 1990 and 1991 investigations is that there are strong indications of settlement in the area of the middle 1991 trial trenches. 
It is also definitely a possibility that these represent the remains of a longhouse, which could constitute the main dwelling house. It can therefore be concluded that it is apparently possible to confirm the interpretation of the site as a potential royal residence, even though this is still subject to some uncertainty in the absence of new excavations. The archaeologists were disappointed following the evaluation undertaken in 1991, but the overview which modern technology is able to provide means that the interpretation is now rather more encouraging. There are strong indications of the presence of a royal residence.

Finds

The perception of the area by Tamdrup church gained a completely new dimension when the first metal finds recovered by metal detector arrived at Horsens Museum in the autumn of 2011. With time, as the finds were submitted, considerations of the significance and function of the locality in the Late Viking Age and Early Middle Ages were subjected to revision. The interpretation as a magnate’s farm was, of course, common knowledge, but at Horsens Museum there was an awareness that this interpretation was in some doubt following the results of the 1991 investigations. The many new finds removed any trace of this doubt while, at the same time, giving cause to attribute yet further functions to the site. Was it also a trading place or a central place in conjunction with the farm? And was it active earlier than previously assumed? The 207 metal finds comprise 52 coins (whole, hack and fragments), 34 fittings (harness, belt fittings etc.), 28 brooches (enamelled disc brooches, Urnes fibulas and bird brooches), 21 weights, 15 pieces of silver (bars, hack and casting dead heads), 12 figures (pendants, small horses), nine distaff whorls, eight bronze keys, four lead amulets, three bronze bars, two fragments of folding scales and a number of other artefacts, the most spectacular of which included a gold ring and a bronze seal ring.
In dating terms, most of the finds can be assigned to the Late Viking Age and Early Middle Ages. The largest artefact group consists of the coins, of which 52 have been found – either whole or as fragments. To these can be added the coin hoard, which was excavated in 2013 (fig. 12) and which primarily consists of coins minted under Svein Estridson. The other, non-hoard coins comprise: 13 Svein Estridson (figs. 15, 16), five Otto-Adelheid, five Arabic dirhams, three Sancta Colonia, one Canute the Great, one Edward the Confessor, one Theodorich II, one Heinrich II, one Rand pfennig, one Roman denarius (with drilled hole) and nine unidentified silver coins, of which some appear however to be German and others Danish/Anglo-Saxon. Most of the single coins date from the late 10th and early 11th centuries. The next-largest category of finds from Tamdrup are the fittings, which comprise 34 items. This category does, however, cover a broad diversity of finds, of which the dominant types are belt/strap fittings of various kinds and fittings associated with horse harness (figs. 17-24). In total, ten fittings have been found by metal detector that are thought to belong to harness. In addition to these is a single example from the excavation in 1984. The majority of these fittings are interpreted as parts of curb bits, headgear and stirrups. One particularly expressive figure was found at Tamdrup: a strap fitting from a stirrup, formed in a very characteristic way and depicting the face of a Viking (fig. 20). The fitting has been fixed on the stirrup strap at the point where the sides meet. Individual stirrup strap fittings are known by the hundred from England and are considered stylistically to be Anglo-Scandinavian. The fitting from Tamdrup is dated to the 11th century and is an example of a Williams’ Class B, Type 4, East Anglian type face mount. 
A special category of artefacts is represented by the brooches/fibulas, and enamel brooches are most conspicuous among the finds from Tamdrup. Of the total of 28 examples, 11 are enamel brooches. The most unusual is a large enamel disc brooch of a type that probably has not been found in Denmark previously (fig. 24). Its size alone (5.1 cm in diameter) is unusual. The centre of the brooch is raised relative to the rim and furnished with a pattern of apparently detached figures. On the rim are some alternating sail-shaped triangles on a base line which forms four crown-like motifs and defines a cruciform shape. Between the crowns are suggestions of small pits that probably were filled with enamel. Parallels to this type are found in central Europe, and the one that approaches closest stylistically is a brooch from Komjatice in western Slovakia, found in a grave (fig. 25). This brooch has a more or less identical crown motif, and even though the other elements are not quite the same, the similarity is striking. It is dated to the second half of the 10th century and the first half of the 11th century. The other enamel brooches are well-known types of small Carolingian and Ottonian brooches. There are four circular enamel cross-motif brooches (fig. 26a), two stellate disc brooches with central casing (fig. 26b), one stepped brooch with a cruciform motif, one cruciform fibula with five square casings and two disc-shaped brooches. In addition to the enamel brooches there are ten examples that can definitely be identified as animal brooches. Nine of these are of bronze, while one is of silver. The motifs are birds or dragons in Nordic animal styles from the Late Viking Age, Urnes and Ringerike styles, and simpler, more naturalistic forms of bird fibulas from the Late Viking Age and Early Middle Ages. Accordingly, the date for all the animal brooches is the 11th and 12th centuries. 
A total of 21 weights of various shapes and forms have been found at Tamdrup: spherical, bipolar spherical, disc-shaped, conical, square and facetted in various ways. Rather more than half are of lead, with the remainder being of bronze, including a couple of examples with an iron core and a mantle of bronze (so-called ørtug weights), where the iron has exploded out through the bronze mantle. One of the bipolar spheres (fig. 28) has ornamentation in the form of small pits on its base. Weights are primarily associated with trade, where it was important to be able to weigh an agreed amount of silver. Weights were, however, also used in the metal workshops, where it was crucial to be able to weigh a particular amount of metal for a specific cast in order to achieve the correct proportions between the different metals in an alloy. Eight bronze keys have been found, all dated broadly to the Viking Age (fig. 29). Most are fragmentarily preserved pieces of relatively small keys of a very simple type that must be seen as being for caskets or small chests. Keys became relatively widespread during the course of the Viking Age. Many were of iron and a good number of bronze. Nevertheless, the number of keys found at Tamdrup is impressive. A further group of artefacts that will be briefly mentioned are the distaff whorls. This is an artefact group which appears in many places and which was exceptionally common in the Viking Age. In archaeological excavations, examples are often found in fired clay, while metal distaff whorls – most commonly of lead – are found in particular by metal detector. Nine distaff whorls have been found at Tamdrup, all of lead. The finest and absolutely most prestigious artefact is a gold ring, which was found c. 60 m southwest of house 1. The ring consists of a 2 mm wide, very thin gold band, while the fittings comprise a central casing surrounded by originally eight small circular casings. 
In the middle sits a red stone, presumably a garnet, mounted in five rings. In a circle around the stone are the original eight small, circular mounts, of which six are preserved. The mounts, from which the stones are missing, alternate with three small gold spheres. The edges of the mounts have fine cable ornamentation. The dating is rather uncertain and is therefore not ascribed great diagnostic value. In the treasure trove description, the ring is dated to the Late Middle Ages/Renaissance, but it could presumably also date from the Early Middle Ages as it has features reminiscent of the magnificent brooch found at Østergård, which is dated to 1050. Two other spectacular artefacts were found in the form of some small four-legged animals, probably horses, cast in bronze. These figures are known from the Slav area and have presumably had a pre-Christian, symbolic function. Common to both of them are an elongated body, long neck and very short legs. Finally, mention should be made of four lead amulets. These are of a type where, on a long strip of lead, a text has been written in runes or Latin characters. Typically, these are Christian invocations intended to protect the wearer. The lead amulets are folded together and therefore do not take up much space. They are dated to the Middle Ages (1100-1400) and will therefore not be dealt with in further detail here.

What the artefacts tell us

What do the artefacts tell us? They help to provide a dating frame for the site, they tell us something about what has taken place there, they give an indication of which social classes/strata were represented, and, finally, they give us an insight into which foreign contacts could have existed, which influences people were under and which networks they were part of. Most of the artefacts date from the period 900-1000, and this is also the dating frame for the site as a whole.
There is a slight tendency for the 10th century finds to be more evenly distributed across the site than those from the 11th century, which tend to be concentrated in the eastern part. A number of the finds are associated with tangible activities, for example the weights and, especially, the distaff whorls. Others also had practical functions but are, at the same time, associated with the upper echelons of society. Of the material from Tamdrup, the latter include the harness fittings and the keys, while the many brooches/fibulas and pendants also belong to artefact groups to which people from the higher strata of society had access. Some of the harness fittings and brooches suggest links with England. The stirrup-strap fitting and the cruciform strap fitting in Anglo-Scandinavian style have clear parallels in the English archaeological record. The coins, on the other hand, point towards Germany. There are a number of German coins from the end of the 10th century and the beginning of the 11th century, but the occurrence of Otto-Adelheid pennies and other German coins is not necessarily an indication of a direct German connection. From the second half of the 11th century, Svein Estridson coins dominate, but they are primarily Danish. Other artefacts that indicate contacts with western Europe are the enamelled brooches in Carolingian-Ottonian style. A number of objects suggest some degree of trade. Here again, it is the coins and the hack silver, and also the relatively large number of lead weights, that must be considered as relatively reliable indicators of trade, at least when their number is taken into consideration. In the light of the metal-detector finds it can, in conclusion, be stated that this was a locality inhabited by people of middle to high status. Many objects are foreign or show foreign inspiration and suggest therefore that Tamdrup was part of an international network. 
The artefacts support the interpretation of Tamdrup as a magnate’s farm and a royal residence.

Conclusion

Tamdrup was located high up in the landscape, withdrawn from the coast, but nevertheless with quick and easy access to Horsens Fjord. Tamdrup could be approached from the fjord via Nørrestrand and the river Hansted Å on a northern route, or by the river Bygholm Å on a southern route (fig. 33). A withdrawn location was not atypical in the Viking Age and the Early Middle Ages. At that time there were also sites directly on the coast and at the heads of fjords, where early urbanisation materialised through the establishment of the first market towns, while the king’s residences apparently had to be located in places rather less accessible by boat and ship, as withdrawn but central regional hubs and markers between land and sea. One must imagine that Tamdrup had a high status in the 10th and 11th centuries, when the king had a residence and a wooden church there. A place of great importance, culminating in the construction of a Romanesque basilica to commemorate the Christianisation of Denmark. Tamdrup appears to have lost its significance for the monarchy shortly after the stone church was completed, which could fit with King Niels, as the last of Svein Estridson’s sons, being killed in 1134, and another branch of the royal family taking over power. At the same time as Tamdrup lost its importance, Horsens flourished as a town and became of such great importance for the Crown that both Svein Grathe and Valdemar the Great had coins minted there. Tamdrup must have been a central element of the local topography in the Viking Age, when Horsens functioned as a landing place, perhaps with seasonal trading. In the long term, Horsens came out strongest, but it must be assumed that Tamdrup had the highest status between AD 900 and 1100.

Lars Pagh
Horsens Museum
Styles APA, Harvard, Vancouver, ISO, etc.
25

Egwuonwu, GN, EI Okoyeh, DC Agarana, EG Nwaka, OB Nwosu et EE Chikwelu. « PRELIMINARY GULLY HAZARD ASSESSMENT USING 2D-ERT AND 2D-SRT GEOPHYSICAL SURVEYS : A CASE STUDY IN SOUTHEASTERN NIGERIA ». International Journal of Advanced Academic Research, 24 mars 2021, 49–65. http://dx.doi.org/10.46654/ij.24889849.e7301.

Full text
Abstract:
Two-dimensional Electrical Resistivity Tomography (2D-ERT) and Seismic Refraction Tomography (2D-SRT) were concurrently applied in the assessment of a gully site with the aim of assessing its stability and risk level. Eight profile lines oriented parallel and perpendicular to the boundary of the gully were surveyed. As a result, apparent resistivity model tomograms in the range of 1-9,000 and p-wave velocity models in the range of 300-700 were obtained from the two techniques respectively. Interpretation of the models obtained shows a predominance of unconsolidated clay, shale intercalates, clayey sand, sandy clay and weathered lateritic soil at shallow depths. Low-amplitude undulating refracting layers, landslide slip surfaces and loose horizons were also delineated at shallow depths. The predominance of weak, clayey and unconsolidated lithology at the gully site suggests evidence of unstable gravitational equilibrium, which implies an environmental hazard. The plausible deductions made from the two
26

« To the issue of risk management in coal production ». Geo-Technical Mechanics, no 158 (2021) : 123–30. http://dx.doi.org/10.15407/geotm2021.158.123.

Full text
Abstract:
In the last quarter of the twentieth century, a new technical term appeared in world practice: risk, with its various manifestations: technical and industrial, emergency, aerological, environmental, individual, collective, complex, etc. These new forms of safety assessment outpaced the public's understanding of the need to improve approaches to its assessment, which led to great differences in the interpretation of risk-oriented technology for the safety management of dangerous production objects (DPO). In the article, the peculiarities of coal production in modern Ukraine are considered in terms of labour protection, with its drawbacks, and the conclusion is drawn that the way out of this situation for the coal industry is to reduce risks of various types, which increases the safety of coal workers. It is noted that the terms «risk» and «danger» are not synonyms. Safety does not mean the absence of unacceptable risk. The term «risk management» requires additional discussion and elucidation. Based on the terminological foundations of management theory, management is a process which includes the development of alternative control actions, decision-making by choosing the most effective of them, and the implementation of control actions to achieve the desired results for the controlled object. Risk, as a measure of the danger of an object, is not itself an object and therefore cannot be an object of control. Risk cannot function, and it has no results of functioning. Therefore, it is impossible to manage risk as such. The term «risk management» is a marketing phrase; it is a mistake to treat risk as an independent entity, because risk is only a measure of danger. It is necessary to manage the safety of work at the site as a whole, not its features and parameters, of which risk is one.
Unfortunately, in practice, one can observe how «risk is managed» by unscrupulous authors of industrial safety declarations, as well as by hasty interpreters in their one-sided and biased reports in some media. The risk to a person working at a DPO needs further and more thorough research.
27

Scrieciu, Albert, Alessandro Pagano, Virginia Rosa Coletta, Umberto Fratino et Raffaele Giordano. « Bayesian Belief Networks for Integrating Scientific and Stakeholders’ Knowledge to Support Nature-Based Solution Implementation ». Frontiers in Earth Science 9 (5 juillet 2021). http://dx.doi.org/10.3389/feart.2021.674618.

Full text
Abstract:
There is a growing interest worldwide in the potential of nature-based solutions (NBSs) as measures for dealing with water-related risks while producing multiple co-benefits that can contribute to several societal challenges and many of the sustainable development goals. However, several barriers still hamper their wider implementation, most notably the lack of stakeholder engagement and the limited integration of stakeholders’ knowledge throughout the phases of NBS design and implementation. This is a crucial aspect to guarantee that the multidimensional implications of NBSs are adequately understood and considered by decision-makers. Innovative methods and tools for improving NBS design and supporting decision-makers in overcoming the main barriers to implementation, ultimately enhancing their effectiveness, are therefore needed. The present work proposes a combined approach based on the integration of fuzzy cognitive maps, hydraulic modeling, and participatory Bayesian belief networks aiming to facilitate stakeholder engagement and the knowledge integration process in NBS design and assessment. The approach was developed and implemented within the NAIAD project in the Lower Danube demo site, specifically oriented to support the process of the Potelu Wetland restoration. First, fuzzy cognitive maps are adopted as a problem structuring method for eliciting stakeholders’ risk perception and problem understanding, and for constructing a causal model describing the system as a whole, with specific attention to the expected role of the NBS in reducing flood risk and addressing the key local challenges. Second, hydraulic modeling is used to analyze the effect of extreme floods starting from the retrospective analysis of a specific event and to model the potential benefits of risk reduction measures. Lastly, a Bayesian belief network is used to support the model integration process and a scenario analysis with a user-friendly tool.
The whole process can be replicated in other areas and is particularly suitable to support an active engagement of stakeholders (both institutional and not) in the process of NBS design and assessment.
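The final step, scenario analysis with a Bayesian belief network, can be illustrated with a minimal sketch. The nodes, states and probabilities below are hypothetical placeholders, not values from the NAIAD Lower Danube model; the point is only how a BBN propagates an NBS decision to a risk outcome by marginalising over an intermediate variable.

```python
# Minimal Bayesian belief network sketch (hypothetical nodes and numbers):
# NBS -> PeakFlow -> FloodDamage.

# Conditional probability tables (all values illustrative).
P_peak_given_nbs = {      # P(PeakFlow = "high" | NBS)
    "yes": 0.3,
    "no": 0.7,
}
P_damage_given_peak = {   # P(FloodDamage = "severe" | PeakFlow)
    "high": 0.6,
    "low": 0.1,
}

def p_severe_damage(nbs: str) -> float:
    """Marginalise over PeakFlow to get P(FloodDamage = severe | NBS)."""
    p_high = P_peak_given_nbs[nbs]
    return (p_high * P_damage_given_peak["high"]
            + (1.0 - p_high) * P_damage_given_peak["low"])

for scenario in ("yes", "no"):
    print(f"NBS={scenario}: P(severe damage) = {p_severe_damage(scenario):.2f}")
```

In practice each conditional table would be elicited from stakeholders and from the hydraulic model, and a dedicated BBN library or GUI tool would replace this hand-rolled enumeration.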
28

« Atmospheric Pollutant Analysis and Corrosion Simulation over LPG Transporting Pipelines in Sriperumpudur – Vadakal Industrial Site ». International Journal of Recent Technology and Engineering 8, no 2 (30 juillet 2019) : 779–82. http://dx.doi.org/10.35940/ijrte.b2410.078219.

Full text
Abstract:
An organization has to abide by national safety legislation procedures so as to maintain business ethics and worker safety. To ensure this, management has to take the following active monitoring steps: hazard identification, detailed risk assessment, drafting of safe work procedures, and supervision and training of workers. While doing a risk assessment, the likelihood of the hazard occurring and the intensity of the damage it would create should be properly calculated. Along with that, the effectiveness of existing control measures should be evaluated and recommendations for advanced control measures suggested. In general, incidents happen mainly due to defects in safety management, technical job factors or individual factors. Accidents happen mainly due to unsafe acts or unsafe conditions. Unsafe acts include individual negligent behaviour such as not wearing personal protective equipment, not carrying out a task as per the instructions, or not carrying out inspection or preventive maintenance as per the procedure. Unsafe conditions include defective equipment in the workplace, improper housekeeping on the work site, etc. In petrochemical plants, as volatile hydrocarbons exist in the process area, fire-related emergencies are unavoidable if unsafe acts or unsafe conditions are not identified and controlled appropriately. In a crude oil refinery or downstream petrochemical units, the refined products are mostly flammable substances such as petrol, diesel, superior kerosene, aviation turbine fuels and liquefied petroleum gas (LPG). These products are processed in distillation columns, stored in tanks and bullets, and transported through pipelines. As this process equipment and the distribution networks, especially pipeline grids, are exposed to the atmosphere, the natural phenomenon called atmospheric corrosion is unavoidable: when atmospheric pollutants interact with humidified air, an electrochemical reaction leads to corrosion.
This electrochemical reaction can affect pipeline structures very badly, resulting in punctures of the pipeline surface and leakage of flammable substances. To control this corrosion, proactive engineering control measures should be accompanied by ambient air quality analysis, corrosion simulation and corrosion monitoring.
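The likelihood/severity calculation described above is commonly implemented as a risk matrix. A minimal sketch follows; the 1-5 scales and class thresholds are illustrative assumptions, not values from the study.

```python
# Risk matrix sketch: risk score = likelihood x severity.
# Scales and thresholds below are illustrative assumptions.

def risk_score(likelihood: int, severity: int) -> int:
    """Both inputs on a 1-5 scale; higher means more likely / more damaging."""
    assert 1 <= likelihood <= 5 and 1 <= severity <= 5
    return likelihood * severity

def risk_class(score: int) -> str:
    """Map a score (1-25) to an action band."""
    if score >= 15:
        return "high"      # stop work, add engineering controls
    if score >= 6:
        return "medium"    # additional controls and closer supervision
    return "low"           # manage through routine procedures

# Example: a corrosion-induced LPG leak - rare but very damaging.
score = risk_score(likelihood=2, severity=5)
print(score, risk_class(score))
```

A real assessment would also record the existing control measures per hazard and re-score the residual risk after the recommended controls are applied.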
29

Ayele, Alemayehu, Kifle Woldearegay et Matebie Meten. « A review on the multi-criteria seismic hazard analysis of Ethiopia : with implications of infrastructural development ». Geoenvironmental Disasters 8, no 1 (19 avril 2021). http://dx.doi.org/10.1186/s40677-020-00175-7.

Full text
Abstract:
An earthquake is a sudden release of energy due to faults. Natural calamities like earthquakes can neither be predicted nor prevented. However, the severity of the damage can be minimized by the development of proper infrastructure, which includes microzonation studies, appropriate construction procedures and earthquake-resistant designs. The damaging effect of an earthquake depends on the source, path and site conditions. Earthquake ground motion is affected by topography (slope, hill, valley, canyon, ridge and basin effects), groundwater and surface hydrology. Seismic hazard damages include ground shaking, structural damage, retaining structure failures and lifeline hazards. The medium to large earthquake magnitudes (< 6) reported in Ethiopia are controlled by the main Ethiopian Rift system. The spatial and temporal variation of earthquake ground motion should be addressed using the following systematic methodology. The general approaches used to analyze the damage of earthquake ground motions are probabilistic seismic hazard assessment (PSHA), deterministic seismic hazard assessment (DSHA) and dynamic site response analysis. PSHA considers all scenarios of magnitude, distance and site conditions to estimate the intensity of the ground motion distribution. Conversely, DSHA takes into account the worst-case scenario, or maximum credible earthquake, to estimate the intensity of the seismic ground motion distribution. Furthermore, for the design of critical infrastructure, DSHA is more valuable than PSHA. The DSHA and PSHA ground motion distributions are estimated as a function of earthquake magnitude and distance using ground motion prediction equations (GMPEs) at the top of the bedrock. Site response analysis is performed to estimate the ground motion distributions at the ground surface using dynamic properties of the soils such as shear wave velocity, density, modulus reduction, and material damping curves.
Seismic hazard evaluation of Ethiopia has shown that (i) amplification occurs in the main Ethiopian Rift due to thick soil, and (ii) earthquakes are likely to recur due to active fault sources, which are oriented in the N-S direction. Ethiopia is engaged in huge infrastructural development (including roads, industrial parks and railways), alongside an increasing population and agricultural activity in the main Ethiopian Rift system. Given this activity, socio-economic development, earthquakes and earthquake-generated ground failures need to be given attention in order to reduce losses from seismic hazards and create a safe geo-environment.
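As a hedged illustration of how a GMPE expresses ground motion as a function of magnitude and distance, the generic attenuation form below can be sketched in code. The coefficients are hypothetical placeholders, not from any GMPE calibrated for Ethiopia; they are chosen only so that shaking grows with magnitude and attenuates with distance.

```python
import math

# Generic GMPE form: ln(PGA) = c0 + c1*M - c2*ln(R + c3)
# Coefficients are hypothetical placeholders for illustration only.
C0, C1, C2, C3 = -3.5, 1.0, 1.2, 10.0

def pga_g(magnitude: float, distance_km: float) -> float:
    """Median peak ground acceleration (in g) for a magnitude/distance pair."""
    ln_pga = C0 + C1 * magnitude - C2 * math.log(distance_km + C3)
    return math.exp(ln_pga)

# Attenuation: the same M6 event produces weaker shaking farther away.
print(f"M6 at 10 km: {pga_g(6.0, 10.0):.3f} g")
print(f"M6 at 100 km: {pga_g(6.0, 100.0):.3f} g")
```

In PSHA this median is combined with the GMPE's aleatory variability and integrated over all magnitude-distance scenarios, whereas DSHA evaluates it for the single maximum credible earthquake.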
30

Hemeda, Sayed. « Geo-environmental monitoring and 3D finite elements stability analysis for site investigation of underground monuments. Horemheb tomb (KV57), Luxor, Egypt ». Heritage Science 9, no 1 (8 février 2021). http://dx.doi.org/10.1186/s40494-021-00487-3.

Full text
Abstract:
The Valley of the Kings (KV) is a UNESCO World Heritage site with more than thirty opened tombs. Since the first tombs were constructed, at least 24 historical flash flood events have been identified, each of which has contributed to the destruction and deterioration of the tombs. Recently, most of these tombs have been damaged and inundated after the 1994 flood. In order to understand the geo-environmental impact, mainly the past flash floods due to intensive rainfall storms on the Valley of the Kings, and the long-term rock mass behavior under geostatic stresses in the selected tomb of Horemheb (KV57) and its impact on past failures and current stability, remote sensing, GIS, LIDAR, 3D finite element stability analysis and rock mass quality assessments have been carried out using advanced methods and codes. Our work provides environmental satellite space views via the landviewer Earth Observation System (EOS) platform with passive and active sensors, which include the Normalized Difference Vegetation Index (NDVI), Soil-Adjusted Vegetation Index (SAVI), Atmospherically Resistant Vegetation Index (ARVI), Green Chlorophyll Index (GCI), Normalized Burn Ratio (NBR), Normalized Difference Snow Index (NDSI), Light Detection and Ranging (LIDAR) images, terrain digital elevation models (DEM) and 3D geological maps. On the other hand, experimental and numerical geotechnical evaluations and modelling of the rock mass of these underground structures and their surroundings have been executed. We estimated the rock mass quality of the different members within the Thebes limestone and Esna shale formations using mechanical testing and the Rock Mass Rating (RMR), rock quality (Q-system) and Geological Strength Index (GSI) systems. Our recent analyses show that the KV57 rock-cut tomb at Luxor has been cut into poor to very poor quality marl shale masses due to the impact of flash floods.
Rock failures of ceilings and pillars were frequently facilitated by local, unfavorably oriented persistent discontinuities, such as tension cracks and faults. Other failures were related to the disintegration of the marl limestone and Esna shale formations into individual nodules upon weathering. Our data suggest that, in ancient Egyptian monumental tomb construction, low-strength rock masses rarely resulted in modifications of the planned tomb design in order to minimise the risk of rock falls and to prevent collapses. The current flood protection measures are not sufficient, so the two following measures are proposed: (1) to raise the current wall by 50 cm; (2) to fill the depression by reshaping the bathymetry.
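Of the indices listed above, NDVI and SAVI have simple, standard band-ratio definitions. A minimal sketch follows; the reflectance values in the example are illustrative, not measurements from the EOS platform.

```python
# Standard vegetation index formulas over Red/NIR reflectances in [0, 1].
# Example values are illustrative, not measurements from the EOS platform.

def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def savi(nir: float, red: float, L: float = 0.5) -> float:
    """Soil-Adjusted Vegetation Index; L dampens soil-brightness effects."""
    return (nir - red) * (1 + L) / (nir + red + L)

# Sparse desert vegetation, as around the Valley of the Kings, gives low values.
print(round(ndvi(0.25, 0.20), 3), round(savi(0.25, 0.20), 3))
```

SAVI's soil-adjustment factor L matters precisely in bright, sparsely vegetated terrain like this site, where bare-soil reflectance would otherwise inflate NDVI.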
31

McQuillan, Dan. « The Countercultural Potential of Citizen Science ». M/C Journal 17, no. 6 (12 October 2014). http://dx.doi.org/10.5204/mcj.919.

Full text
Abstract:
What is the countercultural potential of citizen science? As a participant in the wider citizen science movement, I can attest that contemporary citizen science initiatives rarely characterise themselves as countercultural. Rather, the goal of most citizen science projects is to be seen as producing orthodox scientific knowledge: the ethos is respectability rather than rebellion (NERC). I will suggest instead that there are resonances with the counterculture that emerged in the 1960s, most visibly through an emphasis on participatory experimentation and the principles of environmental sustainability and social justice. This will be illustrated by example, through two citizen science projects that have a commitment to combining social values with scientific practice. I will then describe the explicitly countercultural organisation, Science for the People, which arose from within the scientific community itself, out of opposition to the Vietnam War. Methodological and conceptual weaknesses in the authoritative model of science are explored, suggesting that there is an opportunity for citizen science to become anti-hegemonic by challenging the hegemony of science itself. This reformulation will be expressed through Deleuze and Guattari's notion of nomadic science, the means through which citizen science could become countercultural.

Counterculture

Before examining the countercultural potential of citizen science, I set out some of the grounds for identifying a counterculture drawing on the ideas of Theodore Roszak, who invented the term counterculture to describe the new forms of youth movements that emerged in the 1960s (Roszak). This was a perspective that allowed the carnivalesque procession of beatniks, hippies and the New Left to be seen as a single paradigm shift combining psychic and social revolution.
But just as striking and more often forgotten is the way Roszak characterised the role of the counterculture as mobilising a vital critique of the scientific worldview (Roszak 273-274). The concept of counterculture has been taken up in diverse ways since its original formation. We can draw, for example, on Lawrence Grossberg's more contemporary analysis of counterculture (Grossberg) to clarify the main concepts and contrast them with a scientific approach. Firstly, a counterculture works on and through cultural formations. This positions it as something the scientific community would see as the other, as the opposite to the objective, repeatable and quantitative truth-seeking of science. Secondly, a counterculture is a diverse and hybrid space without a unitary identity. Again, scientists would often see science as a singular activity applied in modulated forms depending on the context, although in practice the different sciences can experience each other as different tribes. Thirdly, a counterculture is lived as a transformative experience where the participant is fundamentally changed at a psychic level through participation in unique events. Contrast this with the scientific idea of the separation of observer and observed, and the objective repeatability of the experiment irrespective of the experimenter. Fourthly, a counterculture is associated with a unique moment in time, a point of shift from the old to the new. For the counterculture of the 1960s this was the Age of Aquarius. In general, the aim of science and scientists is to contribute to a form of truth that is essentially timeless, in that a physical law is assumed to hold across all time (and space), although science also has moments of radical change with regard to scientific paradigms. Finally, and significantly for the conclusions of this paper, according to Roszak a counterculture stands against the mainstream. 
It offers a challenge not at the level of detail but to the fundamental assumptions of the status quo. This is what “science” cannot do, in as much as science itself has become the mainstream. It was the character of science as the bedrock of all values that Roszak himself opposed and for which he named and welcomed the counterculture. Although critical of some of the more shallow aspects of its psychedelic experimentation or political militancy, he shared its criticism of the technocratic society (the technocracy) and the egocentric mode of consciousness. His hope was that the counterculture could help restore a visionary imagination along with a more human sense of community.

What Is Citizen Science?

In recent years the concept of citizen science has grown massively in popularity, but is still an open and unstable term with many variants. Current moves towards institutionalisation (Citizen Science Association) are attempting to marry growth and stabilisation, with the first Annual General Meeting of the European Citizen Science Association securing a tentative agreement on the common principles of citizen science (Haklay, "European"). Key papers and presentations in the mainstream of the movement emphasise that citizen science is not a new activity (Bonney et al.), with much being made of the fact that the National Audubon Society started its annual Christmas Bird Count in 1900 (National Audubon Society). However, this elides the key role of the Internet in the current surge, which takes two distinct forms: the organisation of distributed fieldwork, and the online crowdsourcing of data analysis. To scientists, the appeal of citizen science fieldwork follows from its distributed character; they can research patterns over large scales and across latitudes in ways that would be impossible for a researcher at a single study site (Toomey). Gathering together the volunteer observations is made possible by an infrastructure of web tools.
The role of the citizen in this is to be a careful observer; the eyes and ears of the scientist in cyberspace. In online crowdsourcing, the internet is used to present pattern recognition tasks, enrolling users in searching images for signs of new planets or the jets of material from black holes. The growth of science crowdsourcing is exponential; one of the largest sites facilitating this kind of citizen science now has well in excess of a million registered users (Zooniverse). Such is the force of the technological aura around crowdsourced science that mainstream publications often conflate it with the whole of citizen science (Parr). There are projects within citizen science which share core values with the counterculture as originally defined by Roszak, in particular open participation and social justice. These projects also show characteristics from Grossberg's analysis of counterculture; they are diverse and hybrid spaces, carry a sense of moving from an old era to a new one, and have cultural forms of their own. They open up the full range of the scientific method to participation, including problem definition, research design, analysis and action. Citizen science projects that aim for participation in all these areas include the Extreme Citizen Science research group (ExCiteS) at University College London (UCL), the associated social enterprise Mapping for Change (Mapping for Change), and the Public Laboratory for Open Technology and Science (Public Lab). ExCiteS sees its version of citizen science as "a situated, bottom-up practice" that "takes into account local needs, practices and culture". Public Lab, meanwhile, argue that many citizen science projects only offer non-scientists token forms of participation in scientific inquiry that rarely amount to more than data collection and record keeping.
They counter this through an open process which tries to involve communities all the way from framing the research questions, to prototyping tools, to collating and interpreting the measurements. ExCiteS and Public Lab also share an implicit commitment to social justice through scientific activity. The Public Lab mission is to "put scientific inquiry at the heart of civic life" and the UCL research group strive for "new devices and knowledge creation processes that can transform the world". All of their work is framed by environmental sustainability and care for the planet, whether it's enabling environmental monitoring by indigenous communities in the Congo (ExCiteS) or developing do-it-yourself spectrometry kits to detect crude oil pollution (Public Lab, "Homebrew"). Having provided a case for elements of countercultural DNA being present in bottom-up and problem-driven citizen science, we can contrast this with Science for the People, a scientific movement that was born out of the counterculture.

Countercultural Science from the 1970s: Science for the People

Science for the People (SftP) was a scientific movement seeded by a rebellion of young physicists against the role of US science in the Vietnam War. Young members of the American Physical Society (APS) lobbied for it to take a position against the war but were heavily criticised by other members, whose written complaints in the communications of the APS focused on the importance of scientific neutrality and the need to maintain the association's purely scientific nature rather than allowing science to become contaminated by politics (Sarah Bridger, in Plenary 2, 0:46 to 1:04). The counter-narrative from the dissidents argued that science is not neutral, invoking the example of Nazi science as a justification for taking a stand. After losing the internal vote the young radicals left to form Scientists and Engineers for Social and Political Action (SESPA), which later became Science for the People (SftP).
As well as opposition to the Vietnam War, SftP embodied from the start other key themes of the counterculture, such as civil rights and feminism. For example, the first edition of Science for the People magazine (appearing as Vol. 2, No. 2 of the SESPA Newsletter) included an article about leading Black Panther, Bobby Seale, alongside a piece entitled “Women Demand Equality in Science.” The final articles in the same issue are indicators of SftP's dual approach to science and change; both the radicalisation of professionals (“Computer Professionals for Peace”) and the demystification of technical practices (“Statistics for the People”) (Science for the People). Science for the People was by no means just a magazine. For example, their technical assistance programme provided practical support to street health clinics run by the Black Panthers, and brought SftP under FBI surveillance (Herb Fox, in Plenary 1, 0:25 to 0:35). Both as a magazine and as a movement, SftP showed a tenacious longevity, with the publication being produced every two months between August 1970 and May/June 1989. It mutated through a network of affiliated local groups and international links, and was deeply involved in constructing early critiques of nuclear power and genetic determinism. SftP itself seems to have had a consistent commitment to non-hierarchical processes and, as one of the founders expressed it, a “shit kicking” approach to putting its principles into practice (Al Weinrub, in Plenary 1, 0:25 to 0:35). SftP criticised power, front and centre. It is this opposition to hegemony that puts the “counter” into counterculture, and is missing from citizen science as currently practised. Cracks in the authority of orthodox science, traceable to both its methodologies and its basic concepts, are explored below. These can be seen as an opportunity for citizen science to directly challenge orthodox science and thus establish an anti-hegemonic stance of its own.
Weaknesses of Scientific Hegemony

In this section I argue that the weaknesses of scientific hegemony are in proportion to its claims to authority (Feyerabend). Through my scientific training as an experimental particle physicist I have participated in many discussions about the ontological and epistemological grounds for scientific authority. While most scientists choose to present their practice publicly as an infallible machine for the production of truths, the opinions behind the curtain are far more mixed. Physicist Lee Smolin has written a devastating critique of science-in-practice that focuses on the capture of the institutional economy of science by an ideological grouping of string theorists (Smolin), and his account is replete with questions about science itself and ethnographic details that bring to life the messy behind-the-scenes conflicts in scientific-knowledge making. Knowledge of this messiness has prompted some citizen science advocates to take science to task, for example for demanding higher standards in data consistency from citizen science than is often the case in orthodox science (Haklay, "Assertions"; Freitag, "Good Science"). Scientists also invariably refer to reproducibility as the basis for the authority of scientific truths. The principle that the same experiments always get the same results, irrespective of who is doing the experiment, as long as they follow the same method, is a foundation of scientific objectivity. However, a 2012 study of landmark results in cancer science was able to reproduce only 11 per cent of the original findings (Begley and Ellis). While this may be an outlier case, there are broader issues with statistics and falsification, a bias toward positive results, weaknesses in peer review and the “publish or perish” academic culture (The Economist). While the pressures are all-too-human, the resulting distortions are rarely acknowledged in public by scientists themselves.
On the other hand, citizen science has been slow to pick up the gauntlet. For example, while some scientists involved in citizen science have commented on the inequality and inappropriateness of orthodox peer review for citizen science papers (Freitag, “What Is the Role”) there has been no direct challenge to any significant part of the scientific edifice. I argue that the nearest thing to a real challenge to orthodox science is the proposal for a post-normal science, which pre-dates the current wave of citizen science. Post-normal science tries to accommodate the philosophical implications of post-structuralism and at the same time position science to tackle problems, such as climate change, that are intractable to reproducibility (Funtowicz and Ravetz). It accomplishes this by extending the domains in which science can provide meaningful answers to include issues such as global warming, which involve high decision stakes and high uncertainty. It extends traditional peer review into an extended peer community, which includes all the stakeholders in an issue, and may involve active research as well as quality assessment. The idea of extended peer review has obvious overlaps with community-oriented citizen science, but has yet to be widely mobilised as a theoretical buttress for citizen-led science. Prior even to post-normal science are the potential cracks in the core philosophy of science. In her book Cosmopolitics, Isabelle Stengers characterises the essential nature of scientific truth as the ability to disqualify and exclude other truth claims. This, she asserts, is the hegemony of physics and its singular claim to decide what is real and what is true. Stengers traces this, in part, to the confrontation more than one hundred years ago between Max Planck and Ernst Mach, in which the latter argued that claims to an absolute truth should be replaced by formulations that tied physical laws to the human practices that produced them.
Planck stood firmly for knowledge forms that were unbounded by time, space or specific social-material procedures (Stengers). Although contemporary understandings of science are based on Planck's version, citizen science has the potential to re-open these questions in a productive manner for its own practices, if it can re-conceive of itself as what Deleuze and Guattari would call nomadic science (Deleuze; Deleuze & Guattari).

Citizen Science as Nomadic Science

Deleuze and Guattari referred to orthodox science as Royal Science or Striated Science, referring in part to its state-like form of authority and practice, as well as its psycho-social character. Their alternative is a smooth or nomadic science that, importantly for citizen science, does not have the ambition to totalise knowledge. Nomadic science is a form of empirical investigation that has no need to be hooked up to a grand narrative. The concept of nomadic science is a natural fit for bottom-up citizen science because it can valorise truths that are non-dual and that go beyond objectivity to include the experiential. In this sense it is like the extended peer review of post-normal science but without the need to be limited to high-risk high-stakes questions. As there is no a priori problem with provisional knowledges, it naturally inclines towards the local, the situated and the culturally reflective. The apparent unreliability of citizen science in terms of participants and tools, which is solely a source of anxiety, can become heuristic for nomadic science when re-cast through the forgotten alternatives like Mach's formulation; that truths are never separated from the specifics of the context and process that produced them (Stengers 6-18; 223). Nomadic science, I believe, will start to emerge through projects that are prepared to tackle toxic epistemology as much as toxic pollutants.
For example, the Community Based Auditing (CBA) developed by environmental activists in Tasmania (Tattersall) challenges local alliances of state and extractive industries by undermining their own truth claims with regards to environmental impact, a process described in the CBA Toolbox as disconfirmation. In CBA, this mixture of post-normal science and Stengers's critique is combined with forms of data collection and analysis known as Community Based Sampling (Tattersall et al.), which would be recognisable to any citizen science project. The change from citizen science to nomadic science is not a total rupture but a shift in the starting point: it is based on an overt critique of power. One way to bring this about is being tested in the “Kosovo Science for Change” project (Science for Change Kosovo), where I am a researcher and where we have adopted the critical pedagogy of Paulo Freire as the starting point for our empirical investigations (Freire). Critical pedagogy is learning as the co-operative activity of understanding how our lived experience is constructed by power, and how to make a difference in the world. Taking a position such as nomadic science, openly critical of Royal Science, is the anti-hegemonic stance that could qualify citizen science as properly countercultural.

Citizen Science and Counterculture

Counterculture, as I have expressed it, stands against or rejects the hegemonic culture. However, there is a strong tendency in contemporary social movements to take a stance not only against the dominant structures but against hegemony itself. They contest what Richard Day calls the hegemony of hegemony (Day). I witnessed this during the counter-G8 mobilisation of 2001. Having been an activist in the 1980s and 1990s I was wearily familiar with the sectarian competitiveness of various radical narratives, each seeking to establish itself as the correct path.
So it was a strongly affective experience to stand in the convergence centre and listen to so many divergent social groups and movements agree to support each other's tactics, expressing a solidarity based on a non-judgemental pluralism. Since then we have seen the emergence of similarly anti-hegemonic countercultures around the Occupy and Anonymous movements. It is in this context of counterculture that I will try to summarise and evaluate the countercultural potential of citizen science and what being countercultural might offer to citizen science itself. To be countercultural it is not enough for citizen science to counterpose participation against the institutional and hierarchical aspects of professional science. As an activity defined purely by engagement it offers to plug the legitimacy gap for science while still being wholly dependent on it. A countercultural citizen science must pose a strong challenge to the status quo, and I have suggested that a route to this would be to develop as nomadic science. This does not mean replacing or overthrowing science but constructing an other to science with its own claim to empirical methods. It is fair to ask what this would offer citizen science that it does not already have. At an abstract level it would gain a freedom of movement; an ability to occupy Deleuzian smooth spaces rather than be constrained by the striation of established science. The founders of Science for the People are clear that it could never have existed if it had not been able to draw on the mass movements of its time. Being countercultural would give citizen science an affinity with the bottom-up, local and community-based issues where empirical methods are likely to have the most social impact. One of many examples is the movement against fracking (the hydraulic fracturing of deep rock formations to release shale gas). 
Together, these benefits of being countercultural open up the possibility for forms of citizen science to spread rhizomatically in a way that is not about immaterial virtual labour but is itself part of a wider cultural change. The possibility of a nomadic science stands as a doorway to the change that Roszak saw at the heart of the counterculture, a renewal of the visionary imagination.

References

Begley, C. Glenn, and Lee M. Ellis. "Drug Development: Raise Standards for Preclinical Cancer Research." Nature 483.7391 (2012): 531–533. 8 Oct. 2014 ‹http://www.nature.com/nature/journal/v483/n7391/full/483531a.html›.
Bonney, Rick, et al. "Citizen Science: A Developing Tool for Expanding Science Knowledge and Scientific Literacy." BioScience 59.11 (2009): 977–984. 6 Oct. 2014 ‹http://bioscience.oxfordjournals.org/content/59/11/977›.
Citizen Science Association. "Citizen Science Association." 2014. 6 Oct. 2014 ‹http://citizenscienceassociation.org/›.
Day, Richard J.F. Gramsci Is Dead: Anarchist Currents in the Newest Social Movements. London: Pluto Press, 2005.
Deleuze, Gilles. Nomadology: The War Machine. New York, NY: MIT Press, 1986.
Deleuze, Gilles, and Felix Guattari. A Thousand Plateaus. London: Bloomsbury Academic, 2013.
ExCiteS. "From Non-Literate Data Collection to Intelligent Maps." 26 Aug. 2013. 8 Oct. 2014 ‹http://www.ucl.ac.uk/excites/projects/excites-projects/intelligent-maps/intelligent-maps›.
Feyerabend, Paul K. Against Method. 4th ed. London: Verso, 2010.
Freire, Paulo. Pedagogy of the Oppressed. Continuum International Publishing Group, 2000.
Freitag, Amy. "Good Science and Bad Science in Democratized Science." Oceanspaces 22 Jan. 2014. 9 Oct. 2014 ‹http://oceanspaces.org/blog/good-science-and-bad-science-democratized-science›.
---. "What Is the Role of Peer-Reviewed Literature in Citizen Science?" Oceanspaces 29 Jan. 2014. 10 Oct. 2014 ‹http://oceanspaces.org/blog/what-role-peer-reviewed-literature-citizen-science›.
Funtowicz, Silvio O., and Jerome R. Ravetz. "Science for the Post-Normal Age." Futures 25.7 (1993): 739–755. 8 Oct. 2014 ‹http://www.sciencedirect.com/science/article/pii/001632879390022L›.
Grossberg, Lawrence. "Some Preliminary Conjunctural Thoughts on Countercultures." Journal of Gender and Power 1.1 (2014). 3 Nov. 2014 ‹http://gender-power.amu.edu.pl/?page_id=20›.
Haklay, Muki. "Assertions on Crowdsourced Geographic Information & Citizen Science #2." Po Ve Sham - Muki Haklay's Personal Blog 16 Jan. 2014. 8 Oct. 2014 ‹http://povesham.wordpress.com/2014/01/16/assertions-on-crowdsourced-geographic-information-citizen-science-2/›.
---. "European Citizen Science Association Suggestion for 10 Principles of Citizen Science." Po Ve Sham - Muki Haklay's Personal Blog 14 May 2014. 6 Oct. 2014 ‹http://povesham.wordpress.com/2014/05/14/european-citizen-science-association-suggestion-for-10-principles-of-citizen-science/›.
Mapping for Change. "Mapping for Change." 2014. 6 June 2014 ‹http://www.mappingforchange.org.uk/›.
National Audubon Society. "Christmas Bird Count." 2014. 6 Oct. 2014 ‹http://birds.audubon.org/christmas-bird-count›.
NERC. "Best Practice Guides to Choosing and Using Citizen Science for Environmental Projects." Centre for Ecology & Hydrology May 2014. 9 Oct. 2014 ‹http://www.ceh.ac.uk/products/publications/understanding-citizen-science.html›.
Parr, Chris. "Why Citizen Scientists Help and How to Keep Them Hooked." Times Higher Education 6 June 2013. 6 Oct. 2014 ‹http://www.timeshighereducation.co.uk/news/why-citizen-scientists-help-and-how-to-keep-them-hooked/2004321.article›.
Plenary 1: Stories from the Movement. Film. Science for the People, 2014.
Plenary 2: The History and Lasting Significance of Science for the People. Film. Science for the People, 2014.
Public Lab. "Public Lab: A DIY Environmental Science Community." 2014. 6 June 2014 ‹http://publiclab.org/›.
---. "The Homebrew Oil Testing Kit." Kickstarter 24 Sep. 2014. 8 Oct. 2014 ‹https://www.kickstarter.com/projects/publiclab/the-homebrew-oil-testing-kit›.
Roszak, Theodore. The Making of a Counter Culture. Garden City, N.Y.: Anchor Books/Doubleday, 1969.
Science for Change Kosovo. "Citizen Science Kosovo." Facebook, n.d. 17 Aug. 2014 ‹https://www.facebook.com/CitSciKS›.
Science for the People. "SftP Magazine." 2013. 8 Oct. 2014 ‹http://science-for-the-people.org/sftp-resources/magazine/›.
Smolin, Lee. The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next. Reprint ed. Boston: Mariner Books, 2007.
Stengers, Isabelle. Cosmopolitics I. Trans. Robert Bononno. Minneapolis: U of Minnesota P, 2010.
Tattersall, Philip J. "What Is Community Based Auditing and How Does It Work?" Futures 42.5 (2010): 466–474. 9 Oct. 2014 ‹http://www.sciencedirect.com/science/article/pii/S0016328709002055›.
---, Kim Eastman, and Tasmanian Community Resource Auditors. Community Based Auditing: Tool Boxes: Training and Support Guides. Beauty Point, Tas.: Resource Publications, 2010.
The Economist. "Trouble at the Lab." 19 Oct. 2013. 8 Oct. 2014 ‹http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble›.
Toomey, Diane. "How Rise of Citizen Science Is Democratizing Research." 28 Jan. 2014. 6 Oct. 2014 ‹http://e360.yale.edu/feature/interview_caren_cooper_how_rise_of_citizen_science_is_democratizing_research/2733/›.
UCL. "Extreme Citizen Science (ExCiteS)." July 2013. 6 June 2014 ‹http://www.ucl.ac.uk/excites/›.
Zooniverse. "The Ever-Expanding Zooniverse - Updated." Daily Zooniverse 3 Feb. 2014. 6 Oct. 2014 ‹http://daily.zooniverse.org/2014/02/03/the-ever-expanding-zooniverse-updated/›.
32

Maras, Steven. « Reflections on Adobe Corporation, Bill Viola, and Peter Ramus while Printing Lecture Notes ». M/C Journal 8, no. 2 (1 June 2005). http://dx.doi.org/10.5204/mcj.2338.

Full text
Abstract:
In March 2002, I was visiting the University of Southern California. One night, as sometimes happens on a vibrant campus, two interesting but very different public lectures were scheduled against one another. The first was by the co-chairman and co-founder of Adobe Systems Inc., Dr. John E. Warnock, talking about books. The second was a lecture by acclaimed video artist Bill Viola. The first event was clearly designed as a networking forum for faculty and entrepreneurs. The general student population was conspicuously absent. Warnock spoke of the future of Adobe, shared stories of his love of books, and in an embodiment of the democratising potential of Adobe software (and no doubt to the horror of archivists in the room) he invited the audience to handle extremely rare copies of early printed works from his personal library. In the lecture theatre where Viola was to speak the atmosphere was different. Students were everywhere; even at the price of ten dollars a head. Viola spoke of time and memory in the information age, of consciousness and existence, to an enraptured audience—and showed his latest work. The juxtaposition of these two events says something about our cultural moment, caught between a paradigm modelled on reverence toward the page, and a still emergent sense of medium, intensity and experimentation. But, the juxtaposition yields more. At one point in Warnock’s speech, in a demonstration of the ultra-high resolution possible in the next generation of Adobe products, he presented a scan of a manuscript, two pages, two columns per page, overflowing with detail. Fig. 1. Dr John E. Warnock at the Annenberg Symposium. Photo courtesy of http://www.annenberg.edu/symposia/annenberg/2002/photos.php Later, in Viola’s presentation, a fragment of a video work, Silent Mountain (2001) splits the screen in two columns, matching Warnock’s text: inside each a human figure struggles with intense emotion, and the challenges of bridging the relational gap. Fig. 2. 
Images from Bill Viola, Silent Mountain (2001). From Bill Viola, THE PASSIONS. The J. Paul Getty Museum, Los Angeles in Association with The National Gallery, London. Ed. John Walsh. p. 44. Both events are, of course, lectures. And although they are different in style and content, a ‘columnular’ scheme informs and underpins both, as a way of presenting and illustrating the lecture. Here, it is worth thinking about Pierre de la Ramée or Petrus (Peter) Ramus (1515-1572), the 16th-century educational reformer who in the words of Frances Yates ‘abolished memory as a part of rhetoric’ (229). Ramus was famous for transforming rhetoric through the introduction of his method or dialectic. For Walter J. Ong, whose discussion of Ramism we are indebted to here, Ramus produced the paradigm of the textbook genre. But it is his method that is more noteworthy for us here, organised through definitions and divisions, the distribution of parts, ‘presented in dichotomized outlines or charts that showed exactly how the material was organised spatially in itself and in the mind’ (Ong, Orality 134-135). Fig. 3. Ramus-inspired study of medicine. Ong, Ramus 301. Ong discusses Ramus in more detail in his book Ramus: Method, and the Decay of Dialogue. Elsewhere, Sutton, Benjamin, and I have tried to capture the sense of Ong’s argument, which goes something like the following. In Ramus, Ong traces the origins of our modern, diagrammatic understanding of argument and structure to the 16th century, and especially the work of Ramus. Ong’s interest in Ramus is not as a great philosopher, nor a great scholar—indeed Ong sees Ramus’s work as a triumph of mediocrity of sorts. Rather, his was a ‘reformation’ in method and pedagogy. The Ramist dialectic ‘represented a drive toward thinking not only of the universe but of thought itself in terms of spatial models apprehended by sight’ (Ong, Ramus 9).
The world becomes thought of ‘as an assemblage of the sort of things which vision apprehends—objects or surfaces’. Ramus’s teachings and doctrines regarding ‘discoursing’ are distinctive for the way they draw on geometrical figures, diagrams or lecture outlines, and the organization of categories through dichotomies. This sets learning up on a visual paradigm of ‘study’ (Ong, Orality 8-9). Ramus introduces a new organization for discourse. Prior to Ramus, the rhetorical tradition maintained and privileged an auditory understanding of the production of content in speech. Central to this practice was deployment of the ‘seats’, ‘images’ and ‘common places’ (loci communes), stock arguments and structures that had accumulated through centuries of use (Ong, Orality 111). These common places were supported by a complex art of memory: techniques that nourished the practice of rhetoric. By contrast, Ramism sought to map the flow and structure of arguments in tables and diagrams. Localised memory, based on dividing and composing, became crucial (Yates 230). For Ramus, content was structured in a set of visible or sight-oriented relations on the page. Ramism transformed the conditions of visualisation. In our present age, where ‘content’ is supposedly ‘king’, an archaeology of content bears thinking about. In it, Ramism would have a prominent place. With Ramus, content could be mapped within a diagrammatic page-based understanding of meaning. A container understanding of content arises. ‘In the post-Gutenberg age where Ramism flourished, the term “content”, as applied to what is “in” literary productions, acquires a status which it had never known before’ (Ong, Ramus 313). ‘In lieu of merely telling the truth, books would now in common estimation “contain” the truth, like boxes’ (313). For Ramus, ‘analysis opened ideas like boxes’ (315). The Ramist move was, as Ong points out, about privileging the visual over the audible. 
Alongside the rise of the printing press and page-based approaches to the word, the Ramist revolution sought to re-work rhetoric according to a new scheme. Although spatial metaphors had always had a ‘place’ in the arts of memory—other systems were, however, phonetically based—the notion of place changed. Specific figures such as ‘scheme’, ‘plan’, and ‘table’ rose to prominence in the now-textualised imagination. ‘Structure’ became an abstract diagram on the page, disconnected from the total performance of the rhetor. This brings us to another key aspect of the Ramist reformation: that, alongside a spatialised organisation of thought, Ramus re-works style as presentation and embellishment (Brummett 449). A kind of separation of conception and execution is introduced in relation to performance. In Ramus’s separation of reason and rhetoric, arrangement and memory are distinct from style and delivery (Brummett 464). While both dialectic and rhetoric are re-worked by Ramus in light of divisions and definitions (see Ong, Ramus Chs. XI-XII), and dialectic remains a ‘rhetorical instrument’ (Ramus 290), rhetoric becomes a unique site for simplification in the name of classroom practicality. Dialectic circumscribes the space of learning of rhetoric; invention and arrangement (positioning) occur in advance (289). Ong’s work on the technologisation of the word is strongly focused on identifying the impact of literacy on consciousness. What Ong’s work on Ramus shows is that alongside the so-called printing revolution the Ramist reformation enacts an equally if not more powerful transformation of pedagogic space. Any serious consideration of print must not only look at the technologisation of the word, and the shifting patterns of literacy produced alongside it, but also at a particular tying together of pedagogy and method that Ong traces back to Ramus. 
If, as is canvassed in the call for papers of this issue of M/C Journal, ‘the transitions in print culture are uneven and incomplete at this point’, then could it be in part due to the way Ramism endures and is extended in electronic and hypermedia contexts? Powerpoint presentations, outlining tools (Heim 139-141), and the scourge of bullet points are the most obvious evidence of the greater institutionalisation of Ramist knowledge architecture. Communication, and the teaching of communication, is now embedded in a Ramist logic of opening up content like a box. Theories of communication draw on so-called ‘models’ that represent the communication process through boxes that divide and define. Perhaps in a less obvious way, ‘spatialized processes of thought and communication’ (Ong, Ramus 314) are essential to the logic of flowcharting and tracking new information structures, and even to teaching hypertext (see the diagram in Nielsen 7): a link that puts the popular notion that hypertext is close to the way we truly think into an interesting perspective. The notion that we are embedded in print culture is not in itself new, even if the forms of our continual reintegration into print culture can be surprising. In the experience of printing, of the act of pressing the ‘Print’ button, we find ourselves re-integrated into page space. A mini-preview of the page re-assures me of an actuality behind the actualisations on the screen, of ink on paper. As I write in my word processing software, the removal of writing from the ‘element of inscription’ (Heim 136)—the frictionless ‘immediacy’ of the flow of text (152)—is conditioned by a representation called the ‘Page Layout’, the dark borders around the page signalling a kind of structural abyss, a no-go zone, a place beyond ‘Normal’ from which there is no ‘Return’. 
At the same time, however, never before has the technological manipulation of the document been so complex, a part of a docuverse that exists in three dimensions. It is a world that is increasingly virtualised by photocopiers that ‘scan to file’ or ‘scan to email’ rather than good old ‘xeroxing’-style copying. Printing gives way to scanning. In a perverse extension of printing (but also, residually, of film and photography), some video software has a function called ‘Print to Video’. That these super-functions of scanning to file or email are disabled on my department photocopier says something about budgets, but also about the comfort with which academics inhabit Ramist space. As I stand here printing my lecture plan, the printer stands defiantly separate from the photocopier, resisting its colonising convergence even though it is dwarfed in size. Meanwhile, the printer demurely dispenses pages, one at a time, face down, in a gesture of discretion or perhaps embarrassment. For in the focus on the pristine page there is a Puritanism surrounding printing: a morality of blemishes, smudges, and stains; of structure, format and order; and a failure to match that immaculate, perfect argument or totality. (Ong suggests that ‘the term “method” was appropriated from the Ramist coffers and used to form the term “methodists” to designate first enthusiastic preachers who made an issue of their adherence to “logic”’ (Ramus 304).) But perhaps this avoidance of multi-functionality is less a Ludditism than an understanding that the technological assemblage of printing today exists peripherally to the ideality of the Ramist scheme. A change in technological means does not necessarily challenge the visile language that informs our very understanding of our respective ‘fields’, or the ideals of competency embodied in academic performance and expression, or the notions of content we adopt. This is why I would argue that some consideration of Ramism and print culture is crucial. 
Any ‘true’ breaking out of print involves, as I suggest, a challenge to some fundamental principles of pedagogy and method, and to the link between the two. And of course, the very prospect of breaking out of print raises the issue of its desirability at a time when these forms of academic performance are culturally valued. On the surface, academic culture has been a strange inheritor of the Ramist legacy, radically furthering its ambitions, but also, it would seem, strongly tempering it with an investment in orality, and other ideas of performance, that resist submission to the Ramist ideal. Ong is pessimistic here, however. Ramism was after all born as a pedagogic movement, central to the purveying of ‘knowledge as a commodity’ (Ong, Ramus 306). Academic discourse remains an odd mixture of ‘dialogue in the give-and-take Socratic form’ and the scheduled lecture (151). The scholastic dispute is at best a ‘manifestation of concern with real dialogue’ (154). As Ong notes, the ideals of dialogue have been difficult to sustain, and the dominant practice leans towards ‘the visile pole with its typical ideals of “clarity”, “precision”, “distinctness”, and “explanation” itself—all best conceivable in terms of some analogy with vision and a spatial field’ (151). Assessing the importance and after-effects of the Ramist reformation today is difficult. Ong describes it as an ‘elusive study’ (Ramus 296). Perhaps Viola’s video, with its figures struggling in a column-like organisation of space, structured in a kind of dichotomy, can be read as a glimpse of our existence in or under a Ramist scheme (interestingly, from memory, these figures emote in silence, deprived of auditory expression). 
My own view is that while it is possible to explore learning environments in a range of ways, and thus move beyond the enclosed mode of study of Ramism, Ramism nevertheless comprises an important default architecture of pedagogy that also informs some higher-level assumptions about assessment and knowledge of the field. Software training, based on a process of working through or mimicking a linked series of screenshots and commands, is a direct inheritor of what Ong calls Ramism’s ‘corpuscular epistemology’, a ‘one to one correspondence between concept, word and referent’ (Ong, Orality 168). My lecture plan, providing an at-a-glance view of my presentation, is another. The default architecture of the Ramist scheme impacts on our organisation of knowledge, and the place of performance within it. Perhaps this is another area where Ong’s fascinating account of secondary orality—that orality that comes into being with television and radio—becomes important (Orality 136). Not only does secondary orality enable group-mindedness and communal exchange, it also provides a way to resist the closure of print and the Ramist scheme, adapting knowledge to new environments and story frameworks. Ong’s work in Orality and Literacy could thus usefully be taken up to discuss Ramism. But this raises another issue, which has to do with the relationship between Ong’s two books. In Orality and Literacy, Ong is careful to trace distinctions between oral, chirographic, manuscript, and print culture. In Ramus this progression is not as prominent (partly because Ong is tracking Ramus’s numerous influences in detail), and we find a more clear-cut distinction between the visile and audile worlds. Yates seems to support this observation, suggesting contra Ong that it is not the connection between Ramus and print that is important, but between Ramus and manuscript culture (230). 
The interconnections but also the lack of fit between the two books suggest a range of fascinating questions about the impact of Ramism across different media/technological contexts, beyond print, but also about the status of visualisation in both rhetorical and print cultures. References Brummett, Barry. Reading Rhetorical Theory. Fort Worth: Harcourt, 2000. Heim, Michael. Electric Language: A Philosophical Study of Word Processing. New Haven: Yale UP, 1987. Maras, Steven, David Sutton, with Marion Benjamin. “Multimedia Communication: An Interdisciplinary Approach.” Information Technology, Education and Society 2.1 (2001): 25-49. Nielsen, Jakob. Multimedia and Hypertext: The Internet and Beyond. Boston: AP Professional, 1995. Ong, Walter J. Orality and Literacy: The Technologizing of the Word. London: Methuen, 1982. —. Ramus: Method, and the Decay of Dialogue. New York: Octagon, 1974. The Second Annual Walter H. Annenberg Symposium. 20 March 2002. USC Annenberg Center of Communication and USC Annenberg School for Communication. 22 March 2005 <http://www.annenberg.edu/symposia/annenberg/2002/photos.php>. Viola, Bill. Bill Viola: The Passions. Ed. John Walsh. Los Angeles: The J. Paul Getty Museum, in association with The National Gallery, London, 2003. Yates, Frances A. The Art of Memory. Harmondsworth: Penguin, 1969. Citation reference for this article: Maras, Steven. “Reflections on Adobe Corporation, Bill Viola, and Peter Ramus while Printing Lecture Notes.” M/C Journal 8.2 (2005). <http://journal.media-culture.org.au/0506/05-maras.php>.