Journal articles on the topic "Multi-Level Collision Risk"

Below are the top 30 journal articles for research on the topic "Multi-Level Collision Risk".

1

Zhang, Weibin, Xinyu Feng, Yong Qi, Feng Shu, Yijin Zhang, and Yinhai Wang. "Towards a Model of Regional Vessel Near-miss Collision Risk Assessment for Open Waters based on AIS Data". Journal of Navigation 72, no. 6 (May 22, 2019): 1449–68. http://dx.doi.org/10.1017/s037346331900033x.

Abstract:
The absence of a regional, open-water vessel collision risk assessment system endangers maritime traffic and hampers safety management. Most recent studies have analysed the risk of collision for a pair of vessels and proposed micro-level risk models. This study proposes a new method that combines density complexity and a multi-vessel collision risk operator for assessing regional vessel collision risk. The regional model considers spatial and temporal features of vessel trajectories in an open-water area and assesses multi-vessel near-miss collision risk through danger probabilities and the possible consequences of collision via four types of possible relative striking positions. Finally, a clustering method for multi-vessel encounter risk, based on the proposed model, is used to identify high-risk collision areas, allowing reliable and accurate analysis to aid the implementation of safety measures.
2

Xue, Qingwen, Ke Wang, Jian John Lu, and Yujie Liu. "Rapid Driving Style Recognition in Car-Following Using Machine Learning and Vehicle Trajectory Data". Journal of Advanced Transportation 2019 (January 23, 2019): 1–11. http://dx.doi.org/10.1155/2019/9085238.

Abstract:
Rear-end collision is one of the most common types of accident on the road. Accurate driving style recognition that accounts for rear-end collision risk is crucial to the design of useful driver assistance and vehicle control systems. The purpose of this study is to develop a driving style recognition method based on vehicle trajectory data extracted from surveillance video. First, three rear-end collision surrogates, Inversed Time to Collision (ITTC), Time-Headway (THW), and Modified Margin to Collision (MMTC), are selected to evaluate the collision risk level of each driver's vehicle trajectory. The driving style of each driver in the training data is labelled according to collision risk level using the K-means algorithm. Then, the recognition model's inputs are extracted from vehicle trajectory features, including acceleration, relative speed, and relative distance, using the Discrete Fourier Transform (DFT), the Discrete Wavelet Transform (DWT), and statistical methods. Finally, a Support Vector Machine (SVM) is applied to recognize driving style from the labelled data. The performance of Random Forest (RF), K-Nearest Neighbor (KNN), and Multi-Layer Perceptron (MLP) classifiers is also compared with the SVM, which outperforms the others with 91.7% accuracy using the DWT feature extraction method.
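The two time-based surrogates named above have standard definitions (inverse time-to-collision and time headway); a minimal sketch follows, with illustrative risk thresholds that are an assumption here, since the paper labels risk levels via K-means rather than fixed cutoffs:

```python
def surrogate_risk(gap_m, v_follow, v_lead):
    """Rear-end collision surrogates for one car-following sample.

    gap_m: bumper-to-bumper gap (m); speeds in m/s.
    """
    # Inversed Time to Collision (ITTC): closing speed over gap;
    # positive only when the follower is faster than the leader.
    ittc = (v_follow - v_lead) / gap_m                        # 1/s
    # Time headway (THW): time for the follower to cover the gap.
    thw = gap_m / v_follow if v_follow > 0 else float("inf")  # s
    # Illustrative flags; thresholds here are assumptions.
    return ittc, thw, (ittc > 0.5 or thw < 1.0)
```

Vectors of such surrogates, computed per trajectory, are what the paper clusters with K-means to obtain driving-style labels before training the SVM.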
3

Du, Lei, Osiris A. Valdez Banda, and Zhongyi Sui. "Available-Maneuvering-Margins-Based Ship Collision Alert System". Journal of Marine Science and Engineering 10, no. 8 (August 15, 2022): 1123. http://dx.doi.org/10.3390/jmse10081123.

Abstract:
The timing of a ship's evasive maneuvers is crucial to the success of collision avoidance and is affected by the risk perceived by the navigator. Therefore, we propose a collision alert system (CAS) based on the navigator's perceived risk to trigger a ship's evasive maneuvers in a timely manner and avoid close-quarters situations. The available maneuvering margins (AMM) with ship stability guarantees are selected as a proxy for the navigator's perceived risk; hence, the proposed CAS is referred to as an AMM-based CAS. Considering the dynamic nature of ship operations, the non-linear velocity obstacle method is utilized to identify the presence of collision risk and activate the AMM-based CAS. The AMM of a ship are measured based on ship maneuverability and stability models, and the degree to which they violate the risk-perception-based ship domain determines the level of collision alert. Several typical encounter scenarios are selected from AIS data to demonstrate the feasibility of this AMM-based CAS. The promising results suggest that the proposed AMM-based CAS is applicable in both ship-pair and multi-vessel encounter scenarios. Collision risk can be accurately detected, and a collision alert consistent with the risk severity is then issued. The proposed AMM-based CAS has the potential to assist autonomous ships in understanding the risk level of an encounter situation and determining the timing of evasive maneuvers. The advantages and limitations of the proposed method are discussed.
4

Wu, Di, Zhi Yu, Alimasi Adili, and Fanchen Zhao. "A Self-Collision Detection Algorithm of a Dual-Manipulator System Based on GJK and Deep Learning". Sensors 23, no. 1 (January 3, 2023): 523. http://dx.doi.org/10.3390/s23010523.

Abstract:
Self-collision detection is fundamental to the safe operation of multi-manipulator systems, especially when they cooperate in highly dynamic working environments. Existing methods still face the problem that detection efficiency and accuracy cannot be achieved at the same time. In this paper, we introduce artificial intelligence technology into the control system. Based on the Gilbert-Johnson-Keerthi (GJK) algorithm, we generated a dataset and trained a deep neural network (DLNet) to improve detection efficiency. By combining DLNet and the GJK algorithm, we propose a two-level self-collision detection algorithm (the DLGJK algorithm) that solves real-time self-collision detection problems in a dual-manipulator system with fast, continuous, high-precision detection. First, the proposed algorithm uses DLNet to determine whether the system's current working state carries a risk of self-collision; since most working states in a system workspace do not, DLNet can effectively reduce the number of unnecessary detections and improve detection efficiency. Then, for working states with a risk of self-collision, we modeled precise colliders and applied the GJK algorithm for fine self-collision detection, which ensures detection accuracy. The experimental results showed that, compared with global use of the GJK algorithm for self-collision detection, the DLGJK algorithm reduces the expected time of a single detection in a system workspace by 97.7%. In manipulator path planning, it effectively reduces the number of unnecessary detections, improves detection efficiency, and reduces system overhead. The proposed algorithm also scales well to multi-manipulator systems that can be split into dual-manipulator systems.
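The two-level idea (a cheap learned filter that gates an exact geometric test) is generic enough to sketch; here `coarse_filter` stands in for the trained DLNet and `exact_check` for the GJK narrow phase, both hypothetical placeholders rather than the paper's implementation:

```python
def two_level_collision_check(state, coarse_filter, exact_check):
    # Level 1: cheap classifier decides whether this configuration
    # could plausibly self-collide; most states are rejected here.
    if not coarse_filter(state):
        return False                 # skip the expensive exact test
    # Level 2: precise geometric test (GJK in the paper), run only on
    # the small fraction of states the filter lets through.
    return exact_check(state)

# Toy stand-ins: a "state" is the 1-D positions of two arm tips;
# flag risk within 2 units, report collision within 0.5 units.
coarse = lambda s: abs(s[0] - s[1]) < 2.0
exact = lambda s: abs(s[0] - s[1]) < 0.5
```

The speed-up reported in the abstract comes precisely from how rarely level 2 runs: the filter must be conservative (few false negatives) for the scheme to stay accurate.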
5

Chen, Xingyu, Haijian Bai, Heng Ding, Jianshe Gao, and Wenjuan Huang. "A Safety Control Method of Car-Following Trajectory Planning Based on LSTM". Promet - Traffic&Transportation 35, no. 3 (June 28, 2023): 380–94. http://dx.doi.org/10.7307/ptt.v35i3.118.

Abstract:
This paper focuses on the potential collision hazards in car-following behaviour generated by deep learning models. Based on an intelligent LSTM model combined with a Gipps model of safe collision avoidance, a new Gipps-LSTM model is constructed that can not only learn the intelligent behaviour of human drivers but also ensure vehicle safety. The combination works as follows: the concept of a potential collision point (PCP) is introduced, and either the LSTM model or the Gipps model is activated through a risk judgment algorithm. Dataset 1 and dataset 2 are used to train and simulate the LSTM and Gipps-LSTM models. The simulation results show that the Gipps-LSTM model solves the problem of partial trajectory collision seen in the LSTM model simulation. Moreover, the risk level of all trajectories is lower than that of the LSTM model. The safety and stability of the model are verified by multi-vehicle loop and multi-vehicle linear simulations. Compared with the LSTM model, the safety of the Gipps-LSTM model is improved by 42.02%, and the convergence time is reduced by 25 seconds.
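The Gipps component bounds the follower's speed so that it can always stop behind a braking leader. A minimal sketch of one common form of the Gipps safe-speed constraint (the parameter values are illustrative assumptions, not those of the paper):

```python
import math

def gipps_safe_speed(gap, v_follow, v_lead, b=3.0, b_hat=3.0, tau=1.0):
    """Upper bound on the follower's speed after one reaction time.

    gap: distance to the leader minus a standstill margin (m)
    b / b_hat: follower's / assumed leader's deceleration (m/s^2, > 0)
    tau: reaction time (s)
    """
    disc = b * b * tau * tau + b * (2.0 * gap - v_follow * tau
                                    + v_lead * v_lead / b_hat)
    # disc < 0 (clamped to 0 here) means no nonnegative safe speed
    # exists; a negative return value says "brake immediately".
    return -b * tau + math.sqrt(max(disc, 0.0))
```

In a combined scheme like the one the abstract describes, a risk judgment at the potential collision point decides whether the learned model or a safe-speed rule of this kind drives the vehicle.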
6

Zhao, Shizhong, Zhengsheng Hu, Yangyang Yu, Gongxun Deng, and Min Deng. "Passive Safety Assessment of Railroad Trains in Moose Herd Collision Scenarios". Sustainability 16, no. 3 (January 25, 2024): 1043. http://dx.doi.org/10.3390/su16031043.

Abstract:
Moose herd–train collisions represent one of the potential hazards that railway operations must contend with, making the assessment of passive train safety in such scenarios a crucial concern. This study analyzes the responses of bullet trains colliding with moose herds and investigates the influence of various factors under these conditions. To achieve this goal, a multibody (MB) model was developed using the MADYMO platform. The displacement of the moose’s center of gravity (CG) was employed to assess the safety boundaries, while the relative positions between the wheels and rails were used to evaluate the risk of derailment. The findings revealed that the collision forces exhibited multi-peak characteristics that were subsequently transmitted to the wheel–rail contact system, resulting in disturbances in the relative positions of the wheels and rails. However, these disturbances did not reach a level that would induce train derailment. Furthermore, larger moose herds exhibited higher throw heights, although these heights remained within safe limits and did not pose a threat to overhead lines. The primary safety risk in moose–train collisions stemmed from secondary collisions involving moose that had fallen onto the tracks and oncoming trains. This study offers valuable insights for enhancing the operational safety of high-speed trains and safeguarding wildlife along railway corridors.
7

Bakdi, Azzeddine, Ingrid Kristine Glad, Erik Vanem, and Øystein Engelhardtsen. "AIS-Based Multiple Vessel Collision and Grounding Risk Identification based on Adaptive Safety Domain". Journal of Marine Science and Engineering 8, no. 1 (December 19, 2019): 5. http://dx.doi.org/10.3390/jmse8010005.

Abstract:
The continuous growth in maritime traffic and recent developments towards autonomous navigation have directed increasing attention to navigational safety, for which new tools are required to identify real-time risk and complex navigation situations. These tools are of paramount importance for avoiding the potentially disastrous consequences of accidents and for promoting safe navigation at sea. In this study, an adaptive ship safety domain with spatial risk functions is proposed to identify both collision and grounding risk based on the motion and maneuverability conditions of all vessels. The algorithm is designed and validated on extensive Automatic Identification System (AIS) data for decision support over a large area, while integration with other navigational systems will increase effectiveness and ensure reliability. Since successful evasion of a potential vessel-to-vessel collision or grounding situation depends strongly on nearby maneuvering limitations and other possible accident situations, multi-vessel collision and grounding risk is considered in this work to identify real-time risk. The presented algorithm utilizes dynamic AIS information, vessel registry data, and high-resolution maps, and it is robust to inaccuracies in position, course, and speed-over-ground records. The computation-efficient algorithm allows real-time risk identification over a large monitored area, up to country level and over several years of operation, with very high accuracy.
8

Zhou, Hechao, Chao Zhang, Jun Zhan, and Jimin Zhang. "Research on the city tram collision at a level crossing". Advances in Mechanical Engineering 10, no. 9 (September 2018): 168781401879756. http://dx.doi.org/10.1177/1687814018797563.

Abstract:
The collision between a city tram and a car at a level crossing is simulated based on multi-body dynamics. Compared with the finite element method, this approach has the obvious advantage of fast computation, making it convenient for studying the dynamic responses and derailment mechanism of railway vehicles during a collision accident. The simulation results show that when a city tram is laterally impacted by a car at a level crossing, the dynamic response and derailment risk of the tram are greatly influenced by the boundary conditions, such as the mass and speed of the colliding car and the structural arrangement and loading condition of the tram. Moreover, given the space limitations of the city tram, two measures are proposed to decrease the lateral impact force during its transmission from car body to wheelset: using a secondary damper, and increasing the secondary lateral clearance. The simulation results show that the derailment coefficient of the improved city tram can be reduced by 52%, from 1.63 to 0.79.
9

Cui, Jiapeng, Yu Wu, Xichao Su, and Jingyu Song. "A Task Allocation Model for a Team of Aircraft Launching on the Carrier". Mathematical Problems in Engineering 2018 (2018): 1–12. http://dx.doi.org/10.1155/2018/7920806.

Abstract:
High efficiency and safety are of paramount importance for improving the fighting capability of an aircraft carrier. The task allocation problem for a team of aircraft launching on the carrier is studied in this paper. Although the study of this problem is of great significance, no relevant literature has been found on this issue. Firstly, the conceptual model of the problem is formulated, with the planning objectives and the constraints defined. Then the multi-aircraft, multi-catapult launching task allocation problem is decomposed into two consecutive sub-tasks: catapult allocation and launching sequence determination. The taxi time of each aircraft is considered during catapult allocation, and the launching position of each aircraft is determined using a decision-making method. In the launching sequence determination step, the starting collision risk of aircraft is introduced to optimize the launching sequence so as to minimize the collision risk on each catapult. Thirdly, the proposed method is validated using the setup of the Nimitz-class aircraft carrier and is compared to an artificial heuristics approach and a brute force approach. Experimental results demonstrate that the proposed method outperforms the artificial heuristics approach and outperforms the brute force approach in balancing efficiency and safety.
10

Ochoa, Eduardo, Nuno Gracias, Klemen Istenič, Josep Bosch, Patryk Cieślak, and Rafael García. "Collision Detection and Avoidance for Underwater Vehicles Using Omnidirectional Vision". Sensors 22, no. 14 (July 18, 2022): 5354. http://dx.doi.org/10.3390/s22145354.

Abstract:
Exploration of marine habitats is one of the key pillars of underwater science, which often involves collecting images at close range. As acquiring imagery close to the seabed involves multiple hazards, the safety of underwater vehicles, such as remotely operated vehicles (ROVs) and autonomous underwater vehicles (AUVs), is often compromised. Common applications for obstacle avoidance in underwater environments are often conducted with acoustic sensors, which cannot be used reliably at very short distances, thus requiring a high level of attention from the operator to avoid damaging the robot. Therefore, developing capabilities such as advanced assisted mapping, spatial awareness and safety, and user immersion in confined environments is an important research area for human-operated underwater robotics. In this paper, we present a novel approach that provides an ROV with capabilities for navigation in complex environments. By leveraging the ability of omnidirectional multi-camera systems to provide a comprehensive view of the environment, we create a 360° real-time point cloud of nearby objects or structures within a visual SLAM framework. We also develop a strategy to assess the risk of obstacles in the vicinity. We show that the system can use the risk information to generate warnings that the robot can use to perform evasive maneuvers when approaching dangerous obstacles in real-world scenarios. This system is a first step towards a comprehensive pilot assistance system that will enable inexperienced pilots to operate vehicles in complex and cluttered environments.
11

Ahmed, Ahmed, Samah Ibrahim A. Aal, Ahmed Abdelhafeez, and Ahmed E. Fakhry. "An intelligent Fusion Framework for Risk Assessment of Autonomous Ship through Functional Mapping Criterion Sub-Intervals into Single Interval Method". Fusion: Practice and Applications 15, no. 1 (2024): 45–58. http://dx.doi.org/10.54216/fpa.150104.

Abstract:
Nowadays, intelligent information technology can implement high-level information processing and decision-making activities that can support the risk assessment of autonomous ships. Risk assessment is a critical process for deploying autonomous ships, ensuring these innovative vessels' safe and efficient operation. There is a need to identify, analyze, and mitigate potential risks associated with system reliability, collision avoidance, cybersecurity, environmental conditions, human interaction, regulatory compliance, sensor performance, data integrity, emergency response, and testing and validation. This work provides an overview of the essential considerations and objectives of risk assessment for autonomous ships. We used a multi-criteria decision-making model to deal with the various criteria. The Ranking of Alternatives through Functional Mapping Criterion Sub-Intervals into Single Interval (RAFSI) method is applied to rank the alternatives. This study uses ten criteria and twenty alternatives. The results show that the proposed framework can provide a comprehensive risk assessment that enables stakeholders to gain insight into potential hazards and vulnerabilities unique to autonomous ships.
12

Metz, Isabel C., Joost Ellerbroek, Thorsten Mühlhausen, Dirk Kügler, and Jacco M. Hoekstra. "Analysis of Risk-Based Operational Bird Strike Prevention". Aerospace 8, no. 2 (January 28, 2021): 32. http://dx.doi.org/10.3390/aerospace8020032.

Abstract:
Bird strike prevention in civil aviation has traditionally focused on the airport perimeter. Since the risk of especially damaging bird strikes outside the airport boundaries is rising, this paper investigates the safety potential of operational bird strike prevention involving pilots and controllers. In such a concept, controllers would be equipped with a bird strike advisory system, allowing them to delay departures that are most vulnerable to the consequences of bird strikes when bird strike risk is high. An initial study has shown the strong potential of the concept to prevent bird strikes given perfect bird movement prediction. This paper takes the research to the next level by taking into account the limited predictability of bird tracks; the collision avoidance algorithm is thus extended to a bird strike risk algorithm. The risk of bird strikes is calculated for birds expected to cross the extended runway center line and to cause aircraft damage upon impact. By specifically targeting these birds and excluding birds lingering on the runway, which are handled by the local wildlife control, capacity reductions should be limited and implementation should remain feasible. The extrapolation of bird tracks is performed by simple linear regression based on the bird positions known at the intended take-off times. To calculate the probability of collision, uncertainties resulting from variability in bird velocity and track are included. The study demonstrates the necessity of limiting alerts to potentially damaging strikes with birds crossing the extended runway center line to keep the imposed delays tolerable for airports operating at their capacity limits. It is shown that predicting bird movements by simple linear regression without considering individual bird behavior is insufficient to achieve a safety effect. Hence, in-depth studies of multi-year bird data to develop bird behavior models and reliable predictions are recommended for future research. This is expected to facilitate the implementation of a bird strike advisory system satisfying both safety and capacity aspects.
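The extrapolation step described above (a simple linear regression of recent bird positions, evaluated at the intended take-off time) can be sketched as follows; the function name and per-coordinate least-squares fit are illustrative:

```python
def extrapolate_track(times, xs, ys, t_future):
    """Linearly extrapolate a bird's position to t_future from recent
    (time, x, y) fixes, fitting an ordinary least-squares line per
    coordinate."""
    n = len(times)
    t_mean = sum(times) / n

    def fit(vals):
        v_mean = sum(vals) / n
        num = sum((t - t_mean) * (v - v_mean) for t, v in zip(times, vals))
        den = sum((t - t_mean) ** 2 for t in times)
        slope = num / den              # velocity component (units/s)
        intercept = v_mean - slope * t_mean
        return intercept + slope * t_future

    return fit(xs), fit(ys)
```

The paper additionally propagates uncertainty in bird speed and track, so the prediction becomes a collision probability rather than a single extrapolated point.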
13

Yin, Hang, Yong Ming Gao, Chao Wang, and Xue Bo Zhang. "Design and Implementation of a On-Orbit Service Oriented Dynamic Simulation Platform". Advanced Materials Research 712-715 (June 2013): 2588–93. http://dx.doi.org/10.4028/www.scientific.net/amr.712-715.2588.

Abstract:
Because space missions involve high cost, long development cycles, and high risk, simulation technology has been an important tool in space technology research and engineering applications. On-orbit servicing is a cost-effective approach to space missions: a servicing spacecraft extends the service life of a target spacecraft through on-orbit maintenance, module replacement, and resupply, thereby enhancing the spacecraft's effectiveness. On-orbit servicing technology involves orbital dynamics, collision dynamics, and multibody dynamics in an integrated application of dynamics knowledge, and therefore requires a comprehensive and efficient simulation platform to support research and to expand on-orbit servicing applications. In this paper, the open-source dynamics engine ODE, the Open Inventor graphics library, and the Matlab numerical simulation software are combined to build a multi-level on-orbit servicing simulation platform that plays to the strengths of all three packages, enabling rapid front-end servicing simulation as well as dynamics simulation, operation, control, and results processing.
14

Deng, Jian, Shaoyong Liu, Cheng Xie, and Kezhong Liu. "Risk Coupling Characteristics of Maritime Accidents in Chinese Inland and Coastal Waters Based on N-K Model". Journal of Marine Science and Engineering 10, no. 1 (December 21, 2021): 4. http://dx.doi.org/10.3390/jmse10010004.

Abstract:
The causes of maritime accidents are complex, mostly arising from the coupling of four types of factors: human, ship, environment, and management. To effectively analyze the causes of maritime accidents in China and reveal the risk-coupling characteristics of accidents, this paper establishes an N-K model of maritime accidents and calculates and analyzes the four types of coupling of risk factors affecting maritime traffic safety. The paper collects 922 maritime accidents that occurred in China from 2000 to 2020, analyzes the location, type, and level of the accidents, and uses the trigger principle to describe the accident process. For marine and inland river accidents, the four types of coupling values of risk factors (single-factor, two-factor, three-factor, and four-factor coupling) are calculated for comparison and analysis. In addition, the paper calculates the coupling values for six typical types of maritime accident: collision, sinking, contact, fire/explosion, stranding, and grounding. From the coupling values and the frequency of sub-factors, the coupling characteristics of maritime accidents are analyzed. The results show that as the number of risk factors participating in the coupling increases, the coupling value increases, and multi-factor coupling is more likely to cause accidents. The overall pattern of risk-coupling causes of maritime accidents is basically consistent with that of inland river accidents, but each has its own characteristics in the specific degree of risk coupling and the dominant risk elements. Different types of maritime accidents exhibit different coupling characteristics and different dominant risk factors.
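In the N-K risk-coupling literature this abstract follows, the coupling value is typically computed as a mutual-information-style measure over factor occurrence frequencies; a two-factor sketch under that assumption (the probabilities below are illustrative, not the paper's data):

```python
import math

def two_factor_coupling(joint):
    """Two-factor N-K coupling value T(a, b), computed as the mutual
    information (base 2) of the two factors' occurrence states.

    joint[h][i]: probability that factor a is in state h and factor b
    in state i, estimated from accident frequencies; e.g. state 0 =
    factor not involved, state 1 = involved.
    """
    pa = [sum(row) for row in joint]            # marginal of factor a
    pb = [sum(col) for col in zip(*joint)]      # marginal of factor b
    t = 0.0
    for h, row in enumerate(joint):
        for i, p in enumerate(row):
            if p > 0:
                t += p * math.log2(p / (pa[h] * pb[i]))
    return t
```

Higher-order couplings (three- and four-factor) follow the same pattern with higher-dimensional joint frequency tables, which is why the coupling value tends to grow as more factors participate.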
15

Guo, Chongchong, and Wenhua Wu. "Quantitative Risk Analysis of Disconnect Operations in a Marine Nuclear Power Platform Using Fuzzy Bayesian Network". Journal of Marine Science and Engineering 10, no. 10 (October 11, 2022): 1479. http://dx.doi.org/10.3390/jmse10101479.

Abstract:
Marine nuclear power platforms can continuously supply electricity and fresh water for marine resource exploration and surrounding islands. China’s first marine nuclear power platform uses a soft yoke multi-joint connect mode as the mooring positioning device. When the marine nuclear power platform needs repair, maintenance, nuclear fuel replacement, or a different operation area, a mooring disconnect operation must be carried out. The traditional mooring disconnect process consists of four stages: cable limiting, yoke offloading, yoke dropping, and equipment recovery stages. The entire disconnect process is a high-risk nuclear-related operation that could result in a collision accident between the yoke and hull structure, resulting in nuclear fuel leaks and casualties. Therefore, it is necessary to evaluate the risk factors of the disconnect process and to assess the risk level together with the consequence of each risk. In this paper, a quantitative risk analysis of nuclear power platform disconnect operations is carried out based on a fuzzy Bayesian network approach for risk events in each stage of the disconnect operations. Based on the forward fuzzy Bayesian inference, the criticality of each risk event to the disconnect process is evaluated and compared. The main risk factors that may cause a disconnect accident are then determined based on the reverse Bayesian inference rule. The results indicate that human error is the most likely factor leading to the failure of the disconnect process, requiring strict control of personnel operation procedures during this process. The yoke colliding with the hull and stern antifriction chain-breaking are the most significant hazards caused by the disconnect failing. Thus, the distance between the yoke and hull, stern tug tensile force, and maintenance of the antifriction chain should receive particular attention.
16

Wang, Xudong, Changming Hu, Jing Liang, Juan Wang, and Siyuan Dong. "A Study of the Factors Influencing the Construction Risk of Steel Truss Bridges Based on the Improved DEMATEL–ISM". Buildings 13, no. 12 (December 7, 2023): 3041. http://dx.doi.org/10.3390/buildings13123041.

Abstract:
To enhance the safety management of steel-truss-bridge construction, an evaluation method based on the improved DEMATEL–ISM was proposed to analyze the risk factors involved in such construction. Decision Making Trial and Evaluation Laboratory (DEMATEL) is a method for systematic factor analysis that utilizes graph-theory and matrix tools, allowing the existence and strength of relationships between elements to be assessed by analyzing the logical and direct impact relationships among the elements of a system. The distinctive feature of Interpretative Structural Modeling (ISM) is the decomposition of complex systems into several subsystems (elements) and the construction of a multi-level hierarchical structural model through algebraic operations. Specifically, triangular fuzzy numbers are first introduced to improve the direct influence matrix in the DEMATEL method, thereby reducing the subjectivity of expert evaluations. The degree of influence, influenced degree, centrality degree, and causality degree of each influencing factor are determined and ranked on this basis. In view of the characteristics of top-push construction, 20 key factors were selected from four aspects: human, material, environment, and management. The top five influencing factors identified are displacement during pushing (X10), safety-management qualification (X18), local buckling (X14), overturning of steel beams (X13), and collision with bridge piers during guide beam installation (X7). Corresponding solutions were then proposed for the different influencing factors. The results offer targeted measures to enhance the safety management of steel-truss-bridge construction and provide a reference for accident prevention.
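The DEMATEL indicators named above (influence, influenced, centrality, and causality degrees) have a standard computation from the direct-influence matrix; a pure-Python sketch of one common formulation (the row-sum normalization and Neumann-series inversion are assumptions of this sketch, and the example matrix is made up):

```python
def dematel(direct):
    """DEMATEL indicators from a direct-influence matrix.

    Returns (centrality D+R, causality D-R) per factor, where D is the
    influence degree (row sums of the total-relation matrix) and R the
    influenced degree (column sums)."""
    n = len(direct)
    s = max(sum(row) for row in direct)          # normalize by max row sum
    x = [[v / s for v in row] for row in direct]
    # Total-relation matrix T = X (I - X)^(-1), computed via the series
    # T = X + X^2 + X^3 + ... (assumed convergent: spectral radius < 1).
    t = [row[:] for row in x]
    power = [row[:] for row in x]
    for _ in range(200):
        power = [[sum(power[i][k] * x[k][j] for k in range(n))
                  for j in range(n)] for i in range(n)]
        t = [[t[i][j] + power[i][j] for j in range(n)] for i in range(n)]
    d = [sum(t[i][j] for j in range(n)) for i in range(n)]  # influence
    r = [sum(t[i][j] for i in range(n)) for j in range(n)]  # influenced
    return [d[i] + r[i] for i in range(n)], [d[i] - r[i] for i in range(n)]
```

Factors with positive causality are "cause" factors, those with negative causality "effect" factors; ISM then layers them into the hierarchical structure the abstract mentions.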
17

Zhang, Jingyu, Fangyan Liu, Zhenqi Chen, Zhenhua Yu, Xingyao Xiao, Lei Shi, and Zizheng Guo. "A multi-level analysis on the causes of train-pedestrian collisions in Southwest China 2011–2020". Accident Analysis & Prevention 193 (December 2023): 107332. http://dx.doi.org/10.1016/j.aap.2023.107332.

18

Fitrikananda, Bona P., Yazdi Ibrahim Jenie, Rianto Adhy Sasongko, and Hari Muhammad. "Risk Assessment Method for UAV’s Sense and Avoid System Based on Multi-Parameter Quantification and Monte Carlo Simulation". Aerospace 10, no. 9 (September 1, 2023): 781. http://dx.doi.org/10.3390/aerospace10090781.

Abstract:
The rise in Unmanned Aerial Vehicle (UAV) usage has opened exciting possibilities but has also introduced risks, particularly in aviation, with instances of UAVs flying dangerously close to commercial airplanes. The potential for accidents underscores the urgent need for effective measures to mitigate mid-air collision risks. This research aims to assess the effectiveness of the Sense and Avoid (SAA) system during operation by providing a rating system to quantify its parameters and operational risk, ultimately enabling authorities, developers, and operators to make informed decisions to reach a certain level of safety. Seven parameters are quantified in this research: the SAA’s detection range, field of view, sensor accuracy, measurement rate, system integration, and the intruder’s range and closing speed. While prior studies have addressed these parameter quantifications separately, this research’s main contribution is the comprehensive method that integrates them all within a simple five-level risk rating system. This quantification is complemented by a risk assessment simulator capable of testing a UAV’s risk rating within a large sample of arbitrary flight traffic in a Monte Carlo simulation setup, which ultimately derives its maximum risk rating. The simulation results demonstrated safety improvements using the SAA system, shown by the combined maximum risk rating value. Among the contributing factors, the detection range and sensor accuracy of the SAA system stand out as the primary drivers of this improvement. This conclusion is consistent even in more regulated air traffic imposed with five or three mandatory routes. Interestingly, increasing the number of intruders to 50 does not alter the results, as the intruders’ probability of being detected remains almost the same. On the other hand, improving SAA radar capability has a more significant effect on risk rating than enforcing regulations or limiting intruders.
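The Monte Carlo setup described above (sample many arbitrary traffic encounters, rate each one, keep the maximum rating) can be sketched generically; the encounter model and the five-level rating rule below are toy assumptions, not the paper's parameter quantification:

```python
import random

def monte_carlo_max_risk(rate_fn, sample_encounter, n_trials=10000, seed=0):
    """Estimate a UAV's maximum risk rating over random traffic.

    rate_fn maps one sampled encounter to a 1-5 rating; both callables
    are illustrative placeholders."""
    rng = random.Random(seed)
    worst = 1
    for _ in range(n_trials):
        worst = max(worst, rate_fn(sample_encounter(rng)))
        if worst == 5:               # rating scale is capped at 5
            break
    return worst

# Toy stand-ins: an encounter is (range in m, closing speed in m/s);
# the rating rises as time-to-collision shrinks.
def sample(rng):
    return rng.uniform(50, 2000), rng.uniform(5, 80)

def rate(enc):
    rng_m, speed = enc
    ttc = rng_m / speed
    return 5 if ttc < 5 else 4 if ttc < 10 else 3 if ttc < 20 else 2 if ttc < 40 else 1
```

In the paper's setup, the rating would also fold in the SAA parameters (detection range, field of view, sensor accuracy, measurement rate, system integration) rather than time-to-collision alone.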
19

Lei, Wu, Wen Wen, Jiang Xiangyang, Wang Ran, Zhu Zhi Jie, Wang Hong Yan, Chun Kit Ang and Mak Jyh Yoong. "Comprehensive application system of BIM technology in MULTIBAY project". Highlights in Science, Engineering and Technology 73 (29 November 2023): 570–79. http://dx.doi.org/10.54097/hset.v73i.14680.

Abstract (summary):
With the rapid development of urbanization, the number of super high-rise buildings of ever greater volume and scale is gradually increasing. At present, problems such as inconsistent transmission of engineering information and inefficient collaborative management among the various participants persist in the construction of international super high-rise buildings. Against this background, and based on engineering practice, this article draws on the advantages of BIM technology, such as visualization, simulation, and collaboration, to explore the application of BIM in the design stage of super high-rise buildings, for example multi-disciplinary detailed design and drawing collision inspection, as well as in the construction stage, for example visual technical disclosure and scheme selection. It also constructs a BIM whole-process management system, enabling BIM technology to participate more effectively in the whole process of project construction, improving the level of project information technology, and providing an engineering reference for the application of BIM technology in similar super high-rise buildings.
20

Semenov, A. N., and O. P. Polyansky. "Models for the formation of polyphase gabbro-monzodiorite massifs of the Western Sangilen in the collisional and transtensional-shear settings". Geodynamics & Tectonophysics 14, no. 6 (14 December 2023): 0725. http://dx.doi.org/10.5800/gt-2023-14-6-0725.

Abstract (summary):
A model for the formation of intrusions of the collision stage (525–490 Ma) and a model of magmatism of the transtensional shear stage (465–440 Ma) within the Mugur-Chinchilig and Erzin-Naryn blocks of Western Sangilen (Tuva) have been developed to describe the process of crust-mantle interaction. Model experiments confirm petrological data on the presence of multi-level chambers during the formation of the Pravotarlashkinsky and Bashkymugur massifs. The proposed model describes the migration of mantle magmas above the head of the mantle plume at the collision stage and assumes the rise of magmas along a permeable tectonic zone in the mantle lithosphere and crust at the transtensional-shear stage. The modelling results establish that material from the magma chamber can reach upper-crustal depths at volume ratios of gabbroids to diorites from 1:2 to 3:4, additionally introducing about 5% by volume of lower crustal material. The physical parameters of the primary magmas (viscosity, solidus and liquidus temperatures, degree of melting depending on temperature and composition, change in density) were calculated taking into account the real geochemical characteristics of igneous rocks from the polyphase massifs of Western Sangilen.
21

Wright, Patrick G. R., Frazer G. Coomber, Chloe C. Bellamy, Sarah E. Perkins and Fiona Mathews. "Predicting hedgehog mortality risks on British roads using habitat suitability modelling". PeerJ 7 (21 January 2020): e8154. http://dx.doi.org/10.7717/peerj.8154.

Abstract (summary):
Road vehicle collisions are likely to be an important contributory factor in the decline of the European hedgehog (Erinaceus europaeus) in Britain. Here, a collaborative roadkill dataset collected from multiple projects across Britain was used to assess when, where and why hedgehog roadkill are more likely to occur. Seasonal trends were assessed using a Generalized Additive Model. There were few casualties in winter—the hibernation season for hedgehogs—with a gradual increase from February that reached a peak in July before declining thereafter. A sequential multi-level Habitat Suitability Modelling (HSM) framework was then used to identify areas showing a high probability of hedgehog roadkill occurrence throughout the entire British road network (∼400,000 km) based on multi-scale environmental determinants. The HSM predicted that grassland and urban habitat coverage were important in predicting the probability of roadkill at a national scale. Probabilities peaked at approximately 50% urban cover at a one km scale and increased linearly with grassland cover (improved and rough grassland). Areas predicted to experience high probabilities of hedgehog roadkill occurrence were therefore in urban and suburban environments, that is, where a mix of urban and grassland habitats occur. These areas covered 9% of the total British road network. In combination with information on the frequency with which particular locations have hedgehog road casualties, the framework can help to identify priority areas for mitigation measures.
22

Chang, Philip, Peter Elmer, Yanxi Gu, Vyacheslav Krutelyov, Gavin Niendorf, Michael Reid, Balaji Venkat Sathia Narayanan et al. "Line Segment Tracking in the High-luminosity LHC". EPJ Web of Conferences 295 (2024): 02019. http://dx.doi.org/10.1051/epjconf/202429502019.

Abstract (summary):
The Large Hadron Collider (LHC) will be upgraded to the High-Luminosity LHC, increasing the number of simultaneous proton-proton collisions (pileup, PU) severalfold. The harsher PU conditions lead to exponentially increasing combinatorics in charged-particle tracking, placing a large demand on computing resources. The projected computing resources required exceed the computing budget with the current algorithms running on single-threaded CPUs. Motivated by the rise of heterogeneous computing in high-performance computing centers, we present Line Segment Tracking (LST), a highly parallelizable algorithm that can run efficiently on GPUs and is being integrated into the CMS experiment's central software. Using the Alpaka framework for the implementation allows better portability of the code to different types of commercial parallel processors, giving the experiment flexibility in which processors to purchase in the future. To verify comparable computational performance with a native solution, the Alpaka implementation is compared with a CUDA one on an NVIDIA Tesla V100 GPU. The algorithm creates short track segments in parallel and progressively forms higher-level objects by linking segments that are consistent with a genuine physics track hypothesis. The computing and physics performance are on par with the latest multi-CPU versions of existing CMS tracking algorithms.
23

Essefi, Elhoucine. "Homo Sapiens Sapiens Progressive Defaunation During The Great Acceleration: The Cli-Fi Apocalypse Hypothesis". International Journal of Toxicology and Toxicity Assessment 1, no. 1 (17 July 2021): 18–23. http://dx.doi.org/10.55124/ijt.v1i1.114.

Abstract (summary):
This paper studies the apocalyptic scenario from the perspective of the Great Acceleration. The apocalyptic scenario is not a pure invention of literary works; rather, scientific evidence points to dramatic changes in climatic conditions related to the climax of human actions. Modelling of the future climate leads to horrifying situations, including intolerable temperatures, dryness, tornadoes, and a noticeable sea level rise invading coastal regions. Going beyond these scientific claims, Homo sapiens sapiens has extended this imagination through climate fiction (cli-fi) to propose a dramatic end. Climate fiction has developed into a recording machine containing every kind of fiction that depicts environmental events, and has consequently lost its true significance.

Introduction
The Great Acceleration may be considered the Late Anthropocene, in which human actions reached their climax, leading to dramatic climatic changes and paving the way for a possible apocalyptic scenario threatening the existence of humanity. The apocalyptic scenario is therefore not a pure invention of literary works; many scientific arguments, especially those related to climate change, are in favour of the apocalypse1. Indeed, modelling of the future climate leads to horrifying situations, including intolerable temperatures (on 06/07/2021, Kuwait recorded the highest temperature of 53.2 °C), dryness, tornadoes, and a noticeable sea level rise invading coastal regions. These conditions, taking place during the Great Acceleration, would have direct repercussions on the human species. Considering that apocalyptic extinctions have really caused the disappearance of many stronger species, including the dinosaurs, Homo sapiens sapiens has extended its imagination through climate fiction (cli-fi) to propose a dramatic end due to severe climatic conditions intolerable to humankind.
The mass extinction of animal species has occurred several times over the geological ages. Researchers have a poor understanding of the causes and processes of these major crises1. Nonetheless, whatever the cause of extinction, the apocalyptic scenario has always been present in geological history. For example, the extinction of the dinosaurs, whether by asteroid impact or climate change, can by no means be denied its apocalyptic aspect2. At the same time, many animal and plant species became extinct, from marine or flying reptiles to marine plankton. This biological crisis of sixty-five million years ago is not the only one the biosphere has suffered: it was preceded and followed by other crises which caused the extinction or rarefaction of animal species. It is therefore undeniable that many animal groups have disappeared. It is even on these changes of fauna that the geologists of the last century based the scale of geological time, a scale which is still used. But it is no less certain that extinction processes, which are extremely complex, are far from being understood. We must first agree on the meaning of the word "extinction", namely on the apocalyptic aspect of the concept. It is well understood that, without disappearances, the evolution of species could not have followed its course. Aware that apocalyptic extinctions had massacred stronger species that had dominated the planet, Homo sapiens sapiens realized that the possibility of an apocalyptic end in the perspective of the Anthropocene (i.e., the Great Acceleration) could not be excluded. This conviction is motivated by the progressive defaunation in some regions3 and the appearance of alien species in others, related to changes in mineralogy and geochemistry4 leading to climate change during the Anthropocene. These scientific claims fed the vast imagination about climate change, giving rise to so-called cli-fi.
The Anthropocene is the new geological era which begins when human actions reach sufficient power to modify the geological processes and climatic cycles of the planet5. The Anthropocene by no means excludes the possibility of an apocalyptic horizon, namely in the perspective of the Great Acceleration. On the contrary, two scenarios do indeed seem to dispute the future of the Anthropocene. Stories of the end of the world are as old as the world itself, since the world is the origin of these stories. However, these stories of the apocalypse have evolved over time and, since the beginning of the 19th century, have been nourished particularly by science and its advances. These fictions have sometimes tried to pass themselves off as science; this is the current vogue called collapsology6. This end is more than likely cli-fi driven7 and may cause the extinction of many species, including Homo sapiens sapiens. In this vein, Anthropocene defaunation has become an ultimate reality8. More than one in eight birds, more than one in five mammals, more than one in four coniferous species, and one in three amphibians are threatened. The hypothesis of a hierarchy within the living is induced by the error of believing that evolution goes from the simplest to the most sophisticated, from the inevitably stupid inferior to the superior endowed with an intelligence giving prerogative to all powers. Evolution goes in all directions and pursues no goal except the extension of life on Earth. Evolution certainly does not lead from bacteria to humans, preferably male and white. Our species is only a carrier of the DNA that precedes us and will survive us. Until we show deep respect for the biosphere in particular, and our planet in general, we will not become much; we will remain a predator among other predators, the fiercest of predators, the almighty craftsman of the Anthropocene.
To be in the depths of our humanity, giving back to the biosphere what we have taken from it somehow seems obvious. To stop the sixth extinction of species, we must condemn our anthropocentrism and the anthropization of territories that goes with it. Other forms of life also need to keep their ecological niches. According to the first scenario, humanity at first withdraws from the limits of the planet and ultimately succumbs to them, with a dramatic loss of meaning. According to the second, from collapse to collapse, it is perhaps another humanity, having overcome its demons, that could emerge. Climate fiction is a literary sub-genre dealing with the theme of climate change, including global warming. The term appears to have been first used in 2008 by blogger and writer Dan Bloom. In October 2013, Angela Evancie, in a review of the novel Odds Against Tomorrow by Nathaniel Rich, wondered whether climate change had created a new literary genre.

Scientific basis of the apocalyptic scenario in the perspective of the Anthropocene

Global warming
All temperature indices are in favour of global warming (Fig. 1). According to the different scenarios of the IPCC9, global temperatures could increase by 2 °C to 5 °C by 2100, but some scientists warn of a possible runaway warming that could exceed 3 °C. The average surface temperature of the globe has already increased by more than 1.1 °C since the pre-industrial era. The rise in average surface temperatures is the first expected and observed consequence of massive greenhouse gas emissions. Meteorological surveys record positive temperature anomalies that are confirmed year after year compared with temperatures recorded since the middle of the 19th century. Climatologists point out that the past 30 years have seen the highest temperatures in the Northern Hemisphere for over 1,400 years.
Several climatic centres around the world record, synthesize and follow the evolution of temperatures on Earth. Since the beginning of the 20th century (1906-2005), the average surface temperature of the globe has increased by 0.74 °C, but this progression has not been continuous: since 1976, the increase has clearly accelerated, reaching 0.19 °C per decade according to model predictions. Despite the decline in solar activity, the period 1997-2006 was marked by an average positive anomaly of 0.53 °C in the northern hemisphere and 0.27 °C in the southern hemisphere, still compared with the 1961-1990 normal. The ten hottest years on record all occurred after 1997. Worse, 14 of the 15 hottest years are in the 21st century, which has barely started. Thus, 2016 is the hottest year on record, followed closely by 2015, 2014 and 2010. The temperature of tropical waters increased by 1.2 °C during the 20th century (compared with 0.5 °C on average for the oceans), causing coral reefs to bleach in 1997. In 1998, during a strong El Niño, the prolonged warming of the water destroyed half of the coral reefs of the Indian Ocean. In addition, the temperature in the tropics of the five ocean basins where cyclones form increased by 0.5 °C from 1970 to 2004, and powerful cyclones appeared in the North Atlantic in 2005, while they became more numerous in other parts of the world. Recently, numerous studies have focused on possible climate change scenarios and their potential worldwide repercussions, including hellish temperatures and apocalyptic extreme events10, 11, 12.

Melting of continental glaciers
As a direct result of global warming, the melting of continental glaciers has recently been observed13. There are approximately 198,000 mountain glaciers in the world, covering an area of approximately 726,000 km2. If they all melted, sea level would rise by about 40 cm. Since the late 1960s, global snow cover has declined by around 10 to 15%.
Winter cold spells in much of the northern half of the northern hemisphere are two weeks shorter than 100 years ago. Mountain glaciers have been declining all over the world by an average of 50 m per decade for 150 years. However, they are also subject to strong multi-temporal variations, which make forecasts on this point difficult according to some specialists. In the Alps, glaciers have been losing 1 metre per year for 30 years. Polar glaciers like those of Spitsbergen (about a hundred km from the North Pole) have been retreating since 1880, releasing large quantities of water. The Arctic has lost about 10% of its permanent ice cover every ten years since 1980. In this region, average temperatures have increased at twice the rate of the rest of the world in recent decades. The melting of Arctic sea ice has resulted in a loss of 15% of its surface area and 40% of its thickness since 1979. The record for melting Arctic sea ice was set in 2017. All models predict the disappearance of Arctic sea ice in summer within a few decades, which will not be without consequences for the climate in Europe. The summer melting of Arctic sea ice has accelerated far beyond climate model predictions. Added to its direct repercussions of coastal flooding, the melting of continental ice leads to radical climatic modifications in favour of the apocalyptic scenario.

Fig. 1. Evolution of the temperature anomaly from 1880 to 2020: the apocalyptic scenario.

Sea level rise
As a direct result of the melting of continental glaciers, sea level rise has been recorded worldwide14, 15. The average level of the oceans has risen by 22 cm since 1880 and by 2 cm since the year 2000, because of the melting of glaciers but also because of the thermal expansion of water. In the 20th century, sea level rose by around 2 mm per year. From 1990 to 2017, it reached a relatively constant rate of just over 3 mm per year.
Several sources contributed to the sea level increase, including thermal expansion of water (42%), melting of continental glaciers (21%), melting of Greenland glaciers (15%) and melting of Antarctic glaciers (8%). Since 2003, sea level has continued to rise rapidly (around 3.3 mm/year), but the contribution of thermal expansion has decreased (0.4 mm/year) while the melting of the polar caps and continental glaciers accelerates. Since most of the world's population lives in coastal regions, sea level rise represents a real threat to humanity, not excluding the apocalyptic scenario.

Multiplication of extreme phenomena and climatic anomalies
On a human scale, an average of 200 million people are affected by natural disasters each year, and approximately 70,000 perish in them. Indeed, as evidenced by the annual reviews of disasters and climatic anomalies, we are witnessing significant warning signs. It is worth noting that these observations depend on meteorological survey systems that exist only in a limited number of countries, with statistics that rarely go back beyond a century or a century and a half. In addition, scientists are struggling to reconstruct the climatic variations of the last two thousand years, which could serve as a reference in projections. Therefore, the exceptional nature of this information must be qualified a little; indeed, it is still difficult to know the return periods of climatic disasters in each region. But over the last century the climate system has gone wild: everything suggests that the climate is racing, and extreme events and disasters have become more frequent. For instance, fewer than 50 significant events were recorded per year over the period 1970-1985, while around 120 events per year have been recorded since 1995. Drought has long been one of the most worrying environmental issues.
But while African countries have been the main regions affected so far, the whole world is now facing increasingly frequent and prolonged droughts. Chile, India, Australia, the United States, France and even Russia are all regions of the world suffering from the acceleration of global drought. Droughts are slowly evolving natural hazards that can last from a few months to several decades and affect larger or smaller areas, whether small watersheds or areas of hundreds of thousands of square kilometres. In addition to their direct effects on water resources, agriculture and ecosystems, droughts can cause fires or heat waves. They also promote the proliferation of invasive species, creating environments with multiple risks, worsening the consequences for ecosystems and societies, and increasing their vulnerability. Although these are natural phenomena, there is a growing understanding of how humans have amplified the severity and impacts of droughts, both on the environment and on people. We influence meteorological droughts through our action on climate change, and we influence hydrological droughts through our management of water circulation and water processes at the local scale, for example by diverting rivers or modifying land use. During the Anthropocene (the present period, when humans exert a dominant influence on climate and environment), droughts are closely linked to human activities, cultures, and responses. From this scientific overview it may be concluded that the apocalyptic scenario is not only a literary genre inspired by pure imagination; instead, many scientific arguments are in favour of this dramatic destiny of Homo sapiens sapiens.

Fig. 2. Sea level rise from 1880 to 2020: a possible apocalyptic scenario (www.globalchange.gov, 2021).

Apocalyptic genre in recent writing
As the original landmark of apocalyptic writing, we must place the destruction of the Temple of Jerusalem in 587 BC and the Exile in Babylon.
The occasion of a religious and cultural crossing with imprescriptible effects, the Exile brought about a true rebirth, characterized by the maintenance of the ethical and cultural essentials of a national religion, that of Moses, kept as pure as possible on foreign soil, and by the reinterpretation of this fundamental heritage through the archaic return of what was very old, both national traditions and neighbouring cultures. More precisely, it was the place and time for the rehabilitation of cultures and the melting pot for recasting ancient myths. This vast infatuation with Antiquity, remarkable even in the vocabulary used, was not limited to Israel: it largely reflected a general trend. The long period that preceded it, throughout the 7th century BC and until 587, like the period prior to the edict of Cyrus in 538 BC, was one of restorations and rebirths, of returns to distant sources and cultural crossings. In the biblical literature of this period, one is struck by the almost systematic link between, on the one hand, a very sustained mythical reinvestment, even in form, and, on the other, the frequent use of biblical archaisms. The example of Shadday, a word firmly rooted among the Semites of the Northwest and an epithet of El in the oldest layers of the books of Genesis and Exodus, is most eloquent: this term reappears precisely at the time of the Exile as a designation of the divinity of the Patriarchs and of the God of Israel. Daily, ecological catastrophes now describe the normal state of societies exposed to "risks", in the sense that Ulrich Beck gives to this term: "the risk society is a society of catastrophe. The state of emergency threatens to become a normal state there"1. Now the "threat" has become clearer, and catastrophic "exceptions" are proliferating as quickly as species are disappearing and climate change is accelerating.
Our relationship with this worrying reality, to say the least, is twofold: on the one hand, we know very well what is happening to us; on the other, we fail to draw the appropriate theoretical and political consequences. This ecological duplicity is at the heart of what has come to be called the "Anthropocene", a term coined at the dawn of the 21st century by Eugene Stoermer (an environmentalist) and Paul Crutzen (a specialist in the chemistry of the atmosphere) to describe an age when humanity has become a "major geological force" capable of disrupting the climate and changing the terrestrial landscape from top to bottom. If the term "Anthropocene" takes note of human responsibility for climate change, this responsibility is immediately attributed to overpowering: strong as we are, we have "involuntarily" changed the climate for at least two hundred and fifty years. Therefore, let us deliberately change the face of the Earth and, if necessary, install a solar shield in space. Recognition and denial fuel the signifying machine of the Anthropocene, and it is precisely what structures eco-apocalyptic cinema that this article aims to study. By "eco-apocalyptic cinema" we first mean a cinematographic sub-genre: eco-apocalyptic and post-eco-apocalyptic films base the possibility (or reality) of the end of the world on environmental grounds and not, for example, on the damage caused by a possible collision of planet Earth with a comet. Post-apocalyptic science fiction (sometimes abbreviated as "post-apo" or "post-nuke") is a sub-genre of science fiction that depicts life after a disaster that has destroyed civilization: nuclear war, collision with a meteorite, epidemic, economic or energy crisis, pandemic, or alien invasion.

Conclusion
Climate and politics have been linked together since Aristotle. With Montesquieu, Ibn Khaldûn or Watsuji, a certain climatic determinism is attributed to the character of a nation.
The break with modernity made the climate an object of scientific knowledge which, in the twentieth century, made it possible to document, despite the controversies, the climatic changes linked to industrialization. Both endanger the survival of human beings and ecosystems. Climate ethics is therefore looking for a new relationship with the biosphere, or Gaia. For some, given the absence of political agreements, it is the beginning of inevitable catastrophes. For others, the Anthropocene, which henceforth merges human history with natural history, opens onto technical action. The debate between climate determinism and human freedom is revived. The reference to the biblical Apocalypse was present in the thinking of thinkers like Günther Anders, Karl Jaspers or Hans Jonas: the era of the atomic bomb would mark an entry into the time of the end, a time marked by the unprecedented human possibility of total war and the annihilation of mankind. The Apocalypse will be very relevant in describing the chaos to come if our societies continue their mad race, described as extractivist, productivist and consumerist. In dialogue with different theologians and philosophers (such as Jacques Ellul), it is possible to unveil some spiritual, ethical, and political resources that the Apocalypse offers for thinking about history and human engagement in the Anthropocene. What can a theology of collapse mean at a time when negative signs and dead ends in the human situation multiply? What, then, is the place of man and of the cosmos in the Apocalypse according to Saint John? Could the end of history be a collapse? How can we live in the time we have left before the disaster? Answers to such questions remain unknown, and no scientist can predict the trajectory of this Great Acceleration taking place in the Late Anthropocene. When science cannot give answers, Man tries to infer his destiny from legend, religion and fiction.
Climate fiction has developed into a recording machine containing every kind of fiction that depicts environmental events, and has consequently lost its true significance. Aware of the prospect of ecological collapse, as well as our apparent inability to avert it, we face geological changes of drastic proportions that severely challenge our ability to imagine the consequences. Climate fiction ought to be considered an important supplement to climate science, because climate fiction makes visible and conceivable future modes of existence inside worlds not only deemed likely by science, but that are scientifically anticipated. Hence this paper aims to contribute to studies of ecocriticism, the environmental humanities, and literary and cultural studies.

References
David P. G. Bond and Stephen E. Grasby. "Late Ordovician mass extinction caused by volcanism, warming, and anoxia, not cooling and glaciation: REPLY." Geology 48, no. 8 (Geological Society of America 2020): 510.
Cyril Langlois. "Vestiges de l'apocalypse: le site de Tanis, Dakota du Nord 2019." Accessed June 6, 2021. https://planet-terre.ens-lyon.fr/pdf/Tanis-extinction-K-Pg.pdf
Najoua Gharsalli, Elhoucine Essefi, Rana Baydoun, and Chokri Yaich. "The Anthropocene and Great Acceleration as controversial epoch of human-induced activities: case study of the Halk El Menjel wetland, eastern Tunisia." Applied Ecology and Environmental Research 18(3) (Corvinus University of Budapest 2020): 4137-4166.
Elhoucine Essefi. "On the Geochemistry and Mineralogy of the Anthropocene." International Journal of Water and Wastewater Treatment 6(2) (Sci Forschen 2020): 1-14. doi.org/10.16966/2381-5299.168
Elhoucine Essefi. "Record of the Anthropocene-Great Acceleration along a core from the coast of Sfax, southeastern Tunisia." Turkish Journal of Earth Sciences (TÜBİTAK 2021): 1-16.
Chiara Xausa. "Climate Fiction and the Crisis of Imagination: Alexis Wright's Carpentaria and The Swan Book." Exchanges: The Interdisciplinary Research Journal 8(2) (Warwick 2021): 99-119.
Özlem Akyol. "Climate Change: An Apocalypse for Urban Space? An Ecocritical Reading of "Venice Drowned" and "The Tamarisk Hunter"." Folklor/Edebiyat 26, no. 101 (Uluslararası Kıbrıs Üniversitesi 2020): 115-126.
Suzanne F. Boswell. "The Four Tourists of the Apocalypse: Figures of the Anthropocene in Caribbean Climate Fiction." Paradoxa 31 (Academia 2020): 359-378.
Houssam Ayt Ougougdal, Mohamed Yacoubi Khebiza, Mohammed Messouli, and Asia Lachir. "Assessment of future water demand and supply under IPCC climate change and socio-economic scenarios, using a combination of models in Ourika Watershed, High Atlas, Morocco." Water 12, no. 6 (MDPI 2020): 1751. doi:10.3390/w12061751
Jia Wu, Zhenyu Han, Ying Xu, Botao Zhou, and Xuejie Gao. "Changes in extreme climate events in China under 1.5 °C-4 °C global warming targets: Projections using an ensemble of regional climate model simulations." Journal of Geophysical Research: Atmospheres 125, no. 2 (Wiley 2020): e2019JD031057. https://doi.org/10.1029/2019JD031057
Md Jamal Uddin Khan, A. K. M. Islam, Sujit Kumar Bala, and G. M. Islam. "Changes in climate extremes over Bangladesh at 1.5 °C, 2 °C, and 4 °C of global warming with high-resolution regional climate modeling." Theoretical & Applied Climatology 140 (EBSCO 2020).
Masilin Gudoshava, Herbert O. Misiani, Zewdu T. Segele, Suman Jain, Jully O. Ouma, George Otieno, Richard Anyah et al. "Projected effects of 1.5 °C and 2 °C global warming levels on the intra-seasonal rainfall characteristics over the Greater Horn of Africa." Environmental Research Letters 15, no. 3 (IOPscience 2020): 34-37.
Lawrence K. Wang, Mu-Hao Sung Wang, Nai-Yi Wang, and Josephine O. Wong. "Effect of Global Warming and Climate Change on Glaciers and Salmons." In Integrated Natural Resources Management, ed. Lawrence K. Wang, Mu-Hao Sung Wang, Yung-Tse Hung, and Nazih K. Shammas (Springer 2021), 1-36.
Simon Merschroth, Alessio Miatto, Steffi Weyand, Hiroki Tanikawa, and Liselotte Schebek. "Lost Material Stock in Buildings due to Sea Level Rise from Global Warming: The Case of Fiji Islands." Sustainability 12, no. 3 (MDPI 2020): 834. doi:10.3390/su12030834
Stefan Hofer, Charlotte Lang, Charles Amory, Christoph Kittel, Alison Delhasse, Andrew Tedstone, and Xavier Fettweis. "Greater Greenland Ice Sheet contribution to global sea level rise in CMIP6." Nature Communications 11, no. 1 (Nature Publishing Group 2020): 1-11.
24

Kano, Takeshi, Takeru Kanno, Taishi Mikami and Akio Ishiguro. "Active-sensing-based decentralized control of autonomous mobile agents for quick and smooth collision avoidance". Frontiers in Robotics and AI 9 (11 November 2022). http://dx.doi.org/10.3389/frobt.2022.992716.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
There is an increasing demand for multi-agent systems in which each mobile agent, such as a robot in a warehouse or a flying drone, moves toward its destination while avoiding other agents. Although several control schemes for collision avoidance have been proposed, they cannot achieve quick and safe movement with minimal acceleration and deceleration. To address this, we developed a decentralized control scheme that involves modifying the social force model, a model of pedestrian dynamics, and successfully realized quick, smooth, and safe movement. However, each agent had to observe many nearby agents and predict their future motion; that is, unnecessary sensing and calculations were required for each agent. In this study, we addressed this issue by introducing active sensing. In this control scheme, an index referred to as the “collision risk level” is defined, and the observation range of each agent is actively controlled on this basis. Through simulations, we demonstrated that the proposed control scheme works reasonably while reducing unnecessary sensing and calculations.
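The "collision risk level" idea in this abstract lends itself to a compact sketch: count how many observed neighbours are predicted to come dangerously close within a short horizon, then widen or shrink the observation range accordingly. Everything below (function names, thresholds, the growth/decay factors) is an illustrative assumption, not the authors' control law.

```python
import math

def time_to_closest_approach(p, v, q, u):
    """Time (>= 0) at which two constant-velocity 2D agents are closest."""
    rx, ry = q[0] - p[0], q[1] - p[1]          # relative position
    wx, wy = u[0] - v[0], u[1] - v[1]          # relative velocity
    denom = wx * wx + wy * wy
    if denom == 0.0:
        return 0.0                             # no relative motion
    return max(0.0, -(rx * wx + ry * wy) / denom)

def collision_risk_level(p, v, neighbours, horizon=5.0, safe_dist=1.0):
    """Count neighbours predicted to come within safe_dist inside the horizon."""
    risk = 0
    for q, u in neighbours:
        t = min(time_to_closest_approach(p, v, q, u), horizon)
        dx = (q[0] + u[0] * t) - (p[0] + v[0] * t)
        dy = (q[1] + u[1] * t) - (p[1] + v[1] * t)
        if math.hypot(dx, dy) < safe_dist:
            risk += 1
    return risk

def adapt_sensing_radius(radius, risk, r_min=2.0, r_max=10.0):
    """Widen the observation range when risk is present, shrink it otherwise."""
    radius = radius * 1.2 if risk > 0 else radius * 0.9
    return min(r_max, max(r_min, radius))
```

An agent on a collision course raises the risk level and hence the sensing radius; a quiet neighbourhood lets the radius, and with it the sensing and computation cost, decay toward `r_min`.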
25

Liang, Ziwei, and Mingbao Pang. "A variable time headway model for mixed car-following process considering multiple front vehicles information in foggy weather". Transportation Safety and Environment, 14 March 2024. http://dx.doi.org/10.1093/tse/tdae011.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
The aim of this work is to investigate a fog-related variable time headway (FVTH) model of connected and automated mixed traffic flow that considers information from multiple front vehicles in a reduced-visibility environment for driving safety. The car-following modes of mixed vehicles are analysed and the vehicle ratio for each mode is derived from Markov chain models, taking into account the number of front vehicles, the CAV penetration rate, and platoon intensity. The coupled effect of visibility and driving speed on time headway is explored, a variable time headway strategy is proposed, and the resulting relationship equations constitute the FVTH model. The perturbation method is used to analyse the stability of traffic flow and derive its stability conditions. The proposed model was validated and its effects were assessed via simulation experiments. The results indicate that the acceleration and deceleration times of vehicles and the collision possibility decrease significantly with the proposed method. When the penetration rate is 50%, the number of front vehicles is three, and platoon intensity is zero, the time to return to a stable state is reduced by 18.9%-30.3% and 24.7%-39.4%, respectively, and Time Exposed Time-to-collision (TET) is reduced by 26.1%-48.9% and 43.7%-65.4%, respectively, compared with the basic IDM method and an IDM method that considers multi-front-vehicle information. As visibility decreases, the reduction in these indicators grows, enhancing driving efficiency and safety.
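A minimal, hedged sketch of what a visibility- and speed-dependent time headway might look like. The functional form and all constants below are invented for illustration; the paper derives and calibrates its own FVTH relationship.

```python
def variable_time_headway(speed, visibility, t_min=1.0, t_max=3.0, v_ref=200.0):
    """Illustrative fog-adjusted time headway in seconds.

    speed in m/s, visibility in m. Assumed behaviour: headway sits at t_min in
    clear weather and approaches t_max (plus a speed margin) in dense fog.
    """
    vis_factor = min(1.0, visibility / v_ref)   # 1.0 clear, 0.0 in dense fog
    headway = t_max - (t_max - t_min) * vis_factor
    # extra margin at higher speeds when visibility is poor
    headway += 0.01 * speed * (1.0 - vis_factor)
    return headway
```

The key property, shared with the abstract's strategy, is that headway grows monotonically as visibility drops, leaving more reaction time in fog.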
26

Lin, Zenan, Ming Yue, Guangyi Chen and Jianzhong Sun. "Path planning of mobile robot with PSO-based APF and fuzzy-based DWA subject to moving obstacles". Transactions of the Institute of Measurement and Control, 4 July 2021, 014233122110247. http://dx.doi.org/10.1177/01423312211024798.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
This paper proposes a two-layer path-planning method, in which an optimized artificial potential field (APF) method and an improved dynamic window approach (DWA) are used at the global and local layers, respectively. This method enables the robot to plan a better path in a multi-obstacle environment while avoiding moving obstacles effectively. For global path planning, a new repulsive field is proposed based on the APF method. The length and smoothness of the global path are taken as fitness functions of particle swarm optimization (PSO) to obtain the obstacle influence range and the attraction and repulsion coefficients of the APF. At the local path-planning level, building on DWA, a fuzzy control scheme is adopted to evaluate the danger level of moving obstacles via a collision risk index and relative distance. In brief, compared with existing methods, the robot can plan a shorter and smoother path with the aid of PSO-based APF, while quickly reacting to and avoiding moving obstacles via fuzzy-based DWA. Finally, a static multi-obstacle environment and two dynamic scenarios with moving obstacles are simulated to verify the effectiveness of the proposed path-planning method.
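As a rough sketch of the APF layer the abstract builds on: a linear attraction toward the goal plus the standard repulsive-gradient term inside each obstacle's influence range. The gains `k_att`, `k_rep` and range `rho0` are placeholders standing in for the quantities the paper tunes with PSO.

```python
import math

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, rho0=2.0):
    """Net APF force on a 2D robot: attraction to goal, repulsion from
    obstacles closer than the influence range rho0."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        rho = math.hypot(dx, dy)
        if 0.0 < rho < rho0:
            # classic repulsive gradient magnitude: k_rep*(1/rho - 1/rho0)/rho^2
            mag = k_rep * (1.0 / rho - 1.0 / rho0) / (rho * rho)
            fx += mag * dx / rho
            fy += mag * dy / rho
    return fx, fy
```

In the paper's scheme, PSO would search over these parameters with path length and smoothness as the fitness terms mentioned in the abstract.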
27

Vassalos, Dracos, Francesco Mauro, Donald Paterson and Ahmed Salem. "The importance of first-principles tools for safety enhancement in the design of passenger ships in the case of flooding events". International Marine Design Conference, 26 May 2024. http://dx.doi.org/10.59490/imdc.2024.909.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
The design of a passenger ship is a complex process covering multiple aspects of naval architecture and marine engineering to address performance, functionality, safety, and cost as primary objectives. Among these, safety is a key element focusing on the people on board. In this sense, ship safety in the case of flooding events needs proper estimation from the first stages of the design process, employing an appropriate metric. To this end, safety can be evaluated as a risk by calculating the Potential Loss of Life. Thanks to a multi-level framework developed during project FLARE, it is possible to calculate the risk associated with an accident, increasing the level of reliability as the design process advances. The framework aims at employing first-principles tools from the early stages of the design process, abandoning static calculations and empirical formulae as soon as data is available to set up advanced calculation techniques. The framework then adopts rigid-body time-domain calculations for the flooding simulations, advanced evacuation analysis tools, and direct crash simulation to evaluate collision damages. The process allows for testing alternative design solutions for the ship to enhance safety. Investigating risk control options is also possible, considering active or passive systems such as fixed foam installations, deployable barriers, or crashworthiness. Such an approach allows for evaluating safer solutions while respecting other design constraints and cost-related aspects. The present work describes the risk assessment framework for the case of flooding events, together with the different levels of accuracy that can be achieved, showing the improvements that could be reached by employing alternative risk control options.
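The risk metric named here, Potential Loss of Life (PLL), reduces to a frequency-weighted sum over damage scenarios. The scenario table below is invented purely to show the arithmetic; in the FLARE framework these inputs come from flooding and evacuation simulations.

```python
def potential_loss_of_life(scenarios):
    """PLL as the sum over scenarios of annual frequency x expected fatalities.

    scenarios: iterable of (annual_frequency, expected_fatalities) pairs.
    """
    return sum(freq * fatalities for freq, fatalities in scenarios)

# Hypothetical scenario table, for illustration only:
scenarios = [
    (1e-4, 0.0),    # minor flooding, everyone evacuated
    (1e-5, 50.0),   # major flooding, partial evacuation
    (1e-6, 500.0),  # catastrophic loss
]
pll = potential_loss_of_life(scenarios)  # expected fatalities per ship-year
```

Comparing the PLL of alternative designs (say, with and without a deployable barrier) is what makes the metric usable as the risk-based design criterion the abstract describes.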
28

Wu, Taotao, Fusako Sato, Jacobo Antona-Makoshi, Lee F. Gabler, J. Sebastian Giudice, Ahmed Alshareef, Masayuki Yaguchi, Mitsutoshi Masuda, Susan S. Margulies and Matthew B. Panzer. "Integrating Human and Nonhuman Primate Data to Estimate Human Tolerances for Traumatic Brain Injury". Journal of Biomechanical Engineering 144, no. 7 (15 February 2022). http://dx.doi.org/10.1115/1.4053209.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
Traumatic brain injury (TBI) contributes to a significant portion of the injuries resulting from motor vehicle crashes, falls, and sports collisions. The development of advanced countermeasures to mitigate these injuries requires a complete understanding of the tolerance of the human brain to injury. In this study, we developed a new method to establish human injury tolerance levels using an integrated database of reconstructed football impacts, subinjurious human volunteer data, and nonhuman primate data. The human tolerance levels were analyzed using tissue-level metrics determined using harmonized species-specific finite element (FE) brain models. Kinematics-based metrics involving complete characterization of angular motion (e.g., diffuse axonal multi-axial general evaluation (DAMAGE)) showed better power of predicting tissue-level deformation in a variety of impact conditions and were subsequently used to characterize injury tolerance. The proposed human brain tolerances for mild and severe TBI were estimated and presented in the form of injury risk curves based on selected tissue-level and kinematics-based injury metrics. The application of the estimated injury tolerances was finally demonstrated using real-world automotive crash data.
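Injury risk curves of the kind reported in this study are typically logistic functions of an injury metric (tissue-level, or kinematics-based such as DAMAGE). The coefficients below are made up to illustrate the shape; the fitted tolerance values must come from the paper itself.

```python
import math

def injury_risk(metric, intercept=-5.0, slope=10.0):
    """Probability of injury for a given metric value under a logistic
    risk-curve model. intercept and slope are placeholder coefficients."""
    return 1.0 / (1.0 + math.exp(-(intercept + slope * metric)))
```

The metric value at 50% risk (here, where `intercept + slope * metric = 0`) is one common way such curves are summarized as a tolerance level.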
29

Coles, Daniel, Athanasios Angeloudis, Deborah Greaves, Gordon Hastie, Matthew Lewis, Lucas Mackie, James McNaughton et al. "A review of the UK and British Channel Islands practical tidal stream energy resource". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 477, no. 2255 (November 2021). http://dx.doi.org/10.1098/rspa.2021.0469.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
This review provides a critical, multi-faceted assessment of the practical contribution tidal stream energy can make to the UK and British Channel Islands future energy mix. Evidence is presented that broadly supports the latest national-scale practical resource estimate, of 34 TWh/year, equivalent to 11% of the UK’s current annual electricity demand. The size of the practical resource depends in part on the economic competitiveness of projects. In the UK, 124 MW of prospective tidal stream capacity is currently eligible to bid for subsidy support (MeyGen 1C, 80 MW; PTEC, 30 MW; and Morlais, 14 MW). It is estimated that the installation of this 124 MW would serve to drive down the levelized cost of energy (LCoE), through learning, from its current level of around £240/MWh to below £150/MWh, based on a mid-range technology learning rate of 17%. Doing so would make tidal stream cost competitive with technologies such as combined cycle gas turbines, biomass and anaerobic digestion. Installing this 124 MW by 2031 would put tidal stream on a trajectory to install the estimated 11.5 GW needed to generate 34 TWh/year by 2050. The cyclic, predictable nature of tidal stream power shows potential to provide additional, whole-system cost benefits. These include reductions in balancing expenditure that are not considered in conventional LCoE estimates. The practical resource is also dependent on environmental constraints. To date, no collisions between animals and turbines have been detected, and only small changes in habitat have been measured. The impacts of large arrays on stratification and predator–prey interaction are projected to be an order of magnitude less than those from climate change, highlighting opportunities for risk retirement. Ongoing field measurements will be important as arrays scale up, given the uncertainty in some environmental and ecological impact models.
Based on the findings presented in this review, we recommend that an updated national-scale practical resource study is undertaken that implements high-fidelity, site-specific modelling, with improved model validation from the wide range of field measurements that are now available from the major sites. Quantifying the sensitivity of the practical resource to constraints will be important to establish opportunities for constraint retirement. Quantification of whole-system benefits is necessary to fully understand the value of tidal stream in the energy system.
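The learning-rate claim above can be checked with standard experience-curve arithmetic: each doubling of cumulative installed capacity multiplies LCoE by (1 − learning rate). The £240/MWh starting cost, 124 MW pipeline, and 17% learning rate are from the abstract; the assumed ~22 MW of already-installed capacity is a placeholder needed to close the calculation.

```python
import math

def lcoe_after_learning(lcoe0, c0_mw, c_mw, learning_rate):
    """Experience-curve LCoE: cost falls by `learning_rate` per doubling of
    cumulative capacity, i.e. lcoe0 * (1 - LR) ** log2(c / c0)."""
    doublings = math.log2(c_mw / c0_mw)
    return lcoe0 * (1.0 - learning_rate) ** doublings

# Assumed starting capacity of 22 MW (hypothetical), plus the 124 MW pipeline:
lcoe = lcoe_after_learning(240.0, 22.0, 22.0 + 124.0, 0.17)
```

Under these assumptions the result lands below the £150/MWh threshold the review cites, consistent with its cost-competitiveness argument.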
30

Moll, Ellen. "What’s in a Nym? Gender, Race, Pseudonymity, and the Imagining of the Online Persona". M/C Journal 17, no. 3 (11 June 2014). http://dx.doi.org/10.5204/mcj.816.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
The Internet has long been a venue for individuals to craft their online personas on their own terms, and many have embraced the opportunity to take on a persona that is not associated with a legally recognised name. The rise of social networking has continued to spur proliferation of online personas, but often in ways that intensify corporate mediation of these personas. Debates about online pseudonymity exemplify these tensions, especially when social media corporations attempt to implement “real name policies” that require users to use one, legally recognised name in their online interactions. These debates, however, have broader stakes: they are negotiations over who has the right to control the individual presentation of self, and thus part of a larger conversation about information control and the future of Internet culture. While there was some coverage of these debates in traditional news media, blogs were a key site for examining how real name policies affect oppressed or marginalised groups. To explore these issues, this essay analyses the rhetoric of feminist and anti-racist blog posts that argue for protecting online pseudonymity practices. In general, these sites construct pseudonymity as a technology of resistance and as a vital tool in ensuring that the Internet remains (or becomes) a democratising force. The essay will first provide an overview of the issue and of blog posts about real name policies and gender and/or race, which were selected by the depth and interest of their commentary, and found by search engine or Twitter hashtag using search terms such as “pseudonymity” and “real name policy.” The essay will then explore how these blog posts theorise how real name policies contribute to the broader move toward a surveillance society. 
Through these arguments, these bloggers reveal that various online communities have vastly different ways of understanding what it means to construct an online persona, and that these varied understandings in turn shape how communities inscribe value (or danger) in pseudonymous Internet practices. Feminist and Anti-Racist Blogger Responses to Real Name Policies While online pseudonymity has long been hotly debated, the conversation intensified following moves by Google-plus to implement “real name policies” in July 2011. Officially these real name policies were intended to improve the experience of users by making it easy to be found online and ensuring that online conversations remained civil. Critics of real name policies often object to the term “real name” and its implication that a pseudonym is a “fake” name. Moreover, proponents of pseudonymity tend to distinguish between pseudonymity and anonymity; a pseudonym is a public persona with relationships, a reputation to uphold, and often years of use. A pseudonym is thus not a way of escaping the responsibilities of having one’s online actions associated with one’s public persona—it is quite the opposite. Nevertheless, defenders of pseudonymity generally argue that both pseudonymity and anonymity must be permitted. Supporters argue that real name policies will enhance the experience of users, and particularly that they will help stop the widespread incivility of many internet comments, on the presumption that using one’s real name will ensure accountability for one’s behavior online. On the other side, many bloggers have argued that the use of real names will not solve these problems and will instead be a threat to the safety and privacy of users, as well as stymieing debate about important or controversial issues. 
Moreover, many of these bloggers theorise about gender, politics, technology, and identity in ways that resonate well with broader feminist and critical race theory, as well as current conversations about technology and surveillance society. Feminist and other defenses of pseudonymity have used a variety of tactics. One has been to portray pseudonymity as a standard part of Internet culture, and legal names or “wallet names” as an arbitrary way of governing production of public personas. Underlying this framing of pseudonymity as a fundamental part of Internet culture is a long tradition of defining the Internet as a free, open, and democratic space. Internet enthusiasts have long described and prescribed an Internet in which anyone is free to explore and exchange ideas without the ordinary limits imposed by the flesh world, arguing that the Internet encourages more open debate, decentralises networks of knowledge, allows users to try on new identities, and challenges the rigidity of categories and hierarchies that shape knowledge and conversations in the non-virtual world (Rheingold, Plant). Traditionally, pseudonymity and anonymity have been key ways for users to pursue these ends. Thus, the ability to create one or more online personas has, in this conversation, a direct relationship to questions of democracy and about whose practices count as legitimate or valuable in the online world. Additionally, many feminist bloggers frame real name policies as an attempt at corporate control; these policies thus are symbolic to some bloggers of the shift from what they imagine was once a free and open Internet to a corporate-controlled, highly commercialised realm. s.e. smith, for example, writes that “This is what the nymwars are about; a collision between capitalism and the rest of us, where identities are bargaining chips and tools,” with “nym” being the term for the name and persona that one employs online (“The Google+ Nymwars”). 
Pseudonymity is thus understood by these bloggers as a necessary practice in a democratic Internet, in which one has the right to define one’s own persona online, rather than allowing one’s persona to be defined by a corporation. This framing of pseudonymity as a normalised and valuable part of Internet communication also seems to be an attempt to pre-empt the question of why someone needs a pseudonym if they are not doing anything wrong, but many of the arguments in favor of pseudonyms in fact address this question directly by producing long lists, such as those at geekfeminism and techdirt. In particular, feminist and anti-racist arguments for protecting pseudonymity emphasise that this practice is especially important for women and other marginalised groups, especially since using a real name may expose them to harassment, discrimination, or social consequences. Women who discuss feminism, for example, are sometimes subject to death and rape threats (Hess; Sarkeesian; s.e. smith “On Blogging, Threats, and Silence”). While many feminist bloggers choose to use their real names anyway, most still suggest that pseudonymity must remain a choice anywhere where one seeks to have conversations about issues of import. Moreover, these arguments are a reversal of the claim that real name requirements will stop harassment—while real name policies are purportedly instated to protect the safety of online conversations, many bloggers, pseudonymous and otherwise, suggest that real name policies make women and minorities of all kinds less safe, both online and off-line, and have other negative effects on these groups as well. For instance, Elusis writes that: For minorities, often their name and reputation doesn’t just affect them, it affects their family, and it affects other members of their minority group. Stories of not just outing but of harassment, abuse, and death threats that escalated to the point of being taken seriously by law enforcement (which takes rather a lot). 
[…] Men who get in arguments with other people online don’t get threatened with rape on a regular basis. Unsurprisingly, trans people get abused in this way too. People of color get driven from online spaces** for daring to speak out. (Hyperlinks in original) Likewise, Sarah Stokely writes: As a woman who’s written about feminism online and received anonymous hatemail and death threats for doing so, I would like to preserve my right to post under a pseudonym to keep myself safe in the real world and if I choose, so I’m not identified as a woman online in places where it might not be safe to do so. […] I don’t believe that getting rid of anonymity online will stop bad behaviour like the abuse and death threats I’ve received. I do think that getting rid of anonymity and pseudonymity online will make it easier for people like myself to become targets of abuse and potentially put us in danger. Note that these comments suggest that simply being a woman or member of any kind of minority may make one a target of harassment. Also notice that these comments tend to frame real name policies as an expression of the privileged—real name policies only appear innocuous because of the assumption that the experiences of financially privileged English-speaking white men are universal, and that knowledge of the experiences of marginalised groups is not necessary to design safe and effective policies for consumers of technology. According to feminist blogger critiques of real name policies, it is this privilege that assumes that those using pseudonyms are the “Others” that decent people must be protected from, instead of examining the possibility that those using the pseudonyms might be the ones in danger. 
A quotation from Geek Feminism, a site whose lengthy discussions of pseudonymity are often cited by bloggers, further illustrates the centrality of privilege to this debate: the writer notes that a proponents of real name policies has dismissed critique by saying, “Don’t say anything in a comment thread that you wouldn’t say in person,” and Geek Feminism responds, “but that sounds like the voice of someone who’s never received abuse or harassment in person” (“Hacker News and Pseudonymity”). The many bloggers who critique the privilege they find responsible for real name policies suggest that beneath conflicts over pseudonymity and accountability online is not the question of how the online world relates to the flesh world, but instead a fundamental disagreement about the nature of accountability and free expression in the flesh world. In this light, attempts to make the online world mimic the accountabilities and social norms of the offline world operate under the assumption that oppression and abuse are not the norm in the flesh world, and that it is Internet technology and Internet culture that has made conversations uncivil or unsafe, and that these should be converted to be more like the flesh world. In this set of assumptions, the flesh world is characterised by respectful and safe interactions, categories of identity are natural as opposed to something that society imposes on individuals, and the existing ways of holding people accountable for their words and actions is very effective at protecting people. Clearly, however, it takes a degree of privilege to characterise the flesh world this way. Thus, the pseudonymity debate is largely about deeper-seated questions on the nature of identity and power in online and offline settings, while appearing to be about the differences between the real world and the online world. 
Other bloggers have also countered the assumption that real name policies make the Internet safer, often by pointing out that sites that have mandated the use of real names still see a great deal of harassment. s.e. smith, for instance, argues, “If Google really cares about safety, it needs strong, effective, and enforceable site policies. It needs to create a culture of safety, because, well, if your website’s full of assholes, it’s your fault. Real names policies don’t work. Good site policies and the cultivation of a culture of mutual respect do” (“The Google+ Nymwars,” hyperlinks in original). Pseudonyms allow users to participate in important debates online while maintaining a public persona that allows for continued conversations and interactions, which is vital for sustained activism. In this light, policies that take away users’ abilities to control or shape their online personas may force users to choose silence for their own safety. Individual control over online personas is thus both a safety issue and a free speech issue; in direct contradiction to claims that real name policies make users safer and more able to participate in civil discussions. Other pro-pseudonymity bloggers also celebrate the way that a “robust culture of pseudonymity” focuses discussion on ideas rather than the privilege of the speaker, “which, I often think, is why authoritarians and those with authoritarian tendencies hate it” (Paolucci). boyd notes that: the issue of reputation must be turned on its head when thinking about marginalised people. Folks point to the issue of people using pseudonyms to obscure their identity and, in theory, ‘protect’ their reputation. The assumption baked into this is that the observer is qualified to actually assess someone’s reputation. All too often, and especially with marginalised people, the observer takes someone out of context and judges them inappropriately. 
boyd is one of many bloggers who note that if one’s name is coded as white, Anglo, and male, using one’s real name may often enhance one’s credibility and authority, but if one’s name is coded otherwise, a pseudonym may be helpful; again, assuming that the white male experience is universal allows one to assume that using a real name is a harmless request. In general, these bloggers’ tactics all serve to denaturalise the assumption that a real name is the normal, desirable, and traditional mode of presenting one’s persona, and highlight the ways that real name policies claim to reflect universal concerns but primarily reflect wealthy white men’s experiences with online personas. Information, Power, and Control over Online Personas Additionally, defenders of pseudonymity associate real name policies with the move to a surveillance society, with particular emphasis on corporate surveillance of consumer behavior, also known as the “personal information economy.” Many feminist blogger discussions of pseudonymity note that while real name policies are purportedly intended for safety and protection, they actually allow corporations to amass huge swaths of data about individuals and to keep nearly all the online activities of one person attached to their name. For example blogger much_a_luck writes that: This is exactly the source of trying to pin down who users ‘really’ are. The advertising economy is super-creepy to me, everybody trying to make money by telling people about something someone else is doing, as efficiently as possible. Maybe I'm naive, but I feel like the internet's advertising-driven economy, with it’s [sic] ability to track and target activity, has just blown this whole sector completely out of control. 
(Paolucci) And indeed the practice of gathering and storing as much information as possible, simply on the chance that an institution might one day use this information, is becoming a more common fear, whether with regard to corporate data mining or recent news stories about privacy and government surveillance. In the larger conversation about surveillance, in fact, it is often the case that while one side argues that information gathering makes everyone safer, an opposition will claim that such measures actually make people vulnerable to abuses of this information. Blogger Space_dinosaur_blue has called real name policies a “security placebo” that claims to stop harassment while actually doing nothing but invading privacy (comment to Paolucci). s.e. smith has argued: What this is really about, of course, is capitalism. […] For the owners of […] sites like Google+ and Facebook, there’s also a big potential to make a profit through the direct commodification of user identities. […] The standards that Google+ sets revolve around the purchase, sale, and exchange of identity, a multibillion dollar industry worldwide. This is what people should be talking about. (“The Google+ Nymwars”) Clearly, the pseudonymity debate resonates in many ways with broader discussions of surveillance, corporate and otherwise. First, scholars have often noted that surveillance practices tend to be more harmful to those in marginalised or oppressed groups, and feminist arguments for pseudonymity reinforce this finding. Additionally, many defenders of pseudonymity point out the dissembling found in companies’ claims that real name policies are there to protect the safety of users and create a civil and decent space for people to interact while actually using the data for marketing research purposes. Framing pseudonymity as anti-social, uncivil, and dangerous, assumes a criminality so to speak, or at the very least, an illegitimacy, on the part of pseudonym users.
The rhetorical move here is worth noting: implicitly suggesting that a real name is an inherent part of civility and safety is also suggesting that you have an ethical obligation to those who would compile information about you. In other words, the rules of civility demand that you participate in the corporatisation and commodification of your identity and personal information. Shaping an online persona—or multiple personas—is not an act of creativity or political resistance or freedom; it is assumed to be an act of aggression toward others. We see here a new form of the “good citizenship” argument that characterises the surveillance society. In debates about national security, for instance, acceptance of extensive surveillance of all citizens is framed as a contribution to national security. Here, however, it is not national security but corporate interests that have been inserted as the epitome of the “common good.” In this framework, an anti-corporate approach to personal information appears to be anti-social and even unethical. Commodification of identity is not only the norm but also an obligation of citizenship. Furthermore, as scholars of surveillance have noted (Gilliom and Monahan for instance), social networking creates an environment in which most individuals are participating in creating a surveillance society simply through the level of documentation they voluntarily provide. Again, more and more, willing participation in surveillance practices—making it easy to be surveilled—is becoming part of one’s civic duty. Thus, the debate over pseudonymity is also a debate about the extent to which corporations can expect compliance to the increasingly normalised demands of a surveillance society. 
And so, for all of these reasons, debates over pseudonymity reveal a host of complex and multi-layered tensions about technology’s influence on the construction of personas, and how these personas are shaped by encroaching forms of surveillance and the marketing of identities. Proponents of pseudonymity use numerous strategies to challenge, subvert, or reconceptualise privileged assumptions about the complex relationships among names, personas, and identities. In doing so, they contribute to an important shift, from the classic question of “What’s in a name?” to “Who wants to know, and why?” References boyd, danah. “‘Real Names’ Policies Are an Abuse of Power.” Zephoria 4 Aug. 2011. 18 Oct. 2013 ‹http://www.zephoria.org/thoughts/archives/2011/08/04/real-names.html›. Coffeeandink. “RaceFail: Once More, with Misdirection.” Coffeeandink 2 Mar. 2009. 18 Oct. 2013 ‹http://coffeeandink.livejournal.com/901816.html›. Elusis. “Don’t Try to Teach Your Internet Grandmother to Suck Eggs: On Anonymity/Pseudonymity.” Elusis 5 Mar. 2009. 18 Oct. 2013 ‹http://elusis.livejournal.com/1891498.html›. Geek Feminism. “Hacker News and Pseudonymity.” Geek Feminism Wiki n.d. 15 Jan. 2014 ‹http://geekfeminism.org/2010/06/10/hacker-news-and-pseudonymity/›. Gilliom, John, and Torin Monahan. SuperVision: An Introduction to the Surveillance Society. Chicago: University of Chicago Press, 2012. Hess, Amanda. “Why Women Aren’t Welcome on the Internet.” Pacific Standard 6 Jan. 2014. 15 Apr. 2014 ‹http://www.psmag.com/navigation/health-and-behavior/women-arent-welcome-internet-72170/›. Masnick, Mike. “What’s in a Name: The Importance of Pseudonymity and the Dangers of Requiring ‘Real Names.’” TechDirt 5 Aug. 2011. 29 Apr. 2014 ‹https://www.techdirt.com/articles/20110805/14103715409/whats-name-importance-pseudonymity-dangers-requiring-real-names.shtml›. Paolucci, Denise. “Real Name Policies: They Just Don’t Work.” Dreamwidth 3 Aug. 2011. 15 Oct. 2013 ‹http://denise.dreamwidth.org/60359.html›. 
Plant, Sadie. Zeros + Ones: Digital Women and the New Technoculture. New York: Doubleday, 1997. Rheingold, Howard. Smart Mobs: The Next Social Revolution. Cambridge, MA: Basic Books, 2002. Sarkeesian, Anita. “Harassment, Misogyny and Silencing on YouTube.” Feminist Frequency 7 June 2012. 17 Apr. 2014 ‹http://www.feministfrequency.com/2012/06/harassment-misogyny-and-silencing-on-youtube/›. smith, s.e. “The Google+ Nymwars: Where Identity and Capitalism Collide.” Tiger Beatdown 3 Aug. 2011. 18 Oct. 2013 ‹http://tigerbeatdown.com/2011/08/03/the-google-nymwars-where-identity-and-capitalism-collide/›. smith, s.e. “On Blogging, Threats, and Silence.” Tiger Beatdown 11 Oct. 2011. 17 Apr. 2014 ‹http://tigerbeatdown.com/2011/10/11/on-blogging-threats-and-silence/›. Stokely, Sarah. “Why Google Should Allow Anonymous/Pseudonymous Names on Google+.” Sarah Stokely: On Teaching and Participating in Online Media 8 July 2011. 15 Oct. 2013 ‹http://www.sarahstokely.com/blog/2011/07/why-google-should-allow-anonymouspseudonymous-names-on-google/›. “Who Is Harmed by a Real Names Policy?” Geek Feminism Wiki n.d. 15 Oct. 2013 ‹http://geekfeminism.wikia.com/wiki/Who_is_harmed_by_a_%22Real_Names%22_policy%3F›.

Go to the bibliography