Journal articles on the topic "Human falls modelling"

To view other types of publications on this topic, follow the link: Human falls modelling.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "Human falls modelling".

Next to each work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the selected work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, if these are available in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Ullah, Shahid, Caroline F. Finch, and Lesley Day. "Statistical modelling for falls count data." Accident Analysis & Prevention 42, no. 2 (March 2010): 384–92. http://dx.doi.org/10.1016/j.aap.2009.08.018.

2

Santos, Guto, Patricia Endo, Kayo Monteiro, Elisson Rocha, Ivanovitch Silva, and Theo Lynn. "Accelerometer-Based Human Fall Detection Using Convolutional Neural Networks." Sensors 19, no. 7 (April 6, 2019): 1644. http://dx.doi.org/10.3390/s19071644.

Abstract:
Human falls are a global public health issue, resulting in over 37.3 million severe injuries and 646,000 deaths yearly. Falls impose direct financial costs on health systems and indirect costs on societal productivity. Unsurprisingly, human fall detection and prevention are a major focus of health research. In this article, we consider deep learning for fall detection in an IoT and fog computing environment. We propose a Convolutional Neural Network composed of three convolutional layers, two max-pooling layers, and three fully-connected layers as our deep learning model. We evaluate its performance using three open data sets and against extant research. Our approach for resolving dimensionality and modelling simplicity issues is outlined. Accuracy, precision, sensitivity, specificity, and the Matthews Correlation Coefficient are used to evaluate performance. The best results are achieved when data augmentation is used during the training process. The paper concludes with a discussion of challenges and future directions for research in this domain.
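Several of the detection papers in this list evaluate classifiers with the Matthews Correlation Coefficient alongside accuracy and sensitivity. As a minimal illustration of that metric, here is a NumPy sketch; the function name and the toy labels are our own, not taken from any of the cited papers.

```python
import numpy as np

# Minimal sketch of the Matthews Correlation Coefficient for binary
# fall / no-fall labels; returns 0.0 on a degenerate confusion matrix.
def matthews_corrcoef(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(matthews_corrcoef([1, 1, 0, 0, 1], [1, 0, 0, 0, 1]))  # ≈ 0.667
```

Unlike plain accuracy, the MCC stays informative when falls are rare in the data, which is the usual situation in fall-detection benchmarks.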
3

Shahabpoor, E., and A. Pavic. "Human-Structure Dynamic Interaction during Short-Distance Free Falls." Shock and Vibration 2016 (2016): 1–12. http://dx.doi.org/10.1155/2016/2108676.

Abstract:
The dynamic interactions of falling human bodies with civil structures, despite their potentially critical effects, have been researched only sparsely in contact biomechanics. The physical contact models suggested in the existing literature, particularly for short-distance falls in home settings, assume the human body falls on a "rigid" (non-vibrating) ground. A similar assumption is usually made during laboratory-based fall tests involving force platforms. Based on observations from a set of pediatric head-first free-fall tests, the present paper shows that the dynamics of the grounded force plate are not always negligible when performing fall tests in a laboratory setting. Extending this analogy to lightweight floor structures, it is shown that ignoring the dynamics of floors in the contact model can result in up to a 35% overestimation of the peak force experienced by a falling human. A nonlinear contact model is suggested, featuring an agent-based modelling approach, in which the dynamics of the falling human and the impacted object (here, a force plate or a floor structure) are each modelled using a single-degree-of-freedom model to simulate their dynamic interactions. The findings of this research can have wide applications in areas such as impact biomechanics and sports science.
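The abstract describes coupling two single-degree-of-freedom oscillators, one for the falling body and one for the impacted structure, through a contact force. A minimal numerical sketch of that idea follows; all masses, stiffnesses, damping values and the integration settings are illustrative assumptions, not the paper's identified parameters.

```python
# Two coupled SDOF oscillators: a falling body (m_h) hits a floor modelled as
# its own oscillator (m_f, k_f, c_f) through a contact spring-damper (k_c, c_c).
# Positions are measured upward; contact force acts only under compression.
def peak_impact_force(m_h=30.0, k_c=5e5, c_c=300.0,
                      m_f=200.0, k_f=2e6, c_f=2e3,
                      v0=2.0, dt=1e-5, t_end=0.1):
    xh, vh = 0.0, -v0   # falling body: position, downward velocity at contact
    xf, vf = 0.0, 0.0   # floor oscillator, initially at rest
    peak = 0.0
    for _ in range(int(t_end / dt)):
        pen = xf - xh                                   # contact compression
        f = k_c * pen + c_c * (vf - vh) if pen > 0.0 else 0.0
        ah = f / m_h - 9.81                             # contact force + gravity
        af = (-f - k_f * xf - c_f * vf) / m_f           # floor dynamics
        vh += ah * dt; xh += vh * dt                    # semi-implicit Euler
        vf += af * dt; xf += vf * dt
        peak = max(peak, f)
    return peak

flexible = peak_impact_force()                   # vibrating floor
rigid = peak_impact_force(m_f=1e7, k_f=1e12)     # near-rigid ground limit
print(f"rigid {rigid:.0f} N vs flexible {flexible:.0f} N")
```

With these illustrative values, the near-rigid run yields a higher peak force than the compliant-floor run, mirroring the abstract's point that ignoring floor dynamics overestimates the impact force.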
4

Kolla, Eduard, and Veronika Adamová. "3D modelovanie pre potreby simulačnej rekonštrukcie pádu ľudského subjektu z výšky." Krízový manažment 22, no. 2 (2023): 55–63. http://dx.doi.org/10.26552/krm.c.2023.2.55-63.

Abstract:
The article presents a methodology for 3D modelling for the biomechanical evaluation of falls of a human subject from a height. It focuses on 3D modelling as a basis for numerical simulation using the multibody modules in the PC-Crash simulation software. The basic steps for creating a 3D multibody model of the relevant building structure from a 3D point cloud are presented, as well as the procedure for creating a biofidelic female human body model. The multibody model of the building structure and the multibody model of the human body can then be used for a comprehensive parametric evaluation of possible fall scenarios, using an iterative approach to converge on trace correspondences.
5

BOSIAKOV, Sergei M., Sergei A. PRONKEVICH, Igor A. MOROZ, and Gennadi I. ZALUZHNI. "BIOMECHANICAL MODELLING OF THE HUMAN SKULL STRESS STATE UNDER IMPACT BY CYLINDRICAL SOLID." Mechanics of Machines, Mechanisms and Materials 1, no. 62 (March 2023): 88–94. http://dx.doi.org/10.46864/1995-0470-2023-1-62-88-94.

Abstract:
Skull fractures are quite often observed in victims of falls, traffic accidents, and attacks with bats and rods. The aim of the study is to assess the stress-strain state of the human head under impact on the basis of finite element modelling. The impact is applied to the frontal region of the frontal bone by the middle part and by the end of a cylindrical solid (a rod). The solid is oriented at different angles in relation to the Frankfurt plane. The head model includes the epidermis (skin), bone structures of the skull, bone structures of the lower jaw, eyeballs, teeth, meninges (dura, arachnoid and pia mater), cerebrum (white and gray matter), cerebellum, brain stem, muscles and ligaments. The elements of the human head model are described by models of a linearly elastic material, a viscoelastic incompressible material, an elastic-plastic material accounting for fracture, and a hyper-elastic material. The eyeballs are assumed to be absolutely rigid. The finite element analysis was carried out for different values of the initial velocity of the rod at the moment of its contact with the skin of the head. It was found that the maximum equivalent stresses and deformations of the skull bone structures are greater under impact by the middle part of the rod than under impact by its end. The impact action of the rod leads to the maximum equivalent stresses when the rod is located at an angle of 60° to the vertical. The region of the maximum stresses is located at the intersection of the sagittal and coronal sutures, and significant stresses are observed mainly along the coronal suture. The results obtained can be used by experts in the field of forensic science to evaluate various scenarios for the occurrence of traumatic brain injury and to substantiate further forensic investigations.
6

Vincze, Janos, and Gabriella Vincze-Tiszay. "The Biophysical Modelling of the Stress Theory." Advances in Social Sciences Research Journal 10, no. 3 (April 1, 2023): 344–51. http://dx.doi.org/10.14738/assrj.103.14350.

Abstract:
Selye published a short note about his findings in Nature in 1936, and he applied his famous theory of stress to everyday life. A stress-free state is equivalent to being dead. In our paper we discuss some directions in stress research and the stress syndromes in clinical medicine. We show that eustress corresponds to situations in which the psychoneuroendocrine stimulation of the organism and its behaviour is moderate, maintaining physical and mental resources and health status at an optimal level and inducing a positive adaptation to the environment. We give a comparative analysis of eustress and distress. Psychical stressors are those which arise independently of a person's will and, acting as social factors, damage the psychic and organic components involved in the response, so that the organism consequently becomes tired. Over the course of a harmonious life, all human psycho-organic components gradually become exhausted. Forming a biophysical model is not the task of the biologist or the physicist alone; a good model can be constructed successfully only through common, collective work. This is typically the task of the biophysicist and a problem which falls within the competence of this discipline. A model is always an approximation, and its user has to bear in mind that the absolute truth can be approached only through an endless series of relative truths. We use the Le Chatelier principle and determine the measure of psychical organization.
7

P, Nishanth. "Machine Learning based Human Fall Detection System." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 25, 2021): 2677–82. http://dx.doi.org/10.22214/ijraset.2021.35394.

Abstract:
Falls have become one of the leading causes of death, and they are common among the elderly. According to the World Health Organization (WHO), 3 out of 10 elderly people aged 65 and over who live alone tend to fall, and this rate may rise in the coming years. In recent years, the safety of elderly residents living alone has received increased attention in a number of countries. Fall detection systems based on wearable sensors, an early approach to detecting falls using IoT technology, have some drawbacks, including high intrusiveness, low accuracy, and poor reliability. This work describes a fall detection system that does not rely on wearable sensors and is instead based on machine learning and image analysis in Python. The camera's high-frequency pictures are sent to the network, which uses the Convolutional Neural Network technique to identify the key points of the human body. The Support Vector Machine technique uses the data output from the feature extraction to classify the fall. Relatives are then notified via mobile message. Rather than modelling individual activities, we use both motion and context information to recognize activities in a scene. This is based on the notion that actions that are spatially and temporally connected rarely occur alone and might serve as context for one another. We propose a hierarchical representation of action segments and activities using a two-layer random field model. The model allows for the simultaneous integration of motion and a variety of context features at multiple levels, as well as the automatic learning of statistics that represent the patterns of the features.
8

Hanappi, Hardy. "Perplexing complexity human modelling and primacy of the group as essence of complexity." Review of Evolutionary Political Economy 1, no. 3 (November 2020): 397–417. http://dx.doi.org/10.1007/s43253-020-00028-x.

Abstract:
This paper describes the emergence of complexity as a duplicated evolutionary process. The first procedural source of complexity is the quantum jump in the evolution of the human species when it started to maintain certain brain-internal models of its environment. The second, parallel, procedural origin is the evolution of a communication structure, a language, with which an already existing group of primates could frame their internal models. In contrast to definitions of complexity that use the concept in the context of theoretical physics, this approach reveals some perplexing properties of model building for a special subject of investigation, namely the human species. All adequate models of political economy (economics is just the sub-discipline that freezes political dynamics) have to be complex. Since today's mainstream economic theory lends its formal apparatus from the mathematics of Newtonian physics, it misses the most essential feature characterizing human social dynamics, i.e. its complexity. On the other hand, a formal definition of complexity by mathematicians, e.g. the one provided by the Princeton Companion to Mathematics, sometimes falls short of the inspirations gained by closely observing biological systems. What is needed, therefore, is transdisciplinary research. The first part of the paper takes Erwin Schrödinger's book 'What is Life?' as a starting point for this issue. In this part, several, sometimes highly speculative, suggestions on how to proceed are presented. The second part then identifies two central obstacles that have to be overcome. First, scientific research in this field always has to arrive at a synthesis that states what is essential; a wealth of singular islands of knowledge isolated in their domains is unsatisfactory. Second, the modelling of political economy dynamics as a complex system has to be rooted in an understanding of how living systems work in their deepest structure. The daring hypothesis put forward is that such an understanding can be enabled by letting quantum-theoretic reasoning revolutionize the formal language of the social sciences.
9

Shalom, N., R. Rabin, A. J. Abinesh, Abi Sam EA, and Aadith B. Roshan. "Design and FEM-based Analysis of Wheel Rims." International Journal for Research in Applied Science and Engineering Technology 11, no. 8 (August 31, 2023): 1297–302. http://dx.doi.org/10.22214/ijraset.2023.55336.

Abstract:
The automobile wheel rim's main function is to offer a stable surface on which to mount the tyre. Its size and design should be appropriate to fit the specific tyre that the vehicle needs. This project considers a tyre for an automobile wheel rim that falls within the disc wheel category. Design is a significant industrial process that affects the product's quality. The wheel rim is created with the aid of the modelling programme Solidworks. The time required to create intricate 3-D models and the risk associated with the design and production process may both be significantly reduced through modelling. Solidworks is used to represent the wheel rim, and the model is later loaded into ANSYS for analysis. ANSYS is a recent programme used for modelling the various forces acting on the component, as well as for computing and visualising the results. In contrast to a human performing the mathematical calculations, ANSYS's solver mode calculates the stresses, deflections, strains, and their relationships without user involvement, saving time. For the static analysis, ANSYS considers four distinct materials: structural steel, aluminium alloy, magnesium alloy, and titanium alloy.
10

Tariq, Ali, Babar Ali, Fahim Ullah, and Fahad K. Alqahtani. "Reducing Falls from Heights through BIM: A Dedicated System for Visualizing Safety Standards." Buildings 13, no. 3 (March 2, 2023): 671. http://dx.doi.org/10.3390/buildings13030671.

Abstract:
Falls from height (FFH) are common safety hazards on construction sites causing monetary and human loss. Accordingly, ensuring safety at heights is a prerequisite for implementing a strong safety culture in the construction industry. However, despite multiple safety management systems, FFH are still rising, indicating that compliance with safety standards and rules remains low or neglected. Building information modelling (BIM) is used in this study to develop a safety clauses visualization system using Autodesk Revit’s application programming interface (API). The prototype digitally stores and views clauses of safety standards, such as the Operational Health and Safety Rules 2022 and Introduction to Health and Safety in Construction by NEBOSH 2008, in the BIM environment. This facilitates the safety manager’s ability to ensure that the precautionary measures needed to work at different heights are observed. The developed prototype underwent a focus group evaluation involving nine experts to assess its effectiveness in preventing FFH. It successfully created a comprehensive safety clause library that allows safety managers to provide relevant safety equipment to workers before work execution. It also enhances the awareness of construction workers of all safety requirements vis-à-vis heights. Moreover, it creates a database of safety standards that can be viewed and expanded in future by adding more safety standards to ensure wider applicability.
11

Baraskar, Trupti, Vignesh Charan Raman, and Poojan Panchal. "Implementation of Super Resolution Techniques in Geospatial Satellite Imagery." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9s (August 31, 2023): 37–42. http://dx.doi.org/10.17762/ijritcc.v11i9s.7394.

Abstract:
Technological advancements and the growing accessibility of high-resolution satellite images offer the potential for more precise land cover classifications and pattern analysis, which could significantly improve the detection and quantification of land cover change for conservation. "Super-resolution imaging" is a group of methods that use generative modelling to increase the resolution of an imaging system. Super-resolution imaging, which falls under the category of sophisticated computer vision and image processing, has a variety of practical uses, including astronomical imaging, surveillance and security, medical imaging, and satellite imaging. As deep learning algorithms for super-resolution first appeared in computer vision, they were mostly developed on RGB images with 8-bit colour depth, where the camera and the subject are separated by a few meters. However, little evaluation of these methods has been done.
12

Kati, Vassiliki, Christina Kassara, Dimitrios Vassilakis, and Haritakis Papaioannou. "Balkan Chamois (Rupicapra rupicapra balcanica) Avoids Roads, Settlements, and Hunting Grounds: An Ecological Overview from Timfi Mountain, Greece." Diversity 12, no. 4 (March 27, 2020): 124. http://dx.doi.org/10.3390/d12040124.

Abstract:
Balkan chamois (Rupicapra rupicapra balcanica) is a protected species with an Inadequate-Bad (U2) conservation status in Greece. Our study explores its seasonal range use pattern, demography and habitat selection in a site of the Natura 2000 network, Timfi Mountain. To this aim, we examined 1168 observations obtained from six seasonal surveys (2002: four seasons, 2014 and 2017: autumn) and performed an ecological-niche factor analysis (ENFA), using 16 environmental and human-disturbance variables. The species had an annual range of 6491 ha (25% of the study area), followed the typical range-use pattern, and presented the minimum core area during the rutting season (autumn). Timfi Mt hosted 469 individuals in 2017 (the largest population in Greece), increasing by 3.55 times since 2002. The species selected higher altitudes during summer and autumn, pinewoods over broad-leaved woods as winter grounds, and it avoided south-facing slopes. Our results supported the anthropogenic risk avoidance hypothesis; the species always selected remote areas away from roads, human settlements, and hunting grounds. In Greece, 40% of its distribution area falls within hunting ban areas (16.5% of the country). A national conservation policy is needed towards maintaining and increasing roadless areas and hunting-ban areas within Balkan chamois range nationwide.
13

Abubakar Jumare, Ismail, Ramchandra Bhandari, and Abdellatif Zerga. "Environmental Life Cycle Assessment of Grid-Integrated Hybrid Renewable Energy Systems in Northern Nigeria." Sustainability 11, no. 21 (October 23, 2019): 5889. http://dx.doi.org/10.3390/su11215889.

Abstract:
Life cycle assessment is a crucial tool for evaluating the performance of systems for sustainability and decision-making. This paper provides an environmental impact assessment of integrating renewable energy systems into the utility grid, based on baseline optimized energy production data from "HOMER" for renewable systems modelling of a site in northern Nigeria. The ultimate goal was to ascertain the best hybrid option(s) for sustaining the environment. Different assumptions and scenarios were modelled and simulated using GaBi (Ganzheitliche Bilanzierung). Uncertainty analysis was applied to the impact data based on a pedigree matrix and an Excel program, and the overall policy relevance was assessed. The results for the impact categories revealed that the first scenario (i.e., the conventional path) had the highest impacts on global warming potential (GWP), acidification potential (AP), human toxicity potential (HTP), and abiotic depletion potential (ADP, fossils). The lowest impacts arise in the renewable-based scenarios for all the considered categories except the ozone-layer depletion potential category, where the highest contribution falls in the third scenario (i.e., the photovoltaic (PV)/biomass-biogas system), although all values are negligible. In quantitative terms, the reduction in GWP from the highest (the first scenario) to the lowest (the fourth scenario, i.e., the wind/biomass-biogas system) was 96.5%. Hence, given the outstanding contributions of the hybrid renewable systems, adopting them, especially the lowest-impact scenarios, with expansions is relevant for environmental sustainability.
14

Wilhelm, Johannes, Mariusz Ptak, Fábio A. O. Fernandes, Konrad Kubicki, Artur Kwiatkowski, Monika Ratajczak, Marek Sawicki, and Dariusz Szarek. "Injury Biomechanics of a Child’s Head: Problems, Challenges and Possibilities with a New aHEAD Finite Element Model." Applied Sciences 10, no. 13 (June 28, 2020): 4467. http://dx.doi.org/10.3390/app10134467.

Abstract:
Traumatic brain injury (TBI) is a major public health problem among children. The predominant causes of TBI in young children are motor vehicle accidents, firearm incidents, falls, and child abuse. The limitations of in vivo studies on the human brain have made finite element modelling an important tool for studying brain injury. Numerical models based on the finite element approach can provide valuable data on the biomechanics of brain tissues and help explain many pathological conditions. This work reviews the existing numerical models of a child's head. However, the existing literature is very limited in reporting proper geometric representation of a small child's head. Therefore, an advanced 2-year-old child's head model, named aHEAD 2yo (aHEAD: advanced Head models for safety Enhancement And medical Development), has been developed, which advances the state of the art. The model is one of the first published in the literature that entirely consists of hexahedral elements for the three-dimensional (3D) structures of the head, such as the cerebellum, skull, and cerebrum, with detailed geometry of gyri and sulci. It includes cerebrospinal fluid modelled with Smoothed Particle Hydrodynamics (SPH) and a detailed model of pressurized bridging veins. Moreover, the presented review of the literature showed that material models for children are now one of the major limitations. There is also no unambiguous opinion as to the use of separate materials for gray and white matter. Thus, this work examines the impact of various material models for the brain on the biomechanical response of the brain tissues during the mechanical loading described by Hardy et al. The study compares the inhomogeneous models with separate gray and white matter against the homogeneous models, i.e., without the gray/white matter separation.
The developed model along with its verification aims to establish a further benchmark in finite element head modelling for children and can potentially provide new insights into injury mechanisms.
15

Masingi, Vusi Ntiyiso, and Daniel Maposa. "Modelling Long-Term Monthly Rainfall Variability in Selected Provinces of South Africa: Trend and Extreme Value Analysis Approaches." Hydrology 8, no. 2 (April 23, 2021): 70. http://dx.doi.org/10.3390/hydrology8020070.

Abstract:
Extreme rainfall events have caused significant damage to property, public infrastructure and agriculture in some provinces of South Africa, notably in KwaZulu-Natal and Gauteng, among others. The general global increase in the frequency and intensity of extreme precipitation events in recent years is raising concern that human activities might be heavily disturbed. This study attempts to model long-term monthly rainfall variability in selected provinces of South Africa using various statistical techniques. The study investigates the normality and stationarity of the underlying distribution of the whole body of rainfall data for each selected province, the long-term trends of the rainfall data, and the extreme value distributions which model the tails of the rainfall distribution. These approaches serve the broader purpose of this study: investigating the long-term rainfall trends, the stationarity of the rainfall distributions, and the extreme value distributions of monthly rainfall records in the selected provinces of South Africa in this era of climate change. The five provinces considered in this study are Eastern Cape, Gauteng, KwaZulu-Natal, Limpopo and Mpumalanga. The findings revealed that the long-term rainfall distribution for all the selected provinces does not come from a normal distribution. Furthermore, the monthly rainfall data distribution for the majority of the provinces is not stationary. The paper discusses the modelling of monthly rainfall extremes using the non-stationary generalised extreme value distribution (GEVD), which falls under the block maxima extreme value theory (EVT) approach. The maximum likelihood estimation method was used to obtain the parameter estimates. The stationary GEVD was found to be the best distribution model for the Eastern Cape, Gauteng, and KwaZulu-Natal provinces.
Furthermore, model fitting supported a non-stationary GEVD model for maximum monthly rainfall with a nonlinear quadratic trend in the location parameter and a linear trend in the scale parameter for Limpopo, while in Mpumalanga a non-stationary GEVD model with a nonlinear quadratic trend in the scale parameter and no variation in the location parameter fitted the monthly rainfall data well. The negative values of the shape parameter for Eastern Cape and Mpumalanga suggest that the data follow the Weibull distribution class, while the positive values of the shape parameter for Gauteng, KwaZulu-Natal and Limpopo suggest that the data follow the Fréchet distribution class. The findings from this paper could provide information to help decision makers establish strategies for the proper planning of agriculture, infrastructure, drainage systems and other water resource applications in the South African provinces.
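The block-maxima step described in this abstract, fitting a generalised extreme value distribution to rainfall maxima by maximum likelihood, can be sketched with SciPy on synthetic data; the gamma-distributed monthly values below are a stand-in for the provincial records, which we do not have.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
monthly = rng.gamma(shape=2.0, scale=30.0, size=(40, 12))  # 40 years x 12 months
annual_max = monthly.max(axis=1)                           # block maxima

# Maximum-likelihood GEV fit. Note that SciPy's shape parameter c is the
# NEGATIVE of the usual GEV xi: c > 0 gives the bounded-tail (reversed)
# Weibull class and c < 0 the heavy-tailed Frechet class mentioned above.
shape, loc, scale = genextreme.fit(annual_max)
level_10yr = genextreme.ppf(1 - 1 / 10, shape, loc=loc, scale=scale)
print(round(level_10yr, 1))  # 10-year return level of monthly rainfall
```

The fitted return level is the kind of quantity that feeds into the drainage and infrastructure planning the abstract mentions.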
16

Raul, Jean-Sébastien, Daniel Baumgartner, Rémy Willinger, and Bertrand Ludes. "Finite element modelling of human head injuries caused by a fall." International Journal of Legal Medicine 120, no. 4 (July 30, 2005): 212–18. http://dx.doi.org/10.1007/s00414-005-0018-1.

17

Salman, Verda, Aliya H. Khan, and Madeeha Gohar Qureshi. "Issues in Statistical Modelling of Human Capital and Economic Growth Nexus." NUST Journal of Social Sciences and Humanities 6, no. 2 (February 2, 2021): 98–136. http://dx.doi.org/10.51732/njssh.v6i2.54.

Abstract:
The human capital and growth relationship has been the subject of considerable debate in the economic literature. Empirical growth models are beset with problems ranging from theoretical frameworks and statistical modelling to estimation procedures. Due to the non-availability of a precise human capital variable, theoretical knowledge fails when pitched against empirical data. This paper endeavors to answer four main questions that have figured prominently in this debate: Is there a direct interplay between human capital and growth? Are parametric techniques incapable of capturing non-linear aspects of the human capital-growth relationship compared to semi-parametric techniques? Are estimates of human capital sensitive to the proxy used for the human capital variable? Are estimates of human capital sensitive to estimation techniques? Our findings reveal that human capital has a well-established role in accelerating growth through both its 'level effects' and 'rate effects'. The results are not sensitive to the definition of the education variable but are rather technique dependent. The semi-parametric model provides sufficient evidence of non-linearity in the human capital-growth relationship, contrary to the parametric models.
18

Lau, Lin Wei, Chee Kuang Kok, Gooi Mee Chen, and Chih-Ping Tso. "Modelling Human-Structure Interaction in Sideways Fall for Hip Impact Force Estimation." International Journal of Technology 13, no. 5 (October 19, 2022): 1149. http://dx.doi.org/10.14716/ijtech.v13i5.5824.

19

Miltz, Ada, Andrew N. Phillips, Andrew Speakman, Valentina Cambiano, Alison Rodger, and Fiona C. Lampe. "Implications for a policy of initiating antiretroviral therapy in people diagnosed with human immunodeficiency virus: the CAPRA research programme." Programme Grants for Applied Research 5, no. 18 (October 2017): 1–40. http://dx.doi.org/10.3310/pgfar05180.

Abstract:
BackgroundMore than 100,000 people in the UK are living with a human immunodeficiency virus (HIV) infection. There are currently estimated to be around 4000 people newly infected in the UK per year, mostly men who have sex with men (MSM). It has become increasingly clear that antiretroviral therapy (ART) used to treat people infected with HIV also has a profound effect on infectivity. At the initiation of the programme, it was the policy in the UK to initiate ART in people when their cluster of differentiation 4 (CD4) count was approaching 350/µl.ObjectivesTo assess what would be the effectiveness and cost-effectiveness of a policy of immediate initiation of ART at diagnosis among MSM, taking into account the potential reductions in new infections.DesignWe calibrated an individual-based model of HIV transmission, progression and the effect of ART in MSM, informed by a series of studies on sexual behaviour in relation to ART use and the transmission risk in people with viral suppression on ART, and by surveillance data collected by Public Health England.Setting, participants and interventionsThe series of studies used to inform the model included (1) the Antiretrovirals, Sexual Transmission Risk and Attitudes (ASTRA) study, a cross-sectional self-administered questionnaire study of people diagnosed with HIV attending eight HIV outpatient clinics in the UK (2011–12); (2) the Cognitive Impairment in People with HIV in the European Region (CIPHER) study, a study of levels of neurocognitive impairment in HIV-positive ASTRA participants and people from HIV clinics in Rome, Copenhagen and Minsk; (3) the Attitudes to, and Understanding of, Risk of Acquisition of HIV (AURAH) study, a cross-sectional self-administered questionnaire study of individuals who have not been diagnosed as HIV-positive attending 20 genitourinary medicine clinics across the UK (2013–14); (4) a substudy of sexual behaviour among individuals enrolled in an open-label multicentre international 
randomised trial (from 2013) of immediate versus deferred ART (to CD4 cell counts of 350/µl) in people with CD4 cell counts of > 500/µl [the Strategic Timing of Antiretroviral Therapy (START) trial]; and (5) Partners of People on ART: a new Evaluation of the Risks (PARTNER), an observational multicentre longitudinal study of HIV serodifferent heterosexual and MSM couples, in which the HIV-positive partner is on ART (2010–14).Main outcome measuresThe main outcome measures were the clinical effectiveness and cost-effectiveness of a policy of immediate initiation of ART at diagnosis.ResultsBased on data from studies (i)–(v), we estimated from our modelling work that increases in condomless sex (CLS) among MSM as a whole may explain the increase in HIV infection incidence in MSM epidemics over a time when ART coverage and viral suppression increased, demonstrating the limiting effects of non-condom use on the HIV epidemic among MSM. Accordingly, an increase in the overall proportion of MSM living with HIV who are virally suppressed on ART from the current level of < 60% to 90% without increases in CLS was required to achieve a reduction in the incidence of HIV among MSM to < 1 per 1000 person-years. The incremental cost-effectiveness ratio associated with the fourfold increase in levels of HIV testing and ART at diagnosis required to provide this increase from < 60% to 90% was £20,000 if we assumed continuation of current ART prices. However, this value falls to £3500 if we assume that ART prices will fall to 20% of their current cost as a result of the introduction of generic drugs. Therefore, our evaluation suggests that ART initiation at diagnosis is likely to be highly cost-effective in MSM at a population level, particularly accounting for future lower ART costs as generic drugs are used. 
The impact will be much greater if levels of HIV testing can be enhanced.LimitationsIt was necessary to make some assumptions beyond the available data in order to extrapolate cost-effectiveness through modelling.ConclusionsOur findings suggest that ART initiation at diagnosis is likely to be cost-effective in MSM. Of note, after this programme of work was completed, results from the main START trial demonstrated benefit in ART initiation even in people with CD4 cell counts of > 500/µl, supporting ART initiation in people diagnosed with a HIV infection.Future workThere is a need for future research into the means of increasing the frequency with which MSM test for HIV.FundingThe National Institute for Health Research Programme Grants for Applied Research programme.
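The headline numbers above rest on a simple quantity, the incremental cost-effectiveness ratio (ICER = incremental cost / incremental QALYs gained). A toy sketch of how a fall in ART drug prices propagates into the ICER; the split of incremental cost into drug and non-drug components below is purely illustrative, not taken from the report:

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return delta_cost / delta_qaly

def icer_with_art_price(drug_cost, other_cost, delta_qaly, price_factor=1.0):
    """ICER of immediate ART if drug prices fall to price_factor of the
    current cost (e.g. 0.2 once generics arrive); cost split is hypothetical."""
    return icer(drug_cost * price_factor + other_cost, delta_qaly)
```

Because the drug component dominates the incremental cost, scaling it down by the generic price factor shrinks the ICER disproportionately, which is the mechanism behind the £20,000 to £3500 shift reported above.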
20

Maus, Horst-Moritz, Shai Revzen, John Guckenheimer, Christian Ludwig, Johann Reger, and Andre Seyfarth. "Constructing predictive models of human running." Journal of The Royal Society Interface 12, no. 103 (February 2015): 20140899. http://dx.doi.org/10.1098/rsif.2014.0899.

Abstract:
Running is an essential mode of human locomotion, during which ballistic aerial phases alternate with phases when a single foot contacts the ground. The spring-loaded inverted pendulum (SLIP) provides a starting point for modelling running, and generates ground reaction forces that resemble those of the centre of mass (CoM) of a human runner. Here, we show that while SLIP reproduces within-step kinematics of the CoM in three dimensions, it fails to reproduce stability and predict future motions. We construct SLIP control models using data-driven Floquet analysis, and show how these models may be used to obtain predictive models of human running with six additional states comprising the position and velocity of the swing-leg ankle. Our methods are general, and may be applied to any rhythmic physical system. We provide an approach for identifying an event-driven linear controller that approximates an observed stabilization strategy, and for producing a reduced-state model which closely recovers the observed dynamics.
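As a rough illustration of the SLIP template discussed above, the stance phase (a point mass on a massless Hookean leg pinned at the foot) can be integrated directly; the parameter values below are generic assumptions, not those fitted in the paper:

```python
import math

def slip_stance(x, y, vx, vy, k=20000.0, m=80.0, L0=1.0, g=9.81,
                dt=1e-4, max_steps=200_000):
    """Integrate the planar SLIP stance phase, foot pinned at the origin,
    until take-off (leg back at rest length L0 and lengthening)."""
    min_len, min_h = math.hypot(x, y), y
    for _ in range(max_steps):
        L = math.hypot(x, y)
        if L >= L0 and (x * vx + y * vy) > 0.0:
            break                        # rest length reached while extending
        F = k * (L0 - L)                 # radial Hookean spring force
        ax = F * x / (L * m)
        ay = F * y / (L * m) - g
        vx += ax * dt                    # semi-implicit Euler step
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        min_len = min(min_len, L)
        min_h = min(min_h, y)
    return x, y, vx, vy, min_len, min_h
```

Starting from a touchdown state, the leg compresses, the centre of mass dips, and the runner leaves the ground with the leg restored to its rest length; the paper's contribution is precisely the control layer this bare template lacks.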
21

Kingdom, Fred, and Bernard Moulden. "Modelling Visual Detection: Luminance Response Non-linearity and Internal Noise." Quarterly Journal of Experimental Psychology Section A 41, no. 4 (November 1989): 675–96. http://dx.doi.org/10.1080/14640748908402389.

Abstract:
Two experiments that investigate the effect of various display factors on the detectability of a thin line signal in random visual noise are described. Three statistical decision models are described, together with their ability to account for the results. The first is an “ideal detector” model, the second an “energy integrator” model, and the third a model based upon the operation of retinal ganglion cells which incorporates a gain control mechanism. The ideal detector model fails to give a good account of human performance, whereas the other two models provide a good fit to the data. The digital Laplacian with gain control model has the slight advantage over the energy integrating model in being able to account for a small superiority in the detection of dark as opposed to bright signals. Finally, both models require the inclusion of an estimate of the internal noise of the human visual system to account for the pattern of performance observed under changing conditions of display contrast.
22

Mousse, Mikael Ange, and Béthel Atohoun. "Saliency based human fall detection in smart home environments using posture recognition." Health Informatics Journal 27, no. 3 (July 2021): 146045822110309. http://dx.doi.org/10.1177/14604582211030954.

Abstract:
The implementation of people monitoring systems is an evolving research theme. This paper introduces an elderly monitoring system that recognizes human posture from overlapping cameras for fall detection in a smart home environment. In such environments the zone of movement is limited; our approach uses this characteristic to recognize human posture quickly through a region-wise modelling approach. It classifies a person's pose into four groups: standing, crouching, sitting and lying on the floor. These postures are obtained by estimating the human bounding volume, computed from the person's height and the surface in contact with the ground according to the foreground information of each camera. Using these quantities, we distinguish each posture and differentiate the lying-on-the-floor posture, which can be considered the falling posture, from the others. The global multi-view information of the scene is obtained using homographic projection. We test the proposed algorithm on a public multi-camera fall detection dataset, and the results prove the efficiency of our method.
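A minimal rule-based sketch of the four posture classes described above, assuming only the two quantities the method estimates (person height and floor-contact area); all thresholds are invented for illustration, and a real system would calibrate them per scene:

```python
def classify_posture(height_m, floor_contact_m2,
                     stand_h=1.4, crouch_h=1.0, sit_h=0.8, lying_area=0.5):
    """Rule-based posture classes from a person's estimated height and the
    area of their silhouette in contact with the floor (hypothetical units
    and thresholds)."""
    if floor_contact_m2 >= lying_area and height_m < sit_h:
        return "lying"       # large floor footprint + low height: possible fall
    if height_m >= stand_h:
        return "standing"
    if height_m >= crouch_h:
        return "crouching"
    return "sitting"
```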
23

Ma, Xiaozhi, Xiao Li, Hongping Yuan, Zhiming Huang, and Tongwei Zhang. "Justifying the Effective Use of Building Information Modelling (BIM) with Business Intelligence." Buildings 13, no. 1 (December 29, 2022): 87. http://dx.doi.org/10.3390/buildings13010087.

Abstract:
Although building information modelling (BIM) is a widely acknowledged information and communication technology (ICT) in the architecture, engineering, construction, and operation (AECO) industry, its implementation is hindered by the hybrid practice of BIM and non-BIM information processing, and it sometimes fails to add value to the AECO business. It is crucial to define, on a scientific basis, how to ensure the effective use of BIM under the various conditions in which BIM is applied in AECO practices. Although several studies have investigated similar topics, very few have focused on the adoption of distinct BIM applications over conventional practice from the perspective of business intelligence (BI) as a theoretical framework to justify the effective value of BIM use in the AECO industry. This study proposes a framework relying on BI principles to justify effective BIM use and explicates the contextual factors in AECO practices. The data were acquired from a three-round Delphi survey. The framework suggests that effective BIM use in AECO practices should follow the two principles of BI: achieving technical effectiveness and realizing business value. The pursuit of technical effectiveness should consider business objectives, business issues, business sustainability and regulatory eligibility, while the realization of business value involves willingness to adopt BIM, human-computer interoperability, visualization-based data quality and sources, data processing and system integration, and application maturity. This study provides a new perspective by which to address the issue of technological iteration in the current hybrid BIM and non-BIM practice and could help to improve BIM implementation in the AECO industry.
24

XIAO, YANNI. "A SEMI-STOCHASTIC MODEL FOR HIV POPULATION DYNAMICS." International Journal of Biomathematics 02, no. 03 (September 2009): 391–404. http://dx.doi.org/10.1142/s1793524509000662.

Abstract:
A semi-stochastic model is developed and investigated for human immunodeficiency virus type-1 (HIV-1) population dynamics. The model includes both stochastic parts (changes in CD4+ T cells) and deterministic parts (changes in free virions). Using the best currently available parameter values, we estimate the distributions of the time of occurrence and the magnitude of the early peak in virions. We investigate the effects of varying parameter values on mean solutions in order to assess the stochastic effects of between-patient variability. Numerical simulation shows that a lower infection rate, a higher death rate of infected cells, more rapid clearance of virions and a lower rate of virion emission by infected cells result in slower infection progression and a smaller magnitude of infection, but greater variability in response. We also examine the probability that a small viral inoculum fails to establish an infection. Further, we theoretically quantify the expected variability around the infected equilibrium for each population by using a diffusion approximation.
25

Zhu, Y. J., and T. J. Lu. "A multi-scale view of skin thermal pain: from nociception to pain sensation." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 368, no. 1912 (February 13, 2010): 521–59. http://dx.doi.org/10.1098/rsta.2009.0234.

Abstract:
All biological bodies live in a thermal environment, including the human body, where the skin is the interface with a protective function. When the temperature is outside the normal physiological range, the skin fails to protect and the sensation of pain is evoked. Furthermore, in medicine, with advances in laser, microwave and similar technologies, various thermal therapeutic methods have been widely used to treat disease or injury involving skin tissue. However, the corresponding problem of pain relief has limited further application and development of these thermal treatments. Skin thermal pain is induced through both direct (i.e. an increase/decrease in temperature) and indirect (e.g. thermomechanical and thermochemical) pathways, and is governed by complicated thermomechanical–chemical–neurophysiological responses; a complete understanding of the underlying mechanisms is still far from clear. In this article, starting from an engineering perspective, we aim to recast the biological behaviour of skin in engineering-system parlance. Then, by coupling the concepts of engineering with established methods in neuroscience, we attempt to establish multi-scale modelling of skin thermal pain, from ion channels to pain sensation. The model takes into account skin morphological plausibility, the thermomechanical response of skin tissue, and the biophysical and neurological mechanisms of pain sensation.
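A common engineering starting point for the heat-transfer side of such skin models is the Pennes bioheat equation; the explicit 1-D finite-difference sketch below uses order-of-magnitude tissue parameters, which are assumptions rather than the paper's values:

```python
def pennes_1d(n=21, steps=2000, dx=1e-4, dt=1e-3, T_surface=50.0,
              T_core=37.0, k=0.5, rho_c=4e6, w_b=2000.0, T_art=37.0):
    """Explicit finite differences for the 1-D Pennes bioheat equation
    rho*c*dT/dt = k*d2T/dx2 + w_b*(T_a - T), with the skin surface held
    at T_surface and the deep boundary at the core temperature T_core."""
    T = [T_core] * n
    T[0] = T_surface
    alpha = k * dt / (rho_c * dx * dx)   # diffusion number, must stay < 0.5
    beta = w_b * dt / rho_c              # perfusion relaxation per step
    for _ in range(steps):
        Tn = T[:]
        for i in range(1, n - 1):
            Tn[i] = (T[i] + alpha * (T[i - 1] - 2.0 * T[i] + T[i + 1])
                     + beta * (T_art - T[i]))
        T = Tn                           # boundary nodes stay fixed
    return T
```

The resulting temperature profile, decaying from the heated surface toward the core, is the kind of input a nociceptor transduction stage would consume in a multi-scale pain model.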
26

Nazir, Aqsa, Naveed Ahmed, Umar Khan, and Syed Tauseef Mohyud-Din. "Analytical approach to study a mathematical model of CD4+T-cells." International Journal of Biomathematics 11, no. 04 (May 2018): 1850056. http://dx.doi.org/10.1142/s1793524518500560.

Abstract:
Human immunodeficiency virus (HIV) has emerged as one of the most serious health issues of the modern era. To date, it challenges scientists working in the fields related to its prevention, containment and eradication. It affects not only the person suffering from it but also communities and their economies. Mathematical modeling is one of the ways to explore prediction (and control) strategies for contagious diseases. In this paper, we have tried to extend the scope of a currently available prediction model over a continuous time span. For this purpose, an analytical investigation of the system of nonlinear differential equations governing the HIV infection of CD4+ T-cells is carried out. A newly emerging analytical technique, the Optimal Variational Iteration Method (OVIM), has been used to obtain an analytical and convergent solution. Analytical solutions are continuous solutions that can be used to predict the phenomena without introducing interpolation or extrapolation errors; moreover, they are far easier to use than numerical solutions in derived equations that depend on the solution itself. We present the error analysis and the prediction curves graphically, along with a comparison with the traditional Variational Iteration Method. It is concluded that the traditional method fails to converge for updated models that involve delayed differential equations.
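Models of this class are typically three-state ODE systems in healthy CD4+ T-cells T, infected cells I and free virions V; the sketch below integrates one such standard system with plain explicit Euler, the kind of numerical baseline against which analytical approximations such as OVIM can be checked. Parameter values are literature-style placeholders, not the paper's:

```python
def cd4_hiv_rates(T, I, V, s=10.0, r=3.0, Tmax=1500.0, d=0.02,
                  k=2.7e-3, delta=0.3, N=10.0, c=2.4):
    """Right-hand side of a standard CD4+ T-cell model: supply s, logistic
    proliferation r up to Tmax, natural death d, infection rate k, infected
    cell death delta, burst size N, virion clearance c (assumed values)."""
    dT = s + r * T * (1.0 - (T + I) / Tmax) - d * T - k * V * T
    dI = k * V * T - delta * I
    dV = N * delta * I - c * V
    return dT, dI, dV

def simulate(T, I, V, dt=0.01, steps=2000):
    """Explicit-Euler integration of the system above."""
    for _ in range(steps):
        dT, dI, dV = cd4_hiv_rates(T, I, V)
        T, I, V = T + dT * dt, I + dI * dt, V + dV * dt
    return T, I, V
```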
27

Srinivasan, V. "Reimagining the past – use of counterfactual trajectories in socio-hydrological modelling: the case of Chennai, India." Hydrology and Earth System Sciences 19, no. 2 (February 5, 2015): 785–801. http://dx.doi.org/10.5194/hess-19-785-2015.

Abstract:
Abstract. The developing world is rapidly urbanizing. One of the challenges associated with this growth will be to supply water to growing cities of the developing world. Traditional planning tools fare poorly over 30–50 year time horizons because these systems are changing so rapidly. Models that hold land use, economic patterns, governance systems or technology static over a long planning horizon could result in inaccurate predictions leading to sub-optimal or paradoxical outcomes. Most models fail to account for adaptive responses by humans that in turn influence water resource availability, resulting in coevolution of the human–water system. Is a particular trajectory inevitable given a city's natural resource endowment, is the trajectory purely driven by policy or are there tipping points in the evolution of a city's growth that shift it from one trajectory onto another? Socio-hydrology has been defined as a new science of water and people that will explicitly account for such bi-directional feedbacks. However, a particular challenge in incorporating such feedbacks is imagining technological, social and political futures that could fundamentally alter future water demand, allocation and use. This paper offers an alternative approach – the use of counterfactual trajectories – that allows policy insights to be gleaned without having to predict social futures. The approach allows us to "reimagine the past"; to observe how outcomes would differ if different decisions had been made. The paper presents a "socio-hydrological" model that simulates the feedbacks between the human, engineered and hydrological systems in Chennai, India over a 40-year period. The model offers several interesting insights. First, the study demonstrates that urban household water security goes beyond piped water supply. When piped supply fails, users turn to their own wells. If the wells dry up, consumers purchase expensive tanker water or curtail water use and thus become water insecure. 
Second, unsurprisingly, different initial conditions result in different trajectories. But initial advantages in piped infrastructure are eroded if the utility is unable to expand the piped system to keep up with growth. Both infrastructure and sound management decisions are necessary to ensure household water security although the impacts of mismanagement may not manifest until much later when the population has grown and a multi-year drought strikes. Third, natural resource endowments can limit the benefits of good policy and infrastructure. Cities can boost recharge through artificial recharge schemes. However, cities underlain by productive aquifers can better rely on groundwater as a buffer against drought, compared to cities with unproductive aquifers.
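The cascade described above (piped supply, then private wells, then tanker purchases once the aquifer is depleted) can be caricatured with a toy mass-balance recursion; all units and parameter values below are arbitrary illustrations, not the calibrated Chennai model:

```python
def chennai_sketch(years, rainfall, demand=100.0, piped_capacity=60.0,
                   aquifer0=500.0, recharge_frac=0.3, storage_max=1000.0):
    """Toy coupled human-water loop: rainfall is a per-year index
    (1.0 = normal); returns final aquifer storage and cumulative
    tanker purchases (a proxy for household water insecurity)."""
    aquifer, tanker_total = aquifer0, 0.0
    for rain in rainfall:
        piped = min(demand, piped_capacity * rain)   # utility supply shrinks in drought
        residual = demand - piped
        pumped = min(residual, aquifer)              # households fall back on wells
        tanker_total += residual - pumped            # unmet demand -> tanker market
        aquifer = min(storage_max,
                      aquifer - pumped + recharge_frac * rain * demand)
    return aquifer, tanker_total
```

Even this caricature reproduces the qualitative point of the abstract: in normal years the aquifer quietly buffers the piped deficit, and insecurity only surfaces once a sustained drought exhausts the buffer.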
28

Skene-Arnold, Tamara D., Hue Anh Luu, R. Glen Uhrig, Veerle De Wever, Mhairi Nimick, Jason Maynes, Andrea Fong, et al. "Molecular mechanisms underlying the interaction of protein phosphatase-1c with ASPP proteins." Biochemical Journal 449, no. 3 (January 9, 2013): 649–59. http://dx.doi.org/10.1042/bj20120506.

Abstract:
The serine/threonine PP-1c (protein phosphatase-1 catalytic subunit) is regulated by association with multiple regulatory subunits. Human ASPPs (apoptosis-stimulating proteins of p53) comprise three family members: ASPP1, ASPP2 and iASPP (inhibitory ASPP), which is uniquely overexpressed in many cancers. While ASPP2 and iASPP are known to bind PP-1c, we now identify novel and distinct molecular interactions that allow all three ASPPs to bind differentially to PP-1c isoforms and p53. iASPP lacks a PP-1c-binding RVXF motif; however, we show it interacts with PP-1c via a RARL sequence with a Kd value of 26 nM. Molecular modelling and mutagenesis of PP-1c–ASPP protein complexes identified two additional modes of interaction. First, two positively charged residues, Lys260 and Arg261 on PP-1c, interact with all ASPP family members. Secondly, the C-terminus of the PP-1c α, β and γ isoforms contain a type-2 SH3 (Src homology 3) poly-proline motif (PxxPxR), which binds directly to the SH3 domains of ASPP1, ASPP2 and iASPP. In PP-1cγ this comprises residues 309–314 (PVTPPR). When the Px(T)PxR motif is deleted or mutated via insertion of a phosphorylation site mimic (T311D), PP-1c fails to bind to all three ASPP proteins. Overall, we provide the first direct evidence for PP-1c binding via its C-terminus to an SH3 protein domain.
29

AMAKU, M., F. AZEVEDO, M. N. BURATTINI, G. E. COELHO, F. A. B. COUTINHO, D. GREENHALGH, L. F. LOPEZ, R. S. MOTITSUKI, A. WILDER-SMITH, and E. MASSAD. "Magnitude and frequency variations of vector-borne infection outbreaks using the Ross–Macdonald model: explaining and predicting outbreaks of dengue fever." Epidemiology and Infection 144, no. 16 (August 19, 2016): 3435–50. http://dx.doi.org/10.1017/s0950268816001448.

Abstract:
Summary: The classical Ross–Macdonald model is often used to model vector-borne infections; however, it fails on several fronts. First, using measured (or estimated) parameters whose values are accepted in the literature, the model predicts a much greater number of cases than is usually observed. Second, the model predicts a single large outbreak followed by decades of much smaller outbreaks, which is not consistent with observation: towns and cities usually report recurrences for many years, even when environmental changes cannot explain the disappearance of the infection between the peaks. In this paper, we continue to examine the pitfalls in modelling this class of infections, and explain that, if properly used, the Ross–Macdonald model works and can be used to understand the patterns of epidemics and even, to some extent, to make predictions. We model several outbreaks of dengue fever and show that the variable pattern of yearly recurrence (or its absence) can be understood and explained by a simple Ross–Macdonald model modified to take into account human movement across a range of neighbourhoods within a city. In addition, we analyse the effect of seasonal variations in the parameters that determine the number, longevity and biting behaviour of mosquitoes. Based on the size of the first outbreak, we show that it is possible to estimate the proportion of remaining susceptible individuals and to predict the likelihood and magnitude of eventual subsequent outbreaks. This approach is described based on actual dengue outbreaks with different recurrence patterns from some Brazilian regions.
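For reference, the unmodified Ross–Macdonald model that the authors start from tracks the infected fractions x (humans) and y (mosquitoes), and its basic reproduction number has the classical closed form; the parameter values used in the usage check are arbitrary:

```python
def ross_macdonald_r0(m, a, b, c, r, mu):
    """R0 = m*a^2*b*c / (r*mu): m mosquitoes per human, biting rate a,
    transmission probabilities b (mosquito->human) and c (human->mosquito),
    human recovery rate r, mosquito mortality mu."""
    return m * a * a * b * c / (r * mu)

def rm_step(x, y, m, a, b, c, r, mu, dt=0.01):
    """One explicit-Euler step of the classical two-ODE system for the
    infected fractions x (humans) and y (mosquitoes)."""
    dx = m * a * b * y * (1.0 - x) - r * x
    dy = a * c * x * (1.0 - y) - mu * y
    return x + dx * dt, y + dy * dt
```

The paper's modifications (human movement between neighbourhoods, seasonal parameters) amount to coupling several such units and letting m, a and mu vary in time.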
30

Purohit, Kanishkavikram, Shivangi Srivastava, Varun Nookala, Vivek Joshi, Pritesh Shah, Ravi Sekhar, Satyam Panchal, et al. "Soft Sensors for State of Charge, State of Energy, and Power Loss in Formula Student Electric Vehicle." Applied System Innovation 4, no. 4 (October 13, 2021): 78. http://dx.doi.org/10.3390/asi4040078.

Abstract:
The proliferation of electric vehicle (EV) technology is an important step towards a more sustainable future. In the current work, two-layer feed-forward artificial-neural-network-based machine learning is applied to design soft sensors that estimate the state of charge (SOC), state of energy (SOE), and power loss (PL) of a Formula Student electric vehicle (FSEV) battery-pack system. The proposed soft sensors were designed to predict the SOC, SOE, and PL of the EV battery pack on the basis of the input current profile. The input current profile was derived from the designed vehicle parameters and the Formula Bharat track features and guidelines. All developed soft sensors were tested for mean squared error (MSE) and R-squared metrics of the dataset partitions; equations relating the derived and predicted outputs; error histograms of the training, validation, and testing datasets; training state indicators such as gradient, mu, and validation fails; validation performance over successive epochs; and predicted-versus-derived plots over one lap time. Moreover, the prediction accuracy of the proposed soft sensors was compared against linear and nonlinear regression models and parametric structure models used for system identification, such as autoregressive with exogenous variables (ARX), autoregressive moving average with exogenous variables (ARMAX), output error (OE) and Box–Jenkins (BJ). The testing dataset accuracy of the proposed FSEV SOC, SOE, and PL soft sensors was 99.96%, 99.96%, and 99.99%, respectively. The proposed soft sensors attained higher prediction accuracy than the modelling structures mentioned above. FSEV results also indicated that the SOC and SOE dropped from 97% to 93.5% and 93.8%, respectively, during the running time of 118 s (one lap). Thus, two-layer feed-forward neural-network-based soft sensors can be applied for the effective monitoring and prediction of SOC, SOE, and PL during the operation of EVs.
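The "derived" SOC values that such soft sensors are trained to reproduce are conventionally obtained by Coulomb counting over the input current profile; a minimal sketch, where the sign convention (positive current = discharge), the initial SOC and the capacity value are assumptions:

```python
def coulomb_count_soc(current_a, dt_s, capacity_ah, soc0=1.0):
    """Reference SOC trajectory by Coulomb counting: integrate the drawn
    charge and normalise by pack capacity (positive current = discharge)."""
    soc = soc0
    out = [soc]
    for i_a in current_a:
        soc -= i_a * dt_s / (capacity_ah * 3600.0)
        out.append(soc)
    return out
```

For example, a constant 1 A discharge of a 1 Ah pack for half an hour takes the SOC from 1.0 down to 0.5, which is the kind of label sequence the neural soft sensor then learns to map from the raw current profile.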
31

Srinivasan, V. "Coevolution of water security in a developing city." Hydrology and Earth System Sciences Discussions 10, no. 11 (November 5, 2013): 13265–91. http://dx.doi.org/10.5194/hessd-10-13265-2013.

Abstract:
Abstract. The world is rapidly urbanizing. One of the challenges associated with this growth will be to supply water to rapidly growing developing-world cities. While there is a long history of interdisciplinary research in water resources management, relatively few water studies attempt to explain why water systems evolve the way they do: why some regions develop sustainable, secure, well-functioning water systems while others do not, and which feedbacks force the transition from one trajectory to the other. This paper attempts to tackle this question by examining the historical evolution of one city in Southern India. A key contribution of this paper is the co-evolutionary modelling approach adopted. The paper presents a "socio-hydrologic" model that simulates the feedbacks between the human, engineered and hydrologic systems of Chennai, India over a 40-year period and evaluates the implications for water security. This study offers some interesting insights on urban water security in developing-country water systems. First, the Chennai case study argues that urban water security goes beyond piped water supply. When piped supply fails, users first depend on their own wells. When the aquifer is depleted, a tanker market develops. When consumers are forced to purchase expensive tanker water, they are water insecure. Second, different initial conditions result in different water security trajectories. However, initial advantages in infrastructure are eroded if the utility's management is weak and it is unable to expand or maintain the piped system to keep up with growth. Both infrastructure and management decisions are necessary to achieving water security. Third, the effects of mismanagement do not manifest right away. Instead, in the manner of a "frog in a pot of boiling water", the system gradually deteriorates, and the impacts of bad policy may not manifest until much later, when the population has grown and a major multi-year drought hits.
32

Gonzalez-Redin, Julen, Iain J. Gordon, J. Gareth Polhill, Terence P. Dawson, and Rosemary Hill. "Navigating Sustainability: Revealing Hidden Forces in Social–Ecological Systems." Sustainability 16, no. 3 (January 29, 2024): 1132. http://dx.doi.org/10.3390/su16031132.

Abstract:
During the 1992 Rio Conference, the sustainable development agenda envisioned a transformative change for the management of natural resources, where the well-being of human society would be enhanced through the sustainable use of natural capital. Several decades on, relentless economic growth persists at the expense of natural capital, as demonstrated by biodiversity decline, climate change and other environmental challenges. Why is this happening and what can be done about it? We present three agent-based models that explore the social, economic and governance factors driving (un)sustainability in complex social–ecological systems. Our modelling results reinforce the idea that the current economic system fails to safeguard the natural capital upon which it relies, leading to the prevailing decoupling between the economic and natural systems. In attempting to find solutions for such disjunction, our research shows that social–ecological systems are complex, dynamic and non-linear. Interestingly, results also reveal that there are common factors to most social–ecological systems that have the potential to improve or diminish sustainability: the role of financial entities and monetary debt; economic speculation; technological development and efficiency; long-term views, tipping point management and government interventions; and top-down and bottom-up conservation forces. These factors can play a dual role, as they can either undermine or enhance sustainability depending on their specific context and particular conditions. Therefore, the current economic system may not be inherently unsustainable, but rather specific economic mechanisms, decision-making processes and the complex links between economic and natural systems could be at the root of the problem. We argue that short- and medium-term sustainability can be achieved by implementing mechanisms that shift capitalist forces to support environmental conservation. 
Long-term sustainability, in contrast, requires a more profound paradigm shift: the full integration and accounting of externalities and natural capital into the economy.
33

Sun, Qilin, and Lequan Min. "Dynamics Analysis and Simulation of a Modified HIV Infection Model with a Saturated Infection Rate." Computational and Mathematical Methods in Medicine 2014 (2014): 1–14. http://dx.doi.org/10.1155/2014/145162.

Abstract:
This paper studies a modified human immunodeficiency virus (HIV) infection differential equation model with a saturated infection rate. It is proved that if the basic virus reproductive number R0 of the model is less than one, then the infection-free equilibrium point of the model is globally asymptotically stable; if R0 is greater than one, then the endemic infection equilibrium point is globally asymptotically stable. Based on clinical data from the HIV drug resistance database of Stanford University, the proposed model is used to simulate the dynamics of two groups of patients undergoing anti-HIV treatment. The numerical simulation results are in agreement with the evolution of the patients' HIV RNA levels. It can be assumed that if an HIV-infected individual's basic virus reproductive number R0 < 1, then this person will recover automatically; if an antiretroviral therapy makes an HIV-infected individual's R0 < 1, this person will eventually be cured; and if an antiretroviral therapy fails to suppress an HIV-infected individual's HIV RNA load to an undetectable level, the time at which the patient's HIV RNA level reaches its minimum may mark the onset of drug resistance.
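For a model with this structure (dT = s - d*T - beta*T*V/(1+alpha*V), dI = beta*T*V/(1+alpha*V) - delta*I, dV = N*delta*I - c*V), the basic reproduction number at the infection-free equilibrium T0 = s/d has a closed form in which the saturation constant alpha drops out. The sketch assumes this standard form, which may differ in detail from the paper's model; the parameter values in the check are arbitrary:

```python
def r0_saturated(s, d, beta, N, c):
    """R0 = beta * N * (s/d) / c for saturated incidence beta*T*V/(1+alpha*V):
    each virion infects beta*T0/c cells over its lifetime 1/c, and each
    infected cell yields N virions, so alpha does not enter at the threshold."""
    return beta * N * (s / d) / c
```

The threshold behaviour stated in the abstract then reads directly off this quantity: therapy that pushes the product beta*N*T0/c below one moves the patient from the endemic to the infection-free regime.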
34

Piras, Paolo, Valerio Varano, Maxime Louis, Antonio Profico, Stanley Durrleman, Benjamin Charlier, Franco Milicchio, and Luciano Teresi. "Transporting Deformations of Face Emotions in the Shape Spaces: A Comparison of Different Approaches." Journal of Mathematical Imaging and Vision 63, no. 7 (May 18, 2021): 875–93. http://dx.doi.org/10.1007/s10851-021-01030-6.

Abstract:
Abstract: Studying changes of shape is a common concern in many scientific fields. We address here two problems: (1) quantifying the deformation between two given shapes and (2) transporting this deformation to morph a third shape. These operations can be done with or without point correspondence, depending on the availability of a surface matching algorithm and on the type of mathematical procedure adopted. In computer vision, the re-targeting of emotions mapped onto faces is a common application. We contrast four different methods for transporting a deformation toward a target once it has been estimated from the matching of two shapes. These methods come from very different fields, such as computational anatomy, computer vision and biology. We used large deformation diffeomorphic metric mapping and thin plate splines to estimate deformations in a deformational trajectory of a human face experiencing different emotions. We then used naive transport (NT), linear shift (LS), direct transport (DT) and the fanning scheme (FS) to transport the estimated deformations toward four alien faces, each described by 240 homologous points forming a triangulation of 416 triangles. We used both local and global criteria for evaluating the performance of the four methods, e.g., the maintenance of the original deformation. We found DT, LS and FS very effective in recovering the original deformation, while NT fails in several respects to transport the shape change. As the best method may differ depending on the application, we recommend carefully testing different methods in order to choose the best one for any specific application.
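Of the four transport rules, naive transport (NT), the baseline the authors find inadequate, is the simplest to state: add the raw displacement field estimated between two homologous point sets directly to the target shape, ignoring its local geometry. A sketch with 2-D points:

```python
def naive_transport(src, dst, target):
    """Naive transport (NT): apply the displacement field dst - src,
    estimated between two homologous point sets, verbatim to a third
    shape with the same point correspondence."""
    return [(tx + bx - ax, ty + by - ay)
            for (ax, ay), (bx, by), (tx, ty) in zip(src, dst, target)]
```

Because the displacements are copied without any adaptation to the target's own geometry, NT distorts shapes whose local orientation or scale differs from the source, which is the failure mode the comparison above documents.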
35

Kwon, Joseph, Hazel Squires, Matthew Franklin, and Tracey Young. "Systematic review and critical methodological appraisal of community-based falls prevention economic models." Cost Effectiveness and Resource Allocation 20, no. 1 (July 16, 2022). http://dx.doi.org/10.1186/s12962-022-00367-y.

Abstract:
Abstract Background Falls impose significant health and economic burdens on community-dwelling older persons. Decision modelling can inform commissioning of alternative falls prevention strategies. Several methodological challenges arise when modelling public health interventions including community-based falls prevention. This study aims to conduct a systematic review (SR) to: systematically identify community-based falls prevention economic models; synthesise and critically appraise how the models handled key methodological challenges associated with public health modelling; and suggest areas for further methodological research. Methods The SR followed the 2021 PRISMA reporting guideline and covered the period 2003–2020 and 12 academic databases and grey literature. The extracted methodological features of included models were synthesised by their relevance to the following challenges: (1) capturing non-health outcomes and societal intervention costs; (2) considering heterogeneity and dynamic complexity; (3) considering theories of human behaviour and implementation; and (4) considering equity issues. The critical appraisal assessed the prevalence of each feature across models, then appraised the methods used to incorporate the feature. The methodological strengths and limitations stated by the modellers were used as indicators of desirable modelling practice and scope for improvement, respectively. The methods were also compared against those suggested in the broader empirical and methodological literature. Areas of further methodological research were suggested based on appraisal results. Results 46 models were identified. Comprehensive incorporation of non-health outcomes and societal intervention costs was infrequent. The assessments of heterogeneity and dynamic complexity were limited; subgroup delineation was confined primarily to demographics and binary disease/physical status. 
Few models incorporated heterogeneity in intervention implementation level, efficacy and cost. Few dynamic variables other than age and falls history were incorporated to characterise the trajectories of falls risk and general health/frailty. Intervention sustainability was frequently based on assumptions; few models estimated the economic/health returns from improved implementation. Seven models incorporated ethnicity- and severity-based subgroups but did not estimate the equity-efficiency trade-offs. Sixteen methodological research suggestions were made. Conclusion Existing community-based falls prevention models contain methodological limitations spanning four challenge areas relevant for public health modelling. There is scope for further methodological research to inform the development of falls prevention and other public health models.
36

Kwon, Joseph, Hazel Squires, and Tracey Young. "Economic model of community-based falls prevention: seeking methodological solutions in evaluating the efficiency and equity of UK guideline recommendations." BMC Geriatrics 23, no. 1 (March 30, 2023). http://dx.doi.org/10.1186/s12877-023-03916-z.

Abstract:
Background: Falls significantly harm geriatric health and impose substantial costs on care systems and wider society. Decision modelling can inform the commissioning of falls prevention but faces methodological challenges, including: (1) capturing non-health outcomes and societal intervention costs; (2) considering heterogeneity and dynamic complexity; (3) considering theories of human behaviour and implementation; and (4) considering issues of equity. This study seeks methodological solutions in developing a credible economic model of community-based falls prevention for older persons (aged 60+) to inform local falls prevention commissioning as recommended by UK guidelines.

Methods: A framework for conceptualising public health economic models was followed. Conceptualisation was conducted in Sheffield as a representative local health economy. Model parameterisation used publicly available data, including the English Longitudinal Study of Ageing and UK-based falls prevention trials. Key methodological developments in operationalising a discrete individual simulation model included: (1) incorporating societal outcomes, including productivity, informal caregiving cost, and private care expenditure; (2) parameterising a dynamic falls-frailty feedback loop whereby falls influence long-term outcomes via frailty progression; (3) incorporating three parallel prevention pathways with unique eligibility and implementation conditions; and (4) assessing equity impacts through distributional cost-effectiveness analysis (DCEA) and individual-level lifetime outcomes (e.g., number reaching 'fair innings'). The guideline-recommended strategy (RC) was compared against usual care (UC). Probabilistic sensitivity, subgroup, and scenario analyses were conducted.

Results: RC had a 93.4% probability of being cost-effective versus UC at a cost-effectiveness threshold of £20,000 per QALY gained under 40-year societal cost-utility analysis. It increased productivity and reduced private expenditure and informal caregiving cost, but the productivity gain and private expenditure reduction were outstripped by increases in intervention time opportunity costs and co-payments, respectively. RC reduced inequality delineated by socioeconomic status quartile. Gains in individual-level lifetime outcomes were small. Younger geriatric age groups can cross-subsidise their older peers for whom RC is cost-ineffective. Removing the falls-frailty feedback made RC no longer efficient or equitable versus UC.

Conclusion: Methodological advances addressed several key challenges associated with falls prevention modelling. RC appears cost-effective and equitable versus UC. However, further analyses should confirm whether RC is optimal versus other potential strategies and investigate feasibility issues, including capacity implications.
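The 93.4% figure above is the share of probabilistic sensitivity analysis draws that turn out cost-effective at the stated threshold. As a generic sketch of how such a probability is computed (the distributions below are illustrative placeholders, not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000  # probabilistic sensitivity analysis draws

# Illustrative incremental costs and QALYs of a strategy versus usual care
# (made-up normal distributions, not the paper's model inputs).
delta_cost = rng.normal(1500.0, 600.0, size=n)   # GBP
delta_qaly = rng.normal(0.12, 0.05, size=n)

threshold = 20_000.0  # GBP per QALY gained
# A draw is cost-effective when its net monetary benefit is positive.
nmb = threshold * delta_qaly - delta_cost
prob_cost_effective = (nmb > 0).mean()
```

The net-monetary-benefit form avoids dividing by near-zero QALY gains, which makes the incremental cost-effectiveness ratio unstable across draws.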
37

Gong, Liyun, Lu Zhang, Ming Zhu, Miao Yu, Ross Clifford, Carol Duff, Xujiong Ye, and Stefanos Kollias. "A novel computer vision-based data driven modelling approach for person specific fall detection." Journal of Ambient Intelligence and Smart Environments, September 6, 2021, 1–15. http://dx.doi.org/10.3233/ais-210611.

Abstract:
In this paper, we propose a novel person-specific fall detection system based on a monocular camera, which can be applied to assist the independent living of an older adult living alone at home. A single camera covering the living area is used for video recordings of the elderly person's normal daily activities. From the recorded video data, the human silhouette regions in every frame are extracted using the codebook background subtraction technique. Low-dimensional representative features of the extracted silhouettes are then obtained by a convolutional neural network-based autoencoder (CNN-AE). Features obtained from the CNN-AE are applied to construct a one-class support vector machine (OCSVM) model, which is a data-driven model based on the video recordings and can be applied for fall detection. Comprehensive experimental evaluations on different people in a real home environment show that the proposed fall detection system can successfully detect different types of falls (falls towards different orientations at different positions in a real home environment) with few false alarms.
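The one-class stage of such a pipeline can be illustrated with scikit-learn's OneClassSVM; the feature vectors below are synthetic stand-ins for the CNN-AE embeddings of silhouettes (all numbers and dimensions are illustrative):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Stand-ins for CNN-AE embeddings: normal daily activity clusters tightly,
# while fall frames land far from that distribution.
normal_train = rng.normal(0.0, 1.0, size=(500, 16))
normal_test = rng.normal(0.0, 1.0, size=(50, 16))
fall_test = rng.normal(6.0, 1.0, size=(10, 16))

# Fit the one-class model on normal activity only (person-specific training,
# no fall examples needed).
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal_train)

# predict() returns +1 for inliers (normal activity), -1 for outliers (candidate falls).
falls_flagged = int((ocsvm.predict(fall_test) == -1).sum())
false_alarms = int((ocsvm.predict(normal_test) == -1).sum())
```

Training on normal activity only is what makes the approach person-specific: no labelled fall footage of the resident is required.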
38

Rødseth, Ørnulf Jan, Lars Andreas Lien Wennersberg, and Håvard Nordahl. "Towards approval of autonomous ship systems by their operational envelope." Journal of Marine Science and Technology, April 24, 2021. http://dx.doi.org/10.1007/s00773-021-00815-z.

Abstract:
Current guidelines for approval of autonomous ship systems are focused on the ships' concrete operations and their geographic area. This is a natural consequence of the link between geography and navigational complexity, but moving the ship to a new area or changing owners may require a costly re-approval. The automotive industry has introduced the Operational Design Domain (ODD), which can be used as a basis for approval. However, the ODD does not include the human control responsibilities, while most autonomous ship systems are expected to depend on sharing control responsibilities between humans and automation. We propose the definition of an operational envelope for autonomous ship systems that includes the sharing of responsibilities between human and automation, and that is general enough to allow approval of autonomous ship systems in all geographic areas and operations that fall within the envelope. We also show how the operational envelope can be defined using a system modelling language, such as the Unified Modelling Language (UML).
39

RUDAKOV, Marat L., Konstantin A. KOLVAKH, and Iana V. DERKACH. "Assessment of Environmental and Occupational Safety in Mining Industry during Underground Coal Mining." Journal of Environmental Management and Tourism 11, no. 3 (June 14, 2020). http://dx.doi.org/10.14505//jemt.v11.3(43).10.

Abstract:
Meeting the ever-increasing demands of society for energy resources is a key problem for the development of the economies of all countries of the world. At this stage, improving people's living and working conditions, developing industry and transport, and growing production based on scientific and technological progress require a continuous increase in energy use. Analysis of the environmental consequences of coal mining shows that the human impact on the environment in the course of economic activity is becoming global. Therefore, the main goal of this work is to assess environmental safety in production resulting from the decrease in rock mass during underground coal mining. The work analyses the levels of negative environmental impact of coal industry enterprises. The relationship between the key statistical indicator that affects the accident rate and the value of occupational risk is demonstrated. It is shown that, despite the general tendency towards a reduction in fatal injuries to coal mine workers, the procedure for assessing the risk caused by rock falls needs to be improved. In this regard, when assessing occupational risk, it becomes relevant to use information from multifunctional safety systems (MSS). The complex method of effective control of rock pressure is illustrated by the application of the developed methodology of computer modelling of geomechanical processes, together with instrumental and geophysical methods, for the protection and maintenance of mine workings during the development of a coal seam of the Barentsburg field.
40

Hotsyk, Olha. "Bioproductivity of the forests of the Cheremsky Nature Reserve." Ukrainian Journal of Forest and Wood Science 13, no. 3 (July 25, 2022). http://dx.doi.org/10.31548/forest.13(3).2022.32-40.

Abstract:
Climate change undermines the stability of natural ecosystems and adversely affects human life. Forest biocenoses can regulate the gas exchange of the atmosphere, accumulate and sequester carbon dioxide emissions, which are dangerous for the environment, in the phytomass components for a long time. The purpose of this study is to investigate the dynamics of bioproductivity of stands of the main forest-forming species of the Cheremsky Nature Reserve by components of phytomass and the carbon deposited in them. To solve the tasks of the study, the method of P.I. Lakyda was used. Experimental data of temporary trial plots, which fully characterize the forest massifs of the object under study, were used for modelling. The ratio coefficients Rv were calculated for stem wood (Rv(sw)); stem bark (Rv(sb)); branches (Rv(b)); leaves (needles) (Rv(l)). It was established that all above-ground components of Scots pine phytomass are described by regression equations. The coefficients of determination turned out to be insignificant, for the wood and bark of the stems of silver birch and common alder. In the structure of the phytomass of the forest stands of the reserve, the largest share (72.0%) falls on coniferous stands, a much smaller share – on soft-wood stands (26.0%) and the smallest – on hard-wood stands (2.0%). Over 13 years, the density of phytomass of stands and the carbon sequestered in it increased 1.4 times. Every year, forest biocenoses of the reserve release 6,989 tonnes of oxygen (4.2 t·ha-1) into the atmosphere. The main volume of oxygen (91.8%) is produced by coniferous stands. Based on the collected research material for stands of the main forest-forming species of the Cheremsky Nature Reserve, the following were calculated: ratios of above-ground phytomass components to their stock in the bark; mathematical models for evaluating the dynamics of phytomass components; standards for calculating oxygen productivity. 
The results of the study of the bio- and oxygen productivity of the forests of the Cheremsky Nature Reserve will be a significant contribution to effective management of the forest reserves, as well as to solving problems related to climate change at the regional and global levels.
41

Tilki, Ramazan, and Özlem Ayvaz Tunç. "THE CONTRIBUTION OF SCULPTURE COURSES IN DRAWING METHODS TO THE PERCEPTION OF THREE-DIMENSIONAL (3D) OBJECTS AND TO THE PROCESS OF APPLYING THE OBJECTS ON TWO-DIMENSIONAL (2D) SURFACES." Arts and Music in Cultural Discourse. Proceedings of the International Scientific and Practical Conference, November 29, 2016, 193. http://dx.doi.org/10.17770/amcd2016.2200.

Abstract:
It is established that drawing courses have an important place in the ateliers of the Department of Painting in Fine Arts Education, and that drawing is taught with different methods. The reason great importance is attached to the way drawing is handled is that it provides a basis for the departments that constitute the plastic arts. The instruction of drawing in terms of its purpose and principles, in other words how it can be taught, reveals the problem of method in drawing instruction. Although it is quite difficult to solve this problem due to the features of this field, a solution can be achieved by identifying the visual elements of a design, an object or a subject, determining certain specific methods and applying these methods with students. The methods and techniques applied during the drawing process and the identification of visual elements are determining factors in achieving the expected results. The aim of the sculpture and elective sculpture courses is to enable students to make connections between the surfaces that make up a whole by developing their ability to comprehend 3D forms. Sculpture Design courses, which are mainly based on modelling with clay, deal with the making of busts, reliefs and figures. Sculpture courses aim to provide opportunities for students to make their own designs and enable them to reach a level where they can realise their designed works, supported by workshops in plaster, polyester, casting, metal and stone where they can work with various materials.
Consequently, by using a living model, any student who takes sculpture courses can identify:
- the analysis of organic and geometrical forms of the human body;
- surface and form composition;
- geometric and organic composition;
- the differences in a person's face in terms of age, gender, and character.
In drawings aimed at the use of 3D geometrical objects, the use and identification of surfaces, and the objects that fall into the drawing or painting area, are an important part of the process, as is the relationship between the objects themselves and their area. In this regard, partitioning the drawing area according to purpose, and designing and planning the placement of the surfaces that make up the anatomical features of the 3D object, show the importance of the sculpture and elective sculpture courses. This study aims to offer a new perspective on the needs of drawing courses and contribute to the drawing courses conducted in related departments. It is assumed that this study will gain importance since it will provide new insight for the students and the instructor.
42

Yu, Kenny, Francis Tuerlinckx, Wolf Vanpaemel, and Jonas Zaman. "Humans display interindividual differences in the latent mechanisms underlying fear generalization behaviour." Communications Psychology 1, no. 1 (August 1, 2023). http://dx.doi.org/10.1038/s44271-023-00005-0.

Abstract:
Human generalization research aims to understand the processes underlying the transfer of prior experiences to new contexts. Generalization research predominantly relies on descriptive statistics, assumes a single generalization mechanism, interprets generalization from mono-source data, and disregards individual differences. Unfortunately, such an approach fails to disentangle various mechanisms underlying generalization behaviour and can readily result in biased conclusions regarding generalization tendencies. Therefore, we combined a computational model with multi-source data to mechanistically investigate human generalization behaviour. By simultaneously modelling learning, perceptual and generalization data at the individual level, we revealed meaningful variations in how different mechanisms contribute to generalization behaviour. The current research suggests the need for revising the theoretical and analytic foundations in the field to shift the attention away from forecasting group-level generalization behaviour and toward understanding how such phenomena emerge at the individual level. This raises the question for future research whether a mechanism-specific differential diagnosis may be beneficial for generalization-related psychiatric disorders.
43

McMillan, Hilary, Gemma Coxon, Ryoko Araki, Saskia Salwey, Christa Kelleher, Yanchen Zheng, Wouter Knoben, Sebastian Gnann, Jan Seibert, and Lauren Bolotin. "When good signatures go bad: Applying hydrologic signatures in large sample studies." Hydrological Processes 37, no. 9 (September 2023). http://dx.doi.org/10.1002/hyp.14987.

Abstract:
Hydrologic signatures are quantitative metrics that describe streamflow statistics and dynamics. Signatures have many applications, including assessing habitat suitability and hydrologic alteration, calibrating and evaluating hydrologic models, defining similarity between watersheds and investigating watershed processes. Increasingly, signatures are being used in large sample studies to guide flow management and modelling at continental scales. Using signatures in studies involving 1000s of watersheds brings new challenges as it becomes impractical to examine signature parameters and behaviour in each watershed. For example, we might wish to check that signatures describing flood event characteristics have correctly identified event periods, that signature values have not been biased by data errors, or that human and natural influences on signature values have been correctly interpreted. In this commentary, we draw from our collective experience to present case studies where naïve application of signatures fails to correctly identify streamflow dynamics. These include unusual precipitation or flow regimes, data quality issues, and signature use in human-influenced watersheds. We conclude by providing guidance and recommendations on applying signatures in large sample studies.
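As an illustration of what a hydrologic signature is, two widely used ones can be computed from a daily flow series in a few lines (synthetic data; these are generic signatures, not the specific set discussed in the paper):

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic daily streamflow: smooth seasonal baseflow plus occasional storm spikes.
days = np.arange(365)
baseflow = 5.0 + 2.0 * np.sin(2 * np.pi * days / 365)
storms = rng.exponential(1.0, size=365) * (rng.random(365) < 0.1)
q = baseflow + 10.0 * storms  # m^3/s

# Richards-Baker flashiness index: sum of day-to-day flow changes over total flow;
# flashy (storm-driven) regimes score higher than baseflow-dominated ones.
rb_flashiness = np.abs(np.diff(q)).sum() / q[1:].sum()

# High-flow ratio: 95th-percentile flow over median flow, a measure of regime peakiness.
q95_over_q50 = np.percentile(q, 95) / np.percentile(q, 50)
```

In a large sample study these two numbers would be computed per watershed and compared across thousands of gauges, which is exactly where the per-watershed sanity checks the authors describe become impractical.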
44

Gildea, Kevin, Daniel Hall, Christopher Cherry, and Ciaran Simms. "Forward dynamics computational modelling of a cyclist fall with the inclusion of protective response using deep learning-based human pose estimation." Journal of Biomechanics, January 2024, 111959. http://dx.doi.org/10.1016/j.jbiomech.2024.111959.

45

Lei, Siyue, Bin Tang, Yanhua Chen, Mingfu Zhao, Yifei Xu, and Zourong Long. "Temporal channel reconfiguration multi‐graph convolution network for skeleton‐based action recognition." IET Computer Vision, April 17, 2024. http://dx.doi.org/10.1049/cvi2.12279.

Abstract:
Skeleton-based action recognition has received much attention and achieved remarkable achievements in the field of human action recognition. In time series action prediction for different scales, existing methods mainly focus on attention mechanisms to enhance modelling capabilities in spatial dimensions. However, this approach strongly depends on the local information of a single input feature and fails to facilitate the flow of information between channels. To address these issues, the authors propose a novel Temporal Channel Reconfiguration Multi-Graph Convolution Network (TRMGCN). In the temporal convolution part, the authors designed a module called Temporal Channel Fusion with Guidance (TCFG) to capture important temporal information within channels at different scales and avoid ignoring cross-spatio-temporal dependencies among joints. In the graph convolution part, the authors propose Top-Down Attention Multi-graph Independent Convolution (TD-MIG), which uses multi-graph independent convolution to learn the topological graph feature for different length time series. Top-down attention is introduced for spatial and channel modulation to facilitate information flow in channels that do not establish topological relationships. Experimental results on the large-scale datasets NTU-RGB+D60 and 120, as well as UAV-Human, demonstrate that TRMGCN exhibits advanced performance and capabilities. Furthermore, experiments on the smaller dataset NW-UCLA have indicated that the authors' model possesses strong generalisation abilities.
46

Huber, Eva, Sebastian Sauppe, Arrate Isasi-Isasmendi, Ina Bornkessel-Schlesewsky, Paola Merlo, and Balthasar Bickel. "Surprisal from language models can predict ERPs in processing predicate-argument structures only if enriched by an Agent Preference principle." Neurobiology of Language, September 8, 2023, 1–68. http://dx.doi.org/10.1162/nol_a_00121.

Abstract:
Language models based on artificial neural networks increasingly capture key aspects of how humans process sentences. Most notably, model-based surprisals predict event-related potentials such as N400 amplitudes during parsing. Assuming that these models represent realistic estimates of human linguistic experience, their success in modelling language processing raises the possibility that the human processing system relies on no other principles than the general architecture of language models and on sufficient linguistic input. Here, we test this hypothesis on N400 effects observed during the processing of verb-final sentences in German, Basque, and Hindi. By stacking Bayesian generalised additive models, we show that, in each language, N400 amplitudes and topographies in the region of the verb are best predicted when model-based surprisals are complemented by an Agent Preference principle that transiently interprets initial role-ambiguous NPs as agents, leading to reanalysis when this interpretation fails. Our findings demonstrate the need for this principle independently of usage frequencies and structural differences between languages. The principle has an unequal force, however. Compared to surprisal, its effect is weakest in German, stronger in Hindi, and still stronger in Basque. This gradient is correlated with the extent to which grammars allow unmarked NPs to be patients, a structural feature that boosts reanalysis effects. We conclude that language models gain more neurobiological plausibility by incorporating an Agent Preference. Conversely, theories of human processing profit from incorporating surprisal estimates in addition to principles like the Agent Preference, which arguably have distinct evolutionary roots.
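Surprisal in such studies is the negative log of the probability a language model assigns to a word given its context; higher surprisal predicts larger N400-like responses. A minimal illustration (the conditional probabilities below are invented for the example, not taken from any model in the paper):

```python
import numpy as np

# Surprisal of a word w in context c: -log2 p(w | c), measured in bits.
# Made-up next-word probabilities for one context, for illustration only.
p_next_word = {"expected": 0.40, "plausible": 0.05, "anomalous": 0.001}
surprisal = {w: -np.log2(p) for w, p in p_next_word.items()}
# The anomalous continuation carries far more surprisal than the expected one,
# mirroring the graded N400 amplitudes the paper models.
```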
47

Cristiani, Demetrio, Claudio Sbarufatti, Francesco Cadini, and Marco Giglio. "Fatigue damage diagnosis and prognosis of an aeronautical structure based on surrogate modelling and particle filter." Structural Health Monitoring, November 28, 2020, 147592172097155. http://dx.doi.org/10.1177/1475921720971551.

Abstract:
A key issue affecting the performance of every human-conceived engineering system is its degradation, fatigue crack growth being one of the major structural deterioration phenomena. Fatigue crack growth is usually modelled as a stochastic process: uncertainty sources lie both in the item and in the variability of the physical degradation process. Fatigue crack growth deserves close attention, especially considering that condition-based maintenance methodologies are currently experiencing a major drive to increase their technology readiness level, requiring validated diagnostic and prognostic methodologies capable of operating online and in real time. In this regard, particle filters provide a consistent Bayesian framework, where the posterior distribution of the system degradation state is recursively approximated based on a time-growing stream of observations measuring the system response, enabling, in general, increasingly informed lifetime estimates. However, the real-time operation capability of such methods is hindered by their requirements in terms of computational power, mainly due to the complexity of the structural models they rely upon. Within this work, a comprehensive particle filter framework is proposed, able to deal with fatigue crack growth uncertainty sources while simultaneously addressing the computational burden. The algorithm structure enables simultaneous diagnosis and prognosis of fatigue crack growth, while the adoption of the augmented state formulation makes it possible to address scenarios where the degradation process fails to meet the degradation model ruling the particle filter. Artificial neural network-based surrogate modelling is adopted at different stages and embedded within the particle filter algorithm, relieving the computational burden associated with the evaluation of the trajectory likelihoods and enabling a fast estimation of the remaining useful life. Both simulated and experimental data sets regarding fatigue crack growth in an aluminium aeronautical panel are used for algorithm testing, additionally proving its validity and effectiveness by means of common prognostic performance metrics.
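The paper's framework couples surrogate models with a particle filter; stripped to essentials, an augmented-state particle filter for a toy Paris-law crack growth model might look like the numpy-only sketch below (all constants, noise levels and the simplified growth law are illustrative, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Paris-law growth step: da/dN = C * (dK)^m with dK ~ sqrt(pi * a).
def grow(a, C, m=2.0, dN=100.0):
    return a + C * np.sqrt(np.pi * a) ** m * dN

# Ground-truth trajectory with noisy crack-length observations.
true_C, sigma = 1e-4, 0.05
a_true, observations = 1.0, []
for _ in range(20):
    a_true = grow(a_true, true_C)
    observations.append(a_true + rng.normal(0, sigma))

# Augmented state: each particle carries the crack length AND its own C estimate,
# so the model parameter is estimated jointly with the degradation state.
n = 2000
particles_a = np.full(n, 1.0)
particles_C = rng.uniform(5e-5, 2e-4, size=n)

for z in observations:
    particles_a = grow(particles_a, particles_C)          # propagate
    w = np.exp(-0.5 * ((z - particles_a) / sigma) ** 2)   # measurement likelihood
    w = (w + 1e-12) / (w + 1e-12).sum()                   # normalise (guard against zeros)
    idx = rng.choice(n, size=n, p=w)                      # resample
    particles_a, particles_C = particles_a[idx], particles_C[idx]
    # Jitter the parameter to avoid sample impoverishment after resampling.
    particles_C = np.abs(particles_C + rng.normal(0, 2e-6, size=n))

estimate_a = particles_a.mean()   # diagnosed crack length
estimate_C = particles_C.mean()   # inferred growth-law parameter
```

In the paper, the expensive structural model evaluated inside the likelihood step is what the neural-network surrogate replaces; here a closed-form toy law plays that role.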
48

Alim, Affan, Abdul Rafay, and Imran Naseem. "PoGB-Pred: Prediction of Antifreeze Proteins Sequences using Amino Acid Composition with Feature Selection followed by a Sequential based Ensemble Approach." Current Bioinformatics 15 (July 7, 2020). http://dx.doi.org/10.2174/1574893615999200707141926.

Abstract:
Background: Proteins contribute significantly to every task of cellular life. Their functions encompass the building and repairing of tissues in human bodies and other organisms; hence they are the building blocks of bones, muscles, cartilage, skin, and blood. Similarly, antifreeze proteins (AFPs) are of prime significance for organisms that live in very cold areas. With the help of these proteins, cold-water organisms can survive below-zero temperatures and resist the water crystallization process, which may cause rupture of internal cells and tissues. AFPs have attracted attention and interest in the food industry and cryopreservation.

Objective: With the increasing availability of genomic sequence data of proteins, an automated and sophisticated tool for AFP recognition and identification is in dire need. The sequences and structures of AFPs are highly distinct; therefore, most of the proposed methods fail to show promising results on different structures. A consolidated method is proposed to produce competitive performance on highly distinct AFP structures.

Methods: In this study, we propose to use the machine learning-based algorithms Principal Component Analysis (PCA) followed by Gradient Boosting (GB) for antifreeze protein identification. To analyse the performance and validation of the proposed model, various combinations of two segment compositions of amino acids and dipeptides are used. PCA, in particular, is proposed for dimension reduction while retaining high variance in the data, followed by an ensemble method named gradient boosting for modelling and classification.

Results: The proposed method obtained superior performance on the PDB, Pfam and UniProt datasets compared with the RAFP-Pred method. In experiment 3, by utilizing only 150 PCA components, a high accuracy of 89.63 was achieved, which is superior to the 87.41 reported for the RAFP-Pred method utilizing 300 significant features. Experiment 2 was conducted using two different datasets: non-AFPs from the PISCES server and AFPs from the Protein Data Bank. In this experiment, our proposed method attained a high sensitivity of 79.16, which is 12.50 better than the state-of-the-art RAFP-Pred method.

Conclusion: AFPs have a common function with distinct structures. Therefore, a single model developed for different sequences often fails for AFPs. Robust results have been shown by our proposed model on a diversity of training and testing datasets. The results of the proposed model outperformed previous AFP prediction methods such as RAFP-Pred. Our model consists of PCA for dimension reduction followed by gradient boosting for classification. Due to its simplicity, scalability and high performance, our model can be easily extended for analyzing proteomic and genomic datasets.
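The PCA-then-gradient-boosting pattern described above can be sketched with scikit-learn; the synthetic features below merely stand in for amino-acid/dipeptide composition vectors, and the component count and data sizes are illustrative, not the paper's settings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
# Synthetic stand-in for composition features: 100 dimensions, with a few
# high-variance informative directions separating the two classes (AFP vs non-AFP).
X = rng.normal(size=(400, 100))
X[:, :5] *= 6.0                                  # informative, high-variance features
y = (X[:, :5].sum(axis=1) > 0).astype(int)       # label driven by those features

# PCA keeps the high-variance components; gradient boosting classifies on them.
model = make_pipeline(
    PCA(n_components=20),
    GradientBoostingClassifier(random_state=0),
)
model.fit(X[:300], y[:300])
accuracy = model.score(X[300:], y[300:])
```

Because PCA ranks directions by variance, this sketch only works when the class signal lives in high-variance directions, which is the implicit assumption behind using PCA as the reduction step.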
49

Cham, Karen, and Jeffrey Johnson. "Complexity Theory." M/C Journal 10, no. 3 (June 1, 2007). http://dx.doi.org/10.5204/mcj.2672.

Abstract:
Complex systems are an invention of the universe. It is not at all clear that science has an a priori primacy claim to the study of complex systems. (Galanter 5)

Introduction
In popular dialogues, describing a system as "complex" is often the point of resignation, implying that the system cannot be sufficiently described, predicted nor managed. Transport networks, management infrastructure and supply chain logistics are all often described in this way. In socio-cultural terms "complex" is used to describe those humanistic systems that are "intricate, involved, complicated, dynamic, multi-dimensional, interconnected systems [such as] transnational citizenship, communities, identities, multiple belongings, overlapping geographies and competing histories" (Cahir & James). Academic dialogues have begun to explore the collective behaviours of complex systems to define a complex system specifically as an adaptive one; i.e. a system that demonstrates 'self-organising' principles and 'emergent' properties. Based upon the key principles of interaction and emergence in relation to adaptive and self-organising systems in cultural artifacts and processes, this paper will argue that complex systems are cultural systems. By introducing generic principles of complex systems, and looking at the exploration of such principles in art, design and media research, this paper argues that a science of cultural systems as part of complex systems theory is the postmodern science for the digital age. Furthermore, such a science was predicated by post-structuralism and has been manifest in art, design and media practice since the late 1960s.

Complex Systems Theory
Complexity theory grew out of systems theory, an holistic approach to analysis that views whole systems based upon the links and interactions between the component parts and their relationship to each other and the environment within which they exist.
This stands in stark contrast to conventional science, which is based upon Descartes's reductionism, where the aim is to analyse systems by reducing something to its component parts (Wilson 3). As systems thinking is concerned with relationships more than elements, it proposes that in complex systems, small catalysts can cause large changes and that a change in one area of a system can adversely affect another. As is apparent, systems theory is a way of thinking rather than a specific set of rules, and similarly there is no single unified Theory of Complexity; rather, several different theories have arisen from the natural sciences, mathematics and computing. As such, the study of complex systems is very interdisciplinary and encompasses more than one theoretical framework. Whilst key ideas of complexity theory developed through artificial intelligence and robotics research, other important contributions came from thermodynamics, biology, sociology, physics, economics and law. In her volume for the Elsevier Advanced Management Series, "Complex Systems and Evolutionary Perspectives on Organisations", Eve Mitleton-Kelly provides a comprehensive overview of this evolution as five main areas of research:
- complex adaptive systems
- dissipative structures
- autopoiesis (non-equilibrium) social systems
- chaos theory
- path dependence
Here, Mitleton-Kelly points out that relatively little work has been done on developing a specific theory of complex social systems, despite much interest in complexity and its application to management (Mitleton-Kelly 4). To this end, she goes on to define the term "complex evolving system" as more appropriate to the field than "complex adaptive system" and suggests that the term "complex behaviour" is thus more useful in social contexts (Mitleton-Kelly).
For our purpose here, “complex systems” will be the general term used to describe those systems that are diverse and made up of multiple interdependent elements, and that are often ‘adaptive’, in that they have the capacity to change and learn from events. This is in itself both ‘evolutionary’ and ‘behavioural’ and can be understood as emerging from the interaction of autonomous agents – especially people. Some generic principles of complex systems defined by Mitleton-Kelly that are of concern here are: self-organisation; emergence; interdependence; feedback; the space of possibilities; co-evolution; and the creation of new order. Whilst the behaviours of complex systems clearly do not fall into our conventional top-down perception of management and production, anticipating such behaviours is becoming more and more essential for products, processes and policies. For example, compare the traditional top-down model of news generation, distribution and consumption to the “emerging media eco-system” (Bowman and Willis 14). Figure 1 (Bowman & Willis 10) Figure 2 (Bowman & Willis 12) To the traditional news organisations, such a “democratization of production” (McLuhan 230) has been a huge cause for concern. The agencies once solely responsible for the representation of reality are now lost in a global miasma of competing perspectives. Can we anticipate and account for complex behaviours? Eve Mitleton-Kelly states that “if organisations are understood as complex evolving systems co-evolving as part of a social ‘ecosystem’, then that changed perspective changes ways of acting and relating which lead to a different way of working. Thus, management strategy changes, and our organizational design paradigms evolve as new types of relationships and ways of working provide the conditions for the emergence of new organisational forms” (Mitleton-Kelly 6).
Complexity in Design It is thus through design practice and processes that discovering methods for anticipating complex systems behaviours seems most possible. The Embracing Complexity in Design (ECiD) research programme is a contemporary interdisciplinary research cluster consisting of academics and designers from architectural engineering, robotics, geography, digital media, sustainable design, and computing, aiming to explore the possibility of transdisciplinary principles of complexity in design. Overarching this work is the conviction that design can be seen as a model for complex systems researchers motivated by applying complexity science in particular domains. Key areas in which design and complexity interact have been established by this research cluster. Most immediately, many designed products and systems are inherently complex to design in the ordinary sense. For example, when designing vehicles, architecture or microchips, designers need to understand complex dynamic processes used to fabricate and manufacture products and systems. The social and economic context of design is also complex, from market economics and legal regulation to social trends and mass culture. The process of designing can also involve complex social dynamics, with many people processing and exchanging complex heterogeneous information over complex human and communication networks, in the context of many changing constraints. Current key research questions are: how can the methods of complex systems science inform designers? How can design inform research into complex systems? Whilst ECiD acknowledges that the theoretical and methodological relations between complexity science and design need further exploration and enquiry if such questions are to be answered effectively, there are no reliable precedents for such an activity across the sciences and the arts in general.
Indeed, even in areas where a convergence of humanities methodology with scientific practice might seem to be most pertinent, examples are few and far between. In his paper “Post Structuralism, Hypertext & the World Wide Web”, Luke Tredinnick states that “despite the concentration of post-structuralism on text and texts, the study of information has largely failed to exploit post-structuralist theory” (Tredinnick 5). Yet it is surely in the convergence of art and design with computation and the media that a search for practical trans-metadisciplinary methodologies might be most fruitful. It is in design for interactive media, where algorithms meet graphics, where the user can interact, adapt and amend, that self-organisation, emergence, interdependence, feedback, the space of possibilities, co-evolution and the creation of new order are embraced on a day-to-day basis by designers. A digitally interactive environment such as the World Wide Web clearly demonstrates all the key aspects of a complex system. Indeed, it has already been described as a ‘complexity machine’ (Qvortup 9). It is important to remember that this ‘complexity machine’ has been designed. It is an intentional facility. It may display all the characteristics of complexity but, whilst some of its attributes are most demonstrative of self-organisation and emergence, the Internet itself has not emerged spontaneously. For example, Tredinnick details the evolution of the World Wide Web through the Memex machine of Vannevar Bush, through Ted Nelson’s hypertext system Xanadu, to Tim Berners-Lee’s Enquire (Tredinnick 3). The Internet was engineered. So, whilst we may not be able to entirely predict complex behaviour, we can, and do, quite clearly design for it. When designing digitally interactive artifacts we design parameters or coordinates to define the space within which a conceptual process will take place.
We can never begin to predict precisely what those processes might become through interaction, emergence and self-organisation, but we can establish conceptual parameters that guide and delineate the space of possibilities. Indeed this fact is so transparently obvious that many commentators in the humanities have been pushed to remark that interaction is merely interpretation, and so-called new media is not new at all; that one interacts with a book in much the same way as a digital artifact. After all, post-structuralist theory had established the “death of the author” in the 1970s – the a priori that all cultural artifacts are open to interpretation, where all meanings must be completed by the reader. The “open work” (Eco 6) has been an established postmodern concept for over 30 years and is commonly recognised as a feature of surrealist montage, poetry, the writings of James Joyce, even advertising design, where a purposive space for engagement and interpretation of a message is designated, without which the communication does not “work”. However, this concept is also most successfully employed in relation to installation art and, more recently, interactive art as a reflection of the artist’s conscious decision to leave part of a work open to interpretation and/or interaction. Art & Complex Systems One of the key projects of Embracing Complexity in Design has been to look at the relationship between art and complex systems. There is a relatively well established history of exploring art objects as complex systems in themselves that finds its origins in the systems art movement of the 1970s.
In his paper “Observing ‘Systems Art’ from a Systems-Theoretical Perspective”, Francis Halsall defines systems art as “emerging in the 1960s and 1970s as a new paradigm in artistic practice … displaying an interest in the aesthetics of networks, the exploitation of new technology and New Media, unstable or de-materialised physicality, the prioritising of non-visual aspects, and an engagement (often politicised) with the institutional systems of support (such as the gallery, discourse, or the market) within which it occurs” (Halsall 7). More recently, “Open Systems: Rethinking Art c.1970”, at Tate Modern, London, focuses upon systems artists’ “rejection of art’s traditional focus on the object, to wide-ranging experiments with media that included dance, performance and…film & video” (De Salvo 3). Artists include Andy Warhol, Richard Long, Gilbert & George, Sol Lewitt, Eva Hesse and Bruce Nauman. In 2002, the Samuel Dorsky Museum of Art, New York, held an international exhibition entitled “Complexity; Art & Complex Systems”, that was concerned with “art as a distinct discipline offer[ing] its own unique approache[s] and epistemic standards in the consideration of complexity” (Galanter and Levy 5), and the organisers go on to describe four ways in which artists engage the realm of complexity: presentations of natural complex phenomena that transcend conventional scientific visualisation; descriptive systems which describe complex systems in an innovative and often idiosyncratic way; commentary on complexity science itself; and technical applications of genetic algorithms, neural networks and a-life. ECiD artist Julian Burton makes work that visualises how companies operate in specific relation to their approach to change and innovation. He is a strategic artist and facilitator who makes “pictures of problems to help people talk about them” (Burton).
Clients include public and private sector organisations such as Barclays, Shell, Prudential, KPMG and the NHS. He is quoted as saying “Pictures are a powerful way to engage and focus a group’s attention on crucial issues and challenges, and enable them to grasp complex situations quickly. I try and create visual catalysts that capture the major themes of a workshop, meeting or strategy and re-present them in an engaging way to provoke lively conversations” (Burton). This is a simple and direct method of using art as a knowledge elicitation tool that falls into the first and second categories above. The third category is demonstrated by the groundbreaking TechnoSphere, which was specifically inspired by complexity theory, landscape and artificial life. Launched in 1995 as an Arts Council funded online digital environment, it was created by Jane Prophet and Gordon Selley. TechnoSphere is a virtual world, populated by artificial life forms created by users of the World Wide Web. The digital ecology of the 3D world, housed on a server, depends on the participation of an on-line public who access the world via the Internet. At the time of writing it has attracted over 100,000 users who have created over a million creatures. The artistic exploration of technical applications is by default a key field for researching the convergence of trans-metadisciplinary methodologies. Troy Innocent’s lifeSigns evolves multiple digital media languages “expressed as a virtual world – through form, structure, colour, sound, motion, surface and behaviour” (Innocent). The work explores the idea of “emergent language through play – the idea that new meanings may be generated through interaction between human and digital agents”. Thus this artwork combines three areas of converging research – artificial life, computational semiotics and digital games. In his paper “What Is Generative Art?
Complexity Theory as a Context for Art Theory”, Philip Galanter describes all art as generative on the basis that it is created from the application of rules. Yet, as demonstrated above, what is significantly different and important about digital interactivity, as opposed to its predecessor, interpretation, is its provision of a graphical user interface (GUI) to component parts of a text such as symbol, metaphor, narrative, etc., for the multiple “authors” and the multiple “readers” in a digitally interactive space of possibility. This offers us tangible, instantaneous reproduction and dissemination of interpretations of an artwork. Conclusion: Digital Interactivity – A Complex Medium Digital interaction of any sort is thus a graphic model of the complex process of communication. Here, complexity does not need deconstructing, representing nor modelling, as the aesthetics (as in apprehended by the senses) of the graphical user interface conveniently come first. Design for digital interactive media is thus design for complex adaptive systems. The theoretical and methodological relations between complexity science and design can clearly be expounded especially well through post-structuralism. The work of Barthes, Derrida and Foucault offers us the notion of all cultural artefacts as texts or systems of signs, whose meanings are not fixed but rather sustained by networks of relationships. Implemented in a digital environment, post-structuralist theory is tangible complexity. Strangely, whilst Philip Galanter states that science has no necessary overreaching claim to the study of complexity, he then argues conversely that “contemporary art theory rooted in skeptical continental philosophy [reduces] art to social construction [as] postmodernism, deconstruction and critical theory [are] notoriously elusive, slippery, and overlapping terms and ideas…that in fact [are] in the business of destabilising apparently clear and universal propositions” (4).
This seems to imply that for Galanter, postmodern rejections of grand narratives will necessarily exclude the “new scientific paradigm” of complexity, a paradigm that he himself is looking to be universal. Whilst he cites Lyotard (6) describing both political and linguistic reasons why postmodern art celebrates plurality, denying any progress towards singular totalising views, he fails to consider what happens when that singular totalising view incorporates interactivity. Surely complexity is pluralistic by its very nature? In the same vein, if language for Derrida is “an unfixed system of traces and differences … regardless of the intent of the authored texts … with multiple equally legitimate meanings” (Galanter 7), then I have heard no better description of the signifiers, signifieds, connotations and denotations of digital culture. Complexity in its entirety can also be conversely understood as the impact of digital interactivity upon culture per se, which has a complex causal relation in itself; Qvortup’s notion of a “communications event” (9), such as the Danish publication of the Mohammed cartoons, falls into this category. Yet a complex causality could be traced further into cultural processes, enlightening media theory: from the relationship between advertising campaigns and brand development, to the exposure and trajectory of the celebrity, describing the evolution of visual language in media cultures and informing the relationship between exposure to representation and behaviour. In digital interaction the terms art, design and media converge into a process-driven, performative event that demonstrates emergence through autopoietic processes within a designated space of possibility. By insisting that all artwork is generative, Galanter, like many other writers, negates the medium entirely, which allows him to insist that generative art is “ideologically neutral” (Galanter 10).
Generative art, like all digitally interactive artifacts, is not neutral but rather ideologically plural. Thus, if one integrates Qvortup’s (8) delineation of medium theory and complexity theory we may have what we need: a first theory of a complex medium. Through interactive media complexity theory is the first postmodern science; the first science of culture. References Bowman, Shane, and Chris Willis. We Media. 21 Sep. 2003. 9 March 2007 <http://www.hypergene.net/wemedia/weblog.php>. Burton, Julian. “Hedron People.” 9 March 2007 <http://www.hedron.com/network/assoc.php4?associate_id=14>. Cahir, Jayde, and Sarah James. “Complex: Call for Papers.” M/C Journal 9 Sep. 2006. 7 March 2007 <http://journal.media-culture.org.au/journal/upcoming.php>. De Salvo, Donna, ed. Open Systems: Rethinking Art c. 1970. London: Tate Gallery Press, 2005. Eco, Umberto. The Open Work. Cambridge, Mass.: Harvard UP, 1989. Galanter, Phillip, and Ellen K. Levy. Complexity: Art & Complex Systems. SDMA Gallery Guide, 2002. Galanter, Phillip. “Against Reductionism: Science, Complexity, Art & Complexity Studies.” 2003. 9 March 2007 <http://isce.edu/ISCE_Group_Site/web-content/ISCE_Events/Norwood_2002/Norwood_2002_Papers/Galanter.pdf>. Halsall, Francis. “Observing ‘Systems-Art’ from a Systems-Theoretical Perspective.” CHArt 2005. 9 March 2007 <http://www.chart.ac.uk/chart2005/abstracts/halsall.htm>. Innocent, Troy. “Life Signs.” 9 March 2007 <http://www.iconica.org/main.htm>. Johnson, Jeffrey. “Embracing Complexity in Design (ECiD).” 2007. 9 March 2007 <http://www.complexityanddesign.net/>. Lyotard, Jean-Francois. The Postmodern Condition. Manchester: Manchester UP, 1984. McLuhan, Marshall. The Gutenberg Galaxy: The Making of Typographic Man. Toronto: U of Toronto P, 1962. Mitleton-Kelly, Eve, ed. Complex Systems and Evolutionary Perspectives on Organisations. Elsevier Advanced Management Series, 2003. Prophet, Jane. “Jane Prophet.” 9 March 2007 <http://www.janeprophet.co.uk/>. Qvortup, Lars.
“Understanding New Digital Media.” European Journal of Communication 21.3 (2006): 345-356. Tredinnick, Luke. “Post Structuralism, Hypertext & the World Wide Web.” Aslib 59.2 (2007): 169-186. Wilson, Edward Osborne. Consilience: The Unity of Knowledge. New York: A.A. Knopf, 1998. Citation reference for this article MLA Style Cham, Karen, and Jeffrey Johnson. "Complexity Theory: A Science of Cultural Systems?" M/C Journal 10.3 (2007). <http://journal.media-culture.org.au/0706/08-cham-johnson.php>. APA Style Cham, K., and J. Johnson. (Jun. 2007) "Complexity Theory: A Science of Cultural Systems?" M/C Journal, 10(3). Retrieved from <http://journal.media-culture.org.au/0706/08-cham-johnson.php>.
50

Shaw, Janice Marion. "The Curious Transformation of Boy to Computer." M/C Journal 19, no. 4 (August 31, 2016). http://dx.doi.org/10.5204/mcj.1130.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
Mark Haddon’s The Curious Incident of the Dog in the Night-Time has achieved success as “the new Rain Man” or “the new definitive, popular account of the autistic condition” (Burks-Abbott 294). Integral to its favourable reception is the way it conflates the autistic main character, the fifteen-year-old narrator Christopher Boone, with the savant, or individual who exhibits both neurological problems and giftedness, thereby engaging with the way autism is presented in popular culture. In a variety of contemporary films and television series, autism has been transformed from a disability to a form of giftedness by relating it to abilities associated in contemporary media with a genius, in particular by invoking the metaphor of an autistic mind as a type of computer. As a result, the book engages with the current association of giftedness in mathematics and science with social awkwardness and isolation as constructed in popular culture: in idiomatic terms, the genius “nerd” figure characterised by an uncertain, adolescent approach to social contact (Kendall 353). The disablement of the character is, then, lessened so that the idea of being “special,” continually evoked throughout the text, has a transformative function that is related less to the special needs of those with a disability and more to the common element in adolescent fiction of longing for extraordinary power and control through being a special, gifted individual. The Curious Incident of the Dog in the Night-Time relates the protagonist, Christopher, to Sherlock Holmes and his methods of detection, specifically through the title being taken from a story by Conan Doyle, “Silver Blaze,” in which the “curious incident” referred to is that the dog did nothing in the night. In the original story, that the dog did not bark or react to an intruder was a clue that the person was known to the animal, so allowing Holmes to solve the crime by a process of deduction. 
Christopher copies these traditional methods of the classical detective to solve his personal mystery, that of who killed a neighbour’s dog, Wellington. The adoption of this title allows a double irony to emerge. Christopher’s attempts to emulate Holmes in his approach to crime are predicated on his assumption of his likeness to the model of the classical detective as he states, “I think that if I were a proper detective he is the kind of detective I would be,” pointing out the similarity of their powers of observation and his ability, like Holmes, to “detach his mind at will” as well as his capacity to find patterns in events (92). Through the novel, these attributes are aligned with his autism, constructing a trope of his disability conferring extraordinary abilities that are predicated on a computer-like detachment and precision in his method of thinking. The accessible narrative of the autistic Christopher gives the reader the impression of being able to understand the perspective of an individual with a spectrum disorder. In this way, the text not only engages with, but contributes to the construction of this disability in current popular culture as merely an extension of giftedness, especially in mathematics, and an associated unwillingness to communicate. Indeed, according to Raoul Eshelman, “one of its most engaging narrative devices is to make us identify with a mentally impaired narrator who is manifestly not interested in identifying either with us or anyone else” (1). The main character’s reference to mathematical and scientific ideas exploits an interest in giftedness already established by popular literature and film, and engages with a transformation effected in popular culture of the genius as autistic, and its corollary of an autistic person as potentially a genius. 
Such a construction ranges from fictional characters like Sheldon in The Big Bang Theory, Charlie and his physicist colleagues in Numb3rs, and Raymond Babbitt in Rain Man, to real life characters or representative figures in reality series and feature films such as x + y, The Imitation Game, The Big Short, and the television program Beauty and the Geek. While never referring specifically to autism, all the real or fictional representations contribute to the construction of a stereotype in which behaviours on the autistic spectrum are linked to a talent in mathematics and the sciences. In addition to this, detectives in the classical crime fiction alluded to in the novel typically exhibit traits of superhuman powers of deduction, pattern making, and problem solving that engage with the popular notion of genius in general and mathematics in particular by possessing a mind like a computer. Such detectives from current television series as Saga from The Bridge and Spencer Reid from Criminal Minds exhibit distance, coldness, and lack of social awareness or empathy with others, and this is presented as the basis of their extraordinary ability to discern patterns and solve crime. Spencer Reid, for example, has three PhDs in Science disciplines and Mathematics. Charlie in the television series Numb3rs is also a genius who uses his mathematical abilities to not only find the solution to crime but also explain the maths behind it to his FBI colleagues, and, in conjunction, the audience. But the character with the clearest association to Christopher is, naturally, Sherlock Holmes, both as constructed in Conan Doyle’s original text and the current adaptations and transformations of it. 
The television series Sherlock and Elementary, as well as the films Sherlock Holmes and Sherlock Holmes: A Game of Shadows, all invoke a version of Holmes in which his powers of deduction are associated with symptoms to be found in a spectrum disorder. Like Christopher, the classical detective is characterised by being cold, emotionless, distant, socially inept, and isolated, but also keenly observant, analytical, and scientific; one who approaches the crime as a puzzle to be solved (Cawelti 43) with computer-like precision. In what is considered to be the original detective story, The Murders in the Rue Morgue, Poe included a “pseudo-mathematical logic in his literary scenario” (Platten 255). In Conan Doyle’s stories, Holmes, too, adopts a mathematical and scientific approach to construct patterns from clues that he alone can discern, and thereby solve the crime. The depiction of investigators in contemporary media such as Charlie in Numb3rs engages with these origins so that he is objective, dispassionate, and able to relate to real-world problems only through the filter of mathematical formulae. Christopher is presented similarly by engaging with the idea of the detective as implied savant and relying on an ability to discern patterns for successful crime solving. The book links the disabling behaviours of autism with the savant, so that the stereotype of the mystic displaying both disability and giftedness in fiction of earlier ages has been transformed in contemporary literature to a figure with extraordinary powers related both to autism and to the contemporary form of mysticism: innate mathematical ability and computer-style calculation.
Allied with what Murray terms the “unknown and ambiguous nature” of autism, it is characterised as “the alien within the human, the mystical within the rational, the ultimate enigma” (25) in a way that is in keeping with the current fascination with the nature of genius and its association with being “special,” a term continually evoked and discussed throughout the book by the main character. The chapters on scientific ideas relate to Christopher’s world view, filtered through a mathematical and analytical approach to life and relationships with other people. Christopher examines beliefs such as the concept of humanity as superior to other animals, and the idea of religion and creationism, that is, the idea of humanity itself as special, with a cold and logical approach. He similarly discusses the idea of the individual person as special, linking this to a metaphor of the human mind being a computer (203, 148). Christopher’s narrow perspective as a result of his autism is not presented as disabling so much as protective, because the metaphorical connection of his viewpoint to a computer provides him with distance. Although initially Christopher fails to realise the significance of events, this allows him to be “switched off” (103) from events that he finds traumatising. The transformative metaphor of an autistic individual thinking like a computer is also invoked through Christopher’s explanation of “why people think that their brains are special, and different from computers” (147). Indeed, both in terms of his tendency to retreat or by “pressing CTRL + ALT + DEL and shutting down programs and turning the computer off and rebooting” (178) in times of stress, Christopher metaphorically views himself as a computer. Such a perspective invokes yet another popular cultural reference through the allusion to the human brain as “Captain Jean-Luc Picard in Star Trek: The Next Generation, sitting in his captain’s seat looking at a big screen” (147).
But more importantly, the explanation refers to the basic premise of the book, that the text offers access to a condition that is inherently unknowable, but able to be understood by the reader through metaphor, often based on computers or technology as a result of a popular construction of autism that “the condition is the product of a brain in which the hard drive is incorrectly formatted” (Murray 25). Throughout the novel, the notion of “special” is presented as a trope for those with a disability, but as the protagonist, Christopher, points out, everyone is special in some way, so the whole idea of a disability as disabling is problematised throughout the text, while its associations of giftedness are upheld. Christopher’s disability, never actually designated as Asperger’s Syndrome or any type of spectrum disorder, is transformed into a protective mechanism that shields him from problematic social relationships of which he is unaware, but that the less naïve reader can well discern. In this way, rather than a limitation, the main character’s disorder protects him from a harsh reality. Even Christopher’s choice of Holmes as a role model is indicative of his desire to impose an eccentric order on his world, since this engages with a character in popular fiction who is famous not simply for his abilities, but for his eccentricity bordering on a form of autism. His aloof personality and cold logic not only fail to hamper him in his investigations, but these traits actually form the basis of them. The majority of recent adaptations of Conan Doyle’s stories, especially the BBC series Sherlock, depict Holmes with symptoms associated with spectrum disorder such as lack of empathy, difficulty in communication, and limited social skills, and these are clearly shown as contributing to his problem-solving ability.
The trope of Christopher as detective also allows a parodic, postmodern comment on the classical detective form, because typically this fiction has a detective that knows more than the reader, and therefore the goal for the reader is to find the solution to the crime before it is revealed by the investigator in the final stages of the text (Rzepka 14). But the narrative works ironically in the novel since the non-autistic reader knows more than a narrator who is hampered by a limited worldview. From the beginning of the book, the narrative as focalised through Christopher’s narrow perspective allows a more profound view of events to be adopted by the reader, who is able to read clues that elude the protagonist. Christopher is well aware of this as he explains his attraction to the murder mystery novel, even though he has earlier stated he does not like novels since his inability to imagine or empathise means he is unable to relate to their fiction. For him, the genre of murder mystery is more akin to the books on maths and science that he finds comprehensible, because, like the classical detective, he views the crime as primarily a puzzle to be solved: as he states, “In a murder mystery novel someone has to work out who the murderer is and then catch them. It is a puzzle. If it is a good puzzle you can sometimes work out the answer before the end of the book” (5). But unlike Christopher, Holmes invariably knows more about the crime, can interpret the clues, and find the pattern, before other characters such as Watson, and especially the reader. In contrast, in The Curious Incident of the Dog in the Night-Time, the reader has more awareness of the probable context and significance of events than Christopher because, like a computer, he can calculate but not imagine. The reader can interpret clues within the plot of the story, such as the synchronous timing of the “death” of Christopher’s mother with the breakdown of the marriage of a neighbour, Mrs Shears. 
The astute reader is able to connect these events and realise that his mother has not died, but is living in a relationship with the neighbour’s husband. The construction of this pattern is denied Christopher, since he fails to determine their significance due to his limited imagination. Such a failure is related to Simon Baron-Cohen’s Theory of Mind, in which he proposes that autistic individuals have difficulty with social behaviour because they lack the capacity to comprehend that other people have individual mental states, or as Christopher terms it, “when I was little I didn’t understand about other people having minds” (145). Haddon utilises fictional licence when he allows Christopher to overcome such a limitation by a conscious shift in perspective, despite the specialist teacher within the text claiming that he would “always find this very difficult” (145). Christopher has here altered his view of events through his modelling both on the detective genre and on his affinity with mathematics, since he states, “I don’t find this difficult now. Because I decided that it was a kind of puzzle, and if something is a puzzle there is always a way of solving it” (145). In this way, the main character is shown as transcending symptoms of autism through the power of his giftedness in mathematics to ultimately discern a pattern in human relationships, thereby adopting a computational approach to social problems. Haddon similarly explains the perspective of an individual with autism through a metaphor of Christopher’s memory being like a DVD recording. He is able to distance himself from his memories, choosing “Rewind” and then “Fast Forward” (96) to retrieve his recollection of events. This aspect of the precision of his memory relates to his machine-like coldness and lack of empathy for the feelings of others.
But it also refers to the stereotype of the nerd figure in popular culture, where the nerd is able to relate more to a computer than to other people, exemplified in Sheldon from the television series The Big Bang Theory. Thus the presentation of Christopher’s autism relates to his giftedness in maths and science more than to areas that relate to his body. In general, descriptions of inappropriate or distressing bodily functions associated with disorders are mainly confined to other students at Christopher’s school. His references to his fellow students, such as Joseph eating his poo and playing in it (129) and his unsympathetic evaluation of Steve as not as clever or interesting as a dog because he “needs help to eat his food and could not even fetch a stick” (6), make a clear distinction between him and the other children, who despite being termed “special needs” are “special” in a different way from Christopher, because, according to him, “All the other children at my school are stupid” (56). While some reference is made to Christopher’s inappropriate behaviour in times of stress, such as punching a fellow student, wetting himself while on the train, and vomiting outside the school, in the main the emphasis is on his giftedness as a result of his autism, as displayed in the many chapters where he explains scientific and mathematical concepts. This is extrapolated into a further mathematical metaphor underlying the book, that he is like one of the prime numbers he finds so fascinating, because prime numbers do not fit neatly into the pattern of the number system, but they are essential and special nevertheless. Moreover, as James Berger suggests, prime numbers can “serve as figures for the autistic subject,” because like autistic individuals “they do not mix; they are singular, indivisible, unfactorable” yet “Mathematics could not exist without these singular entities that [. . .] 
are only apparent anomalies” (271).

Haddon therefore offers a transformation by conflating autism with a computer-like ability to solve mathematical problems, so that the text is, as Haddon concedes, “as much about a gifted boy with behavior problems as it is about anyone on the autism spectrum” (qtd. in Burks-Abbott 291). Indeed, the word “autism” does not even appear in the book, while the terms “genius” (140), “clever” (32, 65, 252), and the like are continually invoked in descriptions of Christopher, even if ironically. More importantly, the reader is constantly shown his giftedness through the reiteration of his study of A Level Mathematics and his explanation of scientific concepts. Throughout, Christopher explains aspects of mathematics, astrophysics, and other sciences, referring to such well-known puzzles in popular culture as the Monty Hall problem, as well as more obscure formulae and their proofs. These passages establish Christopher’s intuitive grasp of complex mathematical and scientific principles, while providing the reader with insight into both his perspective and the paradoxical nature of an individual who is at once able to solve quadratic equations in his head, yet is incapable of understanding the simple instruction, “Take the tube to Willesden Junction” (211).

The presentation of Christopher is that of an individual who displays an extension of the social problems established in popular literature as connected to a talent for mathematics, thereby engaging with a depiction already existing in popular mythology: the isolated and analytical nerd or genius social introvert. Indeed, much of Christopher’s autistic behaviour functions to protect him from unsettling or traumatic information, since he fails to realise the significance of the information he collects or the clues he is given.
His disability is therefore presented as not so much limiting as protective, and so the notion of disability is subsumed by the idea of the savant. The book, then, engages with a contemporary representation within popular culture that has transformed spectrum disability into mathematical giftedness, thereby metaphorically associating the autistic mind with the computer.

References

Baron-Cohen, Simon. Mindblindness: An Essay on Autism and Theory of Mind. Cambridge, MA: MIT Press, 1995.
Berger, James. “Alterity and Autism: Mark Haddon’s Curious Incident in the Neurological Spectrum.” Autism and Representation. Ed. Mark Osteen. Hoboken: Routledge, 2007. 271–88.
Burks-Abbott, Gyasi. “Mark Haddon’s Popularity and Other Curious Incidents in My Life as an Autistic.” Autism and Representation. Ed. Mark Osteen. Hoboken: Routledge, 2007. 289–96.
Cawelti, John G. Adventure, Mystery, and Romance: Formula Stories as Art and Popular Culture. Chicago: U of Chicago P, 1976.
Eshelman, Raoul. “Transcendence and the Aesthetics of Disability: The Case of The Curious Incident of the Dog in the Night-Time.” Anthropoetics: The Journal of Generative Anthropology 15.1 (2009).
Haddon, Mark. The Curious Incident of the Dog in the Night-Time. London: Random House Children’s Books, 2004.
Kendall, Lori. “The Nerd Within: Mass Media and the Negotiation of Identity among Computer-Using Men.” Journal of Men’s Studies 3 (1999): 353–67.
Murray, Stuart. “Autism and the Contemporary Sentimental: Fiction and the Narrative Fascination of the Present.” Literature and Medicine 25.1 (2006): 24–46.
Platten, David. “Reading Glasses, Guns and Robots: A History of Science in French Crime Fiction.” French Cultural Studies 12 (2001): 253–70.
Rzepka, Charles J. Detective Fiction. Cambridge, UK: Polity Press, 2005.
