Journal articles on the topic "Extreme Model-Driven Design"

To see the other types of publications on this topic, follow the link: Extreme Model-Driven Design.

Consult the top 50 journal articles for your research on the topic "Extreme Model-Driven Design".

You can also download the full text of a publication as a .pdf file and read its abstract online, where these are available in the publication's metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Khan, Rafflesia, Alexander Schieweck, Ciara Breathnach, and Tiziana Margaria. "Historical Civil Registration Record Transcription Using an eXtreme Model Driven Approach." Proceedings of the Institute for System Programming of the RAS 33, no. 3 (2021): 123–42. http://dx.doi.org/10.15514/ispras-2021-33(3)-10.

Abstract:
Modelling is considered a universal approach to defining and simplifying real-world applications through appropriate abstraction. Model-driven system engineering identifies and integrates appropriate concepts, techniques, and tools which provide important artefacts for interdisciplinary activities. In this paper, we show how we used a model-driven approach to design and improve a Digital Humanities dynamic web application within an interdisciplinary project that enables history students and volunteers of history associations to transcribe a large corpus of image-based data from the General Register Office (GRO) records. Our model-driven approach generates the software application from data, workflow and GUI abstract models, ready for deployment.
3

Larsen, Gunner Chr., and Kurt S. Hansen. "Statistical Model of Extreme Shear." Journal of Solar Energy Engineering 127, no. 4 (February 18, 2005): 444–55. http://dx.doi.org/10.1115/1.2035702.

Abstract:
In order to continue cost-optimization of modern large wind turbines, it is important to continuously increase the knowledge of wind field parameters relevant to design loads. This paper presents a general statistical model that offers site-specific prediction of the probability density function (PDF) of turbulence-driven short-term extreme wind shear events, conditioned on the mean wind speed, for an arbitrary recurrence period. The model is based on an asymptotic expansion, and only a few easily accessible parameters are needed as input. The model of the extreme PDF is supplemented by a model that, on a statistically consistent basis, describes the most likely spatial shape of an extreme wind shear event. Predictions from the model have been compared with results from an extreme value data analysis, based on a large number of full-scale measurements recorded with a high sampling rate. The measurements have been extracted from the "Database on Wind Characteristics" (http://www.winddata.com/), and they refer to a site characterized by flat homogeneous terrain. The comparison has been conducted for three different mean wind speeds in the range of 15–19 m/s, and model predictions and experimental results are consistent, given the inevitable uncertainties associated with the model as well as with the extreme value data analysis.
4

Jeong, Dae Il, Alex J. Cannon, and Robert J. Morris. "Projected changes to wind loads coinciding with rainfall for building design in Canada based on an ensemble of Canadian regional climate model simulations." Climatic Change 162, no. 2 (May 21, 2020): 821–35. http://dx.doi.org/10.1007/s10584-020-02745-y.

Abstract:
Abstract Strong wind coinciding with rainfall is an important weather phenomenon in many science and engineering fields. This study investigates changes in hourly extreme driving rain wind pressure (DRWP)—a climatic variable used in building design in Canada—for future periods of specified global mean temperature change using an ensemble of a Canadian regional climate model (CanRCM4) driven by the Canadian Earth system model (CanESM2) under the Representative Concentration Pathway 8.5 scenario. Evaluation of the model shows that the CanRCM4 ensemble reproduces hourly extreme wind speeds and rainfall (> 1.8 mm/h) occurrence frequency and the associated design (5-year return level) DRWP across Canada well when compared with 130 meteorological stations. Significant increases in future design DRWP are projected over western, eastern, and northern Canada, with the areal extent and relative magnitude of the increases scaling approximately linearly with the amount of global warming. Increases in future rainfall occurrence frequency are driven by the combined effect of increases in precipitation amount and changes in precipitation type from solid to liquid due to increases in air temperature; these are identified as the main factors leading to increases in future design DRWP. Future risk ratios of the design DRWP are highly dependent on those of the rainfall occurrence, which shows large increases over the three regions, while they are partly affected by the increases in future extreme wind speeds over western and northeastern Canada. Increases in DRWP can be an emerging risk for existing buildings, particularly in western, eastern, and northern Canada, and a consideration for managing and designing buildings across Canada.
5

Murray, Lee T., Eric M. Leibensperger, Clara Orbe, Loretta J. Mickley, and Melissa Sulprizio. "GCAP 2.0: a global 3-D chemical-transport model framework for past, present, and future climate scenarios." Geoscientific Model Development 14, no. 9 (September 24, 2021): 5789–823. http://dx.doi.org/10.5194/gmd-14-5789-2021.

Abstract:
Abstract. This paper describes version 2.0 of the Global Change and Air Pollution (GCAP 2.0) model framework, a one-way offline coupling between version E2.1 of the NASA Goddard Institute for Space Studies (GISS) general circulation model (GCM) and the GEOS-Chem global 3-D chemical-transport model (CTM). Meteorology for driving GEOS-Chem has been archived from the E2.1 contributions to phase 6 of the Coupled Model Intercomparison Project (CMIP6) for the pre-industrial era and the recent past. In addition, meteorology is available for the near future and end of the century for seven future scenarios ranging from extreme mitigation to extreme warming. Emissions and boundary conditions have been prepared for input to GEOS-Chem that are consistent with the CMIP6 experimental design. The model meteorology, emissions, transport, and chemistry are evaluated in the recent past and found to be largely consistent with GEOS-Chem driven by the Modern-Era Retrospective analysis for Research and Applications version 2 (MERRA-2) product and with observational constraints.
6

Wang, Xinmei, Yifei Wang, and Tao Wu. "The Review of Electromagnetic Field Modeling Methods for Permanent-Magnet Linear Motors." Energies 15, no. 10 (May 13, 2022): 3595. http://dx.doi.org/10.3390/en15103595.

Abstract:
Permanent-magnet linear motors (PMLMs) are widely used in many fields of industrial production, and the optimization design of the PMLM is attracting increasing attention as a way to improve the comprehensive performance of the motor. The primary problem in PMLM optimization design is establishing a motor model, and this paper surveys approaches to modeling the PMLM electromagnetic field. First, PMLM parametric modeling methods (model-driven methods), such as the equivalent circuit method, the analytical method, and the finite element method, are introduced, followed by non-parametric modeling methods (data-driven methods) such as surrogate models and machine learning. Non-parametric modeling methods offer higher accuracy and faster computation and are currently the mainstream approach to motor modeling. However, surrogate models and traditional machine learning models such as the support vector machine (SVM) and extreme learning machine (ELM) have shortcomings in dealing with the high-dimensional data of motors, and some machine learning methods such as random forest (RF) require a large number of samples to achieve good modeling accuracy. To address modeling of the high-dimensional electromagnetic field of the motor when only a limited number of samples is available, this paper introduces the generative adversarial network (GAN) model and its application to the electromagnetic field modeling of PMLMs, and compares it with the mainstream machine learning models. Finally, the development of motor modeling that combines model-driven and data-driven methods is proposed.
7

Huang, Shenghong, Qiusheng Li, Man Liu, Fubin Chen, and Shun Liu. "Numerical Simulation of Wind-Driven Rain on a Long-Span Bridge." International Journal of Structural Stability and Dynamics 19, no. 12 (December 2019): 1950149. http://dx.doi.org/10.1142/s0219455419501499.

Abstract:
Wind-driven rain (WDR) and its interactions with structures are an important research subject in wind engineering. As bridge spans become longer and longer, the effects of WDR on long-span bridges should be well understood. Therefore, this paper presents a comprehensive numerical simulation study of WDR on a full-scale long-span bridge under extreme conditions. A validation study shows that the predictions of WDR on a bridge section model agree with experimental results, validating the applicability of the WDR simulation approach based on the Eulerian multiphase model. Furthermore, a detailed numerical simulation of WDR on a long-span bridge, the North Bridge of the Xiazhang Cross-sea Bridge, is conducted. The simulation results indicate that although the loads induced by raindrops on the bridge surfaces are very small compared to the wind loads, extreme rain intensity may occur on some windward surfaces of the bridge. The adopted numerical methods and rain loading models are shown to be an effective tool for WDR simulation for bridges, and the results presented in this paper provide useful information for the water-erosion-proof design of future long-span bridges.
8

Cheng, Qi, Shuchun Wang, and Xifeng Fang. "Intelligent design technology of automobile inspection tool based on 3D MBD model intelligent retrieval." Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 235, no. 10-11 (March 2, 2021): 2917–27. http://dx.doi.org/10.1177/09544070211000174.

Abstract:
The utilization rate of existing process equipment design resources in the automobile industry is low, so there is an urgent need to change the design method and improve design efficiency. This paper proposes a fast design method for process equipment driven by classification and retrieval of 3D model-based definition (MBD) models. Firstly, an information-integrated 3D model is established to fully express the product information definition and to effectively express the design characteristics of the existing 3D model. Through a classification machine-learning algorithm for 3D MBD models based on the Extreme Learning Machine (ELM), the 3D MBD model with characteristics similar to the auto part model to be designed is retrieved from a complex process equipment case database. Secondly, the classification and retrieval of the model are realized, and the process equipment associated with the retrieved 3D MBD model is called up. The existing process equipment model is then adjusted and modified to complete the rapid design of the process equipment for the product to be designed. Finally, a corresponding process equipment design system was developed and verified through a case study. Applying machine learning to the design of industrial equipment greatly shortens the equipment development cycle. The system learns from engineers and can come to understand the design domain better than individual engineers, so it can help any user quickly design 3D models of complex products.
9

Qin, Guodong, Huapeng Wu, and Aihong Ji. "Equivalent Dynamic Analysis of a Cable-Driven Snake Arm Maintainer." Applied Sciences 12, no. 15 (July 26, 2022): 7494. http://dx.doi.org/10.3390/app12157494.

Abstract:
In this paper, we investigate a design method for a cable-driven snake arm maintainer (SAM) and its dynamics modelling. A SAM provides redundant degrees of freedom and high structural stiffness, as well as a high load capacity and a simplified structure, making it ideal for various narrow and extreme working environments, such as nuclear power plants. However, its serial-parallel configuration and cable drive system make the dynamics of a SAM strongly coupled, which is not conducive to accurate control. In this paper, we propose an equivalent dynamics modelling method for the strongly coupled dynamic characteristics of each joint cable. The cable traction dynamics are forcibly decoupled using force analysis and a joint torque equivalent transformation. The forcibly equivalent dynamic model is then obtained based on traditional serial robot dynamic modelling methods (the Lagrangian method, etc.). To verify the correctness of the equivalent dynamics, a simple model-based controller is established. In addition, a SAM prototype is produced to collect joint angles and cable forces along different trajectories. Finally, the results of the equivalent dynamics control simulation and the prototype tests demonstrate the validity of the SAM structural design and the equivalent dynamics model.
10

Gu, Ziyu, Shuwei Pang, Wenxiang Zhou, Yuchen Li, and Qiuhong Li. "An Online Data-Driven LPV Modeling Method for Turbo-Shaft Engines." Energies 15, no. 4 (February 9, 2022): 1255. http://dx.doi.org/10.3390/en15041255.

Abstract:
The linear parameter-varying (LPV) model is widely used in aero engine control system design. The conventional local modeling method is inaccurate and inefficient over the full flight envelope. Hence, a novel online data-driven LPV modeling method based on the online sequential extreme learning machine (OS-ELM) with an additional multiplying layer (MLOS-ELM) was proposed. An extra multiplying layer was inserted between the hidden layer and the output layer, where the hidden layer outputs were multiplied by the input variables and state variables of the LPV model. Additionally, the input layer was set to the LPV model's scheduling parameter. With the multiplying layer added, the state space equation matrices of the LPV model could be easily calculated using online gathered data. Simulation results showed that the outputs of the MLOS-ELM matched those of the component-level model of a turbo-shaft engine precisely. The maximum approximation error was less than 0.18%. The predictive outputs of the proposed online data-driven LPV model after five samples also matched those of the component-level model well, and the maximum predictive error within a large flight envelope was less than 1.1% with measurement noise considered. Thus, the efficiency and accuracy of the proposed method were validated.
11

Ehmele, Florian, and Michael Kunz. "Flood-related extreme precipitation in southwestern Germany: development of a two-dimensional stochastic precipitation model." Hydrology and Earth System Sciences 23, no. 2 (February 25, 2019): 1083–102. http://dx.doi.org/10.5194/hess-23-1083-2019.

Abstract:
Abstract. Various fields of application, such as risk assessments in the insurance industry or the design of flood protection systems, require reliable precipitation statistics at high spatial resolution, including estimates for events with high return periods. Observations from point stations, however, lack spatial representativeness, especially over complex terrain. Current numerical weather models are not capable of running simulations over thousands of years. This paper presents a new method for the stochastic simulation of widespread precipitation based on a linear theory describing orographic precipitation and additional functions that consider synoptically driven rainfall and embedded convection in a simplified way. The model is initialized by various statistical distribution functions describing prevailing atmospheric conditions such as wind vector, moisture content, or stability, estimated from radiosonde observations for a limited sample of observed heavy rainfall events. The model is applied to the stochastic simulation of heavy rainfall over the complex terrain of southwestern Germany. It is shown that the model provides reliable precipitation fields despite its simplicity. The differences between observed and simulated rainfall statistics are small, of the order of only ±10 % for return periods of up to 1000 years.
12

Moreno, G. B. Z. L., S. A. L. Luiz, A. C. Pinto, A. R. Fioravanti, E. X. Miqueles, and H. C. N. Tolentino. "Tolerance-budgeting for ultra-stable KB systems." Journal of Physics: Conference Series 2380, no. 1 (December 1, 2022): 012074. http://dx.doi.org/10.1088/1742-6596/2380/1/012074.

Abstract:
Abstract The emergence of new-generation light sources has driven experimental stations and optomechanical instrumentation to increasingly ambitious designs: precision engineering, optics design, and experimental techniques are being pushed to the limit of what is achievable, targeting the best spatial, spectral, and temporal resolution for their measurements. The extreme brilliance making diffraction-limited focusing feasible also sets new sensitivity baselines for vibrations, clamping, and thermal deformations, demanding stiffer mechanics and tighter tolerances for fabrication, metrology, assembly, and alignment, as well as creative commissioning and experiment-control strategies. Such interdisciplinary design often requires cross-checking between mechanical, optical, and experimental specifications, where shared variables such as mirror dimensions, incidence angle, and optical magnification factor might induce conflicting behaviour, especially when tightly bound to pioneering design targets on focus size and divergence, working distance, and flux density, to name a few, stressing specifications and tolerances throughout every design step. An analytical model integrating the main mirror tolerances can therefore act as a more assertive starting point for broader, model-based assessments, pruning the decision space for subsequent finite-element analysis targeting globally optimal designs. This contribution suggests a tolerance-budgeting approach for designing ultra-stable KB mirror systems, which in turn enabled an exactly-constrained realization [5], providing the high stability needed for ambitious nanoprobe designs, as exemplified by the recently commissioned Tarumã station of the Sirius/LNLS Carnaúba beamline [8] and by designs underway such as the Mogno and Sapoti stations, also at Sirius.
13

Cousineau, Julien, and Enda Murphy. "Numerical Investigation of Climate Change Effects on Storm Surges and Extreme Waves on Canada’s Pacific Coast." Atmosphere 13, no. 2 (February 12, 2022): 311. http://dx.doi.org/10.3390/atmos13020311.

Abstract:
Storm surges and waves are key climate-driven parameters affecting the design and operation of ports and other infrastructure on the coast. Reliable predictions of future storm surges and waves are not yet available for the west coast of Canada, and this data gap hinders effective climate risk assessment, planning and adaptation. This paper presents numerical simulations of storm surges and waves in British Columbia coastal waters under a future climate (Representative Concentration Pathway) scenario (RCP8.5). The numerical models were first forced by wind and surface pressure fields from the ERA-5 global reanalysis, and calibrated and validated using historical wave and water level records. The models were then driven by atmospheric data from four regional climate models (RCMs) to investigate potential changes in the frequency and magnitude of storm surges and extreme waves over the 21st century. The model outputs were analyzed to determine the potential impacts of climate change on storm surges and wave effects at key ports and transportation assets in western Canada. The study is the first of its kind to utilize unstructured, computational models to simulate storm surges and waves for the entire western Canada coastal region, while maintaining the high spatial resolution in coastal sub-basins needed to capture local dynamic responses.
14

Pomberger, Sebastian, Matthias Oberreiter, Martin Leitner, Michael Stoschka, and Jörg Thuswaldner. "Probabilistic Surface Layer Fatigue Strength Assessment of EN AC-46200 Sand Castings." Metals 10, no. 5 (May 9, 2020): 616. http://dx.doi.org/10.3390/met10050616.

Abstract:
The local fatigue strength within the aluminium cast surface layer is strongly affected by surface layer porosity and notches arising from the cast surface texture. This article carries forward the scientific methodology of a previously published fatigue assessment model for sand-cast aluminium surface layers in the T6 heat treatment condition. A new sampling position with significantly different surface roughness is investigated, and the model exponents a1 and a2 are re-parametrised to suit a significantly increased range of surface roughness values. Furthermore, the fatigue assessment model for specimens in the hot isostatic pressing (HIP) heat treatment condition is studied for all sampling positions. The obtained long-life fatigue strength results are approximately 6% to 9% conservative, and thus proven valid within a range of 30 µm ≤ Sv ≤ 260 µm notch valley depth. To enhance engineering feasibility even further, the local concept is extended by a probabilistic approach invoking extreme value statistics. A bivariate distribution enables an advanced probabilistic long-life fatigue strength assessment of cast surface textures, based on statistically derived parameters such as the extremal valley depth Sv,i and the equivalent notch root radius ρ̄i. Summing up, a statistically driven fatigue strength assessment tool for sand-cast aluminium surfaces has been developed, featuring an engineering-friendly design method.
15

Chen, Nan, Honghu Liu, and Fei Lu. "Shock trace prediction by reduced models for a viscous stochastic Burgers equation." Chaos: An Interdisciplinary Journal of Nonlinear Science 32, no. 4 (April 2022): 043109. http://dx.doi.org/10.1063/5.0084955.

Abstract:
Viscous shocks are a particular type of extreme event in nonlinear multiscale systems, and their representation requires small scales. Model reduction can thus play an essential role in reducing the computational cost for the prediction of shocks. Yet, reduced models typically aim to approximate large-scale dominating dynamics, which do not resolve the small scales by design. To resolve this representation barrier, we introduce a new qualitative characterization of the space–time locations of shocks, named the “shock trace,” via a space–time indicator function based on an empirical resolution-adaptive threshold. Unlike exact shocks, the shock traces can be captured within the representation capacity of the large scales, thus facilitating the forecast of the timing and locations of the shocks utilizing reduced models. Within the context of a viscous stochastic Burgers equation, we show that a data-driven reduced model, in the form of nonlinear autoregression (NAR) time series models, can accurately predict the random shock traces, with relatively low rates of false predictions. Furthermore, the NAR model, which includes nonlinear closure terms to approximate the feedback from the small scales, significantly outperforms the corresponding Galerkin truncated model in the scenario of either noiseless or noisy observations. The results illustrate the importance of the data-driven closure terms in the NAR model, which account for the effects of the unresolved dynamics brought by nonlinear interactions.
16

Jeong, Dae Il, Alex J. Cannon, and Xuebin Zhang. "Projected changes to extreme freezing precipitation and design ice loads over North America based on a large ensemble of Canadian regional climate model simulations." Natural Hazards and Earth System Sciences 19, no. 4 (April 17, 2019): 857–72. http://dx.doi.org/10.5194/nhess-19-857-2019.

Abstract:
Abstract. Atmospheric ice accretion caused by freezing precipitation (FP) can lead to severe damage and the failure of buildings and infrastructure. This study investigates projected changes to extreme ice loads – those used to design infrastructure over North America (NA) – for future periods of specified global mean temperature change (GMTC), relative to the recent 1986–2016 period, using a large 50-member initial-condition ensemble of the CanRCM4 regional climate model, driven by CanESM2 under the RCP8.5 scenario. The analysis is based on 3-hourly ice accretions on horizontal, vertical and radial surfaces calculated based on FP diagnosed by the offline Bourgouin algorithm and wind speed during FP. The CanRCM4 ensemble projects an increase in future design ice loads for most of northern NA and decreases for most of southern NA and some northeastern coastal regions. These changes are mainly caused by regional increases in future upper-level and surface temperatures associated with global warming. Projected changes in design ice thickness are also affected by changes in future precipitation intensity and surface wind speed. Changes in upper-level and surface temperature conditions for FP occurrence in CanRCM4 are in broad agreement with those from nine global climate models but display regional differences under the same level of global warming, indicating that a larger multi-model, multi-scenario ensemble may be needed to better account for additional sources of structural and scenario uncertainty. Increases in ice accretion for latitudes higher than 40° N are substantial and would have clear implications for future building and infrastructure design.
17

Zhang, Jian, Xiao-Hua Yang, and Yu-Qi Li. "A refined rank set pair analysis model based on wavelet analysis for predicting temperature series." International Journal of Numerical Methods for Heat & Fluid Flow 25, no. 5 (June 1, 2015): 974–85. http://dx.doi.org/10.1108/hff-05-2014-0140.

Abstract:
Purpose – The purpose of this paper is to accurately simulate and predict the daily extreme temperature at Beijing Reservoir and the monthly extreme temperature at Tianjin Reservoir using wavelet refined rank set pair analysis (WRRSPA). Design/methodology/approach – The new method, WRRSPA, which combines wavelet analysis and refined rank set pair analysis (RRSPA), was proposed for this study because of the non-linear and multi-time-scale characteristics of the temperature series. The model combines the advantages of the multi-resolution feature of wavelet analysis with the non-parametric, data-driven prediction of refined rank set pair analysis. Findings – For the daily extreme temperature at Beijing Reservoir, predictions for the last 18 days reveal that WRRSPA is more appropriate: the percentage of relative errors smaller than 10 percent increased from 78 percent for back propagation (BP) and 78 percent for RRSPA to 100 percent for WRRSPA. In addition, WRRSPA has lower root mean squared error (RMSE) and mean absolute error (MAE) values and a higher modified coefficient of efficiency (MCE). The last 12 monthly extreme temperature predictions for Tianjin Reservoir demonstrate that WRRSPA produces better prediction results: the percentage of relative errors smaller than 10 percent improved from 34 percent for BP and 58 percent for RRSPA to 67 percent for WRRSPA, again with lower RMSE and MAE values and a higher MCE. Research limitations/implications – The analysis results ignore the physical processes and may be affected by the limited observation data. In addition, the WRRSPA method is still in its early stages of application and must be further tested. Practical implications – The results of the study are helpful for studying the complex features and accurate prediction of temperature series.
Social implications – This paper contributes to furthering research on climate change. Originality/value – This study represents the first use of the WRRSPA method to analyze the multi-scale characteristics and forecast future values of the extreme temperature series from Beijing Reservoir and Tianjin Reservoir. It provides important theoretical support for extreme temperature prediction.
18

Stewart, Shelby, Jack Giambalvo, Julia Vance, Jeremy Faludi, and Steven Hoffenson. "A Product Development Approach Advisor for Navigating Common Design Methods, Processes, and Environments." Designs 4, no. 1 (February 14, 2020): 4. http://dx.doi.org/10.3390/designs4010004.

Abstract:
Many different product development approaches are taught and used in engineering and management disciplines. These formalized design methods, processes, and environments differ in the types of projects for which they are relevant, the project components they include, and the support they provide users. This paper details a review of sixteen well-established product development approaches, the development of a decision support system to help designers and managers navigate these approaches, and the administration of a survey to gather subjective assessments and feedback from design experts. The included approaches—design thinking, systems thinking, total quality management, agile development, waterfall process, engineering design, spiral model, vee model, axiomatic design, value-driven design, decision-based design, lean manufacturing, six sigma, theory of constraints, scrum, and extreme programming—are categorized based on six criteria: complexity, guidance, phase, hardware or software applicability, values, and users. A decision support system referred to as the Product Development Approach Advisor (PD Advisor) is developed to aid designers in navigating these approaches and selecting an appropriate approach based on specific project needs. Next, a survey is conducted with design experts to gather feedback on the support system and the categorization of approaches and criteria. The survey results are compared to the original classification of approaches by the authors to validate and provide feedback on the PD Advisor. The findings highlight the value and limitations of the PD Advisor for product development practice and education, as well as the opportunities for future work.
APA, Harvard, Vancouver, ISO, and other styles
19

Poschlod, Benjamin. "Using high-resolution regional climate models to estimate return levels of daily extreme precipitation over Bavaria." Natural Hazards and Earth System Sciences 21, no. 11 (November 25, 2021): 3573–98. http://dx.doi.org/10.5194/nhess-21-3573-2021.

Full text of the source
Abstract:
Abstract. Extreme daily rainfall is an important trigger for floods in Bavaria. The dimensioning of water management structures as well as building codes is based on observational rainfall return levels. In this study, three high-resolution regional climate models (RCMs) are employed to produce 10- and 100-year daily rainfall return levels and their performance is evaluated by comparison to observational return levels. The study area is governed by different types of precipitation (stratiform, orographic, convectional) and a complex terrain, with convective precipitation also contributing to daily rainfall levels. The Canadian Regional Climate Model version 5 (CRCM5) at a 12 km spatial resolution and the Weather Research and Forecasting (WRF) model at a 5 km resolution, both driven by ERA-Interim reanalysis data, use parametrization schemes to simulate convection. WRF at a 1.5 km resolution driven by ERA5 reanalysis data explicitly resolves convectional processes. Applying the generalized extreme value (GEV) distribution, the CRCM5 setup can reproduce the observational 10-year return levels with an areal average bias of +6.6 % and a spatial Spearman rank correlation of ρ=0.72. The higher-resolution 5 km WRF setup is found to improve the performance in terms of bias (+4.7 %) and spatial correlation (ρ=0.82). However, the finer topographic details of the WRF-ERA5 return levels cannot be evaluated with the observation data because their spatial resolution is too low. Hence, this comparison shows no further improvement in the spatial correlation (ρ=0.82) but a small improvement in the bias (2.7 %) compared to the 5 km resolution setup. Uncertainties due to extreme value theory are explored by employing three further approaches. Applied to the WRF-ERA5 data, the GEV distributions with a fixed shape parameter (bias is +2.5 %; ρ=0.79) and the generalized Pareto (GP) distributions (bias is +2.9 %; ρ=0.81) show almost equivalent results for the 10-year return period, whereas the metastatistical extreme value (MEV) distribution leads to a slight underestimation (bias is −7.8 %; ρ=0.84). For the 100-year return level, however, the MEV distribution (bias is +2.7 %; ρ=0.73) outperforms the GEV distribution (bias is +13.3 %; ρ=0.66), the GEV distribution with fixed shape parameter (bias is +12.9 %; ρ=0.70), and the GP distribution (bias is +11.9 %; ρ=0.63). Hence, for applications where the return period is extrapolated, the MEV framework is recommended. From these results, it follows that high-resolution regional climate models are suitable for generating spatially homogeneous rainfall return level products. In regions with a sparse rain gauge density or low spatial representativeness of the stations due to complex topography, RCMs can support the observational data. Further, RCMs driven by global climate models with emission scenarios can project climate-change-induced alterations in rainfall return levels at regional to local scales. This can allow adjustment of structural design and, therefore, adaptation to future precipitation conditions.
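The GEV-based return levels discussed in this abstract follow directly from the fitted location, scale, and shape parameters. A minimal sketch of that calculation (the parameter values below are hypothetical, not fitted to the Bavarian data):

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """T-year return level for GEV parameters: location mu, scale sigma, shape xi.
    z_T = mu + (sigma / xi) * (y**(-xi) - 1) with y = -ln(1 - 1/T); the Gumbel
    limit z_T = mu - sigma * ln(y) is used when xi is numerically zero."""
    y = -math.log(1.0 - 1.0 / T)  # reduced variate for the T-year event
    if abs(xi) < 1e-9:
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# Hypothetical daily-rainfall GEV fit (mm); xi > 0 gives a heavy upper tail
z10 = gev_return_level(mu=40.0, sigma=10.0, xi=0.1, T=10)    # 10-year level
z100 = gev_return_level(mu=40.0, sigma=10.0, xi=0.1, T=100)  # 100-year level
```

Extrapolating from the 10-year to the 100-year level is exactly the step for which the abstract recommends the MEV framework over the GEV and GP fits.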
APA, Harvard, Vancouver, ISO, and other styles
20

Zuo, Meng, Tianjun Zhou, and Wenmin Man. "Hydroclimate Responses over Global Monsoon Regions Following Volcanic Eruptions at Different Latitudes." Journal of Climate 32, no. 14 (June 21, 2019): 4367–85. http://dx.doi.org/10.1175/jcli-d-18-0707.1.

Full text of the source
Abstract:
Abstract Understanding the influence of volcanic eruptions on the hydroclimate over global monsoon regions is of great scientific and social importance. However, the link between the latitude of volcanic eruptions and related hydroclimate changes over global monsoon regions in the last millennium remains inconclusive. Here we show divergent hydroclimate responses after different volcanic eruptions based on large sets of reconstructions, observations, and climate model simulation. Both the proxy and observations show that Northern Hemispheric (Southern Hemispheric) monsoon precipitation is weakened by northern (southern) and tropical eruptions but is enhanced by the southern (northern) eruptions. A similar relationship is found in coupled model simulations driven by volcanic forcing. The model evidence indicates that the dynamic processes related to changes in atmospheric circulation play a dominant role in precipitation responses. The dry conditions over the Northern Hemisphere (Southern Hemisphere) and global monsoon regions following northern (southern) and tropical eruptions are induced through weakened monsoon circulation. The wet conditions over Northern Hemispheric (Southern Hemispheric) monsoon regions after southern (northern) eruptions are caused by the enhanced cross-equator flow. We extend our model simulation analysis from mean state precipitation to extreme precipitation and find that the response of the extreme precipitation is consistent with that of the mean precipitation but is more sensitive over monsoon regions. The response of surface runoff and net primary production is stronger than that of precipitation over some submonsoon regions. Our results imply that it is imperative to consider the potential volcanic eruptions at different hemispheres in the design of near-term decadal climate prediction experiments.
APA, Harvard, Vancouver, ISO, and other styles
21

Amjad, Maaz, Irshad Ahmad, Mahmood Ahmad, Piotr Wróblewski, Paweł Kamiński, and Uzair Amjad. "Prediction of Pile Bearing Capacity Using XGBoost Algorithm: Modeling and Performance Evaluation." Applied Sciences 12, no. 4 (February 18, 2022): 2126. http://dx.doi.org/10.3390/app12042126.

Full text of the source
Abstract:
The major criterion that controls pile foundation design is pile bearing capacity (Pu). The load bearing capacity of piles is affected by various soil characteristics and by multiple parameters related to both the soil and the foundation. In this study, a new model for predicting bearing capacity is developed using an extreme gradient boosting (XGBoost) algorithm. A total of 200 case histories based on static load tests of driven piles were used to construct and verify the model. The results of the developed XGBoost model were compared to those of several commonly used algorithms, namely Adaptive Boosting (AdaBoost), Random Forest (RF), Decision Tree (DT), and Support Vector Machine (SVM), using various performance metrics such as the coefficient of determination, mean absolute error, root mean square error, mean absolute relative error, Nash–Sutcliffe model efficiency coefficient, and relative strength ratio. Furthermore, a sensitivity analysis was performed to determine the effect of the input parameters on Pu. The results show that all of the developed models were capable of making accurate predictions; however, the XGBoost algorithm surpasses the others, followed by AdaBoost, RF, DT, and SVM. The sensitivity analysis shows that the SPT blow count along the pile shaft has the greatest effect on Pu.
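Several of the performance measures named in this abstract (coefficient of determination / Nash–Sutcliffe efficiency, root mean square error, mean absolute error) have compact definitions. A small illustrative implementation, with made-up observed and predicted capacities:

```python
import math

def regression_metrics(observed, predicted):
    """Nash-Sutcliffe efficiency (same form as R^2 against observations),
    root mean square error, and mean absolute error."""
    n = len(observed)
    mean_obs = sum(observed) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return {
        "NSE": 1.0 - ss_res / ss_tot,   # 1.0 is a perfect fit
        "RMSE": math.sqrt(ss_res / n),  # same units as the target
        "MAE": sum(abs(o - p) for o, p in zip(observed, predicted)) / n,
    }

# Hypothetical pile capacities (kN): observed vs. model-predicted
m = regression_metrics([100.0, 150.0, 200.0], [110.0, 140.0, 205.0])
```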
APA, Harvard, Vancouver, ISO, and other styles
22

Dafka, Stella, Andrea Toreti, Juerg Luterbacher, Prodromos Zanis, Evangelos Tyrlis, and Elena Xoplaki. "Simulating Extreme Etesians over the Aegean and Implications for Wind Energy Production in Southeastern Europe." Journal of Applied Meteorology and Climatology 57, no. 5 (May 2018): 1123–34. http://dx.doi.org/10.1175/jamc-d-17-0172.1.

Full text of the source
Abstract:
Abstract. Episodes of extremely strong northerly winds (known as etesians) during boreal summer can cause hazardous conditions over the Aegean Archipelago (Greece) and represent a threat for the safe design, construction, and operation of wind energy turbines. Here, these extremes are characterized by employing a peak-over-threshold approach in the extended summer season (May–September) from 1989 to 2008. Twelve meteorological stations in the Aegean are used, and results are compared with 6-hourly wind speed data from five ERA-Interim–driven regional climate model (RCM) simulations from the European domain of the Coordinated Regional Climate Downscaling Experiment (EURO-CORDEX). The main findings show that, in the range of wind speeds for the maximum power output of the turbine, the most etesian-exposed stations could operate 90% of the time at a hub height of 80 m. The central and northern Aegean are identified as areas prone to wind hazards, where medium- to high-wind (class II or I according to the International Electrotechnical Commission standards) turbines could be more suitable. In the central Aegean, turbines with a cutout wind speed > 25 m s−1 are recommended. Overall, RCMs can be considered a valuable tool for investigating wind resources at the regional scale. Therefore, this study encourages a broader use of climate models for the assessment of future wind energy potential over the Aegean.
APA, Harvard, Vancouver, ISO, and other styles
23

A. Alissa, Khalid, Dalia H. Elkamchouchi, Khaled Tarmissi, Ayman Yafoz, Raed Alsini, Omar Alghushairy, Abdullah Mohamed, and Mesfer Al Duhayyim. "Dwarf Mongoose Optimization with Machine-Learning-Driven Ransomware Detection in Internet of Things Environment." Applied Sciences 12, no. 19 (September 22, 2022): 9513. http://dx.doi.org/10.3390/app12199513.

Full text of the source
Abstract:
The internet of things (IoT) is the concept of connecting devices and objects of all types over the internet. IoT cybersecurity is the task of protecting IoT ecosystems and gadgets from cyber threats. Currently, ransomware (a type of malware) is a serious threat challenging the computing environment, which needs instant attention to avoid moral and financial blackmail. Thus, there is a real need for a novel technique that can identify and stop this kind of attack. Several earlier detection techniques followed a dynamic analysis method involving a complex process. However, this analysis takes a long period of time, during which the malicious payload is often sent. This study presents a new model of dwarf mongoose optimization with machine-learning-driven ransomware detection (DWOML-RWD). The DWOML-RWD model was mainly developed for the recognition and classification of goodware and ransomware. In this technique, the feature selection process is initially carried out using an enhanced krill herd optimization (EKHO) algorithm that employs quasi-oppositional-based learning (QOBL). For ransomware detection, DWO with an extreme learning machine (ELM) classifier is utilized, where the DWO algorithm aids in the optimal parameter selection of the ELM model. The DWOML-RWD method was experimentally validated on a benchmark dataset, and the results highlight its superiority over other approaches.
APA, Harvard, Vancouver, ISO, and other styles
24

Meng, Jing-Hui, Hao-Chi Wu, and Tian-Hu Wang. "Optimization of Two-Stage Combined Thermoelectric Devices by a Three-Dimensional Multi-Physics Model and Multi-Objective Genetic Algorithm." Energies 12, no. 14 (July 23, 2019): 2832. http://dx.doi.org/10.3390/en12142832.

Full text of the source
Abstract:
Due to their advantages of self-powered capability and compact size, combined thermoelectric devices, in which a thermoelectric cooler module is driven by a thermoelectric generator module, have become promising candidates for cooling applications in extreme conditions or in environments where room is confined and the power supply is sacrificed. When the device is designed in a two-stage configuration for a larger temperature difference, the design freedom is greater than that of a single-stage counterpart. The allocation of elements to each stage in the system has a significant influence on the device performance; however, this issue has not been well solved in previous studies. This work proposes a three-dimensional multi-physics model coupled with a multi-objective genetic algorithm to optimize the element number allocation, with the coefficient of performance and the cooling capacity simultaneously as objective functions. This method increases the accuracy of performance prediction compared with previously reported examples studied with the thermal resistance model. The results show that the performance of the optimized device is remarkably enhanced: the cooling capacity is increased by 23.3% and the coefficient of performance by 122.0% compared with the 1# Initial Solution. The mechanism behind this enhanced performance is analyzed. The results in this paper should be beneficial for engineers and scientists seeking to design a combined thermoelectric device with optimal performance under the constraint of total element number.
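A multi-objective genetic algorithm of the kind referred to above ultimately returns a set of non-dominated (Pareto-optimal) designs rather than a single optimum. A schematic sketch of that selection step, with invented (COP, cooling capacity) pairs that are not taken from the study:

```python
def dominates(a, b):
    """a dominates b if a is at least as good in every objective and strictly
    better in at least one (both objectives are maximized here)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points: the trade-off set between the
    coefficient of performance and the cooling capacity."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (COP, cooling capacity in W) pairs for candidate element allocations
designs = [(0.5, 10.0), (0.8, 7.0), (0.4, 9.0), (0.6, 9.5), (0.8, 6.0)]
front = pareto_front(designs)  # dominated allocations are filtered out
```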
APA, Harvard, Vancouver, ISO, and other styles
25

Kos, Zeljko, Yevhenii Klymenko, Irina Karpiuk, and Iryna Grynyova. "Bearing Capacity near Support Areas of Continuous Reinforced Concrete Beams and High Grillages." Applied Sciences 12, no. 2 (January 11, 2022): 685. http://dx.doi.org/10.3390/app12020685.

Full text of the source
Abstract:
This work proposes an engineering method for calculating the bearing capacity of the supporting sections of continuous monolithic reinforced concrete tape beams, which combine pressed or driven reinforced concrete piles into a single foundation design. According to the mechanics of reinforced concrete, it is recommended to treat the grillage as a continuous reinforced concrete beam, which, as a rule, collapses according to the punching scheme above the middle support (pile caps), with the possible formation of a plastic hinge above it. The justification for the proposed method included the results of experimental studies; comparisons of the experimental tensile shear force with the results of calculations according to the design standards of developed countries; and modeling of the stress-strain state of the continuous beam grillage in the extreme span and above the middle support (pile) under an adverse transverse load in the form of concentrated forces. The work is important, as it reveals the physical essence of the phenomenon and significantly clarifies the physical model of the operation of inclined sections over the middle support. The authors assessed the influence of design factors in the continuous research elements and, on this basis, simulated the work of the investigated elements under a transverse load in the Lira-Sapr PC using the finite element method in a nonlinear formulation, in order to clarify the stress-strain state and confirm the failure scheme adopted in the physical model. Based on the analysis and comparison of the experimental and simulation results, a design model was proposed for the bearing capacity near the supporting sections of continuous reinforced concrete beams and high grillages that is capable of adequately determining their strength.
APA, Harvard, Vancouver, ISO, and other styles
26

Castleman, Blake A., Nicole-Jeanne Schlegel, Lambert Caron, Eric Larour, and Ala Khazendar. "Derivation of bedrock topography measurement requirements for the reduction of uncertainty in ice-sheet model projections of Thwaites Glacier." Cryosphere 16, no. 3 (March 8, 2022): 761–78. http://dx.doi.org/10.5194/tc-16-761-2022.

Full text of the source
Abstract:
Abstract. Determining the future evolution of the Antarctic Ice Sheet is critical for understanding and narrowing the large existing uncertainties in century-scale global mean sea-level-rise (SLR) projections. One of the most significant glaciers and ice streams in Antarctica, Thwaites Glacier, is at risk of destabilization and, if destabilized, has the potential to be the largest regional-scale contributor of SLR on Earth. This is because Thwaites Glacier is vulnerable to the marine ice-sheet instability as its grounding line is significantly influenced by ocean-driven basal melting rates, and its bedrock topography retrogrades into kilometer-deep troughs. In this study, we investigate how bedrock topography features influence the grounding line migration beneath Thwaites Glacier when extreme ocean-driven basal melt rates are applied. Specifically, we design experiments using the Ice-sheet and Sea-level System Model (ISSM) to quantify the SLR projection uncertainty due to reported errors in the current bedrock topography maps that are often used by ice-sheet models. We find that spread in model estimates of sea-level-rise contribution from Thwaites Glacier due to the reported bedrock topography error could be as large as 21.9 cm after 200 years of extreme ocean warming. Next, we perturb the bedrock topography beneath Thwaites Glacier using wavelet decomposition techniques to introduce realistic noise (within error). We explore the model space with multiple realizations of noise to quantify what spatial and vertical resolutions in bedrock topography are required to minimize the uncertainty in our 200-year experiment. We conclude that at least a 2 km spatial and 8 m vertical resolution would independently constrain possible SLR to ±2 cm over 200 years, fulfilling requirements outlined by the 2017 Decadal Survey for Earth Science. 
Lastly, we perform an ensemble of simulations to determine in which regions our model of Thwaites Glacier is most sensitive to perturbations in bedrock topography. Our results suggest that the retreat of the grounding line is most sensitive to bedrock topography in proximity to the grounding line's initial position. Additionally, we find that the location and amplitude of the bedrock perturbation is more significant than its sharpness and shape. Overall, these findings inform and benchmark observational requirements for future missions that will measure ice-sheet bedrock topography, not only in the case of Thwaites Glacier but for Antarctica on the continental scale.
APA, Harvard, Vancouver, ISO, and other styles
27

Taylor, Christopher, Barbara Pretzner, Thomas Zahel, and Christoph Herwig. "Architectural and Technological Improvements to Integrated Bioprocess Models towards Real-Time Applications." Bioengineering 9, no. 10 (October 9, 2022): 534. http://dx.doi.org/10.3390/bioengineering9100534.

Full text of the source
Abstract:
Integrated or holistic process models may serve as the engine of a digital asset in a multistep-process digital twin. Concatenated individual-unit operation models are effective at propagating errors over an entire process, but are nonetheless limited in certain aspects of recent applications that prevent their deployment as a plausible digital asset, particularly regarding bioprocess development requirements. Sequential critical quality attribute tests along the process chain that form output–input (i.e., pool-to-load) relationships, are impacted by nonaligned design spaces at different scales and by simulation distribution challenges. Limited development experiments also inhibit the exploration of the overall design space, particularly regarding the propagation of extreme noncontrolled parameter values. In this contribution, bioprocess requirements are used as the framework to improve integrated process models by introducing a simplified data model for multiunit operation processes, increasing statistical robustness, adding a new simulation flow for scale-dependent variables, and describing a novel algorithm for extrapolation in a data-driven environment. Lastly, architectural and procedural requirements for a deployed digital twin are described, and a real-time workflow is proposed, thus providing a final framework for a digital asset in bioprocessing along the full product life cycle.
APA, Harvard, Vancouver, ISO, and other styles
28

Yang, Long, Maofeng Liu, James A. Smith, and Fuqiang Tian. "Typhoon Nina and the August 1975 Flood over Central China." Journal of Hydrometeorology 18, no. 2 (January 31, 2017): 451–72. http://dx.doi.org/10.1175/jhm-d-16-0152.1.

Full text of the source
Abstract:
Abstract The August 1975 flood in central China was one of the most destructive floods in history. Catastrophic flooding was the product of extreme rainfall from Typhoon Nina over a 3-day period from 5 to 7 August 1975. Despite the prominence of the August 1975 flood, relatively little is known about the evolution of rainfall responsible for the flood. Details of extreme rainfall and flooding for the August 1975 event in central China are examined based on empirical analyses of rainfall and streamflow measurements and based on downscaling simulations using the Weather Research and Forecasting (WRF) Model, driven by Twentieth Century Reanalysis (20CR) fields. Key hydrometeorological features of the flood event are placed in a climatological context through hydroclimatological analyses of 20CR fields. Results point to the complex evolution of rainfall over the 3-day period with distinctive periods of storm structure controlling rainfall distribution in the flood region. Blocking plays a central role in controlling anomalous storm motion of Typhoon Nina and extreme duration of heavy rainfall. Interaction of Typhoon Nina with a second tropical depression played a central role in creating a zone of anomalously large water vapor transport, a central feature of heavy rainfall during the critical storm period on 7 August. Analyses based on the quasigeostrophic omega equation identified the predominant role of warm air advection for synoptic-scale vertical motion. Back-trajectory analyses using a Lagrangian parcel tracking algorithm are used to assess and quantify water vapor transport for the flood. The analytical framework developed in this study is designed to improve hydrometeorological approaches for flood-control design.
APA, Harvard, Vancouver, ISO, and other styles
29

Righi, Gaia, Thomas E. Lockard, Robert E. Rudd, Marc A. Meyers, and Hye-Sook Park. "Design of high-pressure iron Rayleigh–Taylor strength experiments for the National Ignition Facility." Journal of Applied Physics 131, no. 14 (April 14, 2022): 145902. http://dx.doi.org/10.1063/5.0084693.

Full text of the source
Abstract:
Iron is an important metal, scientifically and technologically. It is a common metal on Earth, forming the main constituent of the planet's inner core, where it is believed to be in solid state at high pressure and high temperature. It is also the main component of many important structural materials used in quasistatic and dynamic conditions. Laser-driven Rayleigh–Taylor instability provides a means of probing material strength at high pressure and high temperature. The unavoidable phase transition in iron at relatively low pressure induces microstructural changes that ultimately affect its strength in this extreme regime. This inevitable progression can make it difficult to design experiments and understand their results. Here, we address this challenge with the introduction of a new approach: a direct-drive design for Rayleigh–Taylor strength experiments capable of reaching up to 400 GPa over a broad range of temperatures. We use 1D and 2D hydrodynamic simulations to optimize target components and laser pulse shape to induce the phase transition and compress the iron to high pressure and high temperature. At the simulated pressure–temperature state of 350 GPa and 4000 K, we predict a ripple growth factor of 3–10 depending on the strength with minimal sensitivity to the equation of state model used. The growth factor is the primary observable, and the measured value will be compared to simulations to enable the extraction of the strength under these conditions. These experiments conducted at high-energy laser facilities will provide a unique way to study an important metal.
APA, Harvard, Vancouver, ISO, and other styles
30

Santiago, Alfonso, Constantine Butakoff, Beatriz Eguzkitza, Richard A. Gray, Karen May-Newman, Pras Pathmanathan, Vi Vu, and Mariano Vázquez. "Design and execution of a verification, validation, and uncertainty quantification plan for a numerical model of left ventricular flow after LVAD implantation." PLOS Computational Biology 18, no. 6 (June 13, 2022): e1010141. http://dx.doi.org/10.1371/journal.pcbi.1010141.

Full text of the source
Abstract:
Background: Left ventricular assist devices (LVADs) are implantable pumps that act as a life support therapy for patients with severe heart failure. Despite improving the survival rate, LVAD therapy can carry major complications. Particularly, the flow distortion introduced by the LVAD in the left ventricle (LV) may induce thrombus formation. While previous works have used numerical models to study the impact of multiple variables in the intra-LV stagnation regions, a comprehensive validation analysis has never been executed. The main goal of this work is to present a model of the LV-LVAD system and to design and follow a verification, validation and uncertainty quantification (VVUQ) plan based on the ASME V&V40 and V&V20 standards to ensure credible predictions. Methods: The experiment used to validate the simulation is the SDSU cardiac simulator, a bench mock-up of the cardiovascular system that allows mimicking multiple operation conditions for the heart-LVAD system. The numerical model is based on Alya, the BSC’s in-house platform for numerical modelling. Alya solves the Navier-Stokes equation with an Arbitrary Lagrangian-Eulerian (ALE) formulation in a deformable ventricle and includes pressure-driven valves, a 0D Windkessel model for the arterial output and a LVAD boundary condition modeled through a dynamic pressure-flow performance curve. The designed VVUQ plan involves: (a) a risk analysis and the associated credibility goals; (b) a verification stage to ensure correctness in the numerical solution procedure; (c) a sensitivity analysis to quantify the impact of the inputs on the four quantities of interest (QoIs): average aortic root flow (Q_Ao,avg), maximum aortic root flow (Q_Ao,max), average LVAD flow (Q_VAD,avg), and maximum LVAD flow (Q_VAD,max); (d) an uncertainty quantification using six validation experiments that include extreme operating conditions.
Results: Numerical code verification tests ensured correctness of the solution procedure, and numerical calculation verification showed a grid convergence index GCI_95% < 3.3%. The total Sobol indices obtained during the sensitivity analysis demonstrated that the ejection fraction, the heart rate, and the pump performance curve coefficients are the most impactful inputs for the analysed QoIs. The Minkowski norm is used as the validation metric for the uncertainty quantification. It shows that the midpoint cases have more accurate results when compared to the extreme cases. The total computational cost of the simulations was above 100 core-years, executed in around a three-week time span on the MareNostrum IV supercomputer. Conclusions: This work details a novel numerical model for the LV-LVAD system, supported by the design and execution of a VVUQ plan created following recognised international standards. We present a methodology demonstrating that stringent VVUQ according to ASME standards is feasible but computationally expensive.
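The grid convergence index reported in the Results is a standard Richardson-extrapolation-based bound on discretization error. A minimal sketch of the basic GCI formula with illustrative values (not the study's actual QoI numbers; the paper's exact safety-factor and confidence conventions may differ):

```python
def grid_convergence_index(f_fine, f_coarse, r, p, fs=1.25):
    """GCI on the fine grid: Fs * |(f_coarse - f_fine) / f_fine| / (r**p - 1),
    with refinement ratio r, observed order of accuracy p, and safety factor Fs."""
    rel_change = abs((f_coarse - f_fine) / f_fine)  # relative change between grids
    return fs * rel_change / (r ** p - 1.0)

# Illustrative fine/coarse QoI values, refinement ratio 2, second-order scheme
gci = grid_convergence_index(f_fine=4.95, f_coarse=5.10, r=2.0, p=2.0)
```

A value below 0.033 would satisfy the < 3.3% figure quoted above.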
APA, Harvard, Vancouver, ISO, and other styles
31

Ribeiro, Andreia Filipa Silva, Ana Russo, Célia Marina Gouveia, Patrícia Páscoa, and Jakob Zscheischler. "Risk of crop failure due to compound dry and hot extremes estimated with nested copulas." Biogeosciences 17, no. 19 (October 9, 2020): 4815–30. http://dx.doi.org/10.5194/bg-17-4815-2020.

Full text of the source
Abstract:
Abstract. The interaction between co-occurring drought and hot conditions is often particularly damaging to crop health and may cause crop failure. Climate change exacerbates such risks due to an increase in the intensity and frequency of dry and hot events in many land regions. Hence, here we model the trivariate dependence between spring maximum temperature, spring precipitation, and wheat and barley yields over two province regions in Spain with nested copulas. Based on the full trivariate joint distribution, we (i) estimate the impact of compound hot and dry conditions on wheat and barley loss and (ii) estimate the additional impact due to compound hazards compared to individual hazards. We find that crop loss increases when drought or heat stress is aggravated to form compound dry and hot conditions and that an increase in the severity of compound conditions leads to larger damage. For instance, compared to moderate drought only, moderate compound dry and hot conditions increase the likelihood of crop loss by 8 % to 11 %, while when starting with moderate heat, the increase is between 19 % and 29 % (depending on the cereal and region). These findings suggest that the likelihood of crop loss is driven primarily by drought stress rather than by heat stress, suggesting that drought plays the dominant role in the compound event; that is, drought stress is not required to be as extreme as heat stress to cause similar damage. Furthermore, when compound dry and hot conditions aggravate stress from moderate to severe or extreme levels, crop loss probabilities increase by 5 % to 6 % and 6 % to 8 %, respectively (depending on the cereal and region). Our results highlight the additional value of a trivariate approach for estimating the compounding effects of dry and hot extremes on crop failure risk. This approach can therefore contribute effectively to the design of management options and guide decision-making in agricultural practices.
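A nested-copula construction of the kind described here couples the two hazard variables first and then links the pair to yield through an outer copula. A toy sketch using Gumbel generators (the theta values and percentiles are invented, and the paper's fitted copula families may differ; in practice the temperature margin would also be rotated so that hot extremes sit in the lower tail):

```python
import math

def gumbel_copula(u, v, theta):
    """Bivariate Gumbel copula C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)),
    valid for theta >= 1 (theta = 1 is independence)."""
    s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-(s ** (1.0 / theta)))

def nested_trivariate(u_yield, u_temp, u_precip, theta_inner, theta_outer):
    """Fully nested copula: the inner copula models the compound dry-hot
    dependence; the outer copula adds crop yield.
    Requires theta_inner >= theta_outer >= 1 for a valid nesting."""
    inner = gumbel_copula(u_temp, u_precip, theta_inner)
    return gumbel_copula(u_yield, inner, theta_outer)

# Joint probability that all three margins fall below their 20th percentiles,
# under positive dependence (well above the 0.2**3 independence value)
p_joint = nested_trivariate(0.2, 0.2, 0.2, theta_inner=2.0, theta_outer=1.5)
```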
APA, Harvard, Vancouver, ISO, and other styles
32

Chen, Na, Meng Wang, Tom Alkim, and Bart van Arem. "A Robust Longitudinal Control Strategy of Platoons under Model Uncertainties and Time Delays." Journal of Advanced Transportation 2018 (2018): 1–13. http://dx.doi.org/10.1155/2018/9852721.

Full text of the source
Abstract:
Automated vehicles are designed to free drivers from driving tasks and are expected to improve traffic safety and efficiency when connected via vehicle-to-vehicle communication, that is, as connected automated vehicles (CAVs). The time delays and model uncertainties in vehicle control systems pose challenges for automated driving in the real world. Ignoring them may render the performance of cooperative driving systems unsatisfactory or even unstable. This paper aims to design a robust and flexible platooning control strategy for CAVs. A centralized control method is presented, where the leader of a CAV platoon collects information from followers, computes the desired accelerations of all controlled vehicles, and broadcasts the desired accelerations to followers. The robust platooning is formulated as a Min-Max Model Predictive Control (MM-MPC) problem, where optimal accelerations are generated to minimize the cost function under the worst case taken over the possible models. The proposed method is flexible in that it can be applied both to homogeneous platoons and to heterogeneous platoons with mixed human-driven and automated vehicles. A third-order linear vehicle model with fixed feedback delay and stochastic actuator lag is used to predict the platoon behavior. The actuator lag is assumed to vary randomly with an unknown distribution but a known upper bound. The controller regulates platoon accelerations over a time horizon to minimize a cost function representing driving safety, efficiency, and ride comfort, subject to speed limits, a plausible acceleration range, and minimal net spacing. The designed strategy is tested by simulating homogeneous and heterogeneous platoons in a number of typical and extreme scenarios to assess system stability and performance. The test results demonstrate that the designed control strategy can ensure robustness of stability and performance against model uncertainties and feedback delay, and outperforms deterministic-MPC-based platooning control.
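The min-max idea above can be caricatured in a few lines: score each candidate acceleration against every admissible actuator-lag realization and keep the one with the best worst case. Everything below (the one-step cost, the lag model, the numbers) is an invented illustration, not the paper's formulation:

```python
def step_cost(acceleration, lag, desired_gap, gap, rel_speed):
    """Hypothetical one-step cost: squared spacing error plus a comfort penalty.
    The actuator lag attenuates the effect of the commanded acceleration."""
    effective_acc = acceleration / (1.0 + lag)
    next_gap = gap + rel_speed - 0.5 * effective_acc  # simplified kinematics, dt = 1 s
    return (next_gap - desired_gap) ** 2 + 0.1 * acceleration ** 2

def min_max_control(candidates, lag_bounds, desired_gap, gap, rel_speed, n_lags=5):
    """Pick the acceleration minimizing the worst-case cost over a grid of
    admissible lags (the lag distribution is unknown; only its bounds are)."""
    lo, hi = lag_bounds
    lags = [lo + (hi - lo) * i / (n_lags - 1) for i in range(n_lags)]
    return min(candidates,
               key=lambda a: max(step_cost(a, lag, desired_gap, gap, rel_speed)
                                 for lag in lags))

# The gap is 2 m too wide and still growing, so the controller should accelerate
a_star = min_max_control(candidates=[-2.0, -1.0, 0.0, 1.0, 2.0],
                         lag_bounds=(0.0, 0.5),
                         desired_gap=20.0, gap=22.0, rel_speed=1.0)
```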
33

Calabrese, Matteo, Martin Cimmino, Francesca Fiume, Martina Manfrin, Luca Romeo, Silvia Ceccacci, Marina Paolanti, et al. "SOPHIA: An Event-Based IoT and Machine Learning Architecture for Predictive Maintenance in Industry 4.0." Information 11, no. 4 (April 9, 2020): 202. http://dx.doi.org/10.3390/info11040202.

Full text source
Abstract:
Predictive Maintenance (PdM) is a prominent strategy comprising all the operational techniques and actions required to ensure machine availability and to prevent a machine-down failure. One of the main challenges of PdM is to design and develop an embedded smart system to monitor and predict the health status of the machine. In this work, we use a data-driven approach based on machine learning applied to woodworking industrial machines for a major Italian woodworking corporation. Predicted failure probabilities are obtained from tree-based classification models (Gradient Boosting, Random Forest and Extreme Gradient Boosting) applied to the temporal evolution of event data. This is achieved by applying temporal feature engineering techniques and training an ensemble of classification algorithms to predict the Remaining Useful Lifetime (RUL) of woodworking machines. The effectiveness of the proposed method is shown by testing it on an independent sample of additional woodworking machines that had not experienced a machine-down event. The Gradient Boosting model achieved accuracy, recall, and precision of 98.9%, 99.6%, and 99.1%, respectively. Our predictive maintenance approach, deployed on a Big Data framework, allows multiple connected machines to be screened simultaneously by learning from terabytes of log data. The target prediction provides salient information which can be adopted within the maintenance management practice.
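The temporal feature engineering step mentioned above can be sketched as trailing-window event counts per machine; the event codes, window lengths and log schema below are illustrative assumptions, not the study's actual data.

```python
from collections import defaultdict

def temporal_features(events, day_now, windows=(7, 30)):
    """Count events per machine over trailing windows ending at day_now."""
    feats = defaultdict(dict)
    for machine, day, code in events:
        for w in windows:
            if day_now - w < day <= day_now:      # event falls in the window
                key = f"{code}_last{w}d"
                feats[machine][key] = feats[machine].get(key, 0) + 1
    return dict(feats)

# hypothetical event log: (machine_id, day, event_code)
log = [("m1", 1, "ERR"), ("m1", 28, "ERR"), ("m1", 29, "WARN"), ("m2", 25, "ERR")]
X = temporal_features(log, day_now=30)
```

Features of this kind, one row per machine, are what a tree-based classifier would then consume to predict RUL classes.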
34

Karimi Sharafshadeh, Bizhan, Mohammad Javad Ketabdari, Farhood Azarsina, Mohammad Amiri, and Moncef L. Nehdi. "New Fuzzy-Heuristic Methodology for Analyzing Compression Load Capacity of Composite Columns." Buildings 13, no. 1 (January 3, 2023): 125. http://dx.doi.org/10.3390/buildings13010125.

Full text source
Abstract:
Predicting the mechanical strength of structural elements is a crucial task for the efficient design of buildings. Considering the shortcomings of experimental and empirical approaches, there is growing interest in using artificial intelligence techniques to develop data-driven tools for this purpose. In this research, empowered machine learning was employed to analyze the axial compression capacity (CC) of circular concrete-filled steel tube (CCFST) composite columns. Accordingly, the adaptive neuro-fuzzy inference system (ANFIS) was trained using four metaheuristic techniques, namely earthworm algorithm (EWA), particle swarm optimization (PSO), salp swarm algorithm (SSA), and teaching learning-based optimization (TLBO). The models were first applied to capture the relationship between the CC and column characteristics. Subsequently, they were requested to predict the CC for new column conditions. According to the results of both phases, all four models could achieve dependable accuracy. However, the PSO-ANFIS was tangibly more efficient than the other models in terms of computational time and accuracy and could attain more accurate predictions for extreme conditions. This model could predict the CC with a relative error below 2% and a correlation exceeding 99%. The PSO-ANFIS is therefore recommended as an effective tool for practical applications in analyzing the behavior of the CCFST columns.
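A minimal particle swarm optimization loop of the kind used to tune the ANFIS parameters can be sketched as follows; the sphere objective is a toy stand-in for the ANFIS training error, and the inertia and acceleration coefficients are common textbook defaults, not the paper's settings.

```python
import random

def pso(objective, dim=2, n_particles=20, iters=80, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=objective)[:]        # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]    # inertia
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)   # toy stand-in for the training error
best = pso(sphere)
```

In PSO-ANFIS the position vector would encode the membership-function and consequent parameters, and the objective would be the prediction error on the training set.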
35

Ho, Long, Cassia Pompeu, Wout Van Echelpoel, Olivier Thas, and Peter Goethals. "Model-Based Analysis of Increased Loads on the Performance of Activated Sludge and Waste Stabilization Ponds." Water 10, no. 10 (October 10, 2018): 1410. http://dx.doi.org/10.3390/w10101410.

Full text source
Abstract:
To counter criticism of the low cost-effectiveness of conventional activated sludge (AS) technology, waste stabilization ponds (WSPs) offer a valid alternative for wastewater treatment due to their simple and inexpensive operation. To evaluate this alternative with respect to its robustness and resilience capacity, we perform in silico experiments of different peak-load scenarios in two mathematical models representing the two systems. A systematic process of quality assurance for these virtual experiments is implemented, including sensitivity and identifiability analysis, with non-linear error propagation. Moreover, model calibration of a 210-day real experiment with 31 days of increased load was added to the evaluation. Generally speaking, increased-load scenarios run in silico showed that WSP systems are more resilient towards intermediate disturbances and, hence, are suitable to treat not only municipal wastewater but also industrial wastewater, such as poultry wastewater and paperboard wastewater. However, when disturbances are extreme (over 7000 mg COD·L−1), the common design of the natural system fails to perform better than AS. In addition, sensitivity analysis reveals the most influential parameters on the performance of the two systems. In the AS system, parameters related to autotrophic bacteria have the highest influence on the dynamics of particulate organic matter, while nitrogen removal is largely driven by nitrification and denitrification. Conversely, with an insignificant contribution of heterotrophs, nutrient removal in the pond system is mostly achieved by algal assimilation. Furthermore, this systematic model-based analysis proved to be a suitable means for investigating the maximum load of wastewater treatment systems, thereby avoiding environmental problems and the high economic costs of cleaning surface waters after severe overload events.
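The one-at-a-time sensitivity analysis used in such model-based studies can be sketched with a toy steady-state first-order COD removal model (an illustrative stand-in for the full AS and WSP models); the reported value is the relative output change per relative parameter change.

```python
def effluent_cod(params):
    # toy steady-state first-order COD removal: C_out = C_in / (1 + k * HRT)
    return params["cod_in"] / (1.0 + params["k"] * params["hrt"])

def sensitivities(model, params, rel_step=0.01):
    """Relative output change per relative change in each parameter."""
    base = model(params)
    out = {}
    for name, value in params.items():
        bumped = dict(params, **{name: value * (1 + rel_step)})
        out[name] = (model(bumped) - base) / base / rel_step
    return out

# illustrative parameter values: rate constant, hydraulic retention time, influent COD
S = sensitivities(effluent_cod, {"k": 0.5, "hrt": 12.0, "cod_in": 400.0})
```

A full analysis would rank such indices across all model parameters and outputs, as done for the AS and WSP models in the study.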
36

Siddiqui, Shahan Yamin, Atifa Athar, Muhammad Adnan Khan, Sagheer Abbas, Yousaf Saeed, Muhammad Farrukh Khan, and Muhammad Hussain. "Modelling, Simulation and Optimization of Diagnosis Cardiovascular Disease Using Computational Intelligence Approaches." Journal of Medical Imaging and Health Informatics 10, no. 5 (May 1, 2020): 1005–22. http://dx.doi.org/10.1166/jmihi.2020.2996.

Full text source
Abstract:
Background: Data analytics and machine intelligence are widely involved in the detection of various diseases in human health care: the computer is used as a tool by experts in the medical field, and computer-based mechanisms diagnose different diseases in patients with high precision, with artificial neural networks (ANNs) driving many of the data-driven applications described in the health-care domain. Cardiovascular disease, as the name indicates, is an ailment directly connected to the human heart and the blood circulation system, so it should be diagnosed on time, because a delayed diagnosis may lead to the sufferer's death. To ease the diagnosis of this serious sickness, a multi-technique model is proposed. The research mainly aims to design a system able to detect cardiovascular sickness in sufferers using machine learning approaches. Objective: The main objective of the research is to use six parameters, namely age, chest pain, electrocardiogram, systolic blood pressure, fasting blood sugar and serum cholesterol, in a Mamdani fuzzy expert system to detect cardiovascular sickness, and to propose a type of device that can be successfully used to overcome cardiovascular diseases. The proposed model, Diagnosis of Cardiovascular Disease using a Mamdani Fuzzy Inference System (DCD-MFIS), shows 87.05 percent precision. A further aim is to delineate an effective neural network model to predict, with greater precision, whether a person is suffering from cardiovascular disease or not; as the ANN is composed of various algorithms, some are employed for the training of the network. The main target of the research is to make use of three techniques: fuzzy logic, neural networks, and deep machine learning.
The research employs the three techniques along with the previous comparisons, and the results are compared respectively. Methods: Artificial neural network and deep machine learning techniques are applied to detect cardiovascular sickness. Both techniques are applied using 13 parameters: age, gender, chest pain, systolic blood pressure, serum cholesterol, fasting blood sugar, electrocardiogram, exercise-induced angina, heart rate, old peak, number of vessels, affected person and slope. In this research, ANN-based detection of cardiovascular diseases is proposed; the ANN comprises many algorithms, some of which are employed for the training of the network used, to achieve the prediction ratio and to contrast the mutual results shown. Results: Three frameworks are analyzed and considered: fuzzy logic, ANN and deep extreme machine learning. The proposed automated models for diagnosing cardiovascular disease include fuzzy logic using the Mamdani Fuzzy Inference System (DCD-MFIS), an Artificial Neural Network (DCD-ANN) and Deep Extreme Machine Learning (DCD-DEML) using a back-propagation system. These frameworks help in attaining greater precision and accuracy. The proposed DCD Deep Extreme Machine Learning attains more accuracy than previously proposed solutions, at 92.45%. Conclusion: From the previous comparisons, the proposed automated diagnosis of cardiovascular disease uses fuzzy logic, artificial neural network and deep extreme machine learning approaches. The automated systems DCD-MFIS, DCD-ANN and DCD-DEML prove effective and efficient, with success ratios of 87.05%, 89.4% and 92.45%, respectively. To verify the performance of the ANNs and the computational analysis, many indicators determining precise performance were calculated.
The training of the neural networks is carried out using hidden layers of 10 to 20 neurons; for DEML, a hidden layer containing 10 neurons shows the best result. Finally, we can conclude that, among the three techniques of fuzzy logic, Artificial Neural Network and the proposed DCD Deep Extreme Machine Learning, the proposed solution gives more accuracy than previously proposed solutions, at 92.45%.
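The Mamdani-style inference underlying DCD-MFIS can be sketched with two of the six inputs; the membership shapes, single rule and defuzzification grid below are illustrative assumptions, not the published rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def risk_score(age, sbp):
    old = tri(age, 40.0, 70.0, 100.0)         # membership "elderly"
    high_bp = tri(sbp, 120.0, 160.0, 200.0)   # membership "high blood pressure"
    w_high = min(old, high_bp)                # rule: old AND high_bp -> high risk
    w_low = 1.0 - w_high                      # otherwise -> low risk
    xs = [i / 100 for i in range(101)]        # output universe [0, 1]
    agg = [max(min(w_low, tri(x, 0.0, 0.2, 0.5)),
               min(w_high, tri(x, 0.5, 0.8, 1.0))) for x in xs]
    return sum(x * m for x, m in zip(xs, agg)) / sum(agg)   # centroid defuzz

r_young = risk_score(30, 115)
r_old = risk_score(72, 165)
```

The full DCD-MFIS uses six inputs and a larger rule base; the min/max inference and centroid defuzzification shown here are the standard Mamdani operations.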
37

Barker, Jason S., Jeremy S. Fried, and Andrew N. Gray. "Evaluating Model Predictions of Fire Induced Tree Mortality Using Wildfire-Affected Forest Inventory Measurements." Forests 10, no. 11 (October 27, 2019): 958. http://dx.doi.org/10.3390/f10110958.

Full text source
Abstract:
Forest land managers rely on predictions of tree mortality generated from fire behavior models to identify stands for post-fire salvage and to design fuel reduction treatments that reduce mortality. A key challenge in improving the accuracy of these predictions is selecting appropriate wind and fuel moisture inputs. Our objective was to evaluate post-fire mortality predictions using the Forest Vegetation Simulator Fire and Fuels Extension (FVS-FFE) to determine if using representative fire-weather data would improve prediction accuracy over two default weather scenarios. We used pre- and post-fire measurements from 342 stands on forest inventory plots, representing a wide range of vegetation types affected by wildfire in California, Oregon, and Washington. Our representative weather scenarios were created by using data from local weather stations for the time each stand was believed to have burned. The accuracy of predicted mortality (percent basal area) with different weather scenarios was evaluated for all stands, by forest type group, and by major tree species using mean error, mean absolute error (MAE), and root mean square error (RMSE). One of the representative weather scenarios, Mean Wind, had the lowest mean error (4%) in predicted mortality, but performed poorly in some forest types, which contributed to a relatively high RMSE of 48% across all stands. The greatest mortality over-prediction errors arose in the scenarios with higher winds and lower fuel moisture, driven in large part by over-prediction of modelled flame length on steeper slopes. Our results also indicated that fuel moisture had a stronger influence on post-fire mortality than wind speed. Our results suggest that using representative weather can improve the accuracy of mortality predictions when attempting to model over a wide range of forest types. Focusing simulations exclusively on extreme conditions, especially with regard to wind speed, may lead to over-prediction of tree mortality from fire.
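The three evaluation metrics named above can be computed directly; the predicted and observed percent-basal-area mortality values below are toy numbers, not the study's measurements.

```python
from math import sqrt

def error_metrics(pred, obs):
    errs = [p - o for p, o in zip(pred, obs)]
    me = sum(errs) / len(errs)                          # mean error (bias)
    mae = sum(abs(e) for e in errs) / len(errs)         # mean absolute error
    rmse = sqrt(sum(e * e for e in errs) / len(errs))   # root mean square error
    return me, mae, rmse

me, mae, rmse = error_metrics([10.0, 50.0, 90.0], [20.0, 40.0, 95.0])
```

Mean error exposes systematic over- or under-prediction, while RMSE penalizes the large per-stand errors that drove the 48% figure reported above.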
38

Stimolo, María Inés, and Marcela Porporato. "How different cost behaviour is in emerging economies? Evidence from Argentina." Journal of Accounting in Emerging Economies 10, no. 1 (December 3, 2019): 21–47. http://dx.doi.org/10.1108/jaee-05-2018-0050.

Full text source
Abstract:
Purpose Cost behaviour literature is expanding its reach beyond developed economies; however, there is limited knowledge about its causes in emerging economies. This is an exploratory study of the determinants of sticky cost behaviour in Argentina, a country with periodic political and economic turbulence. The purpose of this paper is to test the effect of GDP, asset intensity, industry and cost type in an inflationary context. Design/methodology/approach The Anderson et al. (2003) empirical regression (ABJ model) is replicated in Argentina with 667 observations from 96 firms between the years 2004 and 2012. It uses panel data, and variables are defined as change rates between two periods. The sample excludes financial and insurance firms. It tests whether sticky cost behaviour changes in periods of macroeconomic deceleration, in firms belonging to industries with different asset intensity levels, or among different cost types. Findings The analysis shows that costs are sticky in Argentina, where a superb economic outlook is required to delay cutting resources or increasing costs. Cost behaviour is affected by social and cultural factors, such as labour inflexibility driven by powerful unions rather than by protective employment laws, asset intensity (industry) and the macroeconomic environment. Results suggest that costs are sticky for aggregate samples, but not for all subsamples. Practical implications Administrative costs are sticky when GDP grows; but when growth declines, managers and firms do not delay cost-cutting actions. Some subsamples are extreme cases of stickiness while others are anti-sticky, casting some doubt on the usefulness of sticky cost empirical tests applied to country-wide samples. Careful selection of observations for sticky cost studies in emerging economies is critical.
Originality/value Evidence from previous studies shows that, on average, costs are remarkably sticky in Argentina; this study shows that cost reduction activities occur faster but are not persistent enough to change the aggregated long-term results of cost stickiness in the presence of moderate to high inflation. The study contributes to the literature by suggesting that observations used in sticky cost studies from emerging economies might be drawn mainly from positive macroeconomic environments, might have skewed results due to extreme cases of stickiness, or might be distorted by inflation.
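The ABJ-style regression replicated in the study can be sketched on synthetic firm-year data: log(cost ratio) = b0 + b1·log(revenue ratio) + b2·D·log(revenue ratio), with D = 1 when revenue falls; a negative b2 indicates sticky costs. The data below are fabricated to be exactly sticky, purely for illustration.

```python
from math import log

def solve3(A, b):
    # Gauss-Jordan elimination for the 3x3 normal equations
    m = [row[:] + [bv] for row, bv in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(3):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [a - f * c for a, c in zip(m[r], m[i])]
    return [m[i][3] / m[i][i] for i in range(3)]

def abj_fit(rev_ratios, cost_ratios):
    X = [[1.0, log(r), log(r) if r < 1 else 0.0] for r in rev_ratios]
    y = [log(c) for c in cost_ratios]
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(3)]
    return solve3(XtX, Xty)

# synthetic firm-years: costs rise 0.9% per 1% revenue rise but fall only
# 0.5% per 1% revenue fall, i.e. sticky behaviour
rev = [1.10, 1.05, 0.90, 0.95, 1.20, 0.85]
cost = [r ** 0.9 if r >= 1 else r ** 0.5 for r in rev]
b0, b1, b2 = abj_fit(rev, cost)
```

On this exactly-sticky data the fit recovers b1 = 0.9 and b2 = -0.4; real panel data, as in the study, yields noisy estimates whose sign and significance are the object of the test.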
39

Hsu, Chien-Ning, Chien-Liang Liu, You-Lin Tain, Chin-Yu Kuo, and Yun-Chun Lin. "Machine Learning Model for Risk Prediction of Community-Acquired Acute Kidney Injury Hospitalization From Electronic Health Records: Development and Validation Study." Journal of Medical Internet Research 22, no. 8 (August 4, 2020): e16903. http://dx.doi.org/10.2196/16903.

Full text source
Abstract:
Background Community-acquired acute kidney injury (CA-AKI)-associated hospitalizations impose significant health care needs and contribute to in-hospital mortality. However, most risk prediction models developed to date have focused on AKI in a specific group of patients during hospitalization, and there is limited knowledge on the baseline risk in the general population for preventing CA-AKI-associated hospitalization. Objective To gain further insight into risk exploration, the aim of this study was to develop, validate, and establish a scoring system, using different machine-learning techniques, to help health professionals achieve early recognition of and intervention in CA-AKI and thus prevent permanent kidney damage. Methods A nested case-control study design was employed using electronic health records derived from a group of Chang Gung Memorial Hospitals in Taiwan from 2010 to 2017 to identify 234,867 adults with at least two measures of serum creatinine at hospital admission. Patients were classified into a derivation cohort (2010-2016) and a temporal validation cohort (2017). Patients with the first episode of CA-AKI at hospital admission were classified into the case group and those without CA-AKI were classified in the control group. A total of 47 potential candidate variables, including age, gender, prior use of nephrotoxic medications, Charlson comorbid conditions, commonly measured laboratory results, and recent use of health services, were tested to develop a CA-AKI hospitalization risk model. Permutation-based selection with both the extreme gradient boost (XGBoost) and least absolute shrinkage and selection operator (LASSO) algorithms was performed to determine the top 10 important features for scoring function development.
Results The discriminative ability of the risk model was assessed by the area under the receiver operating characteristic curve (AUC), and the predictive CA-AKI risk model derived by the logistic regression algorithm achieved an AUC of 0.767 (95% CI 0.764-0.770) on derivation and 0.761 on validation for any stage of AKI, with positive and negative predictive values of 19.2% and 96.1%, respectively. The risk model for prediction of CA-AKI stages 2 and 3 had an AUC value of 0.818 for the validation cohort with positive and negative predictive values of 13.3% and 98.4%, respectively. These metrics were evaluated at a cut-off value of 7.993, which was determined as the threshold to discriminate the risk of AKI. Conclusions A machine learning–generated risk score model can identify patients at risk of developing CA-AKI-related hospitalization through a routine care data-driven approach. The validated multivariate risk assessment tool could help clinicians to stratify patients in primary care, and to provide monitoring and early intervention for preventing AKI while improving the quality of AKI care in the general population.
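The permutation-based selection step can be sketched as the score drop after shuffling one feature at a time; the fixed linear scorer below is a toy stand-in for the trained XGBoost/LASSO models, and the data are synthetic.

```python
import random

def mse(predict, X, y):
    return sum((predict(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(predict, X, y, seed=0):
    rng = random.Random(seed)
    base = mse(predict, X, y)
    importances = []
    for j in range(len(X[0])):
        col = [x[j] for x in X]
        rng.shuffle(col)                      # break the feature-target link
        Xp = [x[:j] + [c] + x[j + 1:] for x, c in zip(X, col)]
        importances.append(mse(predict, Xp, y) - base)   # score drop
    return importances

rng = random.Random(42)
X = [[rng.random(), rng.random(), rng.random()] for _ in range(200)]
y = [3 * x[0] + 0.1 * x[1] for x in X]        # feature 2 is irrelevant
predict = lambda x: 3 * x[0] + 0.1 * x[1]     # stands in for the trained model
imps = permutation_importance(predict, X, y)
```

Ranking features by this drop and keeping the top 10, as the study does, yields the inputs for the final scoring function.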
40

Presley, Jennifer. "Artificial Lift: Toolbox Optimization." Journal of Petroleum Technology 74, no. 11 (November 1, 2022): 34–41. http://dx.doi.org/10.2118/1122-0034-jpt.

Full text source
Abstract:
_ The story of artificial lift has long been one of maintaining the status quo or the “if it ain’t broke, don’t fix it” approach, but adaptation and experimentation have been present along the way. From this pairing came innovations like the widely recognized symbol of the oil patch—the pumpjack—and the hidden marvels of technological wizardry that dwell downhole. In this second part of a two-part series on advances in artificial lift, we’ll look at the state of optimization and a trio of techniques and technologies under development or new to the market. No ‘Snowflakes’ For more than a decade the oil and gas industry has worked in from the edges in its quest to solve the puzzle that is the developmental life cycle of a shale reservoir. Each stage in the cycle has been one of adversity, with the drilling and completion stages presenting a host of challenges in the process of unlocking resources from reservoirs thousands of feet deep vertically and laterally long. These challenges continue into the well’s production stage, accelerating the cycle of adaptation and experimentation as crafty production engineers and field service technicians look for solutions to stave off the dreaded decline curve. “Our understanding of shale reservoirs has gone through the roof. In the early days, people were just starting to understand what shale is. Geologists knew the construct of shale, but to produce from it was a new phenomenon,” said Spandan Spandan, partner at McKinsey & Co., adding that the industry’s understanding of well design and construction has also come far. “This essentially allowed us to convert wells from a snowflake—each well optimized for its own conditions—to something that could be mass produced. The manufacturing era of wells was driven by the understanding of wells and by optimizing to the extreme,” he said. 
Shale producers looking to capitalize on the manufacturing era encouraged operators to select equipment and services based primarily on price, forcing service companies to focus on developing low-cost technology options, according to a 2017 McKinsey report. That focus remains as the world continues to recover from the COVID-19 pandemic along with geopolitical turmoil that has elevated global demand and placed pressure on oil and gas supplies. “We’ve seen a lot more volatility. The war in Ukraine has highlighted how the market is trying to balance affordability of energy, plus security of supply, and emissions. Historically, we had only two factors. Now we are trying to balance all three of them quite actively. And all of this is being done under the umbrella of the energy transition,” said Spandan. New Commercial Model One challenge facing the artificial-lift sector, particularly shale, is the establishment of a commercial model that allows operators to invest in artificial lift without having to make a long-term commitment of capital, Spandan said. “That is something that both the service company and the operator will need to collaborate on to come up with a commercial model that enables that. We’ve seen some movement within the industry to move the capex spend on artificial-lift systems into an opex model,” he said. “But the next step of evolution for the industry is to perhaps have production optimization as a service, one that bundles the sensors, the lift system, specialty chemicals, all of that as a service, that the capex is converted into something that’s opex.”
41

Gupta, Raghvendra, Rohit Mehta, Supreet Singh Bahga, and Amit Gupta. "(Digital Presentation) Thermal Behaviour Prediction of Commercial Lithium-Ion Cells Under Different C-Rate and Ambient Conditions Using Surrogate Modelling." ECS Meeting Abstracts MA2022-01, no. 2 (July 7, 2022): 389. http://dx.doi.org/10.1149/ma2022-012389mtgabs.

Full text source
Abstract:
Lithium-ion batteries (LIBs) have found widespread application in energy storage due to their high energy density, low self-discharge and low maintenance characteristics. However, the stability and longevity of the batteries under extreme operating conditions are yet to be fully understood. Numerous experimental and simulation studies have been performed to elucidate the effect of temperature, current, depth of discharge, and the number of cycles [1,2,3] on the performance of LIBs. These studies show that solid-electrolyte interface (SEI) layer and gas formation accelerate at high C-rates due to increased electrolyte decomposition [4,5]. Consequently, the cell's internal resistance increases, resulting in ever-increasing irreversible heat generation inside the cell [6]. This heat generation leads to high internal temperature and a corresponding reduction in cell capacity. Data on cell temperature for a wide range of ambient temperatures and cycling rates are limited because cell performance is typically reported at room temperature. The design of battery packs for electric vehicles (EVs) is also based on test data taken at room conditions. Given that the cell's capacity, specific energy and maximum power output are bound to depend on C-rate and ambient temperature, it is necessary to test and design battery packs based on these parameters. The thermal characteristics of a lithium-ion cell can be predicted using a physics-based thermal model or data-driven methods (DDM) relying on empirical data. Experimental determination of the internal cell parameters required in physics-based thermal models is challenging and time-consuming for commercial cells. On the other hand, DDM requires only cell cycling data at specific operating conditions, thereby eliminating the need for internal parameter estimation. This model predicts the experimental input and output data pattern and fits the response surface to estimate the unknown output.
In this study, surrogate modelling has been employed to estimate the surface temperature, capacity, energy, and average power of commercial lithium-ion cells for different discharge currents and ambient temperatures. Surrogate modelling, a popular data analysis and reduced-order modelling technique, aims to find a global minimum of a particular objective function using a few objective function evaluations [7]. The surrogate-based model divides experimental data into training and test data sets. The training data set is employed to train the algorithm. After that, the testing data set is used to validate the model's accuracy. Experiments were performed to develop a surrogate model, with the number of experiments decided using the design of experiments [8], for a current range of 0.5C to 3C and an ambient temperature range of 0°C to 45°C. The cycling of cells was performed using a high-current battery cycler (Arbin), and the ambient temperature was maintained using a thermal chamber (Cincinnati). The surface temperature of commercial 18650 (NMC811) lithium-ion cells was recorded using T-type thermocouples and a National Instruments DAQ module. A polynomial response surface was fitted using surrogate modelling on the experimentally obtained data. The response surface shown in Figure 1 contains nine data points for the preliminary study, sub-divided into a set of seven training and two testing points. The prediction error sum of squares (PRESS) is currently bounded within 10% due to the limited training data set availability and is expected to be within a band of 1% after adding data from ongoing experiments. The estimation of temperature, capacity, and specific energy can be used to optimize and develop an improved thermal management system for electric vehicles that works under various operating conditions.
References
1. Waldmann, T., Wilka, M., Kasper, M., Fleischhammer, M., & Wohlfahrt-Mehrens, M. (2014). Journal of Power Sources, 262, 129-135.
2. Guan, T., Sun, S., Yu, F., Gao, Y., Fan, P., Zuo, P., Du, C., & Yin, G. (2018). Electrochimica Acta, 279, 204-212.
3. Simolka, M., Heger, J. F., Traub, N., Kaess, H., & Friedrich, K. A. (2020). Journal of The Electrochemical Society, 167(11), 110502.
4. Xu, B., Diao, W., Wen, G., Choe, S. Y., Kim, J., & Pecht, M. (2021). Journal of Power Sources, 510, 230390.
5. Rashid, M., & Gupta, A. (2017). Electrochimica Acta, 231, 171-184.
6. Leng, F., Tan, C. M., & Pecht, M. (2015). Scientific Reports, 5(1), 1-12.
7. Queipo, N. V., Haftka, R. T., Shyy, W., Goel, T., Vaidyanathan, R., & Tucker, P. K. (2005). Progress in Aerospace Sciences, 41(1), 1-28.
8. Li, W., Xiao, M., Peng, X., Garg, A., & Gao, L. (2019). Applied Thermal Engineering, 147, 90-100.
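The response-surface fit and PRESS statistic described above can be sketched with a bilinear surface on toy (C-rate, ambient temperature, surface temperature) points; the values below are illustrative, not the paper's measurements.

```python
def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting
    n = len(b)
    m = [row[:] + [bv] for row, bv in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(n):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [a - f * c for a, c in zip(m[r], m[i])]
    return [m[i][n] / m[i][i] for i in range(n)]

def fit_surface(points):
    # bilinear basis: 1, c, t, c*t (a minimal polynomial response surface)
    X = [[1.0, c, t, c * t] for c, t, _ in points]
    y = [T for _, _, T in points]
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(4)] for i in range(4)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(4)]
    return solve(XtX, Xty)

def predict(beta, c, t):
    return beta[0] + beta[1] * c + beta[2] * t + beta[3] * c * t

def press(points):
    # leave-one-out prediction error sum of squares
    total = 0.0
    for i, (c, t, T) in enumerate(points):
        beta = fit_surface(points[:i] + points[i + 1:])
        total += (T - predict(beta, c, t)) ** 2
    return total

# (C-rate, ambient temperature [C], peak surface temperature [C]); toy values
data = [(0.5, 0, 23), (0.5, 25, 36), (0.5, 45, 46),
        (1.0, 0, 26), (1.0, 25, 40), (1.0, 45, 51),
        (2.0, 0, 32), (2.0, 25, 47), (2.0, 45, 59)]
beta = fit_surface(data)
loo_press = press(data)
```

A leave-one-out PRESS of this kind is what the paper bounds within 10% on its nine preliminary points.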
42

Li, Hong Jun, Wei Jiang, Dehua Zou, Yu Yan, An Zhang, and Wei Chen. "Robust motion control for multi-split transmission line four-wheel driven mobile operation robot in extreme power environment." Industrial Robot: the international journal of robotics research and application 47, no. 2 (January 27, 2020): 219–29. http://dx.doi.org/10.1108/ir-09-2019-0203.

Full text source
Abstract:
Purpose In the extreme power environment of multi-split transmission lines, with ultra-high voltage and strong electromagnetic interference, this paper aims to present a robust trajectory tracking motion control method for power cable robot manipulators based on sliding mode variable structure control theory, in order to improve the trajectory tracking and stability control performance of the robot manipulator during electric power operations and to effectively reduce the influence of disturbance factors on robot motion control. Design/methodology/approach Through the layering of an aerial-online-ground three-dimensional robot control architecture, the robot joint motion dynamic model has been built, and the motion control model of the N-degree-of-freedom robot system has also been obtained. On this basis, the state space expression of joint motion control under disturbance and uncertainty has been derived, and the manipulator sliding mode variable structure trajectory tracking control model has been established. The influence of the perturbation control parameters on the robot motion control can be compensated by back-propagation neural network learning, and the stability of the controller has been analyzed using Lyapunov theory. Findings The robot has been tested on an analog line in the lab, and the effectiveness of sliding mode variable structure control is verified by trajectory tracking simulation experiments with different typical signals and different methods. The field operation experiment further verifies the engineering practicability of the control method. At the same time, the control method has the remarkable characteristics of sound versatility, strong adaptability and easy expansion. Originality/value A three-dimensional control architecture of underground-online-aerial robots has been proposed for industrial field applications in the ubiquitous power internet of things (UPIOT) environment.
Starting from the robot joint motion, the dynamic equation of the robot joint motion and the state space expression of the robot control system have been established. Based on this, a robot closed-loop trajectory tracking control system has been designed. A robust trajectory tracking motion control method for robots based on sliding mode variable structure theory has been proposed, and a sliding mode control model for the robot has been constructed. The uncertain parameters in the control model have been compensated by the neural network in real-time, and the sliding mode robust control law of the robot manipulator has been solved and obtained. A suitable Lyapunov function has been selected to prove the stability of the system. This method enhances the expansibility of the robot control system and shortens the development cycle of the controller. The trajectory tracking simulation experiment of the robot manipulator proves that the sliding mode variable structure control can effectively restrain the influence of disturbance and uncertainty on the robot motion stability, and meet the design requirements of the control system with fast response, high tracking accuracy and sound stability. Finally, the engineering practicability and superiority of sliding mode variable structure control have been further verified by field operation experiments.
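A minimal sliding mode tracking law of the kind described can be sketched on a double integrator with a bounded unknown disturbance (a stand-in for the N-degree-of-freedom manipulator dynamics); the gains and the tanh boundary-layer smoothing are illustrative assumptions, not the paper's design.

```python
from math import tanh

def simulate_smc(target=1.0, lam=2.0, k=6.0, eps=0.05, dt=0.001, steps=8000):
    x, v = 0.0, 0.0
    for i in range(steps):
        e, e_dot = x - target, v
        s = e_dot + lam * e                    # sliding surface s = e' + lam*e
        u = -lam * e_dot - k * tanh(s / eps)   # smoothed switching control law
        d = 0.5 if i % 2 else -0.5             # bounded disturbance, unknown to u
        v += dt * (u + d)                      # double-integrator plant
        x += dt * v
    return x, v

x_final, v_final = simulate_smc()
```

Because the switching gain k exceeds the disturbance bound, the state reaches a boundary layer around s = 0 and then slides to the target despite the disturbance, which is the robustness property the abstract describes.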
43

Spangardt, Christoph, Christoph Keßler, Ramona Dobrzewski, Antonia Tepler, Simon Hanio, Bernd Klaubert, and Lorenz Meinel. "Leveraging Dissolution by Autoinjector Designs." Pharmaceutics 14, no. 11 (November 21, 2022): 2544. http://dx.doi.org/10.3390/pharmaceutics14112544.

Full text source
Abstract:
Chemical warfare or terrorism attacks with organophosphates may place intoxicated subjects under immediate life-threatening and psychologically demanding conditions. Antidotes, such as the oxime HI-6, which must be formulated as a powder for reconstitution reflecting the molecule’s light sensitivity and instability in aqueous solutions, dramatically improve recovery—but only if used soon after exposure. Muscle tremors, anxiety, and loss of consciousness after exposure jeopardize proper administration, translating into demanding specifications for the dissolution of HI-6. Reflecting the patients’ catastrophic situation and anticipated desire to react immediately to chemical weapon exposure, the dissolution should be completed within ten seconds. We are developing multi-dose and single-dose autoinjectors to reliably meet these dissolution requirements. The temporal and spatial course of dissolution within the various autoinjector designs was profiled colorimetrically. Based on these colorimetric insights with model dyes, we developed experimental setups integrating online conductometry to push experiments toward the relevant molecule, HI-6. The resulting blueprints for autoinjector designs integrated small-scale rotor systems, boosting dissolution across a wide range of viscosities, and meeting the required dissolution specifications driven by the use of these drug products in extreme situations.
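The ten-second dissolution requirement suggests a simple endpoint check on the online conductivity trace: the first time the signal reaches a set fraction of its plateau. The first-order trace below is synthetic, not measured HI-6 data.

```python
from math import exp

def dissolution_time(ts, kappas, fraction=0.95):
    """First time the conductivity reaches `fraction` of its final plateau."""
    plateau = kappas[-1]
    for t, k in zip(ts, kappas):
        if k >= fraction * plateau:
            return t
    return None

ts = [i * 0.5 for i in range(41)]                   # 0..20 s, 0.5 s sampling
kappas = [12.0 * (1 - exp(-t / 2.0)) for t in ts]   # first-order dissolution
t95 = dissolution_time(ts, kappas)
```

For this synthetic trace the 95%-of-plateau endpoint falls at 6 s, inside the ten-second specification discussed above.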
44

Roy, Bishwajit, Maheshwari Prasad Singh, Mosbeh R. Kaloop, Deepak Kumar, Jong-Wan Hu, Radhikesh Kumar, and Won-Sup Hwang. "Data-Driven Approach for Rainfall-Runoff Modelling Using Equilibrium Optimizer Coupled Extreme Learning Machine and Deep Neural Network." Applied Sciences 11, no. 13 (July 5, 2021): 6238. http://dx.doi.org/10.3390/app11136238.

Abstract:
Rainfall-runoff (R-R) modelling is used to study the runoff generation of a catchment. The quantity, or rate of change, of the hydrological variable called runoff is important to environmental scientists for water-related planning and design. This paper proposes (i) an integrated model, EO-ELM (an integration of the equilibrium optimizer (EO) and the extreme learning machine (ELM)), and (ii) a deep neural network (DNN) for one-day-ahead R-R modelling. The proposed R-R models are validated at two different benchmark catchment stations, the river Teifi at Glanteifi and the river Fal at Tregony in the UK. First, a partial autocorrelation function (PACF) is used to select the optimal number of lag inputs for the proposed models. Six other well-known machine learning models, namely ELM, kernel ELM (KELM), particle swarm optimization-based ELM (PSO-ELM), support vector regression (SVR), artificial neural network (ANN), and gradient boosting machine (GBM), are used to validate the two proposed models in terms of prediction efficiency. Furthermore, to increase the performance of the proposed models, a discrete wavelet-based data pre-processing technique is applied to the rainfall and runoff data. The performance of wavelet-based EO-ELM and DNN is compared with wavelet-based ELM (WELM), KELM (WKELM), PSO-ELM (WPSO-ELM), SVR (WSVR), ANN (WANN), and GBM (WGBM). An uncertainty analysis and a two-tailed t-test are carried out to ensure the trustworthiness and efficacy of the proposed models. The experimental results for two different time-series datasets show that EO-ELM performs better at the optimal number of lags than the others. In the case of wavelet-based daily R-R modelling, the proposed models performed better and showed more robustness than the other models used. Therefore, this paper demonstrates the efficient applicability of EO-ELM and DNN to R-R modelling in the hydrological modelling field.
45

Dai, Jinyang. "Improving Random Forest Algorithm for University Academic Affairs Management System Platform Construction." Advances in Multimedia 2022 (July 15, 2022): 1–9. http://dx.doi.org/10.1155/2022/8064844.

Abstract:
Using data on students' learning processes recorded in an online teaching platform to predict learning performance, help teachers analyze the learning situation, formulate teaching strategies, and give early warnings about students' learning state has been a hot topic in mixed-curriculum research in recent years. In view of the complexity, heterogeneity, and security requirements of college educational administration data and the difficulty of predicting and analyzing college students' achievements, this paper designs a college educational administration management system platform based on an improved random forest algorithm. Combining the advantages of three data-driven prediction algorithms, namely random forest, extreme gradient boosting (XGBoost), and gradient boosting decision tree (GBDT), a model based on an improved random forest algorithm is proposed and shown to be a noninferior prediction method. The model is then applied to the practical problem of predicting college students' grades. An experiment is carried out on a real data set provided by a municipal education bureau. The results show that the proposed model not only achieves good prediction accuracy but also solves the stability problem of the model after new data are added, which contributes to the iterative optimization of the model, improves its universality, and helps continuously track the learning behavior of college students across semesters.
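The abstract does not spell out how the three base learners are combined; one common and simple realization is to weight each model's predictions by its inverse validation error. The sketch below is that generic blend, offered only as an assumption about the "combining the advantages" step, not the paper's actual scheme.

```python
def blend_predictions(preds, val_errors):
    """Average base-model predictions (e.g., from RF, XGBoost, GBDT)
    with weights proportional to 1/validation-error, so more accurate
    models count for more."""
    w = [1.0 / e for e in val_errors]
    total = sum(w)
    w = [wi / total for wi in w]            # normalize weights to sum to 1
    n = len(preds[0])
    return [sum(wi * p[i] for wi, p in zip(w, preds)) for i in range(n)]
```

For example, with validation errors 1.0, 1.0, and 2.0 the three models receive weights 0.4, 0.4, and 0.2, so the least reliable model contributes least to each blended grade prediction.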
46

Manthos, I., S. Aune, J. Bortfeldt, F. Brunbauer, C. David, D. Desforge, G. Fanourakis, et al. "Precise timing and recent advancements with segmented anode PICOSEC Micromegas prototypes." Journal of Instrumentation 17, no. 10 (October 1, 2022): C10009. http://dx.doi.org/10.1088/1748-0221/17/10/c10009.

Abstract:
Timing information in current and future accelerator facilities is important for resolving objects (particle tracks, showers, etc.) at the extremely large particle multiplicities seen by the detection systems. The PICOSEC Micromegas detector has demonstrated the ability to time 150 GeV muons with sub-25 ps precision. Driven by detailed simulation studies and a phenomenological model that stochastically describes the dynamics of the signal formation, new PICOSEC designs were developed that significantly improve the timing performance of the detector. PICOSEC prototypes with a reduced drift gap size (~119 µm) achieved a resolution of 45 ps in timing single photons in laser beam tests, compared with 76 ps for the standard PICOSEC detector. Towards large-area detectors, multi-pad PICOSEC prototypes with segmented anodes have been developed and studied. Extensive tests in particle beams revealed that the multi-pad PICOSEC technology also provides very precise timing, even when the induced signal is shared among several neighbouring pads. Furthermore, new signal processing algorithms have been developed that can be applied during data acquisition to provide real-time, precise timing.
47

Swartz, S. M., A. Parker, and C. Huo. "Theoretical and empirical scaling patterns and topological homology in bone trabeculae." Journal of Experimental Biology 201, no. 4 (February 15, 1998): 573–90. http://dx.doi.org/10.1242/jeb.201.4.573.

Abstract:
Trabecular or cancellous bone is a major element in the structural design of the vertebrate skeleton, but has received little attention from the perspective of the biology of scale. In this study, we investigated scaling patterns in the discrete bony elements of cancellous bone. First, we constructed two theoretical models, representative of the two extremes of realistic patterns of trabecular size changes associated with body size changes. In one, constant trabecular size (CTS), increases in cancellous bone volume with size arise through the addition of new elements of constant size. In the other model, constant trabecular geometry (CTG), the size of trabeculae increases isometrically. These models produce fundamentally different patterns of surface area and volume scaling. We then compared the models with empirical observations of scaling of trabecular dimensions in mammals ranging in mass from 4 to 40×10^6 g. Trabecular size showed little dependence on body size, approaching one of our theoretical models (CTS). This result suggests that some elements of trabecular architecture may be driven by the requirements of maintaining adequate surface area for calcium homeostasis. Additionally, we found two key consequences of this strongly negative allometry. First, the connectivity among trabecular elements is qualitatively different for small versus large animals; trabeculae connect primarily to cortical bone in very small animals and primarily to other trabeculae in larger animals. Second, small animals have very few trabeculae and, as a consequence, we were able to identify particular elements with a consistent position across individuals and, for some elements, across species. Finally, in order to infer the possible influence of gross differences in mechanical loading on trabecular size, we sampled trabecular dimensions extensively within Chiroptera and compared their trabecular dimensions with those of non-volant mammals. We found no systematic differences in trabecular size or scaling patterns related to locomotor mode.
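The divergence between the two models can be made concrete: under CTS, total trabecular surface area grows roughly in proportion to cancellous volume, while under CTG (isometric growth) it grows only as volume to the two-thirds power. A toy sketch of that contrast (an illustrative idealization of geometrically similar elements, not the paper's measurements):

```python
def surface_area_ratio(volume_ratio, model):
    """Relative total trabecular surface area after cancellous bone
    volume grows by `volume_ratio`, under the two idealized models."""
    if model == "CTS":      # constant size: element count N scales with V
        return volume_ratio                   # area ~ N ~ V
    if model == "CTG":      # constant geometry: element length L ~ V^(1/3)
        return volume_ratio ** (2.0 / 3.0)    # area ~ L^2 ~ V^(2/3)
    raise ValueError(model)
```

For a 1000-fold volume increase, CTS predicts 1000x the surface area while CTG predicts only 100x, which is why a surface-area-dependent function such as calcium exchange would favor the CTS pattern.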
48

Skjaeveland, S. M., L. M. Siqveland, A. Kjosavik, W. L. Hammervold Thomas, and G. A. Virnovsky. "Capillary Pressure Correlation for Mixed-Wet Reservoirs." SPE Reservoir Evaluation & Engineering 3, no. 01 (February 1, 2000): 60–67. http://dx.doi.org/10.2118/60900-pa.

Abstract:
Summary: For water-wet reservoirs, several expressions may be used to correlate capillary pressure, or height above the free water level, with the water saturation. These correlations all feature a vertical asymptote at the residual water saturation, where the capillary pressure goes to plus infinity. We have developed a general capillary pressure correlation that covers primary drainage, imbibition, secondary drainage, and hysteresis scanning loops. The graph exhibits an asymptote at the residual saturation of water and of oil, where the capillary pressure goes to plus and minus infinity, respectively. The shape of the correlation is simple yet flexible, a sum of two terms, each with two adjustable parameters, and is verified by laboratory experiments and well-log data. An associated hysteresis scheme is also verified by experimental data. The correlation can be used to make representative capillary pressure curves for numerical simulation of reservoirs with varying wettability and to model and interpret flooding processes.

Introduction: Many capillary pressure correlations have been suggested in the literature [1-5], and they typically have two adjustable parameters. One parameter expresses the pore size distribution, and hence the curvature of the $p_c$ curve; the other expresses the actual level of the capillary pressure, i.e., the entry or the mean capillary pressure. Most of the correlations are limited to primary drainage and positive capillary pressures. Huang et al. [5] extended their correlation to include all four branches of the bounding hysteresis loop: spontaneous and forced imbibition, and spontaneous and forced secondary drainage. They employed the same primary drainage expression for each branch, scaled to fit the measured $p_c$ axis crossing. We have chosen to base the general capillary pressure correlation for mixed-wet reservoir rock on the simple power-law form of Brooks and Corey [2,3] for primary drainage capillary pressure from $S_w = 1$ to $S_{wR}$. The classical expression for a water-wet core may be slightly rewritten to facilitate the extension of scope,

$$p_{cd} = \frac{c_{wd}}{\left(\dfrac{S_w - S_{wR}}{1 - S_{wR}}\right)^{a_{wd}}}, \qquad (1)$$

where $c_{wd}$ is the entry pressure, $1/a_{wd}$ the pore size distribution index [6], and $S_{wR}$ the residual (irreducible) water saturation. The main reason for choosing this basis is the experimental verification of Eq. 1 [2,3] and its simplicity. According to Morrow [7], there is now wide acceptance of the view that most reservoirs are at wettability conditions other than completely water-wet. To our knowledge, however, no comprehensive, validated correlation has been published for mixed-wet reservoirs. The lack of a correlation makes it difficult to properly model displacement processes where imbibition is of importance and data are scarce, e.g., bottom water drive and water-alternate-gas injection. In this article, we present a general capillary pressure correlation and an associated hysteresis loop scheme. We demonstrate the applicability of the correlation by fitting data from a series of membrane and centrifuge experiments on fresh cores, and we show that the correlation is well suited to represent measured capillary pressure curves over a wide range of rock types. Also, by analyzing well-log data from the same well in a bottom-water-driven North Sea sandstone reservoir at several points in time, we are able to model the transition from the initial primary drainage saturation distribution to the later observed imbibition profile. The correlation crosses the zero capillary pressure axis at two points for the imbibition and secondary drainage branches. These points, together with the residual saturations, define the Amott-Harvey wettability index [7]. Thus, variations in wettability, e.g., with height, could be incorporated into the correlation. We adopt the terminology of Morrow [7] to characterize the capillary pressure curve (Fig. 1): "drainage" denotes a fluid-flow process where the water saturation is decreasing, even for an oil-wet porous medium; "imbibition" denotes a process where the oil saturation is decreasing; "spontaneous" imbibition occurs at positive capillary pressure and "forced" imbibition at negative capillary pressure; "spontaneous" (secondary) drainage occurs at negative capillary pressure and "forced" (secondary) drainage at positive capillary pressure; "primary" drainage denotes the initial drainage process starting from $S_w = 1.0$; and, for completeness, "primary" imbibition denotes an imbibition process starting from $S_o = 1$.

Correlation: The design idea for the correlation is as follows. Eq. 1 is valid for a completely water-wet system and, if the index w for water is replaced by the index o for oil, it is equally valid for a completely oil-wet system. For cases between these limits, the correlation should be symmetrical with respect to the two fluids, since neither dominates the wettability. One way to achieve a symmetrical form that is correct in the extremes is to sum the two limiting expressions, i.e., the water branch given by Eq. 1 and a similar oil branch, resulting in the general expression

$$p_c = \frac{c_w}{\left(\dfrac{S_w - S_{wR}}{1 - S_{wR}}\right)^{a_w}} + \frac{c_o}{\left(\dfrac{S_o - S_{oR}}{1 - S_{oR}}\right)^{a_o}}. \qquad (2)$$

The a's and c's are constants, with one set for imbibition and another for drainage. An imbibition curve from $S_{wR}$ to $S_{oR}$ is modeled by Eq. 2 and the four constants $(a_{wi}, a_{oi}, c_{wi}, c_{oi})$, and a secondary drainage curve from $S_{oR}$ to $S_{wR}$ by the constants $(a_{wd}, a_{od}, c_{wd}, c_{od})$. The constraints on the constants are that $a_w$, $a_o$, and $c_w$ are positive numbers and $c_o$ is a negative number. The plot of Eq. 2, both for imbibition and drainage, therefore consists of two branches: a positive water branch with an asymptote at $S_w = S_{wR}$ and a negative oil branch with an asymptote at $S_w = S_{oR}$ (Fig. 1). Depicted in Fig. 1 are (1) the primary drainage curve starting at $S_w = 1$, modeled by Eq. 2 with $c_o = 0$ and $c_w$ equal to the entry pressure; (2) the primary imbibition curve from Eq. 2 with $c_w = 0$ and $c_o$ equal to the entry pressure of water into a core at 100% oil saturation; and (3) the bounding (secondary) imbibition and secondary drainage curves forming the largest possible hysteresis loop.
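Eq. 2 is straightforward to evaluate numerically. The sketch below uses illustrative parameter values (not fitted constants from the paper) and shows the positive water branch and negative oil branch summing to a curve that decreases monotonically and crosses zero between the residual saturations.

```python
def capillary_pressure(Sw, cw=3.0, aw=0.5, co=-1.5, ao=0.5, SwR=0.1, SoR=0.15):
    """Eq. 2: positive water branch diverging as Sw -> SwR plus negative
    oil branch diverging as So -> SoR (parameter values illustrative)."""
    Swn = (Sw - SwR) / (1.0 - SwR)          # normalized water saturation
    Son = (1.0 - Sw - SoR) / (1.0 - SoR)    # normalized oil saturation
    return cw / Swn ** aw + co / Son ** ao
```

Near $S_w = S_{wR}$ the water branch dominates and $p_c$ is large and positive; near $S_w = 1 - S_{oR}$ the oil branch dominates and $p_c$ is large and negative, reproducing the two asymptotes of Fig. 1.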
49

Sun, Tianjun, Bo Zhang, Mengyang Cao, and Fritz Drasgow. "Faking Detection Improved: Adopting a Likert Item Response Process Tree Model." Organizational Research Methods, April 15, 2021, 109442812110029. http://dx.doi.org/10.1177/10944281211002904.

Abstract:
With the increasing popularity of noncognitive inventories in personnel selection, organizations typically wish to be able to tell when a job applicant purposefully manufactures a favorable impression. Past faking research has primarily focused on how to reduce faking via instrument design, warnings, and statistical corrections for faking. This article took a new approach by examining the effects of faking (experimentally manipulated and contextually driven) on response processes. We modified a recently introduced item response theory tree modeling procedure, the three-process model, to identify faking in two studies. Study 1 examined self-reported vocational interest assessment responses using an induced faking experimental design. Study 2 examined self-reported personality assessment responses when some people were in a high-stakes situation (i.e., selection). Across the two studies, individuals instructed or expected to fake were found to engage in more extreme responding. By identifying the underlying differences between fakers and honest respondents, the new approach improves our understanding of faking. Percentage cutoffs based on extreme responding produced a faker classification precision of 85% on average.
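The percentage-cutoff classifier mentioned at the end reduces to counting endpoint answers. A minimal sketch (the 5-point scale and the 0.6 cutoff are illustrative assumptions, not the article's calibrated values):

```python
def extreme_response_rate(responses, scale_min=1, scale_max=5):
    """Fraction of Likert answers sitting at either scale endpoint."""
    extremes = sum(r in (scale_min, scale_max) for r in responses)
    return extremes / len(responses)

def flag_faker(responses, cutoff=0.6):
    """Flag a respondent whose endpoint-response rate meets the cutoff."""
    return extreme_response_rate(responses) >= cutoff
```

A respondent answering 8 of 10 items at the scale endpoints would be flagged, while one answering entirely at the midpoint would not; the article's IRT tree model refines this by modeling the response process itself rather than raw counts.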
50

Shao, Yanli, Huawei Zhu, Rui Wang, Ying Liu, and Yusheng Liu. "A Simulation Data-Driven Design Approach for Rapid Product Optimization." Journal of Computing and Information Science in Engineering 20, no. 2 (January 3, 2020). http://dx.doi.org/10.1115/1.4045527.

Abstract:
Traditional design optimization is an iterative process of design, simulation, and redesign, which requires extensive calculation and analysis. The designer must manually and continually adjust and evaluate the design parameters based on the simulation results until a satisfactory design is obtained. However, the expensive computational cost and large resource consumption of complex products hinder the wide application of simulation in industry, and searching for the optimal design solution intelligently and efficiently is not an easy task. Therefore, this study proposes a simulation data-driven design approach that combines dynamic simulation data mining with design optimization. A dynamic simulation data mining algorithm, on-line sequential extreme learning machine with adaptive weights (WadaptiveOS-ELM), is adopted to train a dynamic prediction model that effectively evaluates the merits of new design solutions during the optimization process. Meanwhile, the prediction model is updated incrementally by incorporating new "good" data sets, reducing the modeling cost and improving the prediction accuracy. Furthermore, an improved heuristic optimization algorithm, adaptive and weighted center particle swarm optimization (AWCPSO), is introduced to guide the direction of design changes intelligently and improve search efficiency. In this way, the optimal design solution can be found automatically with fewer actual simulation iterations and higher optimization efficiency, effectively supporting rapid product optimization. The experimental results demonstrate the feasibility and effectiveness of the proposed approach.
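The AWCPSO variant itself is not specified in the abstract; as a baseline, a plain global-best PSO shows the search pattern such heuristics follow when guiding design changes. The sketch below minimizes a toy objective (all parameter values are generic defaults, not the paper's):

```python
import random

def pso_minimize(f, dim, bounds, n=20, iters=100, seed=0):
    """Global-best PSO over a box: inertia 0.7, cognitive/social pulls 1.5."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                    # personal best positions
    pbest = [f(x) for x in X]
    g = P[pbest.index(min(pbest))][:]        # global best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (P[i][d] - X[i][d])
                           + 1.5 * rng.random() * (g[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pbest[i]:                # update personal and global bests
                pbest[i], P[i] = fx, X[i][:]
                if fx < f(g):
                    g = X[i][:]
    return g, f(g)
```

In the paper's setting, `f` would be the (expensive) simulation surrogate provided by the trained prediction model, so each swarm evaluation avoids an actual simulation run.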