To see the other types of publications on this topic, follow the link: Sensitivity indices, given data, first order indices.

Journal articles on the topic "Sensitivity indices, given data, first order indices"

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles


Consult the top 50 journal articles for your research on the topic "Sensitivity indices, given data, first order indices".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever this information is included in the metadata.

Browse journal articles from a wide range of disciplines and organize your bibliography correctly.

1

Melese, Abdisa Shiferaw, Oluwole Daniel Makinde, and Legesse Lemecha Obsu. "Mathematical modelling and analysis of coffee berry disease dynamics on a coffee farm." Mathematical Biosciences and Engineering 19, no. 7 (2022): 7349–73. http://dx.doi.org/10.3934/mbe.2022347.

Full text
Abstract:
This paper focuses on a mathematical model for coffee berry disease infestation dynamics. This model considers coffee berry and vector populations with the interaction of fungal pathogens. In order to gain an insight into the global dynamics of coffee berry disease transmission and eradication on any given coffee farm, the assumption of logistic growth with a carrying capacity reflects the fact that the amount of coffee plants depends on the limited size of the coffee farm. First, we show that all solutions of the chosen model are bounded and non-negative with positive initial data in a feasible region. Subsequently, endemic and disease-free equilibrium points are calculated. The basic reproduction number with respect to the coffee berry disease-free equilibrium point is derived using a next generation matrix approach. Furthermore, the local stability of the equilibria is established based on the Jacobian matrix and Routh–Hurwitz criteria. The global stability of the equilibria is also proved by using the Lyapunov function. Moreover, bifurcation analysis is carried out using center manifold theory. The sensitivity indices for the basic reproduction number with respect to the main parameters are determined. Finally, the numerical simulations show agreement with the analytical results of the model analysis.
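Sensitivity indices of a basic reproduction number, as in studies like this one, are commonly the normalized forward sensitivity indices Λp = (∂R0/∂p)·(p/R0). The sketch below illustrates the calculation with a hypothetical R0 expression and made-up parameter values; neither is taken from the paper.

```python
import numpy as np

def r0(params):
    # Hypothetical basic reproduction number, NOT the paper's formula:
    # R0 = sqrt(beta1 * beta2 / (mu * (mu + gamma)))
    beta1, beta2, mu, gamma = params["beta1"], params["beta2"], params["mu"], params["gamma"]
    return np.sqrt(beta1 * beta2 / (mu * (mu + gamma)))

def sensitivity_index(params, name, h=1e-6):
    """Normalized forward sensitivity index: (dR0/dp) * (p / R0)."""
    base = r0(params)
    bumped = dict(params)
    bumped[name] = params[name] * (1 + h)          # relative perturbation
    dR0_dp = (r0(bumped) - base) / (params[name] * h)
    return dR0_dp * params[name] / base

params = {"beta1": 0.2, "beta2": 0.15, "mu": 0.05, "gamma": 0.1}
for p in params:
    print(p, round(sensitivity_index(params, p), 3))
```

A positive index means R0 grows with the parameter; the magnitude gives the relative change in R0 per relative change in the parameter.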
2

Younes, Anis, Jabran Zaouali, François Lehmann, and Marwan Fahs. "Sensitivity and identifiability of hydraulic and geophysical parameters from streaming potential signals in unsaturated porous media." Hydrology and Earth System Sciences 22, no. 7 (2 July 2018): 3561–74. http://dx.doi.org/10.5194/hess-22-3561-2018.

Full text
Abstract:
Fluid flow in a charged porous medium generates electric potentials called streaming potential (SP). The SP signal is related to both hydraulic and electrical properties of the soil. In this work, global sensitivity analysis (GSA) and parameter estimation procedures are performed to assess the influence of hydraulic and geophysical parameters on the SP signals and to investigate the identifiability of these parameters from SP measurements. Both procedures are applied to a synthetic column experiment involving a falling head infiltration phase followed by a drainage phase. GSA is used through variance-based sensitivity indices, calculated using sparse polynomial chaos expansion (PCE). To allow high PCE orders, we use an efficient sparse PCE algorithm which selects the best sparse PCE from a given data set using the Kashyap information criterion (KIC). Parameter identifiability is performed using two approaches: the Bayesian approach based on the Markov chain Monte Carlo (MCMC) method and the first-order approximation (FOA) approach based on the Levenberg–Marquardt algorithm. The comparison between both approaches allows us to check whether FOA can provide a reliable estimation of parameters and associated uncertainties for the highly nonlinear hydrogeophysical problem investigated. GSA results show that in short time periods, the saturated hydraulic conductivity (Ks) and the voltage coupling coefficient at saturation (Csat) are the most influential parameters, whereas in long time periods, the residual water content (θr), the Mualem–van Genuchten parameter (n) and the Archie saturation exponent (na) become influential, with strong interactions between them. The Mualem–van Genuchten parameter (α) has a very weak influence on the SP signals during the whole experiment. Results of parameter estimation show that although the studied problem is highly nonlinear, when several SP data collected at different altitudes inside the column are used to calibrate the model, all hydraulic (Ks, θs, α, n) and geophysical parameters (na, Csat) can be reasonably estimated from the SP measurements. Further, in this case, the FOA approach provides accurate estimations of both mean parameter values and uncertainty regions. Conversely, when the number of SP measurements used for the calibration is strongly reduced, the FOA approach yields accurate mean parameter values (in agreement with MCMC results) but inaccurate and even unphysical confidence intervals for parameters with large uncertainty regions.
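The FOA approach contrasted with MCMC above pairs a Levenberg–Marquardt fit with confidence intervals derived from the Jacobian at the optimum. Below is a minimal sketch of that idea on a toy exponential-decay model; the model, data, and noise level are invented stand-ins for the paper's hydrogeophysical forward problem.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import t

# Toy forward model standing in for the SP column model (assumption).
def forward(theta, times):
    a, k = theta
    return a * np.exp(-k * times)

rng = np.random.default_rng(0)
times = np.linspace(0.0, 5.0, 40)
y_obs = forward([2.0, 0.7], times) + rng.normal(0.0, 0.05, times.size)

# Levenberg-Marquardt fit of the two parameters.
res = least_squares(lambda th: forward(th, times) - y_obs, x0=[1.0, 1.0], method="lm")

# First-order approximation: covariance from the Jacobian at the optimum.
dof = times.size - res.x.size
s2 = 2.0 * res.cost / dof                      # residual variance (cost = 0.5 * SSR)
cov = s2 * np.linalg.inv(res.jac.T @ res.jac)
ci = t.ppf(0.975, dof) * np.sqrt(np.diag(cov))
for name, est, half in zip(["a", "k"], res.x, ci):
    print(f"{name} = {est:.3f} +/- {half:.3f}")
```

As the abstract notes, these linearized intervals are only trustworthy when the data constrain the parameters well; MCMC remains the reference for strongly nonlinear cases.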
3

Van Niekerk, Elna, and Luke Sandham. "Visual interpretation of ASTER satellite data, Part 1: Geologic mapping in the Cradle of Humankind World Heritage Site." Suid-Afrikaanse Tydskrif vir Natuurwetenskap en Tegnologie 26, no. 3 (21 September 2007): 177–95. http://dx.doi.org/10.4102/satnt.v26i3.132.

Full text
Abstract:
Since the first earth-observing satellite was launched in 1972, remote sensing has become a powerful tool in the arsenal of geoscientists. This satellite became known as Landsat 1 and carried the Multispectral Scanner (MSS), delivering imagery at a spatial resolution of 80 m and a spectral range from blue to near infrared. Ongoing satellite and sensor development to the end of the century produced the Landsat Thematic Mapper (TM) with improved spatial and spectral resolution, as well as the SPOT series of satellites delivering the highest spatial but limited spectral resolution. These developments culminated in the SPOT 4 (1998) and Landsat Enhanced Thematic Mapper (1999) sensors. While Landsat ETM in particular provided much improved spatial and spectral resolutions, on the basis of which a large amount of geoscientific remote sensing was conducted worldwide, the data did not provide adequate spectral and spatial sensitivity to be optimally effective for geological mapping at the local scale. On 18 December 1999 the Terra platform was launched, carrying five remote sensing instruments, including ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer). ASTER consists of three separate instrument subsystems, each operating in a different spectral region and using separate optical systems: the Visible and Near Infrared (VNIR) subsystem with a 15 m spatial resolution, the Short Wave Infrared (SWIR) subsystem with a 30 m spatial resolution, and the Thermal Infrared (TIR) subsystem with a 90 m spatial resolution. ASTER effectively offers an improvement on Landsat MSS, Landsat TM, Landsat ETM+ and SPOT spectral and spatial resolutions. Given the paucity of published research on geological remote sensing at the local scale in South Africa, and particularly on the use of ASTER for geological mapping in South Africa, it is imperative that the value of ASTER be investigated. This article reports on the improved detail and scale achieved in the mapping of litho-stratigraphy, geological structures and mining-related features by the visual interpretation of processed ASTER images. ASTER imagery obtained from the EOS website was subjected to a range of image enhancement and analysis techniques, including colour composites, band ratios, normalised difference indices, regression and decorrelation, in order to obtain optimal visual interpretability. Eight images thus obtained could be used for visual analysis, and it became evident that litho-stratigraphy, faults, fracture zones and elements of the regional seam system, as well as remnants of mining activities, were readily identifiable. Some of these were in accordance with the most recent and accurate geological map of the area, but many of them had apparently not been mapped. These features were annotated and verified by field checks. In all cases the accuracy of detection and location from satellite imagery was confirmed on the ground. The improved detail and accuracy obtained by visual interpretation of processed ASTER satellite data for mapping a section of the Cradle of Humankind World Heritage Site demonstrated the potential value of these data for a variety of other geoscience applications. The improved accuracy can apparently be ascribed jointly to the higher spatial and spectral resolution provided by ASTER data.
4

Jafari, Marzieh, and Khaled Akbari. "Global sensitivity analysis approaches applied to parameter selection for numerical model-updating of structures." Engineering Computations 36, no. 4 (13 May 2019): 1282–304. http://dx.doi.org/10.1108/ec-08-2018-0336.

Full text
Abstract:
Purpose This paper aims to measure the sensitivity of a structure's deformation numerical model (NM) to various types of design parameters, providing a suitable method for parameter selection that reduces the time needed for model-updating. Design/methodology/approach In this research, a variance-based sensitivity analysis (VBSA) approach is proposed to measure the sensitivity of the NM of structures. The contribution of measurements of the structure (such as design parameter values and geometry) to the output of the NM is studied using the first-order and total-order sensitivity indices developed by Sobol'. A data set of parameters, generated by considering different distributions (such as Gaussian or uniform) and different orders, is used as input, together with the resulting deformation variables of the NM as output, in the Sobol' indices estimation procedure. To verify the VBSA results, a gradient-based sensitivity analysis (SA), developed as a global SA method, was implemented on the NM results of a tunnel. Findings Regarding the estimated indices, it is concluded that the deformation functions derived from the tunnel's NM are usually non-additive. Some parameters were determined to be the most effective on the deformation functions; these can be selected for model-updating to avoid a time-consuming process and are better candidates for the group of updating parameters. The SA procedure also detected interactions between the selected parameters and other parameters, which are beneficial to consider in the model-updating procedure. In this study, some parameters (approximately 27 per cent of the total) with no effect on any of the objective functions were identified and excluded from the parameter candidates for model-updating. The indices resulting from the implemented VBSA were confirmed during validation by the gradient-based indices. Practical implications The introduced method was implemented for the NM of a circular lined tunnel, created with the Fast Lagrangian Analysis of Continua software. Originality/value This paper applies a statistical method, which is global, to the results of the NM of a soil structure with a complex system for parameter selection, to avoid a time-consuming model-updating process.
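Sobol' first-order and total-order indices of the kind used in this paper can be estimated with the SALib Python package; SALib is an assumption of this sketch (the paper does not name its software), and the Ishigami function is a standard test model, not the tunnel NM.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["x1", "x2", "x3"],
    "bounds": [[-np.pi, np.pi]] * 3,
}

X = saltelli.sample(problem, 1024)            # N * (2D + 2) model evaluations
Y = np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2 + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])

Si = sobol.analyze(problem, Y)
print("first-order:", Si["S1"])                # main effects
print("total-order:", Si["ST"])                # main effects + interactions
# A gap ST - S1 >> 0 flags interaction effects, i.e. a non-additive model,
# which is exactly the diagnosis reported for the tunnel deformation functions.
```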
5

Rodríguez-Guisado, Esteban, Antonio Ángel Serrano-de la Torre, Eroteida Sánchez-García, Marta Domínguez-Alonso, and Ernesto Rodríguez-Camino. "Development of an empirical model for seasonal forecasting over the Mediterranean." Advances in Science and Research 16 (26 August 2019): 191–99. http://dx.doi.org/10.5194/asr-16-191-2019.

Full text
Abstract:
In the frame of the MEDSCOPE project, which mainly aims at improving predictability on seasonal timescales over the Mediterranean area, a seasonal forecast empirical model making use of new predictors based on a collection of targeted sensitivity experiments is being developed. Here, a first version of the model is presented. This version is based on multiple linear regression, using global climate indices (mainly global teleconnection patterns and indices based on sea surface temperatures, as well as sea-ice and snow cover) as predictors. The model is implemented in a way that allows easy modifications to include new information from other predictors that will come as a result of the ongoing sensitivity experiments within the project. Given the large extent of the region under study, its high complexity (both in terms of orography and land-sea distribution) and its location, different subregions are affected by different drivers at different times. The empirical model therefore makes use of different sets of predictors for every season and every subregion. Starting from a collection of 25 global climate indices, a few predictors are selected for every season and every subregion, checking the linear correlation between predictands (temperature and precipitation) and global indices up to one year in advance and using moving averages from two to six months. Special attention has also been paid to the selection of predictors in order to guarantee smooth transitions between neighboring subregions and consecutive seasons. The model runs a three-month forecast every month with a one-month lead time.
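A stripped-down version of such an empirical model is ordinary least squares with lagged climate indices as predictors. Everything in this sketch (number of indices, coefficients, noise) is invented for illustration and is not taken from the MEDSCOPE model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years = 40
# Hypothetical predictors: moving averages of global climate indices
# (teleconnection patterns, SST-based indices), one column per index.
indices = rng.normal(size=(n_years, 3))
precip = 1.5 * indices[:, 0] - 0.8 * indices[:, 2] + rng.normal(0.0, 0.5, n_years)

# Fit the regression on all but the last year, predict the held-out season.
X = np.column_stack([np.ones(n_years), indices])
coef, *_ = np.linalg.lstsq(X[:-1], precip[:-1], rcond=None)
print("forecast:", X[-1] @ coef, "observed:", precip[-1])
```

In the real model this selection and fitting step is repeated per season and per subregion, with predictors screened by lagged correlation before entering the regression.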
6

Bürger, G., T. Q. Murdock, A. T. Werner, S. R. Sobie, and A. J. Cannon. "Downscaling Extremes—An Intercomparison of Multiple Statistical Methods for Present Climate." Journal of Climate 25, no. 12 (15 June 2012): 4366–88. http://dx.doi.org/10.1175/jcli-d-11-00408.1.

Full text
Abstract:
Five statistical downscaling methods [automated regression-based statistical downscaling (ASD), bias correction spatial disaggregation (BCSD), quantile regression neural networks (QRNN), TreeGen (TG), and expanded downscaling (XDS)] are compared with respect to representing climatic extremes. The tests are conducted at six stations from the coastal, mountainous, and taiga regions of British Columbia, Canada, whose climatic extremes are measured using the 27 Climate Indices of Extremes (ClimDEX; http://www.climdex.org/climdex/index.action). All methods are calibrated from data prior to 1991 and tested against the two decades from 1991 to 2010. A three-step testing procedure is used to establish a given method as reliable for any given index. The first step analyzes the sensitivity of a method to actual index anomalies by correlating observed and NCEP-downscaled annual index values; then, whether the distribution of an index corresponds to observations is tested. Finally, this latter test is applied to a downscaled climate simulation. This gives a total of 486 single and 162 combined tests. The temperature-related indices pass about twice as many tests as the precipitation indices, and temporally more complex indices that involve consecutive days pass none of the combined tests. With respect to regions, there is some tendency towards better performance at the coastal and mountaintop stations. With respect to methods, XDS performed best, on average, with 19% (48%) of combined (single) tests passed, followed by BCSD and QRNN with 10% (45%) and 10% (31%), respectively, ASD with 6% (23%), and TG with 4% (21%) of tests passed. Limitations of the testing approach and possible consequences for the downscaling of extremes in these regions are discussed.
7

Kanwal, S., M. K. Siddiqui, E. Bonyah, T. S. Shaikh, I. Irshad, and S. Khalid. "Ordering Acyclic Connected Structures of Trees Having Greatest Degree-Based Invariants." Complexity 2022 (16 March 2022): 1–16. http://dx.doi.org/10.1155/2022/3769831.

Full text
Abstract:
Being a building block of data science, link prediction plays a vital role in revealing the hidden mechanisms that drive network dynamics. Although many techniques depending on vertex similarity and edge features have been put forward to address well-known link-prediction challenges, many problems remain, owing to the unique formulation characteristics of sparse networks. In this study, we applied graph transformations and several inequalities to determine the greatest values of the first and second Zagreb invariants and of the SK and SK1 invariants for acyclic connected structures of given order, diameter, and number of pendant vertices. We also determined the corresponding extremal acyclic connected structures for these topological indices and provide an ordering (with 5 members) giving a sequence of acyclic connected structures having these indices in decreasing order from the greatest.
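For a concrete feel for the invariants discussed, the degree-based indices can be computed directly from a graph's degree sequence. The sketch below uses networkx and assumes the usual definitions (M1 = Σ d(v)², M2 = Σ over edges of d(u)d(v), SK = ½ Σ over edges of d(u)+d(v), SK1 = ½ Σ over edges of d(u)d(v)); the two trees compared are illustrative, not the paper's extremal structures.

```python
import networkx as nx

def degree_based_indices(G):
    deg = dict(G.degree())
    m1 = sum(d * d for d in deg.values())                      # first Zagreb index
    m2 = sum(deg[u] * deg[v] for u, v in G.edges())            # second Zagreb index
    sk = 0.5 * sum(deg[u] + deg[v] for u, v in G.edges())      # SK index
    sk1 = 0.5 * sum(deg[u] * deg[v] for u, v in G.edges())     # SK1 index
    return m1, m2, sk, sk1

# Compare two trees of the same order (8 vertices): a path and a star.
print(degree_based_indices(nx.path_graph(8)))   # path: degrees spread out
print(degree_based_indices(nx.star_graph(7)))   # star: extremal degree concentration
```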
8

Upadhyaya, Shruti, and R. Ramsankaran. "Multi-Index Rain Detection: A New Approach for Regional Rain Area Detection from Remotely Sensed Data." Journal of Hydrometeorology 15, no. 6 (1 December 2014): 2314–30. http://dx.doi.org/10.1175/jhm-d-14-0006.1.

Full text
Abstract:
In this article, a new approach called Multi-Index Rain Detection (MIRD) is suggested for regional rain area detection and was tested for India using Kalpana-1 satellite data. The approach was developed based on the following hypothesis: better results should be obtained for combined indices than for an individual index. Different combinations (scenarios) were developed by combining six commonly used rain detection indices using AND and OR logical connectives. For the study region, an optimal rain area detection scenario and optimal threshold values of the indices were found through a statistical multi-criteria decision-making technique called the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The TOPSIS analysis was carried out based on independent categorical statistics like probability of detection, probability of no detection, and the Heidke skill score. It is noteworthy that for the first time in the literature, an attempt has been made (through sensitivity analysis) to understand the influence of the proportion of rain/no-rain pixels in the calibration/validation dataset on a few commonly used statistics. The results thus obtained were used to identify the above-mentioned independent categorical statistics. Based on the results obtained and the validation carried out with different independent datasets, scenario 8 (TIRt < 260 K and TIRt − WVt < 19 K, where TIRt and WVt are the brightness temperatures from thermal IR and water vapor, respectively) is found to be an optimal rain detection index. The obtained results also indicate that the texture-based indices [standard deviation and mean of 5 × 5 pixels at time t (mean5)] did not perform well, perhaps because of the coarse resolution of Kalpana-1 data. It is also to be noted that scenario 8 performs much better than the Roca method used in the Indian National Satellite (INSAT) Multispectral Rainfall Algorithm (IMSRA) developed for India.
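The scenario-8 rule and two of the categorical statistics mentioned (probability of detection and the Heidke skill score) are easy to reproduce. This sketch uses synthetic brightness temperatures and a synthetic truth mask, not Kalpana-1 data.

```python
import numpy as np

def detect_rain(tir, wv):
    """Scenario 8 from the abstract: TIR < 260 K AND (TIR - WV) < 19 K."""
    return (tir < 260.0) & ((tir - wv) < 19.0)

def categorical_scores(detected, observed):
    a = np.sum(detected & observed)        # hits
    b = np.sum(detected & ~observed)       # false alarms
    c = np.sum(~detected & observed)       # misses
    d = np.sum(~detected & ~observed)      # correct negatives
    pod = a / (a + c)                      # probability of detection
    hss = 2.0 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
    return pod, hss

rng = np.random.default_rng(2)
tir = rng.uniform(220.0, 300.0, 10_000)       # synthetic brightness temperatures (K)
wv = tir - rng.uniform(5.0, 40.0, 10_000)
observed = tir < 255.0                        # synthetic "truth", for illustration only
print(categorical_scores(detect_rain(tir, wv), observed))
```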
9

Razmyan, S., and F. Hosseinzadeh Lotfi. "An Application of Monte-Carlo-Based Sensitivity Analysis on the Overlap in Discriminant Analysis." Journal of Applied Mathematics 2012 (2012): 1–14. http://dx.doi.org/10.1155/2012/315868.

Full text
Abstract:
Discriminant analysis (DA) is used to estimate a discriminant function that minimizes group misclassification when predicting the group membership of newly sampled data. A major source of misclassification in DA is the overlapping of groups. The uncertainty in the input variables and model parameters needs to be properly characterized in decision making. This study combines DEA-DA with a sensitivity analysis approach to assess the influence of banks' variables on the overall variance in overlap in a DA, in order to determine which variables are most significant. A Monte-Carlo-based sensitivity analysis is considered for computing the set of first-order sensitivity indices of the variables to estimate the contribution of each uncertain variable. The results show that the uncertainties in the loans granted and in different deposit variables are more significant in decision making than uncertainties in the banks' other variables.
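First-order sensitivity indices of this Monte Carlo type are usually computed with a pick-freeze (Saltelli-style) estimator. A self-contained sketch on an invented three-input model (not the DEA-DA bank model):

```python
import numpy as np

def first_order_indices(model, n, dim, rng):
    """Saltelli-style pick-freeze estimator of first-order Sobol indices."""
    A = rng.uniform(size=(n, dim))
    B = rng.uniform(size=(n, dim))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    s1 = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                   # swap in column i only
        # Saltelli (2010) estimator of V_i = Var(E[Y | x_i])
        s1[i] = np.mean(yB * (model(ABi) - yA)) / var
    return s1

model = lambda x: x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 0] * x[:, 2]
print(first_order_indices(model, 100_000, 3, np.random.default_rng(3)))
```

Each index estimates the fraction of output variance attributable to one input alone; indices that sum to less than one indicate interaction effects.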
10

Del Corso, Giulio, Roberto Verzicco, and Francesco Viola. "Sensitivity analysis of an electrophysiology model for the left ventricle." Journal of The Royal Society Interface 17, no. 171 (October 2020): 20200532. http://dx.doi.org/10.1098/rsif.2020.0532.

Full text
Abstract:
Modelling cardiac electrophysiology entails dealing with the uncertainties related to the input parameters, such as the heart geometry and the electrical conductivities of the tissues, thus calling for an uncertainty quantification (UQ) of the results. Since the chambers of the heart have different shapes and tissues, in order to make the problem affordable, here we focus on the left ventricle with the aim of identifying which of the uncertain inputs mostly affect its electrophysiology. In a first phase, the uncertainty of the input parameters is evaluated using data available from the literature and the output quantities of interest (QoIs) of the problem are defined. Following the polynomial chaos expansion approach, a training dataset is then created by sampling the parameter space using a quasi-Monte Carlo method, whereas a smaller independent dataset is used for the validation of the resulting metamodel. The latter is exploited to run a global sensitivity analysis with nonlinear variance-based indices and thus reduce the input parameter space accordingly. Thereafter, the uncertainty probability distributions of the QoIs are evaluated using a direct UQ strategy on a larger dataset and the results are discussed in the light of medical knowledge.
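The quasi-Monte Carlo training set plus independent validation workflow described above can be sketched with SciPy's Sobol' sampler and a crude polynomial surrogate standing in for the polynomial chaos expansion; the three-parameter toy model and bounds are invented.

```python
import numpy as np
from scipy.stats import qmc

# Quasi-Monte Carlo (Sobol') design for the training set.
sampler = qmc.Sobol(d=3, scramble=True, seed=4)
X_train = qmc.scale(sampler.random_base2(m=8), [0, 0, 0], [1, 1, 1])   # 256 points
model = lambda x: np.exp(-x[:, 0]) + 2.0 * x[:, 1] ** 2 + 0.3 * x[:, 2]
y_train = model(X_train)

# Degree-2 polynomial surrogate fitted by least squares (a crude stand-in
# for a polynomial chaos expansion).
def features(X):
    return np.column_stack([np.ones(len(X)), X, X ** 2])

coef, *_ = np.linalg.lstsq(features(X_train), y_train, rcond=None)

# Independent validation set to check the metamodel before trusting it for UQ.
X_val = np.random.default_rng(5).uniform(size=(64, 3))
err = features(X_val) @ coef - model(X_val)
print("validation RMSE:", np.sqrt(np.mean(err ** 2)))
```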
11

Schmitz, Kai. "LISA Sensitivity to Gravitational Waves from Sound Waves." Symmetry 12, no. 9 (9 September 2020): 1477. http://dx.doi.org/10.3390/sym12091477.

Full text
Abstract:
Gravitational waves (GWs) produced by sound waves in the primordial plasma during a strong first-order phase transition in the early Universe are going to be a main target of the upcoming Laser Interferometer Space Antenna (LISA) experiment. In this short note, I draw a global picture of LISA’s expected sensitivity to this type of GW signal, based on the concept of peak-integrated sensitivity curves (PISCs) recently introduced in two previous papers. In particular, I use LISA’s PISC to perform a systematic comparison of several thousands of benchmark points in ten different particle physics models in a compact fashion. The presented analysis (i) retains the complete information on the optimal signal-to-noise ratio, (ii) allows for different power-law indices describing the spectral shape of the signal, (iii) accounts for galactic confusion noise from compact binaries, and (iv) exhibits the dependence of the expected sensitivity on the collected amount of data. An important outcome of this analysis is that, for the considered set of models, galactic confusion noise typically reduces the number of observable scenarios by roughly a factor of two, more or less independent of the observing time. The numerical results presented in this paper are also available in the online repository Zenodo.
12

Wang, Shubin, Yukun Tian, Xiaogang Deng, Qianlei Cao, Lei Wang, and Pengxiang Sun. "Disturbance Detection of a Power Transmission System Based on the Enhanced Canonical Variate Analysis Method." Machines 9, no. 11 (6 November 2021): 272. http://dx.doi.org/10.3390/machines9110272.

Full text
Abstract:
Aiming at the dynamic correlation, periodic oscillation, and weak disturbance symptoms characteristic of power transmission system data, this paper proposes an enhanced canonical variate analysis (CVA) method, called SLCVAkNN, for monitoring the disturbances of power transmission systems. In the proposed method, CVA is first used to extract the dynamic features by analyzing the data correlation and to establish a statistical model with two monitoring statistics, T2 and Q. Then, in order to handle the periodic oscillation of power data, the two statistics are reconstructed in phase space, and the k-nearest neighbor (kNN) technique is applied to design the nearest-neighbor distance statistics DT2 and DQ as the enhanced monitoring indices. Further considering the difficulty of detecting weak disturbances with insignificant symptoms, statistical local analysis (SLA) is integrated to construct the primary and improved residual vectors of the CVA dynamic features, which sharpen the disturbance detection sensitivity. The verification results on real industrial data show that the SLCVAkNN method detects the occurrence of power system disturbances more effectively than traditional data-driven monitoring methods.
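The kNN enhancement amounts to replacing a parametric control limit with the distance to the k nearest training samples. A simplified sketch with scikit-learn on a synthetic oscillating signal; the CVA feature extraction and SLA residuals of the paper are omitted.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(6)
# Stand-in for the phase-space-reconstructed monitoring statistics:
# normal operation traces a noisy periodic cycle.
t = np.arange(2_000) * 0.05
train = np.column_stack([np.sin(t), np.cos(t)]) + rng.normal(0, 0.05, (2_000, 2))

k = 5
nn = NearestNeighbors(n_neighbors=k).fit(train)

def knn_distance(x):
    """Monitoring index: mean distance to the k nearest training samples."""
    dist, _ = nn.kneighbors(np.atleast_2d(x))
    return dist.mean()

# Control limit from the empirical 99th percentile of the training index.
d_train = np.array([knn_distance(x) for x in train[::10]])
limit = np.quantile(d_train, 0.99)
print(knn_distance([0.9, 0.4]), "limit:", limit)   # on-cycle point stays below
print(knn_distance([2.5, 2.5]) > limit)            # off-cycle disturbance flagged
```

The appeal of the distance index is that it needs no Gaussian assumption, so a periodic or multimodal normal regime does not trigger false alarms.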
13

Li, Lingcheng, Gautam Bisht, and L. Ruby Leung. "Spatial heterogeneity effects on land surface modeling of water and energy partitioning." Geoscientific Model Development 15, no. 14 (19 July 2022): 5489–510. http://dx.doi.org/10.5194/gmd-15-5489-2022.

Full text
Abstract:
Understanding the influence of land surface heterogeneity on surface water and energy fluxes is crucial for modeling earth system variability and change. This study investigates the effects of four dominant heterogeneity sources on land surface modeling: atmospheric forcing (ATM), soil properties (SOIL), land use and land cover (LULC), and topography (TOPO). Our analysis focused on their impacts on the partitioning of precipitation (P) into evapotranspiration (ET) and runoff (R), the partitioning of net radiation into sensible heat and latent heat, and the corresponding water and energy fluxes. An initial set of 16 experiments was performed over the continental US (CONUS) using the E3SM land model (ELMv1) with different combinations of heterogeneous and homogeneous datasets. The Sobol' total and first-order sensitivity indices were utilized to quantify the relative importance of the four heterogeneity sources. The Sobol' total sensitivity index measures the total heterogeneity effects induced by a given heterogeneity source, consisting of the contribution from its own heterogeneity (i.e., the first-order index) and its interactions with other heterogeneity sources. ATM and LULC are the most dominant heterogeneity sources in determining the spatial variability of water and energy partitioning, mainly through their own heterogeneity and slightly through their interactions with other heterogeneity sources. Their heterogeneity effects are complementary, both spatially and temporally. The overall impacts of SOIL and TOPO are negligible, except that TOPO dominates the spatial variability of R/P across the transitional climate zone between the arid western and humid eastern CONUS. Accounting for more heterogeneity sources improves the simulated spatial variability of water and energy fluxes when compared with the ERA5-Land reanalysis dataset. An additional set of 13 experiments identified the most critical components within each heterogeneity source: precipitation, temperature, and longwave radiation for ATM; soil texture and soil color for SOIL; and the maximum fractional saturated area parameter for TOPO.
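With 16 experiments forming a 2⁴ factorial over binary (heterogeneous vs. homogeneous) factors, the first-order index of each source is the variance of the conditional mean divided by the total variance. The sketch below uses invented outputs, purely to show the mechanics on such a design; it is not the ELMv1 analysis.

```python
import numpy as np
from itertools import product

# 2^4 factorial design over the four heterogeneity sources
# (1 = heterogeneous dataset, 0 = homogeneous). Outputs are invented.
rng = np.random.default_rng(7)
design = np.array(list(product([0, 1], repeat=4)))          # ATM, SOIL, LULC, TOPO
y = 2.0 * design[:, 0] + 1.5 * design[:, 2] \
    + 0.5 * design[:, 0] * design[:, 2] + rng.normal(0, 0.1, 16)

var_total = np.var(y)
for i, name in enumerate(["ATM", "SOIL", "LULC", "TOPO"]):
    # First-order index: variance of the conditional mean E[Y | factor_i].
    cond_means = [y[design[:, i] == level].mean() for level in (0, 1)]
    print(name, round(np.var(cond_means) / var_total, 3))
```

The shortfall of the summed first-order indices from one reflects the interaction term, which is what the total-order index picks up.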
14

Wang, Yan, and Pingyu Jiang. "Fluctuation evaluation and identification model for small-batch multistage machining processes of complex aircraft parts." Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 231, no. 10 (3 November 2015): 1820–37. http://dx.doi.org/10.1177/0954405415612371.

Full text
Abstract:
The key to improving the machining quality of a workpiece is to decrease the process fluctuation, which requires identifying the fluctuation sources first. For small-batch multistage machining processes of complex aircraft parts, identifying the fluctuation sources efficiently is difficult because of the limited shop-floor data and the complicated interactive effects among different stages. Aiming at this issue, a fluctuation evaluation and identification model for small-batch multistage machining processes is proposed based on sensitivity analysis theory. In order to improve data utilization, an analytical structure of the model is presented, comprising four levels: the part level, multistage level, single-stage level and quality-feature level. Corresponding to the four levels in the analytical structure, four fluctuation analysis indices are proposed to quantitatively evaluate the fluctuation level of different parts and to identify the weak stages and elements that cause abnormal fluctuation in the process flow. A five-stage deep-hole machining process for an aircraft landing gear is used as a case to verify the proposed model.
15

Burdon, R. D. "Early selection in tree breeding: principles for applying index selection and inferring input parameters." Canadian Journal of Forest Research 19, no. 4 (1 April 1989): 499–504. http://dx.doi.org/10.1139/x89-076.

Full text
Abstract:
Early selection is essential after the first generation of a tree-breeding programme. Two main issues that it raises are (i) the optimal age for selection for the next generation and (ii) how best to use early assessment data for the selection. Attention is given mainly to the latter issue. Early selection is considered as a special type of indirect selection that can embrace multiple traits. Where a sequence of early measurements has been made for single traits it is possible to treat the expressions of a trait at different dates as separate traits in a selection index. Constructing a least-squares selection index with this feature involves a number of questions. Outstanding among these is whether, and in what way, early measurements should be used as covariates for adjusting later ones. Such indices depend, however, on good estimates of phenotypic and genetic variance–covariance matrices. Potential aids for deriving and refining such matrices are reviewed, including the "Lambeth relationship" for age–age correlations and partial correlation analysis. However, analysis of the sensitivity of expected gains with respect to variations in assumed genetic parameters is strongly advocated before decisions are made on age of selection.
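The least-squares selection index referred to here is typically the Smith–Hazel index, whose weights are b = P⁻¹Ga for phenotypic and genetic covariance matrices P and G and economic weights a. A sketch with hypothetical two-trait matrices (an early and a late measurement of the same trait, treated as separate traits as the abstract describes):

```python
import numpy as np

# Hypothetical matrices, units arbitrary (not from the paper):
P = np.array([[1.00, 0.45],        # phenotypic variance-covariance
              [0.45, 1.20]])
G = np.array([[0.30, 0.20],        # genetic variance-covariance
              [0.20, 0.50]])
a = np.array([0.0, 1.0])           # economic weights: gain sought on the late trait

# Smith-Hazel least-squares index I = b'x with b = P^{-1} G a, so the
# early measurement enters as an information-bearing covariate.
b = np.linalg.solve(P, G @ a)
print("index weights:", b)

# Expected genetic gain per unit selection intensity: a'Gb / sqrt(b'Pb).
gain = a @ G @ b / np.sqrt(b @ P @ b)
print("expected gain:", round(gain, 3))
```

The paper's point about sensitivity analysis can be explored by perturbing G here and re-solving: shaky genetic parameter estimates shift b and the expected gain.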
16

Shilov, A. A., N. A. Kochergin, V. I. Ganyukov, A. N. Kokov, K. A. Kozyrin, A. A. Korotkevich, and O. L. Barbarash. "Comparability of scintigraphy data with coronary angiography after surgical myocardial revascularization." Regional blood circulation and microcirculation 18, no. 3 (7 October 2019): 23–28. http://dx.doi.org/10.24884/1682-6655-2019-18-3-23-28.

Full text
Abstract:
Introduction. Radionuclide imaging is included in the diagnostic methods after PCI and CABG in patients with symptoms, but the recommendations caution against routine testing of all asymptomatic patients after revascularization. The paper shows the results of single-photon emission computed tomography after hybrid coronary myocardial revascularization; an analysis of the sensitivity and specificity of three methods of surgical myocardial revascularization was carried out at 12 months. Aim of the study was to determine the sensitivity and specificity of SPECT in detecting coronary artery stenosis ≥50 % after three methods of surgical myocardial revascularization (CABG, PCI, and hybrid myocardial revascularization) in patients with coronary artery disease and multivessel coronary lesions. Material and methods. A retrospective analysis of 82 patients with stable forms of coronary artery disease who underwent myocardial revascularization for multivessel coronary lesions was carried out. The patients were divided into three groups: the first group consisted of 40 patients who underwent CABG, the second of 29 patients after PCI, and the third of 23 patients who underwent hybrid myocardial revascularization. Results. All patients, on average 21.8±8.6 months after myocardial revascularization, were hospitalized for single-photon emission computed tomography of the myocardium with 99mTc-technetril (SPECT) and control coronarography/shuntography. The frequency of significant stenosis on coronary angiography with a perfusion defect of ≥5 % on SPECT during exercise was 50, 50 and 33 % in the CABG, PCI, and hybrid revascularization groups, respectively (p=0.894). The sensitivity of SPECT was lowest after hybrid myocardial revascularization (20 %), while in the CABG group the sensitivity was 71.4 % (p=0.190). The SPECT specificity indices were much higher: in the CABG, PCI, and hybrid revascularization groups, respectively, 75.8, 79 and 88.9 % (p=0.530). Conclusion. There is no significant relationship between the size of the defect on SPECT and coronary angiography data, regardless of the type of surgical myocardial revascularization. Detection of a perfusion defect of more than 10 % on SPECT under load after surgical myocardial revascularization is grounds for coronary angiography in order to exclude stent restenosis or shunt dysfunction, as well as progression of coronary atherosclerosis.
17

Kaddech, Nabil, Noomen Guelmami, Tore Bonsaksen, Radhouene Doggui, Chiraz Beji, and Jalila El Ati. "Adaptation and Psychometric Evidence of the ARABIC Version of the Diabetes Self-Management Questionnaire (A-DSMQ)." Healthcare 10, no. 5 (21 May 2022): 951. http://dx.doi.org/10.3390/healthcare10050951.

Full text
Abstract:
(1) Background: Diabetic patients must engage in self-care practices in order to maintain optimal glycemic control, thereby reducing the likelihood of developing complications and enhancing the overall quality of their lives. The Diabetes Self-Management Questionnaire (DSMQ) is a tool for assessing self-management habits that may be used to predict glycemic control in people with diabetes. However, no Arabic language version of the instrument has been available. Therefore, we adapted an Arabic language version of the instrument in Tunisia. The current research aimed to assess the psychometric features of the Tunisian version of the DSMQ in patients with type 2 diabetes. (2) Method: Two samples including both genders, one exploratory (n = 208, mean age 53.2 ± 8.3) and one confirmatory (n = 441, mean age 53.4 ± 7.4), completed an adapted Arabic language version of the DSMQ, a sociodemographic questionnaire and information about their HbA1c levels. (3) Results: The exploratory factor analysis revealed that the 15 items of the A-DSMQ fit well with the data. Likewise, the alpha coefficients for the A-DSMQ factors were above 0.80: for "Glucose Management" (GM), "Dietary Control" (DC), "Physical Activity" (PA), and "Health-Care Use" (HU). The fit indices for the CFA were good, and the four-factor solution was confirmed. The Average Variance Extracted values and the Fornell–Larcker criterion established convergent and discriminant validity, respectively. The concurrent validity of the tool was established through the statistically significant negative relationships between the A-DSMQ factors and HbA1c, in addition to its positive association with the practice of physical activity measured by the IPAQ. (4) Conclusions: Given the high EFA factor loadings, the CFA fit indices, the correlation matrix, the sensitivity analysis, the convergent validity, and the excellent internal consistency of the A-DSMQ, it can be concluded that the A-DSMQ is an effective psychometric tool for assessing diabetes self-management in Tunisia.
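The alpha coefficients reported are Cronbach's alpha, which is straightforward to compute from a respondents-by-items score matrix. The sketch below uses synthetic scores with a shared latent component, not the A-DSMQ data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of scale scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Synthetic 4-item subscale: each item = latent factor + item-specific noise.
rng = np.random.default_rng(8)
latent = rng.normal(size=(200, 1))
scores = latent + rng.normal(0.0, 0.6, (200, 4))
print(round(cronbach_alpha(scores), 2))   # lands well above 0.8 for this noise level
```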
18

Liu, Zuojun, Cheng Xiao, Tieling Zhang, and Xu Zhang. "Research on Fault Detection for Three Types of Wind Turbine Subsystems Using Machine Learning." Energies 13, no. 2 (17 January 2020): 460. http://dx.doi.org/10.3390/en13020460.

Full text
Abstract:
In wind power generation, one aim of wind turbine control is to maintain the turbine in a safe operational status while achieving cost-effective operation. The purpose of this paper is to investigate new techniques for wind turbine fault detection based on supervisory control and data acquisition (SCADA) system data in order to avoid unscheduled shutdowns. The proposed method starts with analyzing and determining the fault indicators corresponding to a failure mode. Three main system failures, including generator failure, converter failure and pitch system failure, are studied. First, the indicator data corresponding to each of the three key failures are extracted from the SCADA system, and radar charts are generated. Secondly, a convolutional neural network with ResNet50 as the backbone network is selected, and the fault model is trained using the radar charts to detect faults and calculate the detection evaluation indices. Thirdly, a support vector machine classifier is trained to achieve fault detection. In order to show the effectiveness of the proposed radar chart-based methods, support vector regression analysis is also employed to build a fault detection model. By analyzing and comparing the fault detection accuracy of these three methods, it is found that the fault detection accuracy of the models developed using the convolutional neural network is clearly higher than that of the other two methods under the same data conditions. The newly proposed method for wind turbine fault detection is therefore shown to be more effective.
19

Pickup, David I., Robert M. Bernard, Eugene Borokhovski, Anne C. Wade, and Rana M. Tamim. "Systematically Searching Empirical Literature in the Social Sciences: Results from Two Meta-Analyses Within the Domain of Education." Российский психологический журнал 15, no. 4 (31 January 2019): 245–65. http://dx.doi.org/10.21702/rpj.2018.4.10.

Full text
Abstract:
Introduction. This paper provides an overview of the information retrieval strategy employed for two meta-analyses conducted by a systematic review team at Concordia University (Montreal, QC, Canada). Both papers draw on standards first articulated by H. M. Cooper and further developed by the Campbell Collaboration, which promote a comprehensive approach to systematically searching an extensive array of resources (bibliographic databases, print resources, citation indices, etc.) in order to locate both published and unpublished research. The goal is to verify whether searching comprehensively through multiple resources retrieves studies that are unique and hence improves the overall representativeness of a diverse body of literature. We also analyze the sensitivity and specificity of the results by data source. Methods. To determine source sensitivity, we consider the percentage of results from each source retrieved for full-text review. To determine source specificity, we derive a percentage from the total number of studies included in the final meta-analysis compared against the overall number of initial results found. Results. The results demonstrate the need to search beyond the subject-specific databases of a particular discipline, as unique results can be found in many places. Databases for related disciplines provided 129 unique includes to each meta-analysis, and multidisciplinary databases provided 44 and 99 unique includes for the two meta-analyses in question, respectively. Manual search techniques were much more sensitive and specific than electronic searches of databases and yielded a higher percentage of final includes. Discussion. The results demonstrate the utility of a comprehensive information retrieval methodology like that proposed by the Campbell Collaboration, which goes beyond the main subject databases to locate the full range of information sources, including grey literature.
20

Bhardwaj, Utkarsh, Angelo Palos Teixeira, and C. Guedes Soares. "Probabilistic Collapse Design and Safety Assessment of Sandwich Pipelines." Journal of Marine Science and Engineering 10, no. 10 (5 October 2022): 1435. http://dx.doi.org/10.3390/jmse10101435.

Full text
Abstract:
This paper presents an approach for the probabilistic design and safety assessment of sandwich pipelines under external pressure. The methodology consists of categorising sandwich pipeline collapse strength models based on interlayer adhesion conditions. The models are validated by comparing their predictions against collapse test data for sandwich pipelines. The accuracy of the strength models and their prediction uncertainty are used to select the best model in each category. For each interlayer adhesion category, the uncertainty propagation of the models' predictions over a wide range is assessed by the Monte Carlo simulation method. The proposed methodology is demonstrated using a case study of a sandwich pipeline with adequate probabilistic modelling of the basic random variables. Different limit states are defined for the three categories of sandwich pipelines, based on which structural reliability indices are estimated. By employing the First-Order Reliability Method for sensitivity analysis, the importance of the basic variables of the limit states is evaluated. A parametric analysis is then conducted, presenting reliability variations for several design and operational scenarios of sandwich pipelines. Finally, to achieve a uniform level of structural reliability for sandwich pipelines, a few suggestions are provided and practical partial safety factors are calculated. The results of the present analysis can provide guidance on the probabilistic design and operational safety assessment of sandwich pipelines.
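The Monte Carlo side of such an analysis reduces to sampling a limit state g = R − S and converting the failure probability into a generalized reliability index β = −Φ⁻¹(pf). The distributions and parameters below are illustrative, not the paper's.

```python
import numpy as np
from scipy.stats import norm

# Toy limit state g = R - S (collapse capacity minus external-pressure demand).
rng = np.random.default_rng(9)
n = 1_000_000
R = rng.lognormal(mean=np.log(30.0), sigma=0.08, size=n)   # collapse pressure (MPa)
S = rng.gumbel(loc=20.0, scale=1.5, size=n)                # extreme pressure (MPa)

pf = np.mean(R - S <= 0.0)                 # Monte Carlo failure probability
beta = -norm.ppf(pf)                       # generalized reliability index
print(f"pf = {pf:.2e}, beta = {beta:.2f}")
```

FORM, used in the paper for sensitivity analysis, would instead locate the design point on g = 0 and read variable importance from the direction cosines there; the crude sampling above only reproduces the headline pf and β.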
21

Granero-Belinchon, Carlos, Aurelie Michel, Jean-Pierre Lagouarde, Jose A. Sobrino, and Xavier Briottet. "Night Thermal Unmixing for the Study of Microscale Surface Urban Heat Islands with TRISHNA-Like Data." Remote Sensing 11, no. 12 (18 June 2019): 1449. http://dx.doi.org/10.3390/rs11121449.

Full text
Abstract:
Urban Heat Islands (UHIs) at the surface and canopy levels are major issues in urban planning and development. For this reason, understanding and quantifying the influence that different land uses/land covers have on UHIs is of particular importance. In order to perform a detailed thermal characterisation of a city, measurements covering the whole scene (city and surroundings) with a frequent revisit are needed. In addition, a resolution of tens of meters is needed to characterise urban heterogeneities. Spaceborne remote sensing meets the first two requirements, but Land Surface Temperature (LST) resolutions remain too coarse compared to the urban object scale. Thermal unmixing techniques developed in recent years allow daytime LST images at the desired scales. However, while LST gives information on surface urban heat islands (SUHIs), canopy UHIs and SUHIs are more correlated during the night, hence the need to develop thermal unmixing methods for night LSTs. This article proposes to adapt four empirical unmixing methods from the literature, Disaggregation of radiometric surface Temperature (DisTrad), High-resolution Urban Thermal Sharpener (HUTS), Area-To-Point Regression Kriging (ATPRK), and Adaptive Area-To-Point Regression Kriging (AATPRK), to unmix night LSTs. These methods are based on given relationships between LST and reflective indices, and on invariance hypotheses of these relationships across resolutions. A comparative study of the performance of the different techniques is then carried out on TRISHNA-synthesized images of Madrid. Since TRISHNA is a mission in preparation, the images were synthesized according to the planned specifications of the satellite from Airborne Hyperspectral Scanner (AHS) data of the city obtained during the DESIREX 2008 campaign. The coarse initial resolution is thus 60 m and the finer post-unmixing one is 20 m. In this article, we show that: (1) AATPRK is the best-performing unmixing technique when applied to night LST, with the other three techniques being undesirable for night applications at TRISHNA resolutions; this can be explained by the local application of AATPRK. (2) ATPRK and DisTrad do not significantly improve the LST image resolution. (3) HUTS, which depends on albedo measurements, misestimates the LST, leading to the worst temperature unmixing. (4) The two main factors explaining the obtained performance are the local/global application of the method and the reflective indices used in the LST-index relationship.
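Of the four methods, DisTrad is the simplest to sketch: regress coarse LST on a coarse reflective index, apply the regression at fine scale, and add back the coarse-scale residual so the sharpened field stays consistent with the coarse observation. The synthetic scene below is invented and the method heavily simplified relative to the published DisTrad.

```python
import numpy as np

rng = np.random.default_rng(10)
# Synthetic scene: fine-scale (20 m) vegetation index; the coarse LST (60 m)
# is the 3x3 block mean of a hidden fine LST that depends on the index.
ndvi_fine = rng.uniform(0.1, 0.8, (90, 90))
lst_fine_true = 310.0 - 15.0 * ndvi_fine + rng.normal(0, 0.3, (90, 90))
block = lambda a: a.reshape(30, 3, 30, 3).mean(axis=(1, 3))
lst_coarse, ndvi_coarse = block(lst_fine_true), block(ndvi_fine)

# DisTrad-style sharpening: regress coarse LST on the coarse index...
slope, intercept = np.polyfit(ndvi_coarse.ravel(), lst_coarse.ravel(), 1)
# ...apply the relation at fine scale, then add back the coarse residual.
residual = lst_coarse - (intercept + slope * ndvi_coarse)
lst_sharp = intercept + slope * ndvi_fine + np.kron(residual, np.ones((3, 3)))

print("RMSE vs hidden truth:", np.sqrt(np.mean((lst_sharp - lst_fine_true) ** 2)))
```

AATPRK's advantage, per the abstract, comes from fitting such regressions locally rather than once per scene.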
22

Likhvantseva, V. G., V. N. Trubilin, E. V. Korosteleva, S. G. Kapkova, and V. A. Vygodin. "Features of Visual Dysfunctions in Patients with Primary Hypothyroidism and Thyrotoxicosis." Ophthalmology in Russia 19, no. 3 (7 October 2022): 584–93. http://dx.doi.org/10.18008/1816-5095-2022-3-584-593.

Full text
Abstract:
Purpose: to study the prevalence and nature of visual dysfunctions in patients with primary hypothyroidism and thyrotoxicosis. Material and methods. The material for this study was the results of a survey of 54 patients (108 eyes) with thyroid dysfunctions: 32 people (64 eyes) with primary untreated hypothyroidism and 22 people (44 eyes) with primary untreated thyrotoxicosis. Static automated perimetry and dedicated short-wavelength (blue-on-yellow) perimetry were performed. The average total photosensitivity of each (n = 74) tested point of the field of view was analyzed, the topography of the focal defects was studied, and the severity of impairments to photosensitivity was assessed by aggregate signs. Results. Reliably high sensitivity (92.6 %) and specificity (50.0 %) of short-wavelength perimetry relative to static automated perimetry were revealed. In thyroid dysfunctions, the prevalence of optic neuropathy reaches 93 % according to short-wavelength perimetry versus 7 % for static automated perimetry. It is manifested by a diffuse decrease in light sensitivity to the blue stimulus, with the depth of depression increasing from the center to the periphery, in both types of thyroid dysfunction. Against this background, focal defects appeared as first-order scotomas in primary hypothyroidism and as second-order scotomas in primary thyrotoxicosis. Scotomas were located at the periphery of the central visual field, 20–30° from the fixation point. In the analyzed groups, high average maximum corrected visual acuity was established, which indicates the preservation of the photopic (cone) component of the visual analyzer. Conclusion. The pattern of photosensitivity disorders and the topography of the local defects revealed by short-wavelength perimetry indicate that the earliest signs of optic neuropathy are manifested at the level of the photoreceptors, selectively in the S-cones. Decreased sensitivity to the blue stimulus (440 nm) is an acquired color anomaly called tritanopia, which can be present with high visual function and is most often associated with a decrease in the number of S-cones and a deficiency of retinol (the source of cyanolab synthesis).
23

Ouédraogo, Gathenya, and Raude. "Projecting Wet Season Rainfall Extremes Using Regional Climate Models Ensemble and the Advanced Delta Change Model: Impact on the Streamflow Peaks in Mkurumudzi Catchment, Kenya." Hydrology 6, no. 3 (26 August 2019): 76. http://dx.doi.org/10.3390/hydrology6030076.

Full text
Abstract:
Each year, many African countries experience natural hazards such as floods and, because of their low adaptive capabilities, they hardly have the means to face the consequences and therefore suffer huge economic losses. Extreme rainfall plays a key role in the occurrence of these hazards. Climate projection studies should therefore focus more on extremes in order to provide a wider range of future scenarios of extremes which can aid policy decision making in African societies. Some researchers have attempted to analyze climate extremes through indices reflecting extremes in climate variables such as rainfall. However, it is difficult to assess impacts on streamflow based on these indices alone, as most hydrological models require daily data as inputs. Others have analyzed climate projections through general circulation models (GCMs) but have found their resolution too coarse for regional studies. Dynamic downscaling using regional climate models (RCMs) seems to address the limitation of GCMs, although RCMs may still lack accuracy because they also contain biases that need to be removed. Given these limitations, the current study combined both dynamic and statistical downscaling methods to correct biases and improve the models' reproduction of high extremes. This study's aim was to analyze extreme high flows under projections of extreme wet-season rainfall for the 2041 horizon in a Kenyan South Coast catchment. The advanced delta change (ADC) method was applied to observed data (1982–2005) and to control (1982–2005) and near-future (2018–2041) output from an ensemble mean of multiple regional climate models (RCMs). The future daily rainfall time series thus created was introduced into the HEC-HMS (Hydrologic Engineering Center's Hydrologic Modeling System) hydrological model, and the generated future flows were compared to the baseline flow at gaging station 3KD06, where observed flow was available. The findings suggest that in the study area, the RCMs, bias-corrected by the ADC method, project an increase in wet rainfall extremes in the first rainy season of the year, MAMJ (March–April–May–June), and a decrease in the second rainy season, OND (October–November–December). The changes in rainfall extremes induced a similar change pattern in streamflow extremes at gaging station 3KD06, meaning that an increase/decrease in rainfall extremes generated an increase/decrease in streamflow extremes. Due to the lack of long-term good-quality data, the researchers performed a frequency analysis for return periods of up to 50 years in order to assess the changes induced by the ADC method. With a longer data series, further analysis could be done to forecast the maximum flow for return periods of up to 1000 years, which could serve as the design flow for different infrastructure.
24

Towner, A. P. M., C. L. Brogan, T. R. Hunter, and C. J. Cyganowski. "VLA Observations of Nine Extended Green Objects in the Milky Way: Ubiquitous Weak, Compact Continuum Emission, and Multi-epoch Emission from Methanol, Water, and Ammonia Masers." Astrophysical Journal 923, no. 2 (1 December 2021): 263. http://dx.doi.org/10.3847/1538-4357/ac2c86.

Full text
Abstract:
We have observed a sample of nine Extended Green Objects (EGOs) at 1.3 and 5 cm with the Very Large Array (VLA) with subarcsecond resolution and ∼7–14 μJy beam⁻¹ sensitivities in order to characterize centimeter continuum emission as it first appears in these massive protoclusters. We find EGO-associated continuum emission—within 1″ of the extended 4.5 μm emission—in every field, which is typically faint (order 10¹–10² μJy) and compact (unresolved at 0″.3–0″.5). The derived spectral indices of our 36 total detections are consistent with a wide array of physical processes, including both non-thermal (19% of detections) and thermal free–free processes (e.g., ionized jets and compact H II regions, 78% of the sample) and warm dust (1 source). We also find EGO-associated 6.7 GHz CH3OH and 22 GHz H2O maser emission in 100% of the sample and NH3 (3,3) masers in ∼45%; we do not detect any NH3 (6,6) masers at ∼5.6 mJy beam⁻¹ sensitivity. We find statistically significant correlations between L_radio and L_bol at two physical scales and three frequencies, consistent with thermal emission from ionized jets, but no correlation between L_H2O and L_radio for our sample. From these data, we conclude that EGOs likely host multiple different centimeter-continuum-producing processes simultaneously. Additionally, at our ∼1000 au resolution, we find that all EGOs except G18.89−0.47 contain ∼1–2 massive sources based on the presence of CH3OH maser groups, which is consistent with our previous work suggesting that these are typical massive protoclusters in which only one to a few of the young stellar objects are massive.
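The spectral indices mentioned follow from flux densities at the two observed bands via α = log(S₁/S₂) / log(ν₁/ν₂) for S_ν ∝ ν^α. The flux values in this sketch are invented, not from the paper.

```python
import numpy as np

def spectral_index(s1, nu1, s2, nu2):
    """alpha in S_nu ∝ nu^alpha, from flux densities at two frequencies."""
    return np.log(s1 / s2) / np.log(nu1 / nu2)

# Illustrative flux densities (uJy) at roughly the two observed bands,
# 5 cm (~6 GHz) and 1.3 cm (~22 GHz); values are invented.
alpha = spectral_index(s1=120.0, nu1=22.0, s2=45.0, nu2=6.0)
print(round(alpha, 2))   # positive alpha near +0.6-1: thermal free-free emission
# A clearly negative alpha would instead point to non-thermal (synchrotron) emission.
```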
25

Bitsch, Florian, Philipp Berger, Arne Nagels, Irina Falkenberg, and Benjamin Straube. "S142. NOVEL CONCEPTS OF SOCIAL IMPAIRMENTS IN SCHIZOPHRENIA." Schizophrenia Bulletin 46, Supplement_1 (April 2020): S90. http://dx.doi.org/10.1093/schbul/sbaa031.208.

Full text
Abstract:
Background Social deficits are a well-known phenotype of schizophrenia which strongly influences the clinical progress of patients. A core substrate of these dysfunctions is altered Theory of Mind (ToM) processing, which critically shapes social interactions and can present in an exaggerated or atrophied form in the disorder. Although there is a well-established link between clinical outcome measures and ToM deficits, less evidence exists about the neural and behavioral mechanisms underlying the specific core function responsible for forming interpersonal mental representations, which in turn help to optimize social interactions. Along this line, an urgent clinical issue is how alterations in these interpersonal predictive processes translate into clinical problems and whether they can be positively influenced by psychological group interventions. Methods In a set of different studies, we used functional and structural magnetic resonance imaging and a dynamic social interactive task, a modified prisoner's dilemma game, in which participants can form mental representations of different interaction partners in order to optimize their joint interaction sequences. Methodologically, we have drawn on several sophisticated methods, ranging from graph-theoretic network indices and functional and effective connectivity to structural covariance analyses, and linked them with behavioral and clinical outcome measures. Results Our data show a central relevance of the right temporo-parietal junction (rTPJ) in forming mental representations in healthy subjects, given that the region integrates memory processing streams during social interactions. Further behavioral analyses indicate that these mechanisms are relevant for interpersonal adaptive processes during social interactions. In patients with schizophrenia, we have found dysfunctions in this important mechanism, indicated by a reduced integration of neural pathways from the temporal lobe, such as the hippocampus. These alterations are associated with behavioral indices of dysfunction in generating mental representations during an interpersonal interaction. When we examined neural computations in the entire ToM network, we found for the first time that the rTPJ has reduced outgoing effective connections to brain regions linked with higher-order cognition, such as the dmPFC, in patients. On a conceptual level, this finding might be associated with dysfunctional updating processes from brain regions located in the temporal lobe, which remains an ongoing empirical question. Discussion Our results indicate a central pathological relevance of the rTPJ for social dysfunctions in schizophrenia. First, the region seems to be less informed by memory processing streams, which are relevant for updating social information with previous information. Additionally, we have shown that within the core mentalizing network the region integrates to a lesser extent with the dmPFC, which might be associated with our previous findings. Our results point to concrete targets for specific interventions to improve the clinically important social-cognitive dysfunctions occurring in the disorder. Hence, we suggest approaches to enhance the functioning of brain mechanisms relevant for human connections that can facilitate patients' clinical outcomes.
Styles APA, Harvard, Vancouver, ISO, etc.
26

Besier, T., M. Pillhofer, S. Botzenhart, U. Ziegenhain, H. Kindler, G. Spangler, I. Bovenschen, S. Gabler et A. Künster. « Child Abuse and Neglect : Screening for Risks During the Perinatal Period ». Geburtshilfe und Frauenheilkunde 72, no 05 (mai 2012) : 397–402. http://dx.doi.org/10.1055/s-0031-1298442.

Texte intégral
Résumé :
Purpose: There are increasing calls for earlier interventions for families in order to prevent child maltreatment. Here we introduce a screening instrument for assessing risk indicators for child abuse and neglect as early as in the context of maternity clinics. The present study is the first report on the psychometric properties of this instrument, the “short questionnaire for risk indices around birth” (RIAB). Material and Methods: Data were collected in the context of three different studies conducted at Ulm University Hospital. To examine interrater reliability, eight case vignettes were rated by n = 90 study participants (50 students and 40 experts working at a maternity clinic). Criterion validity was examined in two studies applying the German version of the Child Abuse Potential Inventory (CAPI) (n = 96 families at risk and n = 160 additional families). Results: Both laymen and experts were able to understand and use the screening instrument correctly, leading to high agreement with the given sample solutions. A high concordance was found between parentsʼ and expertsʼ ratings: where the RIAB screening indicated no risk factors, parents themselves reported significantly fewer stressors and burdens than parents for whom the RIAB indicated the need for a thorough examination. Conclusion: In the context of maternity clinics the RIAB is a useful, broadly applicable instrument for screening for existing risk factors at the earliest possible stage, thus allowing the initiation of specific interventions when needed.
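As an illustration of the interrater-reliability analysis described above, the following minimal Python sketch computes Cohen's kappa for two raters classifying case vignettes; the ratings and the binary coding are invented assumptions, not data from the study.

```python
# Minimal sketch: quantifying interrater agreement on case vignettes,
# as used when validating screening instruments like the RIAB.
# The rating data below are illustrative, not taken from the study.
from sklearn.metrics import cohen_kappa_score

# Two raters classify eight vignettes (1 = risk indicators present, 0 = absent)
rater_a = [1, 0, 1, 1, 0, 0, 1, 0]
rater_b = [1, 0, 1, 0, 0, 0, 1, 0]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level
```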
Styles APA, Harvard, Vancouver, ISO, etc.
27

Traver, José Emilio, Cristina Nuevo-Gallardo, Inés Tejado, Javier Fernández-Portales, Juan Francisco Ortega-Morán, J. Blas Pagador et Blas M. Vinagre. « Cardiovascular Circulatory System and Left Carotid Model : A Fractional Approach to Disease Modeling ». Fractal and Fractional 6, no 2 (26 janvier 2022) : 64. http://dx.doi.org/10.3390/fractalfract6020064.

Texte intégral
Résumé :
Cardiovascular diseases (CVDs) remain the leading cause of death worldwide, according to recent reports from the World Health Organization (WHO). This fact encourages research into the cardiovascular system (CVS) from multiple points of view beyond the medical perspective, highlighting among them computational and mathematical models, which involve experiments that are much simpler and less expensive to perform than in vivo or in vitro heart experiments. However, the CVS is a complex system whose dynamic models require multidisciplinary knowledge; such models help to predict cardiovascular events in patients with heart failure, myocardial or valvular heart disease, so it remains an active area of research. Firstly, this paper presents a novel electrical model of the CVS that extends the classic Windkessel models to the left common carotid artery, motivated by the need for a more complete model from a medical point of view for validation purposes, as well as to describe other cardiovascular phenomena in this area, such as atherosclerosis, one of the main risk factors for CVDs. The model is validated against clinical indices and experimental data obtained from clinical trials performed on a pig. Secondly, as a first step, the suitability of a fractional-order behavior of this model is discussed for characterizing different heart diseases through pressure–volume (PV) loops. Unlike other models, it allows us to modify not only the topology, parameters or number of model elements, but also the dynamics, by tuning a single parameter, the characteristic differentiation order; consequently, it is expected to provide valuable insight into this complex system and to support the development of clinical decision systems for CVDs.
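For orientation, the following minimal Python sketch simulates the classic two-element Windkessel model that such electrical models build on; all parameter values and the inflow waveform are illustrative assumptions. A fractional-order variant would replace the integer-order derivative dP/dt with a derivative of order α, the single tuning parameter the abstract refers to.

```python
# Minimal sketch of a classic two-element Windkessel model:
# C dP/dt = Q(t) - P/R, with Q(t) a pulsatile inflow from the heart.
# All parameter values below are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

R, C = 1.0, 1.5   # peripheral resistance, arterial compliance (arbitrary units)
T = 0.8           # cardiac period in seconds

def inflow(t):
    # Simple half-sine ejection during the first third of each beat
    phase = t % T
    return 300 * np.sin(np.pi * phase / (T / 3)) if phase < T / 3 else 0.0

def dP_dt(t, P):
    return [(inflow(t) - P[0] / R) / C]

sol = solve_ivp(dP_dt, (0, 5 * T), [80.0], max_step=1e-3)
print(f"Pressure range over 5 beats: {sol.y[0].min():.1f}-{sol.y[0].max():.1f}")
```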
Styles APA, Harvard, Vancouver, ISO, etc.
28

Cheng, Geng, Zao Li, Shuting Xia, Mingfei Gao, Maosheng Ye et Tingting Shi. « Research on the Spatial Sequence of Building Facades in Huizhou Regional Traditional Villages ». Buildings 13, no 1 (10 janvier 2023) : 174. http://dx.doi.org/10.3390/buildings13010174.

Texte intégral
Résumé :
Under the influence of the regional environment, building communities within traditional villages exhibit regional styles and features. Based on the research team’s earlier studies, and given the protection and renewal practices of Huizhou traditional villages in Southern Anhui Province, China, this study investigated the spatial sequences of building facades and explicated the laws of these spatial sequences. The research involved a series of technical steps. First, in the case selection stage, typical traditional villages and spatial sequence paths were established. Second, in the data acquisition stage, 3D laser scanning technology was used to acquire building elevation data and conduct 3D modelling. Finally, the measurement indices were determined by vector analysis of the data. Factor analysis and cluster analysis were then used to reduce and classify these data in order to explore the compositional rules of the building units. Meanwhile, the regularity of the facade organization of building groups was further quantified by examining the combination and connection relationships between the buildings and spatial patterns, and the laws of facade organization of the building groups were explicated. The purpose of this study is not only to achieve accurate inheritance of historical data, but also to explore, from a preservation standpoint, the mechanism of centralized contiguity behind the traditional villages through their external features. The results demonstrated that there are spatial sequences represented by building facades in Huizhou traditional villages. Moreover, internal laws of “largely identical but with minor differences” in the building unit composition and building group organization were identified. These findings: (1) provide a deeper understanding of the regional characteristics of Huizhou traditional villages in Southern Anhui Province, China; (2) offer a foundation for practical administration requirements; and (3) identify a novel research perspective and a feasible technical route for the protection of traditional villages in other regions, with an appreciation for the value of spatial sequences.
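The reduce-then-classify step can be sketched in a few lines. The Python snippet below uses PCA as a stand-in for factor analysis, followed by k-means clustering of facade measurement indices; the measurement matrix is invented and the pipeline is only an illustration of the general approach, not the authors' exact procedure.

```python
# Hedged sketch of the reduction-then-classification step described above:
# PCA (as a stand-in for factor analysis) followed by k-means clustering
# of facade measurement indices. The measurement matrix is invented.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Rows: building units; columns: facade indices (e.g., heights, widths, ratios)
X = rng.normal(size=(60, 6))

X_std = StandardScaler().fit_transform(X)
factors = PCA(n_components=2).fit_transform(X_std)   # dimensionality reduction
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(factors)
print("Cluster sizes:", np.bincount(labels))
```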
Styles APA, Harvard, Vancouver, ISO, etc.
29

Yoshihara, Masaaki, Kuniaki Bandoh et Anthony Marmarou. « Cerebrovascular carbon dioxide reactivity assessed by intracranial pressure dynamics in severely head injured patients ». Journal of Neurosurgery 82, no 3 (mars 1995) : 386–93. http://dx.doi.org/10.3171/jns.1995.82.3.0386.

Texte intégral
Résumé :
Appropriate management of intracranial pressure (ICP) in severely head injured patients depends in part on cerebral vessel reactivity to PCO2; loss of CO2 reactivity has been associated with poor outcome. This study describes a new method for evaluating vascular reactivity in head-injured patients by determining the sensitivity of ICP change to alterations in PCO2. This method was combined with measurements of the pressure volume index (PVI), which allowed calculation of the blood volume change necessary to alter ICP. The objective of this study was to investigate the ICP response and the blood volume change corresponding to alterations in PCO2 and to examine the correlation of responsivity with outcome as measured on the Glasgow Outcome Scale. The PVI and ICP at different end-tidal PCO2 levels produced by mild hypo- and hyperventilation were obtained in 49 patients with Glasgow Coma Scale scores of less than 8, and over a wide range of PCO2 (25 to 40 mm Hg) in eight patients. Given the assumption that the PVI remained constant during alteration of PaCO2, the estimated blood volume change per torr change of PCO2 was calculated by the following equation: BVR = PVI × Δlog(ICP)/ΔPCO2, where BVR = blood volume reactivity. The data in this study showed that PVI remained stable with changes in PCO2, thus validating the assumption used in the blood volume estimates. Moreover, the response of ICP to PCO2 alterations followed an exponential curve that could be described in terms of responsivity indices to capnic stimuli. It was found that responsivity to hypocapnia was reduced by 50% compared to responsivity to hypercapnia measured within 24 hours of injury (p < 0.01). The sensitivity of ICP to estimated blood volume changes in patients with a PVI of less than 15 ml was extremely high, with only 4 ml of blood required to raise ICP by 10 mm Hg. The authors conclude from these data that, following traumatic injury, the resistance vessels are in a state of persistent vasoconstriction, possibly due to vasospasm or compression. Furthermore, BVR correlates with outcome on the Glasgow Outcome Scale, indicating that assessment of cerebrovascular response within the first 24 hours of injury may be of prognostic value.
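The BVR equation above lends itself to a direct calculation. The following minimal Python sketch implements it under the assumption that the logarithm is base 10 (the usual convention for the pressure volume index); all input values are illustrative, not patient data from the study.

```python
# Minimal sketch of the blood volume reactivity (BVR) estimate described
# above: BVR = PVI * Δlog10(ICP) / ΔPCO2. Input values are illustrative.
import math

def blood_volume_reactivity(pvi_ml, icp_before, icp_after, pco2_before, pco2_after):
    """Estimated blood volume change (ml) per torr change in PCO2."""
    delta_log_icp = math.log10(icp_after) - math.log10(icp_before)
    delta_pco2 = pco2_after - pco2_before
    return pvi_ml * delta_log_icp / delta_pco2

# Example: PVI of 15 ml, ICP rising from 15 to 25 mm Hg as PCO2 goes 30 -> 40 torr
print(f"BVR = {blood_volume_reactivity(15, 15, 25, 30, 40):.2f} ml/torr")
```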
Styles APA, Harvard, Vancouver, ISO, etc.
30

Tikunov, Vladimir, et Tamara Vatlina. « Environmental assessment of the territories on the example of municipalities of the Smolensk region ». InterCarto. InterGIS 27, no 4 (2021) : 175–83. http://dx.doi.org/10.35595/2414-9179-2021-4-27-175-183.

Texte intégral
Résumé :
The article discusses approaches to assessing the health status of children using a model study region, the Smolensk region. A data set covering an 18-year observation period is described, with the aim of identifying territorial differences in the incidence rates of ecologically caused diseases. When assessing the health of the population at the regional level, it is first necessary to select a population group, to formalize and standardize the set of parameters characterizing health, and to apply data processing methods that allow unambiguous interpretation of the results. All these requirements can be met by applying mathematical and cartographic modeling. The cartographic component is a continuation and development of a mathematical model that ensures the processing of initial data in accordance with the goals and objectives of medical-geographical research. The subsequent cartographic interpretation of the mathematical calculations produces a spatial visualization, which also serves as a tool for multilateral analysis of the results. Indicators of the general morbidity of children, following the International Classification of Diseases, were taken for the analysis in the following classes: respiratory diseases; diseases of the digestive system; diseases of the skin and subcutaneous tissue; and diseases of the musculoskeletal system and connective tissue. As a result, a ranking of the studied territorial units (25 municipal districts and 15 cities) was obtained according to the four morbidity indicators. Based on these data, a series of maps was created, reflecting the spatial distribution of health characteristics over the 18-year observation period. The results obtained using absolute indicators revealed a gap in the values of the integral indices. Application of this methodology can contribute to the formulation of goals and objectives for the socio-economic development strategies of regions and municipalities, and to measures to improve children's health.
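One common way to build such an integral index (the paper's exact mathematical-cartographic model is not spelled out in the abstract) is to min-max normalise each morbidity indicator and average them. The hedged Python sketch below illustrates the idea with invented district names and rates.

```python
# Hedged sketch: ranking territorial units on an integral morbidity index by
# min-max normalising each indicator and averaging. The actual model in the
# paper may differ; district names and rates here are invented.
import numpy as np

districts = ["District A", "District B", "District C"]
# Rows: districts; columns: four morbidity classes (cases per 1000 children)
rates = np.array([
    [310.0, 95.0, 60.0, 40.0],
    [280.0, 120.0, 55.0, 52.0],
    [350.0, 80.0, 70.0, 35.0],
])

normalised = (rates - rates.min(axis=0)) / (rates.max(axis=0) - rates.min(axis=0))
integral_index = normalised.mean(axis=1)

for rank, i in enumerate(np.argsort(integral_index)[::-1], start=1):
    print(f"{rank}. {districts[i]}: integral index {integral_index[i]:.2f}")
```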
Styles APA, Harvard, Vancouver, ISO, etc.
31

Naderzadeh, Mahdiyeh, Hossein Arabalibeik, Mohammad Reza Monazzam et Ismaeil Ghasemi. « Comparative Analysis of AHP-TOPSIS and Fuzzy AHP Models in Selecting Appropriate Nanocomposites for Environmental Noise Barrier Applications ». Fluctuation and Noise Letters 16, no 04 (21 novembre 2017) : 1750038. http://dx.doi.org/10.1142/s0219477517500389.

Texte intégral
Résumé :
Choosing the right material in the design of environmental noise barriers has always been a challenging issue in acoustics. In less-developed countries, material selection is affected by many factors from various aspects, which makes the decision-making very complicated. This study attempts to compare and assign weights to the most important indices affecting the choice of appropriate noise barrier material. These criteria include absorption coefficient, transparency, tensile modulus, strength at yield, elongation at break, impact strength, flexural modulus, hardness, and cost. For this purpose, experts' opinions were gathered through a total of 13 questionnaires and used for assigning weights by the Analytic Hierarchy Process (AHP) and Fuzzy Analytic Hierarchy Process (FAHP) techniques. According to the AHP results, impact strength, with only a minor weight difference of 0.093 compared to the FAHP, was recognized as the most important criterion. Finally, the optimal composite material was selected using two different methods: first by the Technique for Order-Preference by Similarity to Ideal Solution (TOPSIS) based on the weights obtained from AHP, and second by directly applying the weights obtained from FAHP to the true measured values of the parameters. As the results show, in both of the abovementioned methods, Polycarbonate-SiO2 0.3% with roughened surface (PCSI3-R) received the highest score and was selected as the preferred composite material. Given the close similarity of the results, to determine the superiority of one method over the other, some noise was added to the original data set from the mechanical and acoustic tests, and the variance of the changes in the final orders of preference was then calculated; this variance indicates the robustness of each method against measurement errors and noise. The results show that, under the same circumstances, the overall order shift variance of the classic TOPSIS is six times higher than that of the fuzzy AHP method.
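The AHP weighting step has a compact standard formulation: weights are taken from the principal eigenvector of a pairwise comparison matrix, with a consistency check on its largest eigenvalue. The Python sketch below illustrates this on a small invented 3x3 matrix (three criteria only), not the study's 13-questionnaire data.

```python
# Hedged sketch of the AHP weighting step: weights are taken from the
# principal eigenvector of a pairwise comparison matrix. The 3x3 matrix
# below is illustrative, not the study's expert data.
import numpy as np

# Pairwise comparisons (Saaty scale): impact strength vs cost vs transparency
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()
print("Criterion weights:", np.round(weights, 3))

# Consistency check (random index RI = 0.58 for n = 3)
ci = (eigvals.real[principal] - len(A)) / (len(A) - 1)
print(f"Consistency ratio: {ci / 0.58:.3f} (should be < 0.1)")
```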
Styles APA, Harvard, Vancouver, ISO, etc.
32

Heckmann, T., K. Gegg, A. Gegg et M. Becht. « Sample size matters : investigating the effect of sample size on a logistic regression susceptibility model for debris flows ». Natural Hazards and Earth System Sciences 14, no 2 (17 février 2014) : 259–78. http://dx.doi.org/10.5194/nhess-14-259-2014.

Texte intégral
Résumé :
Abstract. Predictive spatial modelling is an important task in natural hazard assessment and regionalisation of geomorphic processes or landforms. Logistic regression is a multivariate statistical approach frequently used in predictive modelling; it can be conducted stepwise in order to select from a number of candidate independent variables those that lead to the best model. In our case study on a debris flow susceptibility model, we investigate the sensitivity of model selection and quality to different sample sizes in light of the following problem: on the one hand, a sample has to be large enough to cover the variability of geofactors within the study area, and to yield stable and reproducible results; on the other hand, the sample must not be too large, because a large sample is likely to violate the assumption of independent observations due to spatial autocorrelation. Using stepwise model selection with 1000 random samples for a number of sample sizes between n = 50 and n = 5000, we investigate the inclusion and exclusion of geofactors and the diversity of the resulting models as a function of sample size; the multiplicity of different models is assessed using numerical indices borrowed from information theory and biodiversity research. Model diversity decreases with increasing sample size and reaches either a local minimum or a plateau; even larger sample sizes do not further reduce it, and they approach the upper limit of sample size given, in this study, by the autocorrelation range of the spatial data sets. In this way, an optimised sample size can be derived from an exploratory analysis. Model uncertainty due to sampling and model selection, and its predictive ability, are explored statistically and spatially through the example of 100 models estimated in one study area and validated in a neighbouring area: depending on the study area and on sample size, the predicted probabilities for debris flow release differed, on average, by 7 to 23 percentage points. In view of these results, we argue that researchers applying model selection should explore the behaviour of the model selection for different sample sizes, and that consensus models created from a number of random samples should be given preference over models relying on a single sample.
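The notion of model diversity can be made concrete with a small computation. The sketch below, a minimal Python illustration rather than the authors' actual procedure, applies the Shannon index (one of the information-theoretic indices of the kind the study borrows) to a toy tally of which predictor combinations stepwise selection chose across repeated samples.

```python
# Minimal sketch of the model-diversity idea: given the set of models chosen
# by stepwise selection across many random samples, quantify their
# multiplicity with the Shannon index. The model labels are illustrative.
import numpy as np
from collections import Counter

# Which predictor combination each of 1000 random samples selected (toy data)
selected_models = ["slope+curvature"] * 620 + ["slope"] * 250 + ["slope+aspect"] * 130

counts = np.array(list(Counter(selected_models).values()), dtype=float)
p = counts / counts.sum()
shannon = -np.sum(p * np.log(p))
print(f"Model diversity (Shannon H): {shannon:.3f}")  # lower H = more stable selection
```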
Styles APA, Harvard, Vancouver, ISO, etc.
33

Ryabtsev, P. S., K. A. Nadein, A. A. Bartenev et V. A. Zaikin. « Effect of beta-glucan on the organism of broilers ». International bulletin of Veterinary Medicine 3 (2020) : 71–76. http://dx.doi.org/10.17238/issn2072-2419.2020.3.71.

Texte intégral
Résumé :
Our studies are devoted to the search for new drugs with antioxidant, antihypoxant and immunomodulatory effects in order to increase the productivity of poultry. Because beta-glucan has not previously been studied in poultry, before determining the effective therapeutic range of its doses it is necessary to study the effect of a conditionally therapeutic dose of the drug on the body. A conditionally therapeutic dose here means the average therapeutic dose of a drug proposed by the manufacturers of the respective drugs, calculated for humans (mammals). The main purpose is to study the effect of beta-glucan on meat productivity and on morphological and biochemical blood indices. In the experiment we used clinically healthy young meat-type chickens of the Cobb 500 cross at three months of age. The broilers of the first group (n = 9) were not given the drug and served as controls; the second group (n = 9) were given beta-glucan at 10 mg/kg body weight, corresponding to 125 g/t of feed, for 14 days. The use of beta-glucan was accompanied by an increase in average daily body weight gain by 4.9% and in slaughter yield by 1.5%, a significant increase in blood erythrocyte content by 32.6% on day 7, a doubling of the monocyte level, a decrease in total bilirubin by 21.0%, and a decrease in blood malondialdehyde by 10.8% on day 14. Taken together, the collected data indicate a positive effect of beta-glucan on the meat productivity, blood morphology and metabolism of broilers, and the prospects for its use in the poultry industry.
Styles APA, Harvard, Vancouver, ISO, etc.
34

Best, Giles, Kyle Crassini, Williams Stevenson et Stephen P. Mulligan. « Inhibition of Mitogen Activated Protein Kinase Kinase (MEK1) Is Effective Against CLL Cells Cultured in Media Alone or in a Supportive Microenvironment and Is Synergistic with Fludarabine in a Mechanism That Involves Decreased Levels of Reactive Oxygen Species and MCL-1 Protein ». Blood 120, no 21 (16 novembre 2012) : 1804. http://dx.doi.org/10.1182/blood.v120.21.1804.1804.

Texte intégral
Résumé :
Abstract 1804 Background Despite the high response rates of patients with Chronic Lymphocytic Leukemia (CLL) to the fludarabine (F), cyclophosphamide (C), rituximab (R) regimen, relapsed or refractory disease is common. Novel therapeutic approaches that are effective in this setting are required. Targeting specific signaling molecules is proving an effective strategy for treating patients who are refractory to FCR. Given that the mitogen-activated protein kinase (MAPK) pathway is constitutively active in CLL cells and that inhibitors of mitogen-activated protein kinase kinase (MEK1) in this pathway are in clinical trials for solid tumors, we sought to investigate the potential of MEK1 as a therapeutic target in CLL. Results Inhibition of MEK1/2 using MEK inhibitor I (MEKi; Calbiochem/Merck) induced apoptosis in the MEC-1 cell line and in 18 patient samples. Importantly, the sensitivity of the patient samples occurred irrespective of ATM/TP53 functional status, poor prognostic features or treatment history. MEKi was also effective against 4 CLL patient samples cultured in an in vitro model of the tumor microenvironment, albeit with a significantly higher IC50 than observed against CLL cells cultured in media alone. As fludarabine-based therapies have become the mainstay of CLL treatment, we investigated the effect of combining the MEK inhibitor with this purine analogue. Synergy between MEKi and fludarabine was apparent against the MEC-1 cell line and 10 patient samples. Dose reduction indices (DRI) calculated from the drug combination indicate this synergy was predominantly due to an increase in fludarabine sensitivity. Investigation of the mechanisms of the synergy between MEKi and fludarabine suggests that decreased levels of reactive oxygen species (ROS) and of the pro-survival protein MCL-1 may be contributing factors (see figure). Summary These data suggest for the first time that inhibition of MEK1/2 may represent a potential therapeutic option for CLL patients. The efficacy of the MEK inhibitor against CLL cells cultured in the supportive in vitro environment suggests that this approach may also be effective at targeting the proliferative fraction of CLL cells in the tumor microenvironment. As clinical trials of MEK1/2 inhibitors are currently underway in solid tissue malignancies, our data suggest that trials of these agents may also be warranted for high-risk CLL. Disclosures: No relevant conflicts of interest to declare.
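For readers unfamiliar with the synergy arithmetic, the following hedged Python sketch shows how a combination index (CI) and dose reduction index (DRI) of the Chou-Talalay type are computed. The framework is a standard one and the doses below are invented, not the study's measurements.

```python
# Hedged sketch of the combination index (CI) and dose reduction index (DRI)
# arithmetic behind statements like "synergy" and the DRI values reported
# above (Chou-Talalay framework). All doses are illustrative.
def combination_index(d1, dx1, d2, dx2):
    """CI = D1/Dx1 + D2/Dx2; CI < 1 indicates synergy."""
    return d1 / dx1 + d2 / dx2

def dose_reduction_index(dx, d):
    """DRI = dose needed alone / dose needed in combination."""
    return dx / d

# Doses (in combination) achieving 50% effect vs doses required alone
d_meki, dx_meki = 0.5, 2.0       # µM, illustrative
d_fluda, dx_fluda = 1.0, 10.0    # µM, illustrative

print(f"CI  = {combination_index(d_meki, dx_meki, d_fluda, dx_fluda):.2f}")
print(f"DRI (fludarabine) = {dose_reduction_index(dx_fluda, d_fluda):.1f}x")
```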
Styles APA, Harvard, Vancouver, ISO, etc.
35

Goodwin, Grace J., Julia E. Maietta, Anthony O. Ahmed, Nia A. Hopkins, Sara A. Moore, Jessica Rodrigues, Melanie S. Pascual et al. « A-96 Stability of ImPACT’s Latent Structure from Baseline to Post Concussion Assessment ». Archives of Clinical Neuropsychology 36, no 6 (30 août 2021) : 1143–44. http://dx.doi.org/10.1093/arclin/acab062.114.

Texte intégral
Résumé :
Abstract Objective ImPACT is commonly used for sport-concussion management. Baseline and post-concussion tests serve as within-athlete comparisons for return-to-play decision-making. Previous literature has questioned whether ImPACT’s five composites accurately represent the internal structure of its cognitive scores. A recent alternative four-factor structure has strong confirmatory evidence for baseline scores (Maietta et al., doi:10.1037/pas0001014). The present study examined the stability of these constructs post-concussion. Method The current study utilized a case-matched design (age, sex, sport category) to select a sample of 3560 high school athletes’ baseline (n = 1780) and post-concussion (n = 1780) assessments. Multi-group CFA of first-order, hierarchical, and bifactor models was conducted to assess measurement invariance (configural, metric, scalar, and residual invariance) between the baseline and post-concussion samples. Change in comparative fit indices was interpreted as the primary indicator of model invariance. Results ImPACT’s five-composite structure, as well as the hierarchical and bifactor models, exhibited inadequate fit to the baseline and post-concussion data. The four-factor model demonstrated superior fit in the baseline sample and good fit in the post-concussion sample, and the four-factor structure was invariant across injury status (baseline to post-concussion). Conclusion Given that ImPACT’s scores are used for return-to-play decision-making, it is important that they are psychometrically sound. Recent literature suggests that ImPACT’s five composites are not an adequate representation of the cognitive constructs. The findings support the validity of the four-factor structure regardless of injury status, suggesting these cognitive constructs are assessable both pre- and post-concussion. Additional research is needed to determine the implications of these findings for tracking cognitive change following sport-related concussion and making return-to-play decisions.
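As background to the fit-index criterion mentioned above, this minimal Python sketch computes the comparative fit index (CFI) from model and baseline (null-model) chi-square statistics and checks the common ΔCFI ≤ 0.01 invariance heuristic; the chi-square values are invented for illustration, not the study's estimates.

```python
# Minimal sketch of the fit-index comparison behind measurement invariance
# testing: CFI from model and null-model chi-squares; a drop in CFI larger
# than ~0.01 between nested models is commonly taken to flag non-invariance.
# The chi-square values below are invented.
def cfi(chi2, df, chi2_null, df_null):
    return 1 - max(chi2 - df, 0) / max(chi2_null - df_null, chi2 - df, 0)

cfi_configural = cfi(chi2=310.0, df=140, chi2_null=5200.0, df_null=170)
cfi_metric = cfi(chi2=330.0, df=152, chi2_null=5200.0, df_null=170)

delta_cfi = cfi_configural - cfi_metric
print(f"CFI configural: {cfi_configural:.3f}, metric: {cfi_metric:.3f}")
print(f"dCFI = {delta_cfi:.3f} ({'invariant' if delta_cfi <= 0.01 else 'non-invariant'})")
```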
Styles APA, Harvard, Vancouver, ISO, etc.
36

Burgdorf, Angela-Maria. « A global inventory of quantitative documentary evidence related to climate since the 15th century ». Climate of the Past 18, no 6 (24 juin 2022) : 1407–28. http://dx.doi.org/10.5194/cp-18-1407-2022.

Texte intégral
Résumé :
Abstract. Climatic variations have impacted societies since the very beginning of human history. In order to keep track of climatic changes over time, humans have thus often closely monitored the weather and natural phenomena influencing everyday life. Resulting documentary evidence from archives of societies enables invaluable insights into the past climate beyond the timescale of instrumental and early instrumental measurements. This information complements other proxies from archives of nature, such as tree rings in climate reconstructions, as documentary evidence often covers seasons (e.g., winter) and regions (e.g., Africa, eastern Russia, Siberia, China) that are not well covered with natural proxies. While a mature body of research on detecting climate signals from historical documents exists, the large majority of studies is confined to a local or regional scale and thus lacks a global perspective. Moreover, many studies from before the 1980s have not made the transition into the digital age and hence are essentially forgotten. Here, I attempt to compile the first-ever systematic global inventory of quantitative documentary evidence related to climate extending back to the Late Medieval Period. It combines information on past climate from all around the world, retrieved from many studies of documentary (i.e., written) sources. Historical evidence ranges from personal diaries, chronicles, and administrative and clerical documents to ship logbooks and newspaper articles. They include records of many sorts, e.g., tithe records, rogation ceremonies, extreme events like droughts and floods, and weather and phenological observations. The inventory, published as an electronic Supplement, is comprised of detailed event chronologies, time series, proxy indices, and calibrated reconstructions, with the majority of the documentary records providing indications on past temperature and precipitation anomalies. The overall focus is on document-based time series with significant potential for climate reconstruction. For each of the almost 700 records, extensive meta-information and directions to the data (if available) are given. To highlight the potential of documentary data for climate science, three case studies are presented and evaluated with different global reanalysis products. This comprehensive inventory promotes the first ever global perspective on quantitative documentary climate records and thus lays the foundation for incorporating documentary evidence into climate reconstruction on a global scale, complementing (early) instrumental measurements and natural climate proxies.
Styles APA, Harvard, Vancouver, ISO, etc.
37

Deshpande, Deshpande S., Mary Jo Lechowicz, Rajni Sinha, Jonathan L. Kaufman, Lawrence H. Boise, Sagar Lonial et Christopher R. Flowers. « The Combination of Romidepsin and Bortezomib Results in Synergistic Induction of Apoptosis in Human B-Lymphoma Cell Lines. » Blood 114, no 22 (20 novembre 2009) : 1689. http://dx.doi.org/10.1182/blood.v114.22.1689.1689.

Texte intégral
Résumé :
Abstract 1689 Poster Board I-715 Introduction The use of the proteasome inhibitor bortezomib has demonstrated activity in multiple myeloma and lymphomas. The HDAC inhibitor romidepsin is being evaluated in CTCL and PTCL, though its activity in B-cell lymphomas is less clear. We hypothesized that the combination of bortezomib and romidepsin would result in synergistic apoptosis in different B-cell NHL cell lines, based on the observed activity of this combination in more mature B-cell malignancies such as myeloma. Experimental Design Daudi, HT, Ramos and SUDHL-4 cell lines were exposed to different concentrations of bortezomib and romidepsin, separately, concurrently, and sequentially. Cell viability was assessed using the MTT assay, and induced apoptosis was evaluated using Annexin V and PI staining at 24-48 hours. Apoptosis was also evaluated using western blot analysis of caspase and PARP cleavage. LC3 and HDAC6 expression levels were measured to determine whether the effect of the combination was a result of the aggresome or autophagy pathway. Cell cycle studies were also performed to determine whether there were any changes after treating cells with the combination. Results The combination of bortezomib and romidepsin resulted in synergistic B-cell apoptosis as measured by the MTT assay, with combination indices of < 0.5. This was associated with increased caspase and PARP cleavage as early as 24 hours after exposure. Order-of-addition experiments demonstrated definite sequence specificity. When romidepsin was added first, followed 6 hours later by bortezomib, apoptosis was enhanced compared to both agents given concurrently or bortezomib administered first. Cell cycle analysis demonstrated that pretreatment of cells with romidepsin for 6 hours followed by the addition of bortezomib arrested the cells in G2M phase. HDAC6 expression was significantly reduced following combination therapy, and LC3-I was cleaved to LC3-II in treated cells, suggesting that the combination affected aggresome formation and autophagy. Conclusion The synergy of romidepsin and bortezomib at low nanomolar concentrations suggests that this may be an important clinical combination to test in patients with relapsed or refractory B-cell malignancies. Sequence-of-administration data are currently being tested to determine whether the effect is a result of autophagy inhibition, as is seen in myeloma cell lines. Additional mechanistic studies will be presented with the goal of identifying predictors of response that can then be validated in prospective clinical trials. Disclosures Lechowicz: Gloucester: Consultancy. Kaufman: Millennium: Consultancy; Genzyme: Consultancy; Celgene: Consultancy; Merck: Research Funding; Celgene: Research Funding. Lonial: Gloucester: Research Funding; Novartis: Consultancy; BMS: Consultancy; Millennium: Consultancy, Research Funding; Celgene: Consultancy. Flowers: Millennium: Research Funding.
Styles APA, Harvard, Vancouver, ISO, etc.
38

Dzhus, P. P., O. V. Sydorenko, O. V. Bilous, R. G. Pashyan, R. F. Katsevych et O. V. Martynyuk. « ASSESSMENT OF BULLS ON THEIR OWN PERFORMANCE AS A PART OF IMPROVING DOMESTIC POPULATION ABERDEEN-ANGUS BREED OF CATTLE ». Animal Breeding and Genetics 52 (1 novembre 2016) : 17–22. http://dx.doi.org/10.31073/abg.52.03.

Texte intégral
Résumé :
Introduction. The Aberdeen-Angus breed, a selection achievement of the United Kingdom, long ago ceased to be merely a cultural heritage and has become a global, transcontinental resource in beef cattle breeding. The productive "attractiveness" of the Aberdeen-Angus breed makes it a popular genetic resource in cattle production. This underlies the optimistic results of the statistical analysis of farm animal biodiversity in the European EFABIS database, according to which the status of this breed can be defined as "not at risk". The Ukrainian population of Aberdeen-Angus was formed in 1961 by importing breeding stock from Canada and animals of the compact small type from Scotland (1962), and by establishing a breeding plant at the research station "Vorzel" of the Agrarian Academy of Ukraine. The modern breed area covers 11 regions of Ukraine. According to the State register of breeding subjects in animal breeding as of 01.01.2016, the stock comprises 7637 controlled head (including 3475 cows and 80 bulls) concentrated in 23 breeding subjects. For a long time, the Principal breeding center of Ukraine supported the development of breeding farms and controlled the situation in the breed. At this institution, bulls were evaluated, semen was sampled and stored, an information database of individual records was built and automated, breeding programmes were developed, and plans for matching bulls were drawn up. Currently, the low share of artificial insemination (18%), the lack of control over the involvement of live bulls in the mating campaign, and the limited activity of regional breeding associations in centralized bull evaluation have resulted in irreversible changes in the genetic structure of the Aberdeen-Angus population, the phenotypic manifestations of which are declining growth and development of young animals, low efficiency of feed conversion, decreasing milk production of cows, impaired reproductive quality, and an increased level of exterior faults and genetic anomalies. One of the measures for improving breeding herds is individual evaluation of bulls, which can optimize the selection and matching of bulls to breeding stock for the production of calves of high breeding value. However, the re-orientation of consumer demand, the inability of the internal market to ensure profitable beef production, and breaks in export-import relations have naturally weakened the motivation for the breeding-bull business and its state control. Thus, under beef cattle technology based mainly on natural mating, it is feasible to carry out the initial phase of evaluating sires on their own performance directly at breeding farms, in accordance with the "Instructions on beef bull selection" and in fulfilment of Ministry of Agriculture Order N 154 of 13.04.2016 approving the "Procedure of sires' breeding value determination by pedigree, their own performance, and progeny quality testing". Analyzing the quantitative and qualitative indices of economic activity, we found that one of the prospective sites for future beef bull evaluation is the breeding farm "Buffalo" in Manevychi district, Volyn region. It holds 850 Aberdeen-Angus dams (cows and heifers), and the farm's evaluation capacity allows more than 400 animals to be evaluated simultaneously. The aim of this paper is therefore to analyse the results of evaluating Aberdeen-Angus bulls on their own performance. Materials and methods of research. The study involved 30 Aberdeen-Angus breeding bulls of the "Buffalo" farm.
Animals were selected for evaluation at 210 days of age, with a prior individual analysis of their growth during the suckling period based on the electronic information database "ORSEK-M". At the time of evaluation, the diet of the growing calves was designed to provide daily gains of no less than 1200 g. The growth and development of the young bulls were analysed from the results of monthly weighings between 8 and 12 months of age. Key body measurements were taken at 12 months of age. The evaluation was carried out according to the Regulation "Instruction on beef bull selection". Statistical data processing was performed using Microsoft Excel software. Results. The algorithms for determining and calculating selection indices for the evaluation of beef animals are chosen by representative organizations and approved at the level of each state. For countries participating in INTERBULL (INTERBEEF), bull evaluation results are converted to a common information database, in which data are matched and compared for further use in correcting breeding work. The main traits taken into account in the assessment of breeding value are the share of pure blood, live weight at different ages, growth intensity as average daily gain, exterior parameters (body measurements, linear traits), milk production, calving ease, temperament (for some breeds, such as Charolais), length of economic use, sperm productivity indices, and others. According to international recommendations, EBV and EPD indices are calculated, which define the weight contributed by each trait to the integrated breeding value of an animal. Under current Ukrainian law, beef animals are evaluated with the definition of an integrated class at appraisal; bulls are evaluated by index A (own performance) and index B (progeny quality). The main results of sire evaluation were obtained in the course of creating the native beef breeds. In our studies we initially selected Aberdeen-Angus bulls taking into account their individual growth up to 7 months of age. In total, sons of 7 Aberdeen-Angus bulls were selected, including 5 of native and 2 of German selection. The native bulls were of the Wright Iver 9251195, BV Vinton 1342, Sauthoma Extra 715968 and V.B.M. Henri 158013 lines. At 210 days of age the average live weight of the calves was 228.03 ± 6.750 kg and the average daily gain 964.1 ± 30.881 g. The coefficient of variation of 17.5% for average daily gain reflects both individual differences in the feeding behaviour of the studied calves during suckling and differences in their dams' milk yield and milk nutritional value. The average live weight of the animals evaluated at 12 months was 389.3 ± 8.35 kg, and the average daily gain during rearing was 1114.47 ± 34.208 g. The coefficients of variability of these traits did not exceed 11.5% and 16.8%, respectively. The average live weights at weaning and at 12 months of age exceeded the minimum live-weight requirements for beef calves to reach the complex classes "elite" and "elite-record". The phenotypic features of farm animals' body build are indicators of species and breed specificity and of the individual characteristics of the organism, the totality of which forms the parametric basis for the primary estimation of the genetic potential of productivity.
Expressiveness, harmony and age-appropriate proportions of body parts outline a general picture of individual growth and development and reflect the level of balanced nutrition and the suitability of the adopted technology as a whole. In the studied group, the bulls' average rump height at 12 months of age was 115.70 ± 0.622 cm, with a coefficient of variation of 2.9%. Chest girth was 158.33 ± 1.18 cm with a variability of 1.2%. Average body length was 126.63 ± 1.162 cm with a coefficient of variation of 5.0%. Testicular circumference, one of the evaluation parameters of the bull's reproductive system, was 31.10 ± 0.564 cm with a coefficient of variability of 9.9%. Thus, among the recorded traits, the largest variability was observed in live weight, average daily gain and scrotal circumference, and the smallest in rump height and chest girth. The average value of the complex selection index across all the studied bulls was 100.5 ± 0.9. According to the provisions of the Instructions, 14 calves with a complex selection index above 100.0 may be allowed for further use both in herds with natural mating and for assessment of sperm productivity at the State Enterprise "Volynian regional agricultural production enterprise on breeding business in animal breeding". This will allow the store of genetic material from valuable native representatives of the Aberdeen-Angus breed to be renewed and will partially restore local control over the use of sires in breeding herds as well as in households. It is therefore feasible to continue similar research involving a larger number of animals, to consider the influence of the dam's genotype, and to conduct further evaluation of Aberdeen-Angus sires on the performance of their sons and daughters. Conclusions. Under similar feeding and management conditions, the realization of the genetic potential of productivity of Aberdeen-Angus bulls differs. The results presented are the first step in organizing systematic evaluation of sires' breeding value, analysing the inheritance of key traits of animal growth and development, and rationalizing the use of the breed's genetic resources in general, as well as reducing the cost per unit of production in live and carcass weight.
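As a small worked example of the growth arithmetic underlying these evaluations, the following Python sketch computes average daily gain between two weighings; the input weights are illustrative values close to the group means reported above, not individual records from the study.

```python
# Minimal sketch of the growth arithmetic used in the evaluation: average
# daily gain (ADG) from live weights at two ages. Weights are illustrative.
def average_daily_gain(weight_start_kg, weight_end_kg, days):
    """ADG in grams per day over the interval."""
    return (weight_end_kg - weight_start_kg) * 1000 / days

# Live weight 228 kg at 210 days and 389.3 kg at 365 days (illustrative)
adg = average_daily_gain(228.0, 389.3, 365 - 210)
print(f"ADG from weaning to 12 months: {adg:.0f} g/day")
```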
Styles APA, Harvard, Vancouver, ISO, etc.
39

Angelakis, E., L. Fuhrmann, I. Myserlis, J. A. Zensus, I. Nestoras, V. Karamanavis, N. Marchili, T. P. Krichbaum, A. Kraus et J. P. Rachen. « F-GAMMA : Multi-frequency radio monitoring of Fermi blazars ». Astronomy & Astrophysics 626 (juin 2019) : A60. http://dx.doi.org/10.1051/0004-6361/201834363.

Texte intégral
Résumé :
Context. The advent of the Fermi gamma-ray space telescope, with its superb sensitivity, energy range, and unprecedented capability to monitor the entire 4π sky within less than 2–3 h, introduced a new standard in time domain gamma-ray astronomy. Among several breakthroughs, Fermi has – for the first time – made it possible to investigate, with high cadence, the variability of the broadband spectral energy distribution (SED), especially for active galactic nuclei (AGN). This is necessary for understanding the emission and variability mechanisms in such systems. To explore this new avenue of extragalactic physics, the Fermi-GST AGN Multi-frequency Monitoring Alliance (F-GAMMA) programme undertook the task of conducting nearly monthly, broadband radio monitoring of selected blazars, the dominant population of the extragalactic gamma-ray sky, from January 2007 to January 2015. In this work we release all the multi-frequency light curves from 2.64 to 43 GHz and first order derivative data products after all necessary post-measurement corrections and quality checks. Aims. Along with the demanding task of providing the radio part of the broadband SED at monthly intervals, the F-GAMMA programme was also driven by a series of well-defined fundamental questions immediately relevant to blazar physics. On the basis of the monthly sampled radio SEDs, F-GAMMA aimed at quantifying and understanding the possible multiband correlation and multi-frequency radio variability, spectral evolution and the associated emission, absorption and variability mechanisms. The location of the gamma-ray production site and the correspondence of structural evolution to radio variability have been among the fundamental aims of the programme. Finally, the programme sought to explore the characteristics and dynamics of multi-frequency radio linear and circular polarisation. Methods. The F-GAMMA ran two main and tightly coordinated observing programmes: the Effelsberg 100 m telescope programme monitoring 2.64, 4.85, 8.35, 10.45, 14.6, 23.05, 32, and 43 GHz, and the IRAM 30 m telescope programme observing at 86.2, 142.3, and 228.9 GHz. The nominal cadence was one month for a total of roughly 60 blazars and targets of opportunity. Less regularly, the F-GAMMA programme also ran occasional monitoring with the APEX 12 m telescope at 345 GHz. We present only the Effelsberg dataset in this paper; the higher-frequency data are released elsewhere. Results. The current release includes 155 sources that were observed at least once by the F-GAMMA programme: the initial sample, the revised sample after the first Fermi release, targets of opportunity, and sources observed in collaboration with a monitoring programme following up on Planck satellite observations. For all these sources we release all the quality-checked Effelsberg multi-frequency light curves. The suite of post-measurement corrections and flagging, and a thorough system diagnostic study and error analysis, are discussed as an assessment of the data reliability. We also release data products such as flux density moments and spectral indices. The effective cadence after quality flagging is around one radio SED every 1.3 months. The coherence of each radio SED is around 40 min. Conclusions. The released dataset includes more than 3 × 10⁴ measurements for some 155 sources over a broad range of frequencies from 2.64 GHz to 43 GHz obtained between 2007 and 2015. The median fractional error at the lowest frequencies (2.64–10.45 GHz) is below 2%. At the highest frequencies (14.6–43 GHz), where atmospheric conditions are the limiting factor, the errors range from 3% to 9%.
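One of the released data products, the spectral index, follows from a simple two-point calculation under the convention S_ν ∝ ν^α. The Python sketch below illustrates it with invented flux densities at two of the Effelsberg frequencies.

```python
# Minimal sketch of a two-point radio spectral index, one of the released
# data products: alpha in S_nu ∝ nu^alpha. Flux values are illustrative.
import math

def spectral_index(s1_jy, nu1_ghz, s2_jy, nu2_ghz):
    return math.log(s2_jy / s1_jy) / math.log(nu2_ghz / nu1_ghz)

# Example: 1.20 Jy at 2.64 GHz and 0.85 Jy at 10.45 GHz
alpha = spectral_index(1.20, 2.64, 0.85, 10.45)
print(f"alpha = {alpha:.2f}")  # negative alpha = steep (optically thin) spectrum
```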
Styles APA, Harvard, Vancouver, ISO, etc.
40

Lavrinenko, I. A., et O. V. Lavrinenko. « Classification and mapping of habitats in the northwestern part of the Bolshezemelskaya tundra ». Geobotanical mapping, no 2021 (2021) : 20–53. http://dx.doi.org/10.31111/geobotmap/2021.20.

Texte intégral
Résumé :
The integrity and preservation of natural habitats is the basis for the existence of flora and fauna, as well as for many aspects of the life of the indigenous population. The high sensitivity of Arctic landscapes and natural ecosystems to anthropogenic and climatic factors predetermines the need for habitat monitoring. A classification and inventory of Arctic habitats is made using the example of a key site in the Bolshezemelskaya tundra — the tundra adjacent to Bolvansky Nos Cape (23.7 km2) (Fig. 1). The diagnostics of biotopes was carried out on the basis of a previously developed typological scheme of territorial units of vegetation (TUV), which, along with syntaxonomic composition, takes into account features of ecology and spatial organization (Lavrinenko, 2020b; Lavrinenko, Lavrinenko, 2021). The diagnostics of the higher habitat units is based on their position on the generalized geomorphological profile and on relief elements, which predetermine how the whole variety of environmental and climatic factors affects biotopes. The types of spatial structures (temporal and ecological series, complexes, and combinations) of heterogeneous TUVs, reflecting the location features, intensity, direction, and result of the interaction of environmental factors, are the main diagnostic characteristics of habitats. The classification of vegetation and the position of syntaxa, taking into account their confinement to TUVs, underlie the accurate diagnosis of biotopes. The phytosociological (= Braun-Blanquet) classification is the basis of the TUV nomenclature. The list of syntaxa of different ranks (Matveyeva, Lavrinenko, 2021) is the basis for naming the TUV categories that diagnose biotopes. A digital elevation model (DEM) of the key area was produced using ArcticDEM data (https://www.pgc.umn.edu/data/arcticdem/) to estimate the location of TUVs as habitat indicators (Fig. 2a). The NDWI (Normalized Difference Water Index) (McFeeters, 1996) and the NDVI (Normalized Difference Vegetation Index), the latter reflecting the reserves of green phytomass (Walker et al., 2003) (Fig. 2b), were calculated from Sentinel-2A satellite images. The spatial combination in GIS of several layers — high-resolution satellite images, the DEM, and spectral indices (Fig. 3) — made it possible to characterize the important indicators of biotopes. Habitats of two categories of the highest, first level — AB and CB — confined to large landscape elements, are found in the key area in the tundra zone. The categories of the second level (AB1, …, CB3) differ in their position on the generalized geomorphological profile, from the highest positions (AB1 — eluvial locations) to the lowest (CB3 — accumulative marine terraces). Features of the substrate, along with position on the profile, were taken into account when identifying third-level biotope categories. Thus, within the AB1 category, lower-level habitats differ significantly in soil characteristics: AB1.1 — sandy; AB1.2 — loamy-gravelly carbonate; AB1.3 — gleyzems and peat-gleyzems. The well-pronounced physiognomic (color, texture) and spectral (indices, signatures) characteristics of the TUV levels, along with position in the relief and features of the substrate, were used to distinguish the fourth and lower habitat categories.
Diagnostics of the plant communities forming TUVs was carried out on the basis of reference signatures (from Sentinel-2 images) of those phytocoenoses in which geobotanical relevés were made with coordinate references and whose syntaxonomic affiliation was established. Terrestrial plots are assigned to 2 first-level, 7 second-level, 13 third-level and 18 fourth-level habitat categories, which cover the whole diversity of biotopes of the key site and unite those that are close in their position on the geomorphological profile and in their ecological indicators. All third-level habitat categories, and in some cases fourth-level ones, are diagnosed by TUV classes (Lavrinenko, 2020b), represented by simple and complex combinations of plant communities of different syntaxa. The characteristics of vegetation and soils and the composition of syntaxa (those that have been described) are given for the second- and third-level categories. More than 1100 contours, including 140 represented by water bodies, have been identified in the key area. The habitat map of the northwestern part of the Bolshezemelskaya tundra was prepared at a scale of 1 : 25 000. It demonstrates the diversity of biotopes in the study site; the terrestrial plots classified into habitat categories of the first to fourth levels are presented on it (Fig. 29, 30). The main emphasis in identifying and characterizing the habitats is placed on their resource potential for plant and animal species and communities, as well as for humans. This immediately transfers the question of the significance and relevance of such work from the field of fundamental academic research on the study and mapping of biotopes to the field of direct practical application of the results obtained. Different categories of habitats have different resource values for particular biological objects, which makes it possible to characterize them from the standpoint of ecological, economic and environmental significance.
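The two spectral indices used in the mapping have standard band-ratio definitions: NDVI = (NIR − Red)/(NIR + Red) and, for McFeeters' NDWI, (Green − NIR)/(Green + NIR). A minimal Python sketch with placeholder Sentinel-2 style reflectance arrays follows; the values are invented, not pixels from the study area.

```python
# Minimal sketch of the two spectral indices used above, computed from
# Sentinel-2 style band arrays (B03 = green, B04 = red, B08 = NIR).
# The reflectance values are invented placeholders.
import numpy as np

green = np.array([[0.08, 0.09], [0.10, 0.07]])  # B03
red   = np.array([[0.06, 0.05], [0.04, 0.09]])  # B04
nir   = np.array([[0.30, 0.35], [0.40, 0.12]])  # B08

ndvi = (nir - red) / (nir + red)      # green phytomass proxy
ndwi = (green - nir) / (green + nir)  # McFeeters water index

print("NDVI:\n", np.round(ndvi, 2))
print("NDWI:\n", np.round(ndwi, 2))
```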
Styles APA, Harvard, Vancouver, ISO, etc.
41

Sishchikov, Dmitry S., et Sergey A. Alent’ev. « Diagnosis of infectious complications in patients with acute pancreatitis ». Russian Military Medical Academy Reports 41, no 2 (22 juillet 2022) : 195–201. http://dx.doi.org/10.17816/rmmar104603.

Texte intégral
Résumé :
BACKGROUND: Up to 30% of patients with acute pancreatitis suffer from the severe form of the disease, of which 30% of cases are lethal, rising significantly to 80% when infectious complications develop. AIM: To improve the treatment results of patients with acute pancreatitis through early diagnosis of infectious complications. MATERIALS AND METHODS: Expression of the CD64 antigen on the neutrophil membrane (the CD64 index of neutrophils) was studied on a Cytomics FC500 flow cytometer (Beckman Coulter, USA) using a 3-color combination of direct monoclonal antibodies from Beckman Coulter: CD14FITC/CD64PE/CD45PC5. The material was whole blood. RESULTS: A direct correlation of moderate strength with the development of sepsis was found for both the procalcitonin value and the C-reactive protein concentration in this period of illness. However, no relations of endogenous intoxication markers with the development of infectious complications were found (correlation coefficients less than 0.4). It should be noted that the difference between the groups in the studied parameters of CD64 antigen expression began to increase precisely during the 2nd-3rd week of the disease. CONCLUSION: Based on the literature data, we formulated a working hypothesis that the degree of CD64 receptor expression on peripheral blood neutrophils is an early marker of infectious complications of acute pancreatitis. A mean fluorescence intensity of the CD64 molecules equal to 10 conventional units was accepted as the threshold value with regard to the development of IE, and a value of 15 conventional units as the threshold value with regard to sepsis. The study was conducted in a prospective group of 28 patients. In accordance with the working hypothesis, the patients were divided into 3 groups depending on the level of the mean fluorescence intensity of the CD64 molecules. Expression of the CD64 receptor on peripheral blood granulocytes as an early laboratory marker of infectious complications of the disease was studied for the first time. We determined the sensitivity and specificity and the optimal timing of this test, detected regularities of changes in CD64 expression in the course of acute pancreatitis, and established correlations with other clinical and laboratory indices, including prospective markers of infection (procalcitonin, C-reactive protein) and the integral scales for assessing the severity of patients with acute destructive pancreatitis. Determination of the level of CD64 receptor expression on peripheral blood neutrophils showed that this marker reflects the dynamics of the disease course and enables early diagnosis of the infectious complications of acute pancreatitis. The use of this method provides additional information about the development of surgical infection. Importantly, changes in CD64 antigen expression over time precede other markers of systemic inflammatory response and sepsis. CD64 antigen expression on peripheral blood neutrophils is an additional factor in determining differentiated surgical tactics in phase I of the disease with regard to acute fluid collections in patients with acute pancreatitis.
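As an illustration of how sensitivity and specificity follow from a fluorescence threshold such as the 10 conventional units mentioned above, here is a hedged Python sketch; the patient values and outcome labels are invented, not data from the study.

```python
# Hedged sketch: sensitivity and specificity of a CD64 fluorescence threshold
# for predicting infectious complications. All values below are invented.
import numpy as np

cd64 = np.array([4, 6, 12, 18, 9, 22, 7, 15, 5, 11])       # mean fluorescence
infected = np.array([0, 0, 1, 1, 0, 1, 0, 1, 0, 0], bool)  # outcome labels

threshold = 10
positive = cd64 >= threshold

sensitivity = (positive & infected).sum() / infected.sum()
specificity = (~positive & ~infected).sum() / (~infected).sum()
print(f"Sensitivity: {sensitivity:.2f}, Specificity: {specificity:.2f}")
```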
Styles APA, Harvard, Vancouver, ISO, etc.
42

Falcone, John N., Maurice A. Hurd, Sonal Kumar, Michele Yeung, Carolyn Newberry, Marie Yanielle So, Gregory Dakin et al. « The Ratio of Unsaturated to Saturated Fatty Acids is a Distinguishing Feature of NAFLD in Subjects With Metabolic Disease ». Journal of the Endocrine Society 5, Supplement_1 (1 mai 2021) : A421. http://dx.doi.org/10.1210/jendso/bvab048.859.

Texte intégral
Résumé :
Abstract Non-alcoholic fatty liver disease (NAFLD) is a highly prevalent chronic liver disease affecting at least a quarter of the world’s population. NAFLD is commonly associated with other metabolic conditions such as insulin resistance, type 2 diabetes, obesity, and dyslipidemia. Given the liver’s prominent role in regulating glucose and lipid homeostasis, we hypothesized that subjects with NAFLD have a distinct profile of blood analytes. This investigation examines the association between NAFLD and circulating markers of glucose and lipid metabolism in order to identify a NAFLD-specific metabolite panel that can be used as a predictive biomarker in future studies. We are performing a cross-sectional study in 500 subjects to identify genetic and hormonal factors that correlate with the presence of NAFLD. This abstract reports a preliminary analysis of the results from the first 45 subjects enrolled. Fasting blood samples were collected from 31 subjects with NAFLD and 14 subjects with other metabolic diseases (‘Other’) and without radiologic evidence of NAFLD. The following analytes were measured: serum alanine aminotransferase (ALT), total cholesterol, direct-LDL, HDL, triglycerides, ApoB, small dense LDL-C (sdLDL), VLDL, Lp(a), cholesterol absorption/production markers (beta-sitosterol, campesterol, lathosterol, and desmosterol), glucose, insulin, hemoglobin A1C, adiponectin, hs-CRP, and fatty acids (saturated and unsaturated). Homeostasis model assessment of insulin resistance (HOMA-IR) was calculated from glucose and insulin levels, and fatty acids were batched together by structural similarity and reported as indices. The groups were compared using multiple t-tests or the Kolmogorov-Smirnov test when data were non-parametric. The NAFLD group had a mean age 48.4 ± 12.9 yrs and BMI 32.9 ± 6.6 kg/m2. These participants were 61% female and 58% had dyslipidemia, 25% pre-diabetes, and 25% type 2 diabetes. The Other group had a mean age 49.9 ± 12.9 yrs and BMI 39.1 ± 15.6 kg/m2. They were 64% female and 57% had dyslipidemia, 14% pre-diabetes, and 21% type 2 diabetes. ALT was higher in the NAFLD group (55 ± 40 vs 27 ± 22 IU/L, P < 0.001). Intriguingly, the saturated fatty acid index was elevated in the NAFLD group (32.5 ± 1.9 vs 30.1 ± 2.2 %, P < 0.05), and the omega-6 fatty acid index was elevated in the Other group (42.9 ± 3.7 vs 38.5 ± 4.7 %, P < 0.05). These changes led to an unsaturated/saturated fatty acid ratio that was significantly lower in the NAFLD group (2.0 ± 0.1 vs 2.3 ± 0.2, P < 0.01). There were no other significant differences in the blood metabolites and hormones. In this small sample comparing subjects with metabolic disease with and without NAFLD, levels of ALT and the ratio of circulating unsaturated/saturated fatty acids are distinguishing features of NAFLD. These may be helpful measures to identify subjects with metabolic disease that require further evaluation for NAFLD.
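The HOMA-IR value mentioned above is conventionally computed from fasting glucose and insulin. The sketch below implements the standard conventional-units formula (glucose in mg/dL times insulin in µU/mL, divided by 405) with illustrative inputs, not data from the study.

```python
# Minimal sketch of the HOMA-IR calculation mentioned above, using the
# standard formula for conventional US units; input values are illustrative.
def homa_ir(fasting_glucose_mg_dl, fasting_insulin_uU_ml):
    """HOMA-IR = glucose (mg/dL) x insulin (uU/mL) / 405."""
    return fasting_glucose_mg_dl * fasting_insulin_uU_ml / 405

print(f"HOMA-IR: {homa_ir(100, 12):.2f}")  # ~3.0; above ~2.5 suggests insulin resistance
```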
Styles APA, Harvard, Vancouver, ISO, etc.
43

Galieni, Angelica, Fabio Stagnari, Giovanna Visioli, Nelson Marmiroli, Stefano Speca, Giovanni Angelozzi, Sara D'Egidio et Michele Pisante. « Nitrogen fertilisation of durum wheat : a case study in Mediterranean area during transition to conservation agriculture ». Italian Journal of Agronomy 10, no 1s (24 mars 2016) : 12. http://dx.doi.org/10.4081/ija.2016.662.

Texte intégral
Résumé :
Nitrogen (N) nutrition plays a key role in obtaining high yields and quality in durum wheat (Triticum turgidum L. subsp. durum (Desf.) Husn); in Mediterranean environments, data regarding N fertilisation management during the transition phase to conservation agriculture (CA) are limited. The aim of this work was to study the effects of N fertiliser forms and rates on yield and some quality traits of durum wheat during the transition period to CA in Mediterranean areas, and to give indications on recommendable N form/rate combinations. Field trials were carried out in southern Italy during the first two years of transition to CA (from 2010 to 2012) in a durum wheat-based rotation. Following a split-plot design arranged in randomised complete blocks with three replications, two N forms (main plots), urea and calcium nitrate, and four N rates (sub-plots) of 50, 100, 150 and 200 kg N ha⁻¹, plus an unfertilised control, were compared. The following parameters were analysed: grain yield, N-input efficiency, grain protein concentration (GPC), total gluten, gluten fractions and mineral concentrations in kernels. Calcium nitrate gave the highest yield (4.48 t ha⁻¹), as predicted by the quadratic model, at 146 kg N ha⁻¹ on average. This was particularly noticeable in 2012, when the distribution of rainfall and temperature regimes as well as the residues' status could have favoured this N form. These results were confirmed by the higher observed values of all indices describing N-input efficiency. High GPC values (14.8%) were predicted at slightly higher N rates (173 kg N ha⁻¹, averaging both N forms). In particular, gluten proteins and the glutenin/gliadin ratio increased with N dose, reaching the highest values at 150 kg N ha⁻¹ and positively affecting the quality of durum wheat flour. Iron and zinc concentrations were noticeably increased (by 38% and 37% on average) by N supply, probably due to the enhanced water use efficiency under CA systems. Principal component analysis properly summarised the results obtained: at 150 kg N ha⁻¹, the yields and quality characteristics of durum wheat were optimised in the wettest year (2011) with calcium nitrate. Moreover, the scarce amount of residues characterising the transition phase to CA requires N application rates not lower than 150 kg ha⁻¹ in order to ensure stable yields and important quality traits. These N rates should be modulated as the accumulation of crop residues increases over time, thanks to the long-term effects of CA on soil chemical, physical and biological properties.
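To make the quadratic-response prediction concrete (for example, a maximum yield predicted at 146 kg N ha⁻¹), this minimal Python sketch fits a quadratic N-response curve to invented yield observations and locates its vertex; it illustrates the general technique, not the study's actual data or model fit.

```python
# Hedged sketch: fitting a quadratic N-response curve and locating the
# yield-maximising N rate. The yield observations below are invented.
import numpy as np

n_rates = np.array([0, 50, 100, 150, 200])     # kg N per ha
yields  = np.array([2.9, 3.8, 4.3, 4.5, 4.4])  # t per ha

a, b, c = np.polyfit(n_rates, yields, deg=2)   # y = a*N^2 + b*N + c
n_opt = -b / (2 * a)                           # vertex of the parabola
y_opt = np.polyval([a, b, c], n_opt)
print(f"Optimum N rate: {n_opt:.0f} kg/ha, predicted yield: {y_opt:.2f} t/ha")
```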
44

Mazur, N. P., Ye I. Fedorovych et V. V. Fedorovych. « USEFUL FEATURES OF DAIRY COWS AND THEIR CONNECTION WITH PRODUCTIVE LONGEVITY ». Animal Breeding and Genetics 56 (4 décembre 2018) : 50–64. http://dx.doi.org/10.31073/abg.56.07.

Résumé :
A fundamental genetic improvement of domestic dairy breeds is carried out using the gene pool of the best foreign breeds, in particular Holstein. This approach has greatly improved the milk productivity of cows, but has led to a significant deterioration in reproduction, longevity, product quality and general animal health. Since increased milk productivity shortens the period over which cows are used, this problem will only become aggravated over time. It is therefore now necessary to direct scientific research toward a comprehensive assessment of animals that takes lifetime productivity traits into account. In view of the above, the purpose of our research was to study the economically useful traits of dairy cows and their relationship with productive longevity. The research was conducted on cows of the Holstein (n = 2902), Ukrainian Black-and-White (n = 14876) and Ukrainian Red-and-White (n = 2176) dairy breeds. To characterise the economically useful traits of the studied breeds, growth, reproductive capacity, milk productivity, and the duration and efficiency of lifetime use were studied on the basis of primary zootechnical and breeding records. It was established that heifers of the investigated dairy breeds were characterised by a moderate intensity of live weight growth, as evidenced by average daily gains from birth to 18 months of age: 644 g in the Holstein breed, 641 g in the Ukrainian Black-and-White dairy breed and 692 g in the Ukrainian Red-and-White dairy breed. The first successful insemination occurred on average at 19.1 months of age in Holstein heifers and at 20.4 and 20.8 months in heifers of the Ukrainian Black-and-White and Red-and-White dairy breeds, at live weights of 405.3, 414.3 and 438.5 kg respectively. The milk yield of Holstein cows, depending on the lactation, was 4846–7920 kg, with a milk fat content of 3.63–3.74% and a milk fat amount of 181.2–279.7 kg; for the Ukrainian Black-and-White dairy breed the corresponding figures were 4008–6317 kg, 3.63–3.70% and 148.6–228.8 kg, and for the Ukrainian Red-and-White dairy breed 4578–6592 kg, 3.74–3.87% and 177.2–245.9 kg. Cows of these breeds were used in herds for only 2.32–2.50 lactations. The highest lifetime milk yield was noted in animals of the Holstein breed (18,669 kg), and the lowest (14,940 kg) in cows of the Ukrainian Red-and-White breed. Correlation analysis of the economic characteristics of dairy cows with indicators of their productive longevity confirms the possibility of indirect predictive selection of animals in order to form high-yielding herds with long-term economic use. Among the studied traits, milk yield for the first (r = -0.217 to +0.205) and the best lactation (r = +0.061 to +0.609) had the greatest predictive value (P < 0.001) for life expectancy, duration of productive use, duration of lactation, number of lactations per lifetime, lifetime milk yield and lifetime milk fat. Indirect predictive selection of animals can also be carried out according to the duration of the first service period (r = -0.462 to +0.106) and live weight during rearing (r = -0.286 to +0.126). It was established that live weight at first insemination and at first calving significantly influenced the indicators of lifetime yield, economic use and lactation, as evidenced mainly by the higher and reliable values of the correlation coefficients between these indices.
It should also be noted that the correlation coefficients between the live weight of cows at first calving and the duration of life, productive use, lactation and the number of lactations per lifetime were somewhat higher but negative (r = -0.130 to -0.070) compared with those between live weight at first insemination and the same longevity indicators (r = -0.037 to +0.094). This suggests that live weight at first calving influenced longevity somewhat more than live weight at first insemination. Our data show that selection of cows by age at first calving or duration of first lactation is of little value, since these traits are practically unrelated to the indicators of productive longevity.
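The indirect selection argument above rests on Pearson correlations between early traits and lifetime indicators. A minimal sketch of such a correlation screen, on simulated records rather than the herd data (all values hypothetical):

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)

    # Simulated per-cow records (hypothetical, not the published data)
    first_lactation_yield = rng.normal(5000.0, 800.0, size=500)  # kg
    lifetime_yield = 2.4 * first_lactation_yield + rng.normal(0.0, 4000.0, size=500)

    r, p = pearsonr(first_lactation_yield, lifetime_yield)
    # A trait is a useful indirect selection criterion when |r| is material and P < 0.001
    print(f"r = {r:+.3f}, P = {p:.2e}")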
45

Kaskova, L. F., K. M. Popik, L. P. Ulasevych, I. U. Vaschenko et E. E. Berezhnaya. « AGE SPECIFIC CHARACTERISTICS OF ORAL HYGIENE LEVEL OF SCHOOLCHILDREN ». Ukrainian Dental Almanac, no 2 (19 juin 2019) : 70–74. http://dx.doi.org/10.31718/2409-0255.2.2019.14.

Résumé :
One of the most important causes of dental caries in children is improper oral hygiene, which results in the accumulation of plaque containing a significant amount of pathogenic microflora. Its metabolic products cause demineralization of the hard dental tissues. This is relevant for children of all ages, but most important in the first years after tooth eruption, as newly erupted teeth are poorly mineralized and prone to carious processes. A total of 412 children aged 6 to 16 years attending organized children's collectives (schoolchildren from the 1st to the 11th year of studying) were examined. Oral cavity examination was carried out according to the generally accepted method. The level of oral hygiene was determined by the Fedorov-Volodkina and Green-Vermillion indices. The research was conducted for each age period separately and in age groups corresponding to 6-9 years (group I - elementary school), 10-13 years (group II - secondary school) and 14-16 years (group III - high school). Statistical processing of the obtained data was carried out using Student’s t-test. The results were considered significant at p < 0.05. Analysis of the oral hygiene level according to the Fedorov-Volodkina technique revealed that children aged 6 to 9 years (group I) had an “unsatisfactory” oral hygiene level. Examination of each age period reveals that only children of 9 years old care for their oral cavity properly. We detected significant differences in the oral hygiene level between children with caries and children with intact teeth. The worst oral hygiene level was observed in 6-year-old children affected by caries. In children 10-13 years old (group II) the average oral hygiene index is 2.20 ± 0.08 points, which corresponds to an unsatisfactory level; this is a slightly better index than in the previous age group. At every age, the hygiene index in children affected by caries is worse than in those not affected by it (p < 0.05). In children of the secondary school affected by caries, the level of oral hygiene ranges from 2.27 ± 0.09 at 11 years to 2.60 ± 0.21 at 10 years. These results correspond to the unsatisfactory level, and no reliable age differences were found. Children without caries have a “satisfactory” level of oral hygiene at 10 and 11 years and a “good” level at 12 and 13 years. Thus, we observe an improvement in the oral hygiene level as children grow older, especially for those with intact teeth, compared with the primary schoolchildren. High school children (group III) affected by caries also had an unsatisfactory average hygiene index (2.34 ± 0.10 points), corresponding to the indices of the observed groups I and II. In children without detected carious lesions, the level of oral hygiene at 14, 15 and 16 years was “good”. This indicates improved oral care skills in high school children. For a more objective assessment of the oral hygiene of the different children, the Green-Vermillion index was also used; it makes it possible to assess the state of the entire oral cavity. The average oral hygiene index in children of group I corresponds to the average value and is evaluated as a “satisfactory” state of oral hygiene (Table 2). In children with caries (1.29 ± 0.09 points) and with intact teeth (0.99 ± 0.04 points) we observe the corresponding clinical picture.
Significant changes were found in 9-year-old children with intact teeth compared with children of 6, 7 and 8 years. Thus, we observe an improvement in oral hygiene skills in children of the elementary school. Children of research groups II and III without carious lesions have “satisfactory” and “good” levels of oral hygiene; for those with caries, the result in all age periods is “satisfactory”. The oral hygiene indices of children with and without caries differ significantly. In order to assess the level of oral hygiene objectively, preference should be given to the Green-Vermillion index. Particular attention should be paid to the oral hygiene level of elementary school pupils, since it is worse than that of children from secondary and high school. We do not observe a significant improvement in oral hygiene in senior children compared with those from secondary school. This makes constant education and control over oral cavity care in schoolchildren necessary.
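The group comparisons here are plain two-sample Student's t-tests on index scores. A minimal sketch with simulated Green-Vermillion scores; the group means echo the abstract, while the spreads and sample sizes are assumptions:

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)

    # Simulated Green-Vermillion scores: children with caries vs intact teeth
    # (means from the abstract; standard deviations and n are assumed)
    caries = rng.normal(1.29, 0.40, size=60)
    intact = rng.normal(0.99, 0.30, size=60)

    t, p = ttest_ind(caries, intact)
    print(f"t = {t:.2f}, p = {p:.4f}")  # significant at the study's p < 0.05 threshold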
46

Stulberg, Eric, Alexander Zheutlin, Raymond Strobel, Katherine He et Adelyn Beil. « 2412 Cost effectiveness analysis of operative Versus antibiotic management for uncomplicated appendicitis ». Journal of Clinical and Translational Science 2, S1 (juin 2018) : 79–80. http://dx.doi.org/10.1017/cts.2018.279.

Résumé :
OBJECTIVES/SPECIFIC AIMS: (1) Evaluate the relative incremental cost-effectiveness [cost per quality-adjusted life year (QALY) gained] of antibiotics, laparotomy, and laparoscopy for the initial treatment of uncomplicated appendicitis. (2) Detect if the relative incremental cost-effectiveness of each treatment differs by age, namely in pediatric patients, adult patients, and geriatric patients. (3) Use deterministic and probabilistic sensitivity analyses to assess the robustness of our findings when varying multiple model parameters. METHODS/STUDY POPULATION: Study Population and Analytic Approach: The population under analysis is a simulated population of those aged 1–90 diagnosed with uncomplicated appendicitis by computed tomography (CT) in the emergency department. Pregnant women and those younger than 1 year old were excluded from our analysis. We simulated our population through a Markov state-transition simulation model. Using this model, we estimated the lifelong costs and effects on QALYs of the use of antibiotics, laparoscopy, and laparotomy for a given hypothetical individual with uncomplicated appendicitis. This model allowed for the incorporation of both the short-term and long-term effects of each respective treatment option. The primary outcome of the model was the cost per additional QALY gained. The analysis was conducted from a healthcare perspective. A 100 age-year time horizon was used. A 3% discount rate was applied to both the costs and effects in the model. Transition states are depicted. Surgical state rates were derived from HCUP. Treatment failure of antibiotics was defined as recurrent appendicitis within one year of antibiotic treatment; this was determined using results from prior RCTs and a Cochrane review of antibiotic management for uncomplicated appendicitis. Recurrent appendicitis was defined as appendicitis recurring more than 1 year after antibiotic treatment, using rates of appendicitis in the general population by age group. National age-adjusted mortality rates were applied to account for death due to causes unrelated to appendicitis. To assess differential results by age, different acute and long-term outcome, cost, and state transition rates were applied to 3 age groups: a pediatric group (1–17 years old), an adult group (18–64 years old), and a geriatric group (65+ years old). As an individual progressed through the model until age 100, the respective parameters changed to adjust for the transitions between the 3 life stages. Outcomes After Appendicitis: Lifetime QALYs were incorporated throughout the study for short-term and long-term health states. There is limited availability of QALY data in the literature pertaining to the health states specific to appendicitis. Due to this limitation, calculated quality of life (QoL) indices for 2015 created by Wu et al. were utilized for this study. QALYs were subsequently derived by multiplying QoL by the appropriate duration of time spent in a respective health state. Transition rates between health states were abstracted from the existing literature. Costs: Direct medical costs were obtained from HCUP statistics from the 2014 fiscal year for all age groups in the nationwide network. This database contains all costs of care related to surgical appendicitis intervention; however, it lacks costs associated with antibiotic-only management. To account for these costs, data were extracted from the currently available literature, and the resulting average was applied to our model.
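A Markov cohort model of this kind boils down to accumulating discounted costs and QALYs over yearly cycles and comparing strategies by incremental cost per QALY. A deliberately minimal two-strategy sketch, in which every transition probability, cost, and utility is an illustrative placeholder rather than a study input:

    # Minimal Markov cohort sketch: discounted lifetime costs/QALYs and an ICER.
    # All numeric inputs below are illustrative placeholders, not the study's values.
    DISCOUNT = 0.03  # annual discount rate applied to costs and effects

    def run_strategy(upfront_cost, annual_utility, annual_death_prob,
                     start_age=40, horizon_age=100):
        alive, cost, qalys = 1.0, upfront_cost, 0.0
        for year in range(horizon_age - start_age):
            d = 1.0 / (1.0 + DISCOUNT) ** year
            qalys += alive * annual_utility * d  # QALYs accrued by survivors this cycle
            alive *= 1.0 - annual_death_prob     # background mortality transition
        return cost, qalys

    cost_abx, qaly_abx = run_strategy(3000.0, 0.86, 0.01)     # antibiotics arm (placeholder)
    cost_surg, qaly_surg = run_strategy(12000.0, 0.85, 0.01)  # surgical arm (placeholder)

    d_cost, d_qaly = cost_abx - cost_surg, qaly_abx - qaly_surg
    if d_cost < 0 and d_qaly > 0:
        print("antibiotics dominate: cheaper and more QALYs")
    else:
        print(f"ICER = {d_cost / d_qaly:,.0f} $/QALY")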
Sensitivity Analysis: One-way analyses by cost of procedure and effectiveness of antibiotic protocol were undertaken to account for regional variation in costs and improvements in antibiotic therapy, respectively. For the cost-of-procedure sensitivity analysis, costs were varied by 1 standard deviation below and above the mean cost per treatment group per age. These costs were then compared to a designated reference group. The antibiotic sensitivity analysis was conducted by reducing the effectiveness of antibiotics from the maximum reported effectiveness down to 0, with the goal of obtaining the level of effectiveness at which antibiotics were no longer cost-effective. A probabilistic Monte-Carlo sensitivity analysis was then employed to determine the percent likelihood of each treatment arm being cost-effective at a threshold of $100,000 per additional QALY. The probabilistic sensitivity analysis was then repeated to determine the percent likelihood of each treatment arm being the dominant option, in that it lowers costs and adds QALYs. RESULTS/ANTICIPATED RESULTS: Our model examined the cost-effectiveness of 3 different treatment options for patients with acute uncomplicated appendicitis: laparoscopic appendectomy, open appendectomy (laparotomy), and an antibiotic regimen. We first examined the cost-effectiveness of each of these strategies in comparison to laparotomy. Laparoscopic appendectomy was shown to be superior to laparotomy with regard to costs and QALYs for patients aged 18 to 65+, while there was very little difference for patients aged 1–17. For those aged 1–17, laparoscopy had an additional cost of $90.00 with an associated gain of 0.1 QALYs compared with laparotomy. For those aged 18–64, laparoscopy had a net cost-savings of $3437.03 with an associated gain of 0.13 QALYs compared with laparotomy. For those aged 65+, laparoscopy had a net cost-savings of $5713.55 with an associated gain of 0.13 QALYs compared to laparotomy. Antibiotic management was superior to laparotomy as regards both costs and QALYs for all 3 age cohorts. For those aged 1–17, antibiotic management had a net cost-savings of $5972.55, with an associated gain of 0.6 QALYs compared with laparotomy. For those aged 18–64, antibiotic management had a net cost-savings of $6621.00 with an associated gain of 0.5 QALYs compared with laparotomy. For those aged 65+, antibiotic management had a net cost-savings of $11,953.00 with an associated gain of 0.21 QALYs compared with laparotomy. We then assessed the cost-effectiveness of antibiotics relative to laparoscopy. In all 3 age groups, antibiotics added QALYs and were cost-saving. For those aged 1–17, antibiotic management had a net cost-savings of $6062.55, with an associated gain of 0.6 QALYs compared with laparoscopy. For those aged 18–64, antibiotic management had a net cost-savings of $3183.97 with an associated gain of 0.5 QALYs compared with laparoscopy. For those aged 65+, antibiotic management had a net cost-savings of $6239.45 with an associated gain of 0.21 QALYs compared with laparoscopy. Sensitivity Analysis: We first examined the effect of varying costs on our results. Costs for all interventions were varied by 1 standard deviation above and below the average costs used in our original model, yielding 3 cost estimate levels: high cost (1 standard deviation above), middle cost (average cost reported in the model), and low cost (1 standard deviation below). For all 3 cost estimate levels of antibiotics, antibiotics persistently dominated laparotomy for all 3 age groups.
Laparoscopy dominated at all cost levels in the 18–64 and 65+ age groups but had a positive ICER at both the high and medium cost levels in the 1–17 age group. We then varied the effectiveness (one minus the failure rate) of antibiotic treatment in each age group to assess at what level of effectiveness antibiotics become dominant relative to laparotomy. In ages 1–17, antibiotic treatment became dominant at 43.8%; in ages 18–64, antibiotic treatment became dominant at 33%; and in ages 65+, there was no level of antibiotic effectiveness at which this therapy was not dominant over laparotomy. The probabilistic Monte-Carlo sensitivity analysis is pending, but we anticipate antibiotics having a high likelihood of being both cost-effective and dominant relative to the other 2 treatment options. DISCUSSION/SIGNIFICANCE OF IMPACT: We performed a cost-effectiveness analysis comparing surgery with antibiotic management for uncomplicated appendicitis. Our study found that antibiotic therapy was the dominant strategy in all age groups, as it yielded lower costs and additional QALYs gained compared with laparotomy and laparoscopy. Appendicitis is one of the most common surgical emergencies worldwide, with a lifetime risk of 6.9% in females and 8.6% in males (Körner 1997). For over 100 years, open appendectomy had been the established treatment for appendicitis, but management has evolved with the advent of laparoscopy and the now growing use of antibiotics for the treatment of appendicitis. There is growing interest in nonoperative management of uncomplicated appendicitis, given both an aging population that is increasingly frail and vulnerable to surgical complications and concerns over skyrocketing medical costs. Our model showed that antibiotic-only management was cost-effective in all age groups. This has important implications for the management of appendicitis, where current practice is to offer antibiotic-only management only in the “rare cases” where the patient is unfit for surgery or refuses surgery. Our data show that medical management of appendicitis is not only cheaper but also provides more QALYs in all age groups. Our study has several limitations. First, we conducted our analysis under the assumption that all patients will be cured of appendicitis following surgical intervention. Some patients following appendectomy will develop symptoms of appendicitis and be diagnosed with “stump appendicitis,” which can occur in stumps as short as 0.5 cm and can present as late as 50 years following the initial surgery (Kanona, 2012). Additionally, any intraperitoneal surgery can lead to late complications such as small bowel obstruction from adhesions following surgery. Thus, our assumption that patients following appendectomy will return to the general population’s QALYs and mortality rate is not necessarily an accurate reflection of all clinical courses. However, the overwhelming majority of appendectomy patients recover fully post-surgery, and we do not believe the above complications would significantly change our analysis. We also assumed that all patients with recurrent appendicitis following medical management would undergo surgery. However, patients who underwent nonoperative management for their initial appendicitis may be more likely to be ineligible for surgery or to refuse surgery during this second case of appendicitis. In addition, data were sparse for QALYs for the complications of open and laparoscopic surgery.
We estimated these numbers from the EQ-5D, which, while perhaps not exact, we believe to be the best approximation given the available data. The next steps in evaluating the use of nonoperative management in uncomplicated appendicitis would be to validate the use of nonoperative management in elderly populations and to develop more accurate diagnostic criteria for uncomplicated versus complicated appendicitis. Additionally, with increasing attention on antibiotic-resistant micro-organisms, policy decisions on the use of nonoperative management must also consider antibiotic stewardship. While one dose of perioperative antibiotics is indicated for appendectomy, treatment strategies from trial protocols for antibiotic-only management require significantly more antibiotics: some protocols require 1–3 days of IV antibiotics followed by up to 10 days of oral antibiotics. This study provides a cost-effectiveness analysis of treatment options for acute uncomplicated appendicitis among varying age groups. Our analysis demonstrates the benefit of antibiotics as initial therapy in the management of acute uncomplicated appendicitis. While the historic gold standard of laparotomy persists as the first-line treatment option in many physicians’ minds, the advancement of other methods, whether surgical via laparoscopic removal of the appendix or medical via improved antibiotic regimens, suggests that better alternatives exist. Our study builds upon a growing body of literature supporting initial treatment of acute uncomplicated appendicitis with antibiotics, before surgical intervention.
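The probabilistic Monte-Carlo analysis described in the methods amounts to drawing incremental costs and QALYs from assumed distributions and reporting how often each arm wins. A minimal net-monetary-benefit sketch with purely hypothetical distributions (not the study's inputs):

    import numpy as np

    rng = np.random.default_rng(42)
    WTP = 100_000.0  # willingness-to-pay threshold, $ per QALY
    N = 10_000       # Monte-Carlo draws

    # Hypothetical distributions for the incremental cost and QALYs of
    # antibiotics vs laparoscopy (illustrative only)
    d_cost = rng.normal(-3200.0, 1500.0, size=N)
    d_qaly = rng.normal(0.5, 0.25, size=N)

    nmb = WTP * d_qaly - d_cost  # incremental net monetary benefit per draw
    print(f"P(cost-effective at $100k/QALY) = {np.mean(nmb > 0):.1%}")
    print(f"P(dominant) = {np.mean((d_cost < 0) & (d_qaly > 0)):.1%}")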
47

Ortemenka, Ye P., S. I. Tarnavska et T. V. Krasnova. « Diagnostic and predictor role of some paraclinical markers in the differential diagnosis of acute infectious-inflammatory processes of the lower respiratory tract in children ». Modern pediatrics. Ukraine, no 7(111) (29 novembre 2020) : 14–21. http://dx.doi.org/10.15574/sp.2020.111.14.

Résumé :
Diagnosis of acute infectious-inflammatory processes of the lower respiratory tract, with respect to justifying etiotropic therapy, is often based on the evaluation of blood inflammatory markers and chest X-ray data, but the scientific evidence of their informativeness in the differential diagnosis of community-acquired pneumonia and acute bronchitis is conflicting. Purpose — to study the predictor role of some paraclinical indices in the verification of infectious and inflammatory diseases of the lower respiratory tract (community-acquired pneumonia and acute obstructive bronchitis) in children of different ages in order to optimize treatment. Materials and methods. To achieve the goal of the study, a cohort of 75 children of different ages with acute infectious-inflammatory pathology, who received inpatient treatment at the pulmonology department of the Regional Children's Clinical Hospital in Chernivtsi, was formed by simple random sampling. The first (I) clinical group was formed by 51 patients with a verified diagnosis of community-acquired pneumonia (CAP) with an acute course, and the second (II) clinical group included 24 children in whom an acute infiltrative process in the lungs was excluded but who had broncho-obstructive syndrome. According to the main clinical characteristics, the comparison groups were comparable. The results of the study were analyzed by parametric (P, Student's criterion) and non-parametric (Pϕ, Fisher's angular transformation) methods, and by methods of clinical epidemiology, with the diagnostic value of the tests evaluated in terms of their sensitivity (Se) and specificity (Sp), as well as the attributable (AR) and relative (RR) risks and the odds ratio (OR), taking into account their 95% confidence intervals (95% CI). Results. Analysis of the obtained data showed that in patients with CAP the common inflammatory blood markers (leukocytosis, relative neutrophilia, left shift of the leukocyte formula, elevated erythrocyte sedimentation rate (ESR) or a high level of C-reactive protein (CRP)) are characterized by low sensitivity (Se between 11% and 63%), indicating that they are unsuitable as screening tests for the verification of pneumonia. At the same time, it was shown that these inflammatory blood markers possess sufficient specificity (in the range from 75% to 93%) in the verification of pneumonia only when markedly increased (total leukocyte count >15.0×10⁹, ESR >10 mm/h and blood CRP >6 mg/ml), indicating that they are adequate only for confirming inflammation of the lung parenchyma. From the standpoint of clinical epidemiology, it was proved that asymmetric findings on lung radiographs (asymmetric enhancement of the pulmonary pattern, asymmetric changes of the lung roots and, especially, the presence of infiltrative changes in the lung parenchyma) are the most informative diagnostic tests in pneumonia verification (Sp=90–95%) and have a statistically significant predictor role in the final diagnosis (OR=11.6–150).
When assessing the hemogram in children of clinical group II, it was found that only a relative number of band neutrophils <5%, as a diagnostic test, had a small proportion (16%) of false-positive results, which allows this marker to be used in confirming the diagnosis of acute obstructive bronchitis, but not as its predictor (OR=2.21; 95% CI: 0.69–7.06) or screening test (Se=29%). At the same time, a significant diagnostic and predictor role of chest X-ray examination in the differential diagnosis of acute broncho-obstructive syndrome and pneumonia was established. Namely, symmetrical alteration of the lung root architecture on chest radiographs in the absence of infiltrative changes in the pulmonary fields was characterized by few false-negative results (10%), which allows the use of this feature as a screening pattern in the diagnosis of acute obstructive bronchitis. The absence of changes in the pulmonary pattern on chest radiographs should be used to confirm the diagnosis of acute obstructive bronchitis (Sp=98%), but not as a screening sign, owing to the significant number of false-negative results (Se=48%). Conclusions. In general, the low diagnostic and predictive role of the common blood inflammatory markers for the diagnosis of acute inflammation of the lung parenchyma in children of different ages, as well as in the differential diagnosis of pneumonia and acute obstructive bronchitis, was confirmed. At the same time, it was found that such radiological features as asymmetric enhancement of the pulmonary pattern and the presence of asymmetric infiltrative changes of the lung parenchyma are the most informative diagnostic tests in the verification of pneumonia (Se=80–88% and Sp=90–95%) and have a statistically significant predictor role in the final diagnosis (OR=38.95–150). It was shown that symmetrical changes of the lung roots (their deformation, widening or infiltration) on chest radiographs in the absence of infiltrations in the pulmonary fields, as well as the absence of changes in the pulmonary pattern, have a statistically significant predictor role in the diagnosis of acute obstructive bronchitis (OR=20.78–55.0). The study was carried out in accordance with the principles of the Helsinki Declaration. The study protocol was approved by the Local Ethics Committee of the institution specified in the work. Informed consent was obtained from the parents of the children for the research. The authors declare no conflicts of interest. Key words: community-acquired pneumonia, obstructive bronchitis, children, diagnostic value, predictors.
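All of the test metrics quoted above (Se, Sp, OR with 95% CI) derive from a 2×2 diagnostic table. A minimal sketch using the Woolf log-odds interval; the group sizes echo the cohort (51 vs 24), but the cell split is invented:

    import math

    # Hypothetical 2x2 counts: rows = radiographic sign +/-, columns = CAP +/-
    tp, fp, fn, tn = 45, 2, 6, 22

    se = tp / (tp + fn)          # sensitivity: sign present among the diseased
    sp = tn / (tn + fp)          # specificity: sign absent among the non-diseased
    or_ = (tp * tn) / (fp * fn)  # odds ratio

    # Woolf (log) 95% confidence interval for the odds ratio
    se_log = math.sqrt(1/tp + 1/fp + 1/fn + 1/tn)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    print(f"Se={se:.0%}  Sp={sp:.0%}  OR={or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")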
48

Pádua, K. G. O. « Nonisothermal Gravitational Equilibrium Model ». SPE Reservoir Evaluation & Engineering 2, no 02 (1 avril 1999) : 211–17. http://dx.doi.org/10.2118/55972-pa.

Résumé :
Summary This work presents a new computational model for the non-isothermal gravitational compositional equilibrium. The works of Bedrikovetsky [Mathematical Theory of Oil and Gas Recovery, Kluwer Academic Publishers, London, (1993)] (gravity and temperature) and of Whitson and Belery ("Compositional Gradients in Petroleum Reservoirs," paper SPE 28000, presented at the 1994 University of Tulsa Centennial Petroleum Engineering Symposium, Tulsa, 29-31 August) (algorithm) are the basis of the mathematical formulation. Published data and previous simplified models validate the computational procedure. A large deep-water field in Campos Basin, Brazil, exemplifies the practical application of the model. The field has an unusual temperature gradient opposite to the Earth's thermal gradient. The results indicate the increase of oil segregation with temperature decrease. The application to field data suggests the reservoir could be partially connected. Fluid composition and property variation are extrapolated to different depths with their respective temperatures. The work is an example of the application of thermodynamic data to the evaluation of reservoir connectivity and fluid properties distribution. Problem Compositional variations along the hydrocarbon column are observed in many reservoirs around the world.1–4 They may affect reservoir/fluid characteristics considerably, leading to different field development strategies.5 These variations are caused by many factors, such as gravity, temperature gradient, rock heterogeneity, and hydrocarbon genesis and accumulation processes.6 In cases where the thermodynamically associated factors (gravity and temperature) are dominant (mixing process in the secondary migration), existing gravitational compositional equilibrium (GCE) models7,8 provide an explanation of most observed variations. However, in some cases8,9 the thermal effect could have the same order of magnitude as the gravity effect. The formulation for calculating compositional variation under the force of gravity for an isothermal system was first given by Gibbs10 $$\mu_{ci}(p, Z, T)=\mu_{i}(p_{\rm ref}, Z_{\rm ref}, T_{\rm ref}) - m_{i}\,g\,(h - h_{\rm ref}),\eqno({\rm 1})$$ $$\mu_{ci}=\delta[nRT\,{\rm ln}(f_{i})]/\delta x,\eqno({\rm 2})$$ $$f_{i}=f({\rm EOS}),\eqno({\rm 3})$$ where p=pressure, T=temperature, Z=fluid composition, m=mass, $\mu_{c}$=chemical potential, h=depth, ref=reference, EOS=equation of state, i=component index, R=real gas constant, n=number of moles, f=fugacity, ln=natural logarithm, x=component concentration, and g=gravitational acceleration. In 1930 Muskat11 provided an exact solution to Eq. (1), assuming a simplified equation of state and ideal mixing. Because of the oversimplified assumptions, the results suggested that gravity has a negligible effect on the compositional variation in reservoir systems. In 1938, Sage and Lacey12 used a more realistic equation of state (EOS), Eq. (3), to evaluate Eq. (2). At that time, the results showed significant composition variations with depth, and greater ones for systems close to critical conditions. Schulte13 solved Eq. (1) using a cubic equation of state (3) in 1980. The results showed significant compositional variations. They also suggested a significant effect of the interaction coefficients and the aromatic content of the oil, as well as a negligible effect of the EOS type (Peng-Robinson and Soave-Redlich-Kwong), on the final results.
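In Muskat's ideal-mixing limit, Eq. (1) collapses to a barometric-type relation in which each component varies with depth roughly as exp(m_i g Δh / RT), renormalised to a valid composition. A rough sketch of that limiting case only; the mixture, temperature and reference composition below are hypothetical:

    import math

    R, T, g = 8.314, 350.0, 9.81  # gas constant [J/(mol K)], assumed reservoir T [K], gravity [m/s^2]

    # Hypothetical binary mixture: methane plus a heavy pseudo-component
    molar_mass = {"C1": 0.016, "C7+": 0.20}  # kg/mol
    x_ref = {"C1": 0.60, "C7+": 0.40}        # composition at the reference depth

    def composition_at(dh_below_ref):
        # Ideal-mixing (Muskat-type) limit of Eq. (1): heavier components are
        # enriched with depth; the weights are renormalised to sum to 1.
        w = {i: x_ref[i] * math.exp(molar_mass[i] * g * dh_below_ref / (R * T))
             for i in x_ref}
        s = sum(w.values())
        return {i: round(w[i] / s, 3) for i in w}

    print(composition_at(0.0))    # reference composition
    print(composition_at(500.0))  # 500 m deeper: the C7+ fraction grows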
A simplified formulation that included gravity and temperature separately was presented by Holt et al.9 in 1983. Example calculations, limited to binary systems, suggest that thermal effects can be of the same magnitude as gravity effects. In 1988, Hirschberg5 discussed the influence of asphaltenes on compositional grading using a simplified two-component model (asphaltenes and non-asphaltenes). He concluded that for oils with gravity <35°API, the compositional variations are mainly caused by asphalt segregation, and the most important consequences are the large variations in oil viscosity and the possible formation of tar mats. Montel and Gouel7 presented an algorithm in 1985 for solving the GCE problem using an incremental hydrostatic term instead of solving for pressure directly. Field case applications of GCE models were presented by Riemens et al.2 in 1985, and by Creek et al.1 in 1988. They reported some difficulties in matching observed and calculated data but, in the end, it was shown that most compositional variations could be explained by the effect of gravity. Wheaton14 and Lee6 presented GCE models that included capillary forces in 1988 and 1989, respectively. Lee concluded that the effect of capillarity can become appreciable in the neighborhood of 1 μm pore radius. In 1990, an attempt to combine the effects of gravity and temperature for a system of zero net mass flux was presented by Belery and Silva.15 The multicomponent model was an extension of earlier work by Dougherty and Drickamer16 originally developed in 1955 for binary liquid systems. The comparison of calculated and observed data from the Ekofisk field in the North Sea is, however, not quantitatively accurate (with or without the thermal effect). An extensive discussion and the formal mathematical treatment of compositional grading using irreversible thermodynamics, including gravitational and thermal fields, was presented by Bedrikovetsky17 in 1993. Due to the lack of necessary information on the values of thermal diffusion coefficients, which in general are obtained experimentally only for certain mixtures in narrow ranges of pressure and temperature, simplified models were proposed. In 1994, Hamoodi and Abed3 presented a field case of a giant Middle East reservoir with areal and vertical variations in its composition.
49

Timchy, K. I., V. T. Smetanin et O. I. Sidashenko. « PROBLEMS SOLUTION OF SPECIAL IDENTIFICATION OF EISENIA CULTURAL POPULATIONS ». Animal Breeding and Genetics 54 (29 novembre 2017) : 156–61. http://dx.doi.org/10.31073/abg.54.20.

Résumé :
Introduction. Intensification of various aspects of modern agricultural production, based on the use of large amounts of mineral fertilizers and chemical means of plant and animal protection, makes it urgent to develop and master nature-like methods for restoring the quality of soils and the bottom deposits of inland water bodies. The goal is their successful use in cultivating crops and conducting remediation measures. One such approach is vermiculture – the breeding of earthworms of the family Lumbricidae for the biotransformation of depleted soils and organic wastes in order to obtain biohumus. Relevance. Earthworms differ significantly in their biological characteristics from the animals traditionally bred in agriculture. When working with them, a selection problem arises in assessing their phenotypes – it is difficult to identify individuals and assess their performance. In this regard, the main task of effective selection becomes more complicated. Scientific sources today contain no small amount of data on the genetic structure of natural earthworm populations, their karyotypes and morphological features [3]. But the development of vermiculture, based on the breeding and industrial use of earthworms for biotransformation, requires clear species identification of the cultured lines of invertebrates of the family Lumbricidae. Research objective. The aim of the study was to investigate the morphological and cytogenetic features of the Eisenia worm population that is being formed. Materials and methods. In forming the new population, worms purchased by the Department of Biotechnology of UGHTU from the association "Bioconversion" and previously described as E. foetida were used. From this array of animals, 6 worms were selected that became the founders of the new population. After the number had increased to 300 animals, we formed groups of 20 individuals; each group was irradiated with an LGN-208b laser (power 1 mW, wavelength 633 nm, beam diameter 14 mm) at exposures from 5 to 30 min. The control group was not irradiated. Irradiated animals were bred in separate groups, and their morphological, biochemical and cytogenetic features were studied. Species affiliation was determined morphologically and compared with the descriptions of these species given in foreign taxonomic works [5]. The intensity of pigmentation of the body integument was determined in the animals. All further morphometric studies were carried out on worms fixed in 75% ethanol. By microscopy, parameters such as body length and diameter, total number of segments, location of the segments of the girdle, pubertal ridges and the first dorsal pore, and type of setae were analyzed. Karyological analysis was performed on worms selected at the time of highest sexual activity. Preparations were made from the tissue of the seminal sacs by the method previously used successfully to study the karyotypes of lumbricids [6]. The worms were injected with 0.1% colchicine solution into the pre-lobe zone 19 h 20 min before dissection. The animals were immobilized in 75% ethanol solution and dissected along the median dorsal line. The removed seminal sacs were treated hypotonically for 50 min in distilled water and fixed in three steps in a 1:3 mixture of acetic acid and ethanol. Chromosome preparations were made by imprinting.
Genetic marking was performed by electrophoresis in a 7.5% polyacrylamide gel in a Tris-EDTA·Na2-borate system at pH 8.5 [7] for 1 h 20 min at a voltage of 200 V and a current of 140 mA. Extracts of enzymes and proteins were obtained by grinding the terminal body segments, 5–10 mm in size, in distilled water at a 1:1 ratio. After the current was switched off, the gel was treated with a solution containing a special substrate that reacts specifically with the enzyme under study, forming spots corresponding to the enzyme spectra on the gel. The genotype of an individual at the locus encoding the enzyme being studied is determined by the distribution of the spots on the gel [8]. Results of the research. The studies showed that by some characters the animal groups under study belong to the species E. foetida and by other attributes to the species E. veneta, which posed the problem of the species affiliation of the earthworm array when working to form a new population. Thus, the morphological indices studied revealed that for all morphological features the animals correspond to the species Eisenia foetida. The cytogenetic study found that the karyotype of the animals comprised 36 chromosomes, whereas it should be 22: the karyotype of Eisenia foetida is composed of 22 chromosomes, while the karyotypes of other species of the genus Eisenia have 36 chromosomes. Therefore, biochemical gene marking was carried out on enzyme systems, in particular nonspecific esterases. Nonspecific esterases of different species of the genus Eisenia differ in molecular weight: the esterases of E. foetida have a lower mass than the esterases of E. veneta. Our studies showed that, at the locus of the spectra of nonspecific esterases, the individuals under study belong to the species E. veneta. Conclusions. The research can serve as a theoretical basis for species identification of animals in vermiculture and for creating biological diversity in its populations. Although the line originates from 6 individuals obtained from a single array of animals, esterase polymorphism showed a fairly high level of genetic variability in the forming line, which indicates a reserve of genetic variability and gives hope for its successful development in the future.
50

Schertzer, D., et S. Lovejoy. « EGS Richardson AGU Chapman NVAG3 Conference : Nonlinear Variability in Geophysics : scaling and multifractal processes ». Nonlinear Processes in Geophysics 1, no 2/3 (30 septembre 1994) : 77–79. http://dx.doi.org/10.5194/npg-1-77-1994.

Résumé :
Abstract. 1. The conference The third conference on "Nonlinear VAriability in Geophysics: scaling and multifractal processes" (NVAG 3) was held in Cargese, Corsica, Sept. 10-17, 1993. NVAG3 was a joint American Geophysical Union Chapman and European Geophysical Society Richardson Memorial conference, the first specialist conference jointly sponsored by the two organizations. It followed NVAG1 (Montreal, Aug. 1986), NVAG2 (Paris, June 1988; Schertzer and Lovejoy, 1991), five consecutive annual sessions at EGS general assemblies and two consecutive spring AGU meeting sessions. As with the other conferences and workshops mentioned above, the aim was to develop the confrontation between theories and experiments on the scaling/multifractal behaviour of geophysical fields. Subjects covered included climate, clouds, earthquakes, atmospheric and ocean dynamics, tectonics, precipitation, hydrology, the solar cycle and volcanoes. Areas of focus included new methods of data analysis (especially those used for the reliable estimation of multifractal and scaling exponents), as well as their application to rapidly growing data bases from in situ networks and remote sensing. The corresponding modelling, prediction and estimation techniques were also emphasized, as were the current debates about stochastic and deterministic dynamics, fractal geometry and multifractals, self-organized criticality and multifractal fields, each of which was the subject of a specific general discussion. The conference started with a one-day short course on multifractals featuring four lectures on a) Fundamentals of multifractals: dimension, codimensions, codimension formalism, b) Multifractal estimation techniques (PDMS, DTM), c) Numerical simulations, Generalized Scale Invariance analysis, d) Advanced multifractals, singular statistics, phase transitions, self-organized criticality and Lie cascades (given by D. Schertzer and S. Lovejoy; detailed course notes were sent to participants shortly after the conference). This was followed by five days with 8 oral sessions and one poster session. Overall, there were 65 papers involving 74 authors. In general, the main topics covered are reflected in this special issue: geophysical turbulence, clouds and climate, hydrology and solid earth geophysics. In addition to AGU and EGS, the conference was supported by the International Science Foundation, the Centre National de la Recherche Scientifique, Meteo-France, the Department of Energy (US), the Commission of European Communities (DG XII), the Comite National Francais pour le Programme Hydrologique International and the Ministere de l'Enseignement Superieur et de la Recherche (France). We thank P. Hubert, Y. Kagan, Ph. Ladoy, A. Lazarev, S.S. Moiseev, R. Pierrehumbert, F. Schmitt and Y. Tessier for help with the organization of the conference. However, special thanks go to A. Richter and the EGS office, and B. Weaver and the AGU, without whom this would have been impossible. We also thank the Institut d'Etudes Scientifiques de Cargese, whose beautiful site was much appreciated, as well as the Bar des Amis, whose ambiance stimulated so many discussions. 2. Tribute to L.F. Richardson With NVAG3, the European geophysical community paid tribute to Lewis Fry Richardson (1881-1953) on the 40th anniversary of his death. Richardson was one of the founding fathers of the idea of scaling and fractality, and his life reflects the European geophysical community and its history in many ways.
Although many of Richardson's numerous, outstanding scientific contributions to geophysics have been recognized, perhaps his main contribution, concerning the importance of scaling and cascades, has still not received the attention it deserves. Richardson was the first not only to suggest numerical integration of the equations of motion of the atmosphere, but also to attempt to do so by hand, during the First World War. This work, as well as a presentation of a broad vision of future developments in the field, appeared in his famous, pioneering book "Weather prediction by numerical process" (1922). As a consequence of his atmospheric studies, the nondimensional number associated with fluid convective stability has been called the "Richardson number". In addition, his book presents a study of the limitations of numerical integration of these equations; it was in this book that, through a celebrated poem, the suggestion was first made that turbulent cascades are the fundamental driving mechanism of the atmosphere. In these cascades, large eddies break up into smaller eddies in a manner which involves no characteristic scales, all the way from the planetary scale down to the viscous scale. This led to the Richardson law of turbulent diffusion (1926) and to the suggestion that particle trajectories might not be describable by smooth curves, but might instead require highly convoluted curves such as the Peano or Weierstrass (fractal) curves for their description. As a founder of the cascade and scaling theories of atmospheric dynamics, he more or less anticipated the Kolmogorov law (1941). He also used scaling ideas to invent the "Richardson dividers method" of successively increasing the resolution of fractal curves, and tested the method on geographical boundaries (as part of his wartime studies). In the latter work he anticipated recent efforts to study scale invariance in rivers and topography. His complex life typifies some of the hardships that the European scientific community has had to face. His educational career is unusual: he received a B.A. degree in physics, mathematics, chemistry, biology and zoology at Cambridge University, and he finally obtained his Ph.D. in mathematical psychology at the age of 47 from the University of London. As a conscientious objector he was compelled to quit the United Kingdom Meteorological Office in 1920 when the latter was militarized by integration into the Air Ministry. He subsequently became the head of a physics department and the principal of a college. In 1940, he retired to do research on war, which was published posthumously in book form (Richardson, 1963). This latter work is testimony to the trauma caused by the two World Wars, which led some scientists, including Richardson, to use their skills in rational attempts to eradicate the sources of conflict. Unfortunately, this remains an open field of research. 3. The contributions in this special issue Perhaps the area of geophysics where scaling ideas have the longest history, and where they have made the largest impact in the last few years, is turbulence. The paper by Tsinober is an example where geometric fractal ideas are used to deduce corrections to standard dimensional analysis results for turbulence. Based on the local spontaneous breaking of the isotropy of turbulent flows, the fractal notion is used in order to deduce diffusion laws (anomalous with respect to the Richardson law).
It is argued that this law is ubiquitous from the atmospheric boundary layer to the stratosphere. The asymptotic intermittency exponent is hypothesized to be not only finite but determined by the angular momentum flux. Schmitt et al., Chigirinskaya et al. and Lazarev et al. apply statistical multifractal notions to atmospheric turbulence. In the former, the formal analogy between multifractals and thermodynamics is exploited, in particular to confirm theoretical predictions that sample-size-dependent multifractal phase transitions occur. While this quantitatively explains the behavior of the most extreme turbulent events, it suggests that, contrary to the type of multifractals most commonly discussed in the literature, which are bounded, more violent (unbounded) multifractals are indeed present in the atmospheric wind field. Chigirinskaya et al. use a tropical rather than mid-latitude data set to study the extreme fluctuations from yet another angle: that of coherent structures, which, in the multifractal framework, are identified with singularities of various orders. The existence of a critical order of singularity which distinguishes violent "self-organized critical structures" was theoretically predicted ten years ago; here it is directly estimated. The second part of this two-part series (Lazarev et al.) investigates yet another aspect of tropical atmospheric dynamics: the strong multiscaling anisotropy. Beyond the determination of universal multifractal indices and critical singularities in the vertical, this enables a comparison to be made with Chigirinskaya et al.'s horizontal results, requiring an extension of the unified scaling model of atmospheric dynamics. Other approaches to the problem of geophysical turbulence are followed in the papers by Pavlos et al., Vassiliadis et al. and Voros et al. All of them share the common assumption that a very small number of degrees of freedom (deterministic chaos) might be sufficient for characterizing/modelling the systems under consideration. Pavlos et al. consider the magnetospheric response to the solar wind, showing that scaling occurs both in real space (using spectra) and in phase space, the latter being characterized by a correlation dimension. The paper by Vassiliadis et al. follows on directly by investigating the phase space properties of power-law filtered and rectified gaussian noise; the results further quantify how low phase space correlation dimensions can occur even for processes with a very large number of degrees of freedom (stochastic processes). Voros et al. analyze time series of geomagnetic storms and magnetosphere pulsations, also estimating their correlation dimensions and Lyapunov exponents while taking special care over the stability of the estimates. They discriminate low-dimensional events from others, which are, for instance, attributed to incoherent waves. While clouds and climate were the subject of several talks at the conference (including several contributions on multifractal clouds), Cahalan's contribution is the only one in this special issue. Addressing the fundamental problem of the relationship between horizontal cloud heterogeneity and the related radiation fields, he first summarizes some recent numerical results showing that even for comparatively thin clouds fractal heterogeneity will significantly reduce the albedo. The model used for the distribution of cloud liquid water is the monofractal "bounded cascade" model, whose properties are also outlined.
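The phase-space studies above (Pavlos et al., Vassiliadis et al., Voros et al.) rest on correlation-dimension estimates of the Grassberger-Procaccia type: delay-embed the time series, count pairs of embedded points closer than r, and read the dimension off the slope of log C(r) versus log r. A minimal sketch on a hypothetical test signal, not on conference data:

    import numpy as np

    def correlation_dimension(x, m=3, tau=1):
        # Grassberger-Procaccia estimate: delay-embed the series in m dimensions,
        # compute the pair-correlation sum C(r), and fit the slope of log C vs log r.
        n = len(x) - (m - 1) * tau
        emb = np.column_stack([x[i * tau : i * tau + n] for i in range(m)])
        d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
        d = d[np.triu_indices(n, k=1)]
        radii = np.logspace(np.log10(np.percentile(d, 1)),
                            np.log10(np.percentile(d, 50)), 10)
        c = np.array([np.mean(d < r) for r in radii])
        slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
        return slope

    # Hypothetical test signal (a noisy sine), purely illustrative
    t = np.linspace(0.0, 60.0, 1200)
    x = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
    print(f"estimated correlation dimension ~ {correlation_dimension(x):.2f}")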
The paper by Falkovich addresses another problem concerning the general circulation: the nonlinear interaction of waves. By assuming the existence of a peak (i.e. scale break) at the inertial oscillation frequency, it is argued that, due to remarkable cancellations, the interactions between long inertio-gravity waves and Rossby waves are anomalously weak, producing a "wave condensate" of large amplitude so that wave breaking with front creation can occur. Kagan et al., Eneva and Hooge et al. consider fractal and multifractal behaviour in seismic events. Eneva estimates multifractal exponents of the density of micro-earthquakes induced by mining activity. The effects of sample limitations are discussed, especially in order to distinguish genuine from spurious multifractal behaviour. With the help of an analysis of the CALNET catalogue, Hooge et al. point out that the origin of the celebrated Gutenberg-Richter law could be related to a non-classical Self-Organized Criticality generated by a first-order phase transition in a multifractal earthquake process. They also analyze multifractal seismic fields, which are obtained by raising earthquake amplitudes to various powers and summing them on a grid. In contrast, Kagan, analyzing several earthquake catalogues, discusses the various laws associated with earthquakes. Giving theoretical and empirical arguments, he proposes an additive (monofractal) model of earthquake stress, emphasizing the relevance of (asymmetric) stable Cauchy probability distributions for describing earthquake stress distributions. This would yield a linear model for self-organized critical earthquakes. References: Kolmogorov, A.N.: Local structure of turbulence in an incompressible liquid for very large Reynolds number, Proc. Acad. Sci. URSS Geochem. Sect., 30, 299-303, 1941. Perrin, J.: Les Atomes, NRF-Gallimard, Paris, 1913. Richardson, L.F.: Weather prediction by numerical process, Cambridge Univ. Press, 1922 (republished by Dover, 1965). Richardson, L.F.: Atmospheric diffusion shown on a distance-neighbour graph, Proc. Roy. Soc. London, A110, 709-737, 1926. Richardson, L.F.: The problem of contiguity: an appendix to Statistics of Deadly Quarrels, General Systems Yearbook, 6, 139-187, 1963. Schertzer, D. and Lovejoy, S.: Nonlinear Variability in Geophysics, Kluwer, 252 pp., 1991.