
Journal articles on the topic 'Combined eqns'



Consult the top 38 journal articles for your research on the topic 'Combined eqns.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Kalinitchev, Anatoliy I. "Concentration waves behaviour and the chromatographic displacement development in the sorbents-nanocomposites during the multicomponent mass transfer and visualisation of the sorption kinetics process." Сорбционные и хроматографические процессы 19, no. 5 (October 30, 2019): 512–24. http://dx.doi.org/10.17308/sorpchrom.2019.19/1166.

Abstract:
Multi-(6)-component mass transfer (MMT) inside the planar matrices of the sorbent nanocomposite (NC) is considered by computerized modelling. For the MMT kinetics in the planar NC membrane, the chromatographic displacement development (DD) is modelled for the propagating modes of the two principal concentration waves Xm(1,2)(L,T) of the two principal sorbate components (m = 1, 2) of the multi-(6)-component NC MMT combined "diffusion and sorption" system. The computerized modelling is based on the mathematical solution of the multi-(6)-component MMT partial differential equations, which include as their basis the author's bi-functional NC MMT models. The main advantage of the NC models considered lies in the introduction of the two diffusing principal sorbate Pi(3,4) components into the consideration. The similarities and the differences between the propagation of the multicomponent Xn(L,T) concentration waves for MMT processes in the modern NC matrix and in the chromatographic column are discussed. The visualization of the kinetics of the MMT process is realized through the creation of scientific computerized animations ("SCA.avi" video files), which demonstrate visually (after the program starts) the propagation of the multi-(n)-component Xn(1-6)(L,T) concentration waves through the NC matrix. The "SCA.avi" animations display the chromatographic DD effect during oral presentation, with the mentioned displacement of the X2 concentration waves by the X1 waves of component 1 (the displacer).
2

Birch, H., P. S. Mikkelsen, J. K. Jensen, and H. C. Holten Lützhøft. "Micropollutants in stormwater runoff and combined sewer overflow in the Copenhagen area, Denmark." Water Science and Technology 64, no. 2 (July 1, 2011): 485–93. http://dx.doi.org/10.2166/wst.2011.687.

Abstract:
Stormwater runoff contains a broad range of micropollutants. In Europe a number of these substances are regulated through the Water Framework Directive, which establishes Environmental Quality Standards (EQSs) for surface waters. Knowledge about discharge of these substances through stormwater runoff and combined sewer overflows (CSOs) is essential to ensure compliance with the EQSs. Results from a screening campaign including more than 50 substances at four stormwater discharge locations and one CSO in Copenhagen are reported here. Heavy metal concentrations were detected at levels similar to earlier findings, e.g., with copper found at concentrations up to 13 times greater than the Danish standard for surface waters. The concentration of polyaromatic hydrocarbons (PAHs) exceeded the EQSs by factors up to 500 times for stormwater and 2,000 times for the CSO. Glyphosate was found in all samples whilst diuron, isoproturon, terbutylazine and MCPA were found only in some of the samples. Diethylhexylphthalate (DEHP) was also found at all five locations in concentrations exceeding the EQS. The results give a valuable background for designing further monitoring programmes focusing on the chemical status of surface waters in urban areas.
3

Ceylan, Huseyin. "Optimal Design of Signal Controlled Road Networks Using Differential Evolution Optimization Algorithm." Mathematical Problems in Engineering 2013 (2013): 1–11. http://dx.doi.org/10.1155/2013/696374.

Abstract:
This study proposes a traffic congestion minimization model in which the traffic signal setting optimization is performed through a combined simulation-optimization model. In this model, the TRANSYT traffic simulation software is combined with the Differential Evolution (DE) optimization algorithm, which is based on the natural selection paradigm. In this context, the EQuilibrium Network Design (EQND) problem is formulated as a bilevel programming problem in which the upper level is the minimization of the total network performance index. In the lower level, the traffic assignment problem, which represents the route choice behavior of the road users, is solved using the Path Flow Estimator (PFE) as a stochastic user equilibrium assessment. The solution of the bilevel EQND problem is carried out by the proposed Differential Evolution and TRANSYT with PFE, the so-called DETRANSPFE model, on a well-known signal controlled test network. Performance of the proposed model is compared to that of two previous works where the EQND problem has been solved by Genetic-Algorithms- (GAs-) and Harmony-Search- (HS-) based models. Results show that the DETRANSPFE model outperforms the GA- and HS-based models in terms of the network performance index and the computational time required.
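For readers unfamiliar with the optimizer, the sketch below shows the basic DE/rand/1/bin variation-and-selection loop that this kind of model relies on, with a stand-in quadratic objective in place of the TRANSYT/PFE network performance index; the population size, F, CR, bounds, and objective are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9, iters=200, seed=0):
    """Minimal DE/rand/1/bin loop; `objective` stands in for the TRANSYT/PFE performance index."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)          # random initial signal settings
    cost = np.array([objective(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)            # mutation (rand/1)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                      # ensure at least one gene crosses over
            trial = np.where(cross, mutant, pop[i])              # binomial crossover
            trial_cost = objective(trial)
            if trial_cost <= cost[i]:                            # greedy selection
                pop[i], cost[i] = trial, trial_cost
    best = np.argmin(cost)
    return pop[best], cost[best]

# Toy usage: four hypothetical "green splits" with a quadratic stand-in objective.
bounds = np.array([[10.0, 60.0]] * 4)
x_best, f_best = differential_evolution(lambda x: np.sum((x - 30.0) ** 2), bounds)
print(x_best, f_best)
```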
4

Gabriel, O., and M. Zessner. "Discussion of an environment quality standard based assessment procedure for permitting discharge." Water Science and Technology 54, no. 11-12 (December 1, 2006): 119–27. http://dx.doi.org/10.2166/wst.2006.832.

Abstract:
The 'combined approach' required by the EC Water Framework Directive pools an emission-based approach with an approach based on environmental quality standards (EQS) to improve European water quality. The implementation of the EQS-based approach poses the problems of defining a reference water discharge and of defining the distance downstream of a point discharge at which the EQS become obligatory and thus have to be controlled, taking incomplete mixing into account. The elaboration of a simple assessment procedure that includes the aspects mentioned above is the point of discussion in this paper. On the basis of easily available data and references from several European countries, recommendations for an Austrian assessment procedure are presented.
5

Stransky, D., I. Kabelkova, and V. Bares. "Stochastic approach to the derivation of emission limits for wastewater treatment plants." Water Science and Technology 59, no. 12 (June 1, 2009): 2305–10. http://dx.doi.org/10.2166/wst.2009.276.

Abstract:
A stochastic approach to the derivation of WWTP emission limits meeting probabilistically defined environmental quality standards (EQS) is presented. The stochastic model is based on the mixing equation, with input data defined by probability density distributions, and is solved by Monte Carlo simulations. The approach was tested on a study catchment for total phosphorus (Ptot). The model assumes independence of the input variables, which was verified for the dry-weather situation. Discharges and Ptot concentrations in both the study creek and the WWTP effluent follow a log-normal probability distribution. Variation coefficients of Ptot concentrations differ considerably along the stream (cv = 0.415–0.884). The selected value of the variation coefficient (cv = 0.420) affects the derived mean value (Cmean = 0.13 mg/l) of the Ptot EQS (C90 = 0.2 mg/l). Even after a supposed improvement of water quality upstream of the WWTP to the level of the Ptot EQS, the calculated WWTP emission limits would be lower than the values of the best available technology (BAT). Thus, minimum dilution ratios for the meaningful application of the combined approach to the derivation of Ptot emission limits for Czech streams are discussed.
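The mixing-equation Monte Carlo scheme described here is simple enough to sketch. Below is a minimal illustration assuming log-normally distributed creek discharge, upstream Ptot concentration, and effluent discharge; all distribution parameters and the candidate emission limit are placeholders rather than the study's data, and only the C90 = 0.2 mg/l target is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

def lognormal(mean, cv, size):
    """Sample a log-normal variable from its arithmetic mean and coefficient of variation."""
    sigma2 = np.log(1.0 + cv**2)
    mu = np.log(mean) - 0.5 * sigma2
    return rng.lognormal(mu, np.sqrt(sigma2), size)

Q_creek = lognormal(mean=0.20, cv=0.8, size=n)    # creek discharge, m3/s (placeholder)
C_creek = lognormal(mean=0.13, cv=0.42, size=n)   # upstream Ptot concentration, mg/l (placeholder)
Q_wwtp  = lognormal(mean=0.05, cv=0.3, size=n)    # WWTP effluent discharge, m3/s (placeholder)
C_limit = 1.0                                     # candidate emission limit, mg/l (placeholder)

# Mixing equation: concentration just downstream of the outfall for each realization.
C_mix = (Q_creek * C_creek + Q_wwtp * C_limit) / (Q_creek + Q_wwtp)

# Check the probabilistically defined EQS (90th percentile must stay below C90 = 0.2 mg/l).
print("C90 downstream:", np.percentile(C_mix, 90))
```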
6

Gabriel, Oliver, Katerina Ruzicka, and Norbert Kreuzinger. "Upgrading Vienna's wastewater treatment plant – linking point source emissions to Environmental Quality Standards." Water Science and Technology 65, no. 7 (April 1, 2012): 1290–97. http://dx.doi.org/10.2166/wst.2012.010.

Abstract:
The new water quality protection approach of the EU combines the control of emissions with in-stream Environmental Quality Standards (EQS). Since 1 April 2006, and currently applicable in Austria in the 2010 version, the EQS for priority substances from list A of European Directive 76/464 and further EQS for relevant chemical substances (list B), identified by a national risk assessment, have to be met to achieve a good ecological state in surface waters (Edict for Water Quality Standards, 2006; changes to the Edict for Water Quality Standards, 2010). The practical assessment of these substances downstream of point source emissions is prescribed in the Edict, but rarely carried out. In this paper, two substances, namely (1) ammonium (list B) and (2) nonylphenol, an endocrine disrupting compound (list A), are presented to discuss: (i) the improvement of treatment efficiency due to the upgrade of a large wastewater treatment plant (WWTP); (ii) the relevance of mixing processes and modelling as a method to control EQS after point source emissions; and (iii) the improvement of water quality in the ambient surface waters. It is shown that the improved treatment in the case of nonylphenol leads to emission values which fall below the EQS, making an assessment unnecessary. In the case of ammonium, emission values are significantly reduced and violation of the EQS is avoided, while mixing modelling is shown to be a suitable instrument to address the resulting in-stream concentrations under different boundary conditions.
7

Vercauteren, Toon, Marina Näsman, Fredrica Nyqvist, Dorien Brosens, Rodrigo Serrat, and Sarah Dury. "MULTIDIMENSIONAL CIVIC ENGAGEMENT OF OLDER PEOPLE: A LATENT CLASS ANALYSIS." Innovation in Aging 7, Supplement_1 (December 1, 2023): 509–10. http://dx.doi.org/10.1093/geroni/igad104.1673.

Abstract:
Civic engagement of older people remains an understudied topic in gerontological research, which has focused largely on formal activities such as volunteering and political engagement. This research proposes a more inclusive definition of multidimensional civic engagement, which includes associational engagement, formal and informal volunteering, digital engagement, and formal and informal political engagement. Additionally, research has failed to address whether older people combine different civic activities or are engaged in only one. Using EQLS data (2016) collected in 33 European countries, this research examined the multidimensional civic engagement of older people and whether they combine civic activities. Descriptive analysis was used to map the multidimensional civic engagement of those over 65 years (n = 9,031). Latent class analysis was conducted to create profiles among those who are civically engaged, to study whether activities are combined, and to explore the socio-structural and social capital resources of each profile (n = 6,142). The results indicate that over two-thirds of older Europeans are engaged in at least one civic activity. Five profiles can be differentiated, ranging from low-diversity profiles to a high-diversity profile regarding the chance of engaging in a multitude of civic activities in later life. The results clearly show that having more resources enables diverse civic engagement in later life. This study gives due attention to the multidimensional civic engagement of older people and identifies the resources that must be considered when promoting it.
8

Chantarangkul, Veena, Bruno Cesana, Pier Mannuccio Mannucci, and Armando Tripodi. "Calibration of Local Systems with Lyophilized Calibrant Plasmas Improves the Interlaboratory Variability of the INR in the Italian External Quality Assessment Scheme." Thrombosis and Haemostasis 82, no. 12 (1999): 1621–26. http://dx.doi.org/10.1055/s-0037-1614889.

Abstract:
Calibration with lyophilized calibrant plasmas certified in terms of PT with International Reference Preparations for thromboplastin has been proposed to minimize the effect of coagulometers on the INR. The aim of this study was to test the ability of local calibration with lyophilized calibrant plasmas, combined with a modified statistical approach, to improve the interlaboratory variability of the INR measured on two test plasmas (one coumarin and one artificially depleted) by participants in the External Quality Assessment Scheme (EQAS). Sets of lyophilized calibrant and test plasmas were sent to the participants in the EQAS, who were asked to determine PT with their own reagent/instrument combination (local system). Results were returned as PT together with information on the type of local system, the stated International Sensitivity Index (ISI), and the geometric mean of PTs determined by testing fresh plasmas from 20 healthy subjects with the local system. Ninety-two participants using 9 and 11 brands of reagents and instruments returned results. The CVs of the INR determined with the stated ISI for the coumarin (mean INR = 4.39) and artificially depleted (mean INR = 4.23) test plasmas were 11.2% and 10.3% and were reduced on average by 34% and 54%, respectively, when the INR was calculated with the local ISI. In conclusion, results from this field study, involving laboratories and testing systems representative of the real situation in oral anticoagulant monitoring in our country, indicate that local calibration with artificially depleted plasmas, combined with the proposed statistical approach, is suitable to improve the interlaboratory agreement on the INR.
9

Klinkenberg, Lieke JJ, Eef GWM Lentjes, and Arjen-Kars Boer. "Clinical interpretation of prostate-specific antigen values: Type of applied cut-off value exceeds methods bias as the major source of variation." Annals of Clinical Biochemistry: International Journal of Laboratory Medicine 56, no. 2 (February 24, 2019): 259–65. http://dx.doi.org/10.1177/0004563218822665.

Abstract:
Background: Prostate-specific antigen is the biochemical gold standard for the (early) detection and monitoring of prostate cancer. Interpretation of prostate-specific antigen depends on both the method and the cut-off. The aim of this study was to examine the effect of method-specific differences and cut-off values in a national external quality assessment scheme (EQAS). Methods: The Dutch EQAS for prostate-specific antigen comprised an annual distribution of 12 control materials. The results of two distributions were combined with the corresponding cut-off value. Differences between methods were quantified by simple linear regression based on the all-laboratory trimmed mean. To assess the clinical consequence of method-specific differences and cut-off values, a clinical data-set of 1040 patients with an initial prostate-specific antigen measurement and a concomitant conclusive prostate biopsy was retrospectively collected. Sensitivity and specificity for prostate cancer were calculated for all EQAS participants individually. Results: In the Netherlands, seven different prostate-specific antigen methods are used. Interestingly, 67% of these laboratories apply age-specific cut-off values. Methods showed a maximal relative difference of 26%, which was not reflected in the cut-off values. The largest differences were caused by the type of cut-off; for example, within the Roche group the cut-off value differed by up to 217%. Clinically, a fixed prostate-specific antigen cut-off has a higher sensitivity than an age-specific cut-off (mean 89%, range 86–93%, versus 79%, range 63–95%, respectively). Conclusions: This study shows that the differences in cut-off values exceed the method-specific differences. These results emphasize the need for (inter)national harmonization/standardization programmes, including cut-off values, to allow for laboratory-independent clinical decision-making.
10

Dewi, Cinantya Nirmala, Febty Febriani, Titi Anggono, Syuhada Syuhada, Mohamad Ramdhan, Mohammad Hasib, Aditya Dwi Prasetio, et al. "ASSESSMENT OF ULTRA-LOW FREQUENCY (ULF) GEOMAGNETIC PHENOMENA ASSOCIATED WITH EARTHQUAKES IN THE WESTERN PART OF JAVA ISLAND, INDONESIA DURING 2020." Rudarsko-geološko-naftni zbornik 39, no. 1 (2024): 55–64. http://dx.doi.org/10.17794/rgn.2024.1.6.

Abstract:
Ultra-low frequency (ULF) geomagnetic analysis is a robust method for earthquake (EQ) forecasting. We conducted a simultaneous study of EQ precursors around the western part of Java Island in 2020 using wavelet transform (WT) and detrended fluctuation analysis (DFA) methods. ULF geomagnetic data (March to December 2020, 16:00–21:00 UTC or 23:00–04:00 LT) from the Lampung Selatan (LPS) geomagnetic station were used to assess the precursors. We analyzed four EQs with an epicentral distance (R) of around 100 km from the LPS station and a magnitude (M) greater than 5 Mw. We analyzed changes in the SZ/SG values and α values from the WT and DFA analyses against the threshold (µ ± 2σ) to identify anomalies related to the EQs. The results showed that SZ/SG anomalies occurred simultaneously with a decrease in α values several weeks prior to the probable source EQ when there was very low geomagnetic activity (Dst ≤ -30 nT). The Mw 5.4 (07/07/2020) EQ might be the main source that led to the appearance of the precursor, since it had the highest magnitude and KLS values compared to the others. The combined WT and DFA results showed anomalies 1.5–13 weeks before the Mw 5.4 (07/07/2020) EQ. The results suggest that WT and DFA are suitable methods for detecting EQ precursors, but more work is needed to link the precursors to specific EQs.
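The anomaly criterion is easy to restate in code. The sketch below flags days on which a daily index (here a synthetic stand-in for the SZ/SG polarization ratio) leaves the µ ± 2σ band; the series and the injected anomaly are artificial, and only the threshold rule is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(7)
sz_sg = rng.normal(1.0, 0.1, 300)          # placeholder daily SZ/SG values (synthetic)
sz_sg[200:204] += 0.5                      # inject an artificial anomaly for illustration

# mu +/- 2*sigma threshold test, as described in the abstract
mu, sigma = sz_sg.mean(), sz_sg.std()
upper, lower = mu + 2 * sigma, mu - 2 * sigma
anomalous_days = np.where((sz_sg > upper) | (sz_sg < lower))[0]
print("days exceeding the mu +/- 2*sigma threshold:", anomalous_days)
```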
11

Gecheva, Gana, Vesela Yancheva, Iliana Velcheva, Elenka Georgieva, Stela Stoyanova, Desislava Arnaudova, Violeta Stefanova, et al. "Integrated Monitoring with Moss-Bag and Mussel Transplants in Reservoirs." Water 12, no. 6 (June 24, 2020): 1800. http://dx.doi.org/10.3390/w12061800.

Abstract:
For the first time, transplants with moss-bags and mussels together were applied to study the water quality in standing water bodies. The tested species, Fontinalis antipyretica Hedw. and Sinanodonta woodiana (Lea, 1834), were collected from unpolluted sites and analyzed to obtain background levels. Then, the moss and mussels were left in cages for a period of 30 days in three reservoirs where neither is present naturally. Two of the reservoirs suffer from old industrial contamination and one is affected by untreated wastes. Twenty-four compounds were studied, among them the trace elements Al, As, Cd, Co, Cr, Cu, Fe, Hg, Mn, Ni, Pb, and Zn and the organic priority substances six polybrominated diphenyl ether (PBDE) congeners and short-chain chlorinated paraffins (SCCPs). The trace element accumulation was significant after the exposure period at all studied stations. PBDEs and SCCPs were also accumulated, up to two times more in the moss tissues. PBDEs in the mussels exceeded the environmental quality standard (EQS). The applied combined transplants, and especially the moss-bags, revealed severe contamination with heavy metals not detected in the water samples. The moss and the mussel followed different models of trace element and PBDE accumulation. The SCCP levels were alarmingly high in all plant samples. The study confirmed PBDEs and SCCPs as bioaccumulative compounds and suggested that an EQS for SCCPs in biota needs to be established.
12

Axpe, Inge, Arantzazu Rodríguez-Fernández, Eider Goñi, and Iratxe Antonio-Agirre. "Parental Socialization Styles: The Contribution of Paternal and Maternal Affect/Communication and Strictness to Family Socialization Style." International Journal of Environmental Research and Public Health 16, no. 12 (June 21, 2019): 2204. http://dx.doi.org/10.3390/ijerph16122204.

Abstract:
The aim of this study is two-fold: (a) to determine the general degree of family affect/communication and strictness by examining the combination of the two classical dimensions of mother parenting style: affect/communication and strictness, and (b) to analyze the impact of both parents’ affect and strictness on the family style, thereby exploring the specific contribution made by each parent’s style and dimension. Participants were 1190 Spanish students, 47.1% boys and 52.3% girls (M = 14.68; SD = 1.76). The Affect Scale (EA-H) and the Rules and Demandingness Scale (ENE-H) (both by Fuentes, Motrico, and Bersabé, 1999) were used. Structural equation models (SEMs) were extracted using the EQS program. The results reveal that it is not the father’s and the mother’s parenting style combined, but rather the combination of maternal and paternal affect/communication, and maternal and paternal strictness which generates one perception of family affect and another of family strictness. The results also indicated that the weight of both dimensions varies in accordance with the parent’s gender, with maternal dimensions playing a more important role in family socialization style.
13

Feng, Liang, Veronica Pazzi, Emanuele Intrieri, Teresa Gracchi, and Giovanni Gigli. "Joint detection and classification of rockfalls in a microseismic monitoring network." Geophysical Journal International 222, no. 3 (June 12, 2020): 2108–20. http://dx.doi.org/10.1093/gji/ggaa287.

Abstract:
A rockfall (RF) is a ubiquitous geohazard that is difficult to monitor or predict and poses a significant risk for people and transportation in several hilly and mountainous environments. The seismic signal generated by an RF carries abundant physical and mechanical information. Thus, signals can be used by researchers to reconstruct the event location, onset time, volume and trajectory, and to develop an efficient early warning system. Therefore, the precise automatic detection and classification of RF events are important objectives for scientists, especially in seismic monitoring arrays. An algorithm called DESTRO (DEtection and STorage of ROckfalls), aimed at combining automatic seismic event detection and classification, was implemented ad hoc within the MATLAB environment. In event detection, the STA/LTA (short-time-average through long-time-average) method is used, combined with other parameters such as the minimum duration of an RF and the minimum interval time between two continuous seismic events. Furthermore, nine significant features based on frequency, amplitude, seismic waveform, duration and multiple-station attributes are newly proposed to classify seismic events in an RF environment. In particular, a three-step classification method is proposed for the discrimination of five different source types: RFs, earthquakes (EQs), tremors, multispike events (MSs) and subordinate MS events. Each component (vertical, east–west and north–south) at each station within the monitoring network is analysed, and a three-step classification is performed. At a given time, the event series detected from each component are integrated and reclassified, component by component and station by station, into a final event-type series as an output result. With this algorithm, a case study of the seven-month-long seismic monitoring of a former quarry in Central Italy was investigated by means of four triaxial velocimeters with continuous acquisition at a sampling rate of 200 Hz. During this monitoring period, a human-induced RF simulation was performed, releasing 95 blocks (of which 90 were validated) of different sizes from the benches of the quarry. Consequently, 64.9 per cent of EQs within 100 km were confirmed in a one-month monitoring period, 88 blocks in the RF simulation were classified correctly as RF events, and 2 blocks were classified as MSs given their small energy. Finally, an ad hoc section of the algorithm was designed specifically for RF classification combined with EQ recognition. The algorithm could be applied in slope seismic monitoring to monitor the dynamic states of rock masses, as well as in slope instability forecasting and risk evaluation in EQ-prone areas.
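As a rough illustration of the trigger stage described above, the sketch below computes a moving STA/LTA ratio on a synthetic trace. The window lengths, the threshold of 3.0, and the synthetic event are placeholders rather than the DESTRO settings; only the 200 Hz sampling rate is taken from the abstract.

```python
import numpy as np

def sta_lta(signal, fs, sta_win=0.5, lta_win=10.0):
    """Return the ratio of short-term to long-term moving averages of the squared signal."""
    sta_n, lta_n = int(sta_win * fs), int(lta_win * fs)
    energy = signal.astype(float) ** 2
    kernel = lambda n: np.ones(n) / n
    sta = np.convolve(energy, kernel(sta_n), mode="same")
    lta = np.convolve(energy, kernel(lta_n), mode="same")
    return sta / np.maximum(lta, 1e-12)

fs = 200.0                                    # sampling rate used in the case study (Hz)
t = np.arange(0, 60, 1 / fs)
trace = np.random.default_rng(0).normal(0, 1, t.size)
trace[6000:6400] += 8 * np.sin(2 * np.pi * 15 * t[6000:6400])   # synthetic impulsive event

ratio = sta_lta(trace, fs)
triggered = ratio > 3.0                       # placeholder trigger threshold
print("first triggered sample:", int(np.argmax(triggered)))
```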
14

Rudall, S., and A. P. Jarvis. "Diurnal fluctuation of zinc concentration in metal polluted rivers and its potential impact on water quality and flux estimates." Water Science and Technology 65, no. 1 (January 1, 2012): 164–70. http://dx.doi.org/10.2166/wst.2011.834.

Abstract:
Diurnal fluctuations of metals have been observed in the South Tyne river catchment, UK, in both upland tributaries and major river reaches. Zinc exhibits the most pronounced cyclicity, with concentrations increasing during the night to a maximum near 05:00 before decreasing during the day. This trend is the inverse of the pH and temperature observations, which are thought to be the predominant drivers behind the cyclicity. Photosynthetic reactions of biomass and algae alter the pH within the river systems, promoting hydrous metal oxide precipitation during daylight, which consequently allows divalent cations including zinc to sorb onto them. This mechanism may be supported by direct uptake of zinc by algae and other biogeochemical reactions, which combine to create large differences in zinc concentrations; during base flow, zinc concentrations increased by 326% from the minima over 48 hours. Maximum concentrations are not being captured during routine water quality analysis, resulting in inaccurate and misleading EQS results and total flux estimations; for example, the annual total zinc flux in a small tributary increases from 17 to 76 tonnes/year when routine grab sample data are supplemented with 24-hour sampling results.
15

Kasap, Ekrem, Kun Huang, Than Shwe, and Dan Georgi. "Formation-Rate-Analysis Technique: Combined Drawdown and Buildup Analysis for Wireline Formation Test Data." SPE Reservoir Evaluation & Engineering 2, no. 03 (June 1, 1999): 271–80. http://dx.doi.org/10.2118/56841-pa.

Abstract:
The formation-rate-analysis (FRA) technique is introduced. The technique is based on the formation rate calculated by correcting the piston rate for fluid compressibility. A geometric factor is used to account for the irregular flow geometry caused by probe drawdown. The technique focuses on the flow from the formation, is applicable to both drawdown and buildup data simultaneously, does not require long buildup periods, and can be implemented with a multilinear regression, from which near-wellbore permeability, p*, and formation fluid compressibility are readily determined. The field data applications indicate that FRA is much less sensitive to data quality because it utilizes the entire data set.

Introduction. A wireline formation test (WFT) is initiated when a probe from the tool is set against the formation. A measured volume of fluid is then withdrawn from the formation through the probe. The test continues with a buildup period until pressure in the tool reaches formation pressure. WFTs provide formation fluid samples and produce high-precision vertical pressure profiles, which, in turn, can be used to identify formation fluid types and locate fluid contacts. Wireline formation testing is much faster compared with regular pressure transient testing. Total drawdown time for a formation test is just a few seconds, and buildup times vary from less than a second (for permeability of hundreds of millidarcies) to half a minute (for permeability of less than 0.1 md), depending on system volume, drawdown rate, and formation permeability. Because the WFT tested volume can be small (a few cubic centimeters), the details of reservoir heterogeneity on a fine scale are given with better spatial resolution than is possible with conventional pressure transient tests. Furthermore, WFTs may be preferable to laboratory core permeability measurements since WFTs are conducted at in-situ reservoir stress and temperature. Various conventional analysis techniques are used in the industry. Spherical-flow analysis utilizes early-time buildup data and usually gives permeability that is within an order of magnitude of the true permeability. For p* determination, cylindrical-flow analysis is preferred because it focuses on late-time buildup data. However, both the cylindrical- and spherical-flow analyses have their drawbacks. Early-time data in spherical-flow analysis result in erroneous p* estimation. Late-time data are obtained after long testing times, especially in low-permeability formations; however, long testing periods are not desirable because of potential tool "sticking" problems. Even after extended testing times, the cylindrical-flow period may not occur or may not be detectable on WFTs. When it does occur, permeability estimates derived from the cylindrical-flow period may be incorrect and their validity is difficult to judge. New concepts and analysis techniques, combined with 3-D numerical studies, have recently been reported in the literature.1–7 Three-dimensional numerical simulation studies1–6 have contributed to the diagnosis of WFT-related problems and the improved analysis of WFT data. The experimental studies7 showed that the geometric factor concept is valid for unsteady-state probe pressure tests. This study presents the FRA technique,8 which can be applied to the entire WFT, where a plot for both drawdown and buildup periods renders straight lines with identical slopes. Numerical simulation studies were used to generate data to test both the conventional and the FRA techniques. The numerical simulation data are ideally suited for such studies because the correct answer is known (i.e., the input data). The new technique and the conventional analysis techniques are also applied to the field data and the results are compared. We first review the theory of the conventional analysis techniques, then present the FRA technique for combined drawdown and buildup data. A discussion of the numerical results and the field data applications is followed by the conclusions.

Analysis Techniques. It has been industry practice to use three conventional techniques, i.e., pseudo-steady-state drawdown (PSSDD), spherical-flow and cylindrical-flow analyses, to calculate permeability and p*.

Conventional Techniques: Pseudo-Steady-State Drawdown (PSSDD). When drawdown data are analyzed, it is assumed that late in the drawdown period the pressure drop stabilizes and the system approaches a pseudo-steady state in which the formation flow rate is equal to the drawdown rate. PSSDD permeability is calculated from Darcy's equation with the stabilized (maximum) pressure drop and the flow rate resulting from the piston withdrawal:9–11

$$k_d = 1754.5\left(\frac{q\mu}{r_i\,\Delta p_{\max}}\right), \qquad (1)$$

where $k_d$ = PSSDD permeability, md. The other parameters are given in the Nomenclature.
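Eq. 1 is simple enough to evaluate directly. The sketch below merely restates it as a function; the unit set (q in cm³/s, µ in cp, r_i in cm, Δp in psi, k_d in md) and the sample numbers are assumptions for illustration, since the abstract defers the parameter definitions to the paper's nomenclature.

```python
def pssdd_permeability(q_cc_per_s, mu_cp, r_i_cm, dp_max_psi):
    """Pseudo-steady-state drawdown permeability estimate (md) per Eq. 1 above.

    Units are assumed: q in cm^3/s, mu in cp, r_i (probe radius) in cm, dp_max in psi.
    """
    return 1754.5 * q_cc_per_s * mu_cp / (r_i_cm * dp_max_psi)

if __name__ == "__main__":
    # Placeholder numbers, not field data: 1 cm^3/s drawdown, 0.8 cp fluid, 1.27 cm probe, 250 psi drop.
    k_d = pssdd_permeability(q_cc_per_s=1.0, mu_cp=0.8, r_i_cm=1.27, dp_max_psi=250.0)
    print(f"PSSDD permeability estimate: {k_d:.1f} md")
```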
16

Schäfer, G. "Comment on the Paper "Non-kinematicity of the Dilation-of-time Relation of Einstein for Time-intervals" by S. Golden." Zeitschrift für Naturforschung A 55, no. 9-10 (October 1, 2000): 845. http://dx.doi.org/10.1515/zna-2000-9-1017.

Abstract:
In a recent paper [1], S. Golden attempts an interpretation of Einstein's theory of special relativity based solely on the propagation of light-pulses, which aims at circumventing the problems scientists sometimes have with the "twin paradox", i.e. with the physical reality of the kinematical dilation of time. However, it is well known that the propagation of light does not cover the whole structure of relativistic spacetime because of the conformal invariance of the Maxwell equations. Thus the paper [1] is fundamentally incomplete in its applied physical tools. In this comment it is shown that the title of the paper rests on a misconception and that another aspect of the paper is false as well. Had S. Golden treated not only light-pulses but also decaying systems in the paper, the shortcomings would not have occurred. The author's sole tool for relating inertial systems is the exchange of light-pulses (including reflections) between two inertial systems, called the A-system and the B-system, with time coordinates t and τ, respectively. The light-pulses propagate either in the direction of the inertial systems' relative velocity or in the opposite direction. No doubt, the ratio Δt/Δτ of the time-intervals between fixed-position (origins of the systems) passage-times of the same two light-pulses in the A-system and in the B-system is given by combining Eqs. (6) and (12) of the paper in question.
17

Petrosian, Vahé. "The Evolution and Luminosity Function of Quasars." Symposium - International Astronomical Union 194 (1999): 105–12. http://dx.doi.org/10.1017/s0074180900161832.

Abstract:
I report results from the analysis of data from several quasar samples (Durham/AAT, LBQS, HBQS and EQS) on the density and the luminosity evolution of quasars. We have used new statistical methods whereby we combine these different samples with varying selection criteria and multiple truncations. With these methods the luminosity evolution can be found through an investigation of the correlation of the bivariate distribution of luminosities and redshifts. Of the two most commonly used models for luminosity evolution, $L = e^{k\tau(z)}$ and $L = (1 + z)^{k'}$, we find that the second form, with k' = 2.58 (one-σ range [2.14, 2.91]), gives a better description of the data at all luminosities. Using this form of luminosity evolution we determine a global luminosity function and the evolution of the co-moving density for the two classes of cosmological models. We find a gradual increase of the co-moving density up to z ≈ 2, at which point the density peaks and begins to decrease rapidly. This is in agreement with results from high redshift surveys and in disagreement with the pure luminosity evolution (i.e. constant co-moving density) model. We find that the local luminosity function exhibits the usual double power-law behavior. The luminosity density is found to increase rapidly at low redshift and to reach a peak at around z ≈ 2. This result is compared with those from high redshift surveys and with the evolution of the star formation rate.
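As a quick check of what the preferred power-law form implies (using only the k' = 2.58 value quoted above; this step is an editorial illustration, not the author's), de-evolving an observed luminosity and evaluating the evolution factor at the density peak gives

$$L_0 = \frac{L(z)}{(1+z)^{k'}}, \qquad (1+z)^{k'}\Big|_{z=2,\;k'=2.58} = 3^{2.58} \approx 17,$$

i.e. luminosities at the z ≈ 2 peak are enhanced by roughly a factor of 17 relative to their local, de-evolved values under this model.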
18

Santos-Vijande, María Leticia, José Ángel López-Sánchez, and Celina González-Mieres. "Organizational learning, innovation, and performance in KIBS." Journal of Management & Organization 18, no. 6 (November 2012): 870–904. http://dx.doi.org/10.1017/s183336720000050x.

Abstract:
There is widespread agreement that organizational learning (OL) and firms' innovative culture (innovativeness) positively influence organizational innovation (OI), which ultimately fosters long-term competitiveness. However, there is more limited empirical evidence on the role of OL as a forerunner of innovativeness, or on the combined effects of OL and innovativeness on OI and how performance is ultimately improved. In this research, OI is evaluated as a firm's actual ability to regularly adopt and implement more technical and administrative innovations, with a greater degree of incorporated novelty, relative to its main competitors. The aim is to approach innovation from a comprehensive viewpoint and to assess the attainment of superior competitive advantage in the innovation field. Effects on performance are evaluated at both the organizational level and in the commercialization of new services by means of two different conceptual models. These models are tested on a sample of 246 knowledge-intensive business services (KIBS) firms located in Spain. We used polychoric correlations (Lee, Poon, & Bentler, 1995), together with a robust methodological approach, to analyze categorical variables in structural equation systems in EQS. The empirical results show that OL is an important antecedent of innovativeness, and that the latter plays a key role in the adoption of more technical and administrative innovations with a greater degree of incorporated novelty. Organizational learning exerts a direct effect on administrative innovation efforts although, contrary to previous research, the mediating role of innovativeness is required for the former to affect technical innovation. The research also supports the influence of OI on the attainment of competitive advantages at the business level and in the performance of new services. The greater ability of KIBS to innovate thus constitutes an invaluable resource to foster customer performance and profitability at the business level and in the commercialization of new service offerings.
19

Santos-Vijande, María Leticia, José Ángel López-Sánchez, and Celina González-Mieres. "Organizational learning, innovation, and performance in KIBS." Journal of Management & Organization 18, no. 6 (November 2012): 870–904. http://dx.doi.org/10.5172/jmo.2012.18.6.870.

Abstract:
There is widespread agreement that organizational learning (OL) and firms' innovative culture (innovativeness) positively influence organizational innovation (OI), which ultimately fosters long-term competitiveness. However, there is more limited empirical evidence on the role of OL as a forerunner of innovativeness, or on the combined effects of OL and innovativeness on OI and how performance is ultimately improved. In this research, OI is evaluated as a firm's actual ability to regularly adopt and implement more technical and administrative innovations, with a greater degree of incorporated novelty, relative to its main competitors. The aim is to approach innovation from a comprehensive viewpoint and to assess the attainment of superior competitive advantage in the innovation field. Effects on performance are evaluated at both the organizational level and in the commercialization of new services by means of two different conceptual models. These models are tested on a sample of 246 knowledge-intensive business services (KIBS) firms located in Spain. We used polychoric correlations (Lee, Poon, & Bentler, 1995), together with a robust methodological approach, to analyze categorical variables in structural equation systems in EQS. The empirical results show that OL is an important antecedent of innovativeness, and that the latter plays a key role in the adoption of more technical and administrative innovations with a greater degree of incorporated novelty. Organizational learning exerts a direct effect on administrative innovation efforts although, contrary to previous research, the mediating role of innovativeness is required for the former to affect technical innovation. The research also supports the influence of OI on the attainment of competitive advantages at the business level and in the performance of new services. The greater ability of KIBS to innovate thus constitutes an invaluable resource to foster customer performance and profitability at the business level and in the commercialization of new service offerings.
20

Elskens, Marc, Kersten Van Langenhove, Vincent Carbonnel, Natacha Brion, and Steven J. Eisenreich. "Dynamics of estrogenic activity in an urban river receiving wastewater effluents: effect-based measurements with CALUX." Water Emerging Contaminants & Nanoplastics 2, no. 2 (2023): 9. http://dx.doi.org/10.20517/wecn.2023.15.

Abstract:
Estrogenic substances (ES) in the urban river Zenne (BE), dominated by wastewater effluents, were assessed over the course of one year. To measure the bioequivalent (BEQ) 17β-estradiol (E2) concentrations of ES, the biological effect-based methodology, the Chemical-Activated LUciferase gene eXpression (CALUX) bioassay, was used. Daily water discharges were collected from January 2015 to February 2016 at or near the sampling stations in the Brussels Capital Region. An annual water budget shows that approximately 50% of the Zenne River flow downstream is from wastewater effluent. The estrogenic activity and yearly average ES load in the influents and effluents of the wastewater treatment plants (WWTPs) located in the North and South, in combined sewer overflows (CSOs) and in the Zenne River were assessed upstream and downstream of the two WWTPs of Brussels. Both WWTPs, with activated sludge treatment, remove more than 90% of the ES. The influent concentrations of ES at the South and North WWTPs ranged from 30-359 and 18-55 ng E2 eq./L, respectively. The effluent concentrations of ES ranged from 1.0-2.1 and 1.1-6.6 ng E2 eq./L at WWTP-S and -N, respectively. The yearly average ES loads were 0.05-0.14 and 0.39-1.5 g E2 eq./d for WWTP-S and -N, respectively. The temporal variation of E2-eq concentrations at the river stations Z3 and Z5 (upstream) ranged from 1 to 2 ng E2 eq./L, while the ES activity at sites Z9 and Z11 (downstream) varied from 2-17 and from 1-8 ng E2 eq./L, respectively. The relative ES loads to the Zenne River are as follows: WWTPs (31%), CSOs (27%), upstream Zenne (15%), a missing source (14%), and local tributaries (13%). ES in the Zenne River behave in a pseudo-persistent manner because of continuous input from the WWTPs and slow degradation in the 18 km river stretch. The BEQ concentration of E2 exceeds the EU environmental quality standard (EQS) of 0.4 ng E2/L throughout the Zenne River.
21

Pa´dua, K. G. O. "Nonisothermal Gravitational Equilibrium Model." SPE Reservoir Evaluation & Engineering 2, no. 02 (April 1, 1999): 211–17. http://dx.doi.org/10.2118/55972-pa.

Abstract:
This work presents a new computational model for non-isothermal gravitational compositional equilibrium. The works of Bedrikovetsky [Mathematical Theory of Oil and Gas Recovery, Kluwer Academic Publishers, London (1993)] (gravity and temperature) and of Whitson and Belery ("Compositional Gradients in Petroleum Reservoirs," paper SPE 28000, presented at the 1994 University of Tulsa Centennial Petroleum Engineering Symposium, Tulsa, 29–31 August) (algorithm) are the basis of the mathematical formulation. Published data and previous simplified models validate the computational procedure. A large deep-water field in Campos Basin, Brazil, exemplifies the practical application of the model. The field has an unusual temperature gradient opposite to the Earth's thermal gradient. The results indicate the increase of oil segregation with temperature decrease. The application to field data suggests the reservoir could be partially connected. Fluid composition and property variation are extrapolated to different depths with their respective temperatures. The work is an example of the application of thermodynamic data to the evaluation of reservoir connectivity and fluid property distribution.

Problem. Compositional variations along the hydrocarbon column are observed in many reservoirs around the world.1–4 They may affect reservoir/fluid characteristics considerably, leading to different field development strategies.5 These variations are caused by many factors, such as gravity, temperature gradient, rock heterogeneity, and hydrocarbon genesis and accumulation processes.6 In cases where the thermodynamically associated factors (gravity and temperature) are dominant (mixing process in the secondary migration), existing gravitational compositional equilibrium (GCE) models7,8 provide an explanation of most observed variations. However, in some cases8,9 the thermal effect could have the same order of magnitude as the gravity effect. The formulation for calculating compositional variation under the force of gravity for an isothermal system was first given by Gibbs10:

$$\mu_{ci}(p, Z, T) = \mu_i(p_{\mathrm{ref}}, Z_{\mathrm{ref}}, T_{\mathrm{ref}}) - m_i\,g\,(h - h_{\mathrm{ref}}), \qquad (1)$$

$$\mu_{ci} = \delta\left[\,nRT\,\ln(f_i)\,\right]/\delta x, \qquad (2)$$

$$f_i = f(\mathrm{EOS}), \qquad (3)$$

where p = pressure, T = temperature, Z = fluid composition, m = mass, μ_c = chemical potential, h = depth, ref = reference, EOS = equation of state, i = component index, R = real gas constant, n = number of moles, f = fugacity, ln = natural logarithm, x = component concentration, and g = gravitational acceleration. In 1930, Muskat11 provided an exact solution to Eq. 1, assuming a simplified equation of state and ideal mixing. Because of the oversimplified assumptions, the results suggest that gravity has a negligible effect on the compositional variation in reservoir systems. In 1938, Sage and Lacey12 used a more realistic equation of state, Eq. 3, to evaluate Eq. 2. At that time, the results showed significant composition variations with depth and greater ones for systems close to critical conditions. Schulte13 solved Eq. 1 using a cubic equation of state (Eq. 3) in 1980. The results showed significant compositional variations. They also suggested a significant effect of the interaction coefficients and the aromatic content of the oil, as well as a negligible effect of the EOS type (Peng-Robinson and Soave-Redlich-Kwong) on the final results.

A simplified formulation that included gravity and temperature separately was presented by Holt et al.9 in 1983. Example calculations, limited to binary systems, suggest that thermal effects can be of the same magnitude as gravity effects. In 1988, Hirschberg5 discussed the influence of asphaltenes on compositional grading using a simplified two-component model (asphaltenes and non-asphaltenes). He concluded that, for oils with gravity <35°API, the compositional variations are mainly caused by asphalt segregation and the most important consequences are the large variations in oil viscosity and the possible formation of tar mats. Montel and Gouel7 presented an algorithm in 1985 for solving the GCE problem using an incremental hydrostatic term instead of solving for pressure directly. Field case applications of GCE models were presented by Riemens et al.2 in 1985 and by Creek et al.1 in 1988. They reported some difficulties in matching observed and calculated data but, in the end, it was shown that most compositional variations could be explained by the effect of gravity. Wheaton14 and Lee6 presented GCE models that included capillary forces in 1988 and 1989, respectively. Lee concluded that the effect of capillarity can become appreciable in the neighborhood of 1 μm pore radius. In 1990, an attempt to combine the effects of gravity and temperature for a system of zero net mass flux was presented by Belery and Silva.15 The multicomponent model was an extension of earlier work by Dougherty and Drickamer16 that was originally developed in 1955 for binary liquid systems. The comparison of calculated and observed data from the Ekofisk field in the North Sea is, however, not quantitatively accurate (with or without the thermal effect). An extensive discussion and the formal mathematical treatment of compositional grading using irreversible thermodynamics, including gravitational and thermal fields, was presented by Bedrikovetsky17 in 1993. Due to the lack of necessary information on the values of thermal diffusion coefficients, which in general are obtained experimentally only for certain mixtures in narrow ranges of pressure and temperature, simplified models were proposed. In 1994, Hamoodi and Abed3 presented a field case of a giant Middle East reservoir with areal and vertical variations in its composition.
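For orientation, combining Eqs. 1 and 2 under the isothermal assumption leads to the standard fugacity form of the GCE condition that such algorithms solve at each depth (a textbook restatement, not quoted from the paper):

$$\ln f_i(p, Z)\big|_{h} = \ln f_i(p_{\mathrm{ref}}, Z_{\mathrm{ref}})\big|_{h_{\mathrm{ref}}} - \frac{M_i\, g\,(h - h_{\mathrm{ref}})}{RT},$$

with $M_i$ the molar mass of component i (the per-mole counterpart of $m_i$ in Eq. 1) and the fugacities evaluated from the equation of state of Eq. 3.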
22

Zhang, Yuanyuan, Liangxiong Wei, Min Guo, Wei Wang, Yufang Sun, Junfeng Wang, and Liangyin Chen. "VN-NDP: A Neighbor Discovery Protocol Based on Virtual Nodes in Mobile WSNs." Sensors 19, no. 21 (October 31, 2019): 4739. http://dx.doi.org/10.3390/s19214739.

Abstract:
As an indispensable part of Internet of Things (IoT), wireless sensor networks (WSNs) are more and more widely used with the rapid development of IoT. The neighbor discovery protocols are the premise of communication between nodes and networking in energy-limited self-organizing wireless networks, and play an important role in WSNs. Because the node energy is limited, neighbor discovery must operate in an energy-efficient manner, that is, under the condition of a given energy budget, the neighbor discovery performance should be as good as possible, such that the discovery latency would be as small as possible and the discovered neighbor percentage as large as possible. The indirect neighbor discovery mainly uses the information of the neighbors that have been found by a pairwise discovery method to more efficiently make a re-planning of the discovery wake-up schedules of the original pairwise neighbor discovery, thereby improving the discovery energy efficiency. The current indirect neighbor discovery methods are mainly divided into two categories: one involves removing the inefficient active slots in the original discovery wake-up schedules, and the other involves adding some efficient active slots. However, the two categories of methods have their own limitations. The former does not consider that this removal operation destroys the integrity of the original discovery wake-up schedules and hence the possibility of discovering new neighbors is reduced, which adversely affects the discovered neighbor percentage. For the latter category, there are still inefficient active slots that were not removed in the re-planned wake-up schedules. The motivation of this paper is to combine the advantages of these two types of indirect neighbor discovery methods, that is, to combine the addition of efficient active slots and the removal of inefficient active slots. To achieve this goal, this paper proposes, for the first time, the concept of virtual nodes in neighbor discovery to maximize the integrity of the original wake-up schedules and achieve the goals of adding efficient active slots and removing inefficient active slots. Specifically, a virtual node is a collaborative group that is formed by nodes within a small range. The nodes in a collaborative group share responsibility for the activating task of one member node, and the combination of these nodes' wake-up schedules forms the full wake-up schedule of a node that only uses a pairwise method. In addition, this paper proposes a set of efficient group management mechanisms, and the key steps affecting energy efficiency are analyzed theoretically to obtain the energy-optimal parameters. The extended simulation experiments in multiple scenarios show that, compared with other methods, our neighbor discovery protocol based on virtual nodes (VN-NDP) has a significant improvement in average discovery delay and discovered neighbor percentage performance at a given energy budget. Compared with the typical indirect neighbor discovery algorithm EQS, a neighbor discovery with extended quorum system, our proposed VN-NDP method reduces the average discovery delay by up to 10.03% and increases the discovered neighbor percentage by up to 18.35%.
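The virtual-node idea can be illustrated with a toy schedule split (an editorial simplification, not the paper's VN-NDP mechanics): the active slots of a pairwise, quorum-style wake-up schedule are divided round-robin among the members of a collaborative group, so each member wakes for only a fraction of the slots while the group as a whole still covers the original schedule.

```python
from itertools import cycle

def split_schedule(pairwise_active_slots, group_members):
    """Assign each active slot of a pairwise wake-up schedule to one group member in turn."""
    assignment = {m: [] for m in group_members}
    members = cycle(group_members)
    for slot in sorted(pairwise_active_slots):
        assignment[next(members)].append(slot)
    return assignment

# Example: a quorum-style pairwise schedule over a 49-slot cycle (one row plus one column of a 7x7 grid).
row, col = 2, 3
pairwise = sorted({7 * row + c for c in range(7)} | {7 * r + col for r in range(7)})
shared = split_schedule(pairwise, group_members=["node_A", "node_B", "node_C"])
for node, slots in shared.items():
    print(node, slots)

# Sanity check: the group as a whole still covers every slot of the original pairwise schedule.
assert sorted(s for slots in shared.values() for s in slots) == pairwise
```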
23

Perkins, T. K., and J. A. Gonzalez. "The Effect of Thermoelastic Stresses on Injection Well Fracturing." Society of Petroleum Engineers Journal 25, no. 01 (February 1, 1985): 78–88. http://dx.doi.org/10.2118/11332-pa.

Abstract:
When a cool fluid such as water is injected into a hot reservoir, a growing region of cooled rock is established around the injection well. The rock matrix within the cooled region contracts, and a thermoelastic stress field is induced around the well. For typical waterflooding of a moderately deep reservoir, horizontal earth stresses may be reduced by several hundred psi. If the injection pressure is too high, or if suspended solids in the water plug the formation face at the perforations, the formation will be fractured hydraulically. As the fracture grows, the flow system evolves from an essentially circular geometry in plan view to one characterized more nearly as elliptical. This paper considers thermoelastic stresses that would result from cooled regions of fixed thickness and of elliptical cross section. The stresses for an infinitely thick reservoir have been deduced from information available in the public literature. A numerical method has been developed to calculate thermoelastic stresses induced within elliptically shaped regions of finite thickness. Results of these two approaches were combined, and empirical equations were developed to give an approximate but convenient, explicit method for estimating induced stresses. An example problem is given that shows how this theory can be applied to calculate the fracture lengths, bottomhole pressures (BHPs), and elliptical shapes of the flood front as the injection process progresses.

Introduction. When fluids are injected into a well, such as during waterflooding or other secondary or tertiary recovery processes, the temperatures of the injected fluids are typically cooler than the in-situ reservoir temperatures. A region of cooled rock forms around each injection well, and this region grows as additional fluid is injected. Formation rock within the cooled region contracts, and this leads to a decrease in horizontal earth stress near the injection well. In Ref. 1, the magnitude of the reduction in horizontal earth stress was given for the case of a radially symmetrical cooled region. Another factor, which may occur simultaneously, is the plugging of formation rock by injected solids. There is extensive literature indicating that waters normally available for injection contain suspended solids. Laboratory tests demonstrate that these waters, when injected into formation rocks, can plug the face of the rock or severely limit injectivity. In field operations, injection often simply continues at a BHP that is high enough to initiate and extend hydraulic fractures. The injected fluid then can leak off readily through the large fracture face area. Because of the lowering of horizontal earth stresses that results from cold fluid injection, hydraulic fracturing pressures can be much lower than would be expected for an ordinary low-leakoff hydraulic fracturing treatment. For this reason, the well operator may not be aware that injected fluid is being distributed through an extensive hydraulic fracture. If injection conditions are such that a hydraulic fracture is created, then the flow system will evolve from an essentially circular geometry in plan view to one characterized more nearly as elliptical. In this paper, thermoelastic stresses for cooled regions of fixed thickness and of elliptical cross section are determined, and a theory of hydraulic fracturing of injection wells is developed. Conditions under which secondary fractures (perpendicular to the primary, main fracture) will open are also discussed. Finally, an example problem is given to illustrate how this theory can be applied to calculate fracture lengths, BHPs, and elliptical shapes of the flood front as the injection process progresses.

Thermoelastic Stresses in Regions of Elliptical Cross Section. If fluid of constant viscosity is injected into a line crack (representing a two-wing, vertical hydraulic fracture), the flood front will progress outward, so its outer boundary at any time can be described approximately as an ellipse that is confocal with the line crack. If the injected fluid is at a temperature different from the formation temperature, a region of changed rock temperature with fairly sharply defined boundaries will progress outward from the injection well but lag behind the flood front. The outer boundary of the region of changed temperature will also be elliptical in plan view and confocal with the line crack (see Fig. 1). Stresses within the region of altered temperature, as well as stresses in the surrounding rock, which remains at its initial temperature, will be changed because of the expansion or contraction of the rock within the region of altered temperature. The thermoelastic stresses within an infinitely tall cylinder of elliptical cross section can be determined from information available in the literature.10 The interior thermoelastic stresses perpendicular and parallel to the major axis of the ellipse are given by Eqs. 1 and 2, respectively.
APA, Harvard, Vancouver, ISO, and other styles
24

Ma, Yulei, Miho Kageyama, Hisaaki Gyoten, and Motoaki Kawase. "Dimensionless Moduli Governing Under-Rib Transport Phenomenon in Polymer Electrolyte Fuel Cell." ECS Meeting Abstracts MA2023-02, no. 38 (December 22, 2023): 1867. http://dx.doi.org/10.1149/ma2023-02381867mtgabs.

Full text
Abstract:
Introduction The performance of a polymer electrolyte fuel cell (PEFC) can be considerably improved by promoting under-rib convection, or cross flow between adjacent gas channels, e.g., by applying serpentine and staggered partially narrowed flow fields [1]. Although a lot of effort has been devoted to analyzing the under-rib transport phenomenon, an analytical model that quantitatively demonstrates the dominant factors of the under-rib processes has not been established. In this study, a theoretical analysis of the relations between the oxygen reduction reaction (ORR) and mass transfer capacity is carried out. Dimensionless moduli that combine physical properties, flow-field dimensions, and operating conditions are derived, providing a theoretical basis for optimizing flow-field and operating-condition design for PEFCs. Theoretical Analysis Water is assumed to exist only in the vapor phase in a steady-state isothermal cell. The gas diffusion layer (GDL) is treated as a uniform porous medium in which Darcy's law is applicable. Uniform velocity and oxygen mass-fraction profiles in the through-plane direction inside the GDL are assumed. The ORR was shown experimentally to be a first-order reaction in oxygen partial pressure in our previous study [2] and is regarded as a surface reaction, so the oxygen consumption rate can be expressed as Eq. (1). The ORR rate constant, which is a function of the cathode overpotential or electromotive force, is assumed to be fixed across the rib. The theoretical model was derived from a one-dimensional mass balance of oxygen, which shows that the under-rib process is governed by three dimensionless moduli: the Péclet number Pe, the Thiele number φ, and the ratio of oxygen mass fractions θ at the two sides of the under-rib GDL, defined by Eqs. (2)–(4). The Péclet number represents the ratio of the oxygen convection rate to the oxygen diffusion rate, which is determined by the pressure difference between adjacent channels. The Thiele number represents the ratio of oxygen diffusion resistance to reaction resistance. The average oxygen mass fraction can thus be calculated from Eq. (5). Results and Discussion The theoretical model was compared with a computational fluid dynamics (CFD) simulation in the computational domain shown in Fig. 1, where the GDL was placed between two parallel gas channels, instead of being laid under the gas channel, and the active area was located only under the rib. In the CFD simulation, the velocity and pressure distributions were calculated by solving the Navier-Stokes equations, and the dependence of the component distributions on density, diffusivity, and viscosity was also considered. Fig. 2 shows the under-rib average current density at different geometries and operating conditions. The dimensionless moduli calculated from the inlet conditions offer an accurate current density prediction, which demonstrates that these moduli govern the under-rib transport phenomenon. The typical operating condition gives φ = 1.5 and Pe = 4, where the model predicts the average current density with high confidence. The model gives a lower oxygen mass fraction than the CFD results, especially in the case of θ = 1, which occurs in the staggered partially narrowed flow field. The reason can be attributed to the interface between the gas channel and the GDL, where the low velocity cannot eliminate the mass transport resistance, so the model overestimates the oxygen mass fraction at the rib boundaries, as shown in Fig. 3.
The effects of the dimensionless moduli on the current density are shown in Fig. 4. In the case of θ = 1, when Pe is lower than 5, under-rib convection does not appreciably boost the under-rib oxygen concentration, as also reported in our previous study [1]. On the other hand, in the case of θ < 1 and Pe > 0, a small increase in Pe markedly improves under-rib mass transfer, which explains the good performance of the serpentine flow field; however, the benefit of under-rib convection diminishes when Pe is high. Additionally, since oxygen cannot be supplied sufficiently from the gas channel, a higher φ gives a lower under-rib oxygen concentration. Conclusions A theoretical model describing the under-rib transport phenomenon was established and verified by CFD calculations; the phenomenon was found to be governed by three dimensionless moduli. The model demonstrates that the Péclet number must be large enough in partially narrowed flow fields to maximize the benefit of under-rib convection. Acknowledgment This work was supported by the FC-Platform Program: Development of design-for-purpose numerical simulators for attaining long life and high performance project (FY 2020–FY 2023) conducted by the New Energy and Industrial Technology Development Organization (NEDO), Japan. References [1] Y. Ma et al., ECS Trans., 109 (9), 171–197 (2022). [2] M. Kawase et al., ECS Trans., 75(14), 147–156 (2016).
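The abstract does not reproduce the paper's Eqs. (2)–(4), so the sketch below uses common textbook forms of the two named groups: a Péclet number built from the Darcy velocity driven by the channel-to-channel pressure difference, and a Thiele modulus for an effective first-order rate constant. All definitions and parameter values are assumptions chosen only to land near the "typical" Pe ≈ 4 and φ ≈ 1.5 quoted above, not the authors' exact formulation.

    import math

    def peclet(permeability_m2, viscosity_pa_s, dp_pa, rib_width_m, d_eff_m2_s):
        # Pe = u*w/D_eff, with the under-rib Darcy velocity u = (k/mu)*(dp/w).
        u = permeability_m2 / viscosity_pa_s * dp_pa / rib_width_m
        return u * rib_width_m / d_eff_m2_s

    def thiele(k_rxn_per_s, rib_width_m, d_eff_m2_s):
        # phi = sqrt(k*w^2/D_eff) for an effective first-order ORR rate constant.
        return math.sqrt(k_rxn_per_s * rib_width_m**2 / d_eff_m2_s)

    # Hypothetical values: GDL permeability 1e-12 m^2, gas viscosity 2e-5 Pa s,
    # 800 Pa pressure difference between adjacent channels, 1 mm rib width,
    # effective O2 diffusivity 1e-5 m^2/s, effective rate constant 22.5 1/s.
    print("Pe  =", round(peclet(1e-12, 2e-5, 800.0, 1e-3, 1e-5), 2))
    print("phi =", round(thiele(22.5, 1e-3, 1e-5), 2))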
APA, Harvard, Vancouver, ISO, and other styles
25

Raghavan, R., Wei Chun Chu, and J. R. Jones. "Practical Considerations in the Analysis of Gas-Condensate Well Tests." SPE Reservoir Evaluation & Engineering 2, no. 03 (June 1, 1999): 288–95. http://dx.doi.org/10.2118/56837-pa.

Full text
Abstract:
Summary Several pressure buildup tests are analyzed with a view to evaluating the potential of the ideas given in the literature. A broad range of tests is examined to demonstrate the characteristics of responses in wells producing below the dew point. Methods to obtain quantitative information that is consistent for different tests are outlined. The specific contributions of this article are as follows. First, we examine field data; second, we look at multiple rates; third, we examine unfractured and fractured wells; fourth, we look at wells that have been produced for a short time and those produced for a long time; fifth, we consider both depletion-type and cycling scenarios; and sixth, we tie pressure data to relative permeability and PVT data. Many of these issues are addressed for the first time. Introduction Because of the extraordinary success of the diffusivity equation in enabling us to analyze pressure measurements and the conveniences derived therefrom, the analysis of pressure responses subject to the influences of multiphase flow is, at best, given only perfunctory treatment in the literature. Single-phase flow is the paradigm in this area of reservoir engineering. The reluctance to shift from this paradigm may be partially attributed to the perception that relative-permeability measurements are not reliable enough for us to analyze the rapid changes in pressure that occur over a very short period of time. The other principal reason is that a simple method needs to be devised to relate the relative permeability to pressure, although studies have suggested procedures to address this issue.1,2 In this article we provide information for those interested in using multiphase-flow concepts for analyzing pressure-buildup tests in wells producing gas-condensate reservoirs. This class of tests was chosen for a number of reasons besides the fact that the gas-condensate system provides an opportunity to combine both single-phase and two-phase flow concepts. Since we consider multiphase flow under multiple-rate conditions, there are very few theoretical ideas to guide us. The simulations of Jones et al.2,3 provide us with a starting point. These works merely examine a single buildup following a single drawdown with the well flowing at a constant rate or a constant pressure. Since no theoretical evaluations of multirate tests are available, we have conducted a number of simulations using a compositional model to ensure that the explanations we provide are plausible. We do not concentrate on the synthetic situations, however, because the same information may be conveyed by the field-case illustrations. In the following, we examine five tests to demonstrate important features of buildup responses in gas-condensate reservoirs. Four of these tests are in "depletion" systems and the fifth one discusses buildup tests in a pressure-maintenance project. Background The depletion tests we consider presume that the results of a constant-composition-expansion (CCE) test on a representative sample are available. An equation of state, tuned to this sample, provides information on molar density and viscosity. In addition, we assume that appropriate relative-permeability measurements are available.
Using this information, we proceed to analyze buildup tests using the concepts suggested by Jones, Vo, and Raghavan.3 The buildup tests for the pressure-maintenance system are evaluated using the single-phase analog because information on the in-situ composition (pressure-maintenance project) is unavailable to us. These tests are analyzed by the composite-reservoir formulation.4 Figs. 1 and 2 present the pertinent CCE and relative-permeability information used in this work. We consider a wide range of mixtures with the maximum liquid dropout in the range of 0.07 to 0.35. Mixtures 1, 2, and 3 are for depletion experiments, and mix 4 applies to the test for the well in the pressure-maintenance project. Justification for the use of relative-permeability curves is based on the fact that these curves are also used in matching performance and making production forecasts. As expected, the relative permeability to oil is negligibly small until the liquid saturation becomes quite large. Table 1 presents properties that are needed to analyze the buildup tests. Our primary focus in all of the following is to obtain a consistent interpretation of multiple buildup tests after the wellbore pressure has fallen below the dew-point pressure. Theoretical Considerations We use single-phase and two-phase analogs to analyze pressure measurements. Our focus will be the interpretation of buildup measurements. The single-phase analog given by $$m(p)=\int_{p_{wf,s}}^{p_{ws}}\frac{\rho_{g}}{\mu_{g}}\,\mathrm{d}p \qquad (1)$$ is essentially identical to the analog commonly used for dry-gas systems. Here, ρ is the molar density, μ is the viscosity, pwf,s is the pressure at the time of shut-in, pws is the shut-in pressure, and the subscript g refers to the gas phase. This analog takes advantage of the unique character of the condensate system, namely that, under normal circumstances, the condensate is immobile over substantial portions of the reservoir. Thus, if the variation in the relative permeability for the gas phase is negligibly small over the region where liquid is immobile, then this analog should be useful whenever this region of the reservoir begins to influence the well response. (In all of the following, we assume that water is immobile.)
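For readers who want to see Eq. 1 in action, the sketch below evaluates the single-phase pseudo-pressure integral numerically with the trapezoidal rule. The placeholder property correlations for ρg(p) and μg(p) are assumptions for illustration only; in practice they would come from the equation of state tuned to the CCE sample, as the abstract describes.

    import numpy as np

    def pseudo_pressure(p_wfs, p_ws, rho_g, mu_g, n=200):
        # Trapezoidal evaluation of m(p) = integral of rho_g/mu_g dp (Eq. 1).
        p = np.linspace(p_wfs, p_ws, n)
        return np.trapz(rho_g(p) / mu_g(p), p)

    # Crude placeholder correlations (illustrative only): molar density from
    # p/(Z*R*T) with constant Z = 0.85 and T = 660 R, viscosity linear in p.
    rho_g = lambda p: p / (0.85 * 10.73 * 660.0)   # lbmol/ft^3
    mu_g = lambda p: 0.02 + 5.0e-7 * p             # cp

    dm = pseudo_pressure(2500.0, 4500.0, rho_g, mu_g)
    print(f"pseudo-pressure change over the buildup: {dm:,.0f}")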
APA, Harvard, Vancouver, ISO, and other styles
26

Zhang, Yongxia. "Diagnostic value of echocardiography combined with serum C-reactive protein level in chronic heart failure." Journal of Cardiothoracic Surgery 18, no. 1 (March 25, 2023). http://dx.doi.org/10.1186/s13019-023-02176-7.

Full text
Abstract:
Abstract Background Chronic heart failure (CHF) is a common clinical heart disease. This study aims to investigate the clinical diagnostic value of echocardiography (Echo) and serum C-reactive protein (CRP) levels in patients with CHF. Methods A total of 75 patients with CHF (42 males, 33 females, age 62.72 ± 1.06 years) were enrolled as study subjects, with 70 non-CHF subjects (38 males, 32 females, age 62.44 ± 1.28 years) as controls. The left ventricular ejection fraction (LVEF), left ventricular fractional shortening (FS), and early-to-late diastolic filling ratio (E/A) were determined by Echo, and serum CRP was measured by ELISA. In addition, the Pearson method was used to analyze the correlation between the echocardiographic quantitative parameters (EQPs) (LVEF, FS, and E/A) and serum CRP levels. A receiver operating characteristic (ROC) curve was used to evaluate the diagnostic efficacy of the EQPs and serum CRP levels for CHF. Independent risk factors for CHF were identified by logistic regression analysis. Results The serum CRP level of CHF patients was elevated, the LVEF and FS values were decreased, and the E/A values were increased. The ROC curves revealed that the EQPs (LVEF, FS, and E/A) combined with serum CRP had high diagnostic value for CHF patients. Logistic regression analysis showed that the EQPs (LVEF, FS, and E/A) and serum CRP levels were independent risk factors for CHF patients. Conclusion Echo combined with serum CRP level has high clinical diagnostic value for CHF patients.
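To make the analysis pipeline concrete, here is a minimal sketch of the kind of workflow the abstract describes: combining the echocardiographic parameters with CRP in a logistic model and assessing discrimination with ROC AUC. The data are simulated from made-up distributions, not the study's measurements, and all names are placeholders.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 145
    y = np.r_[np.ones(75), np.zeros(70)]                    # 75 CHF, 70 controls
    # Simulated parameter distributions (hypothetical means/SDs).
    lvef = np.where(y == 1, rng.normal(38, 6, n), rng.normal(60, 5, n))
    fs   = np.where(y == 1, rng.normal(20, 4, n), rng.normal(34, 4, n))
    ea   = np.where(y == 1, rng.normal(1.6, 0.4, n), rng.normal(1.1, 0.2, n))
    crp  = np.where(y == 1, rng.normal(12, 4, n), rng.normal(4, 2, n))
    X = np.c_[lvef, fs, ea, crp]

    model = LogisticRegression(max_iter=1000).fit(X, y)     # EQPs + CRP combined
    print("AUC combined  :", round(roc_auc_score(y, model.predict_proba(X)[:, 1]), 2))
    print("AUC CRP alone :", round(roc_auc_score(y, crp), 2))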
APA, Harvard, Vancouver, ISO, and other styles
27

Du, Huijun, Kang An, Rong Wang, Zhipeng Yin, Feng Peng, Larry Lüer, Christoph J. Brabec, Lei Ying, and Ning Li. "Achieving High External Quantum Efficiency for ITIC‐Based Organic Solar Cells with Negligible Homo Energy Offsets." Advanced Energy Materials, December 26, 2023. http://dx.doi.org/10.1002/aenm.202301965.

Full text
Abstract:
Abstract Minimizing energy loss in organic solar cells (OSCs) is critical for attaining high photovoltaic performance. Among the parameters that correlate with photovoltaic performance, the energy offsets between donor–acceptor pairs play a vital role in the photoelectric conversion processes. Of the large number of non‐fullerene acceptors (NFAs) reported so far, only Y6 and its derivatives can achieve external quantum efficiencies (EQEs) over 80% with negligible energy offsets when combined with polymeric donors. Thus, understanding the relationship between energy offsets and energy losses in representative NFAs is the key to further enhancing the efficiency of OSCs. In this study, a series of wide‐bandgap polymer donors based on pyrrolo[3,4‐f]benzotriazole‐5,7(6H)‐dione (TzBI) and benzo[1,2‐c:4,5‐c′]dithiophene‐4,8‐dione building blocks are combined with representative NFAs, including ITIC and Y6, to gain deep insights into their photovoltaic performance and the related energy losses. Outstanding EQEs (≈70%) and suppressed non‐radiative recombination are achieved at negligible energy offsets. Moreover, it is noted that a prolonged exciton lifetime of the acceptor is not essential to obtain high EQEs in OSCs with negligible energy offsets. Eventually, ITIC derivatives with high electroluminescence efficiencies and near‐infrared absorption have the potential to be assembled into high‐efficiency OSCs.
APA, Harvard, Vancouver, ISO, and other styles
28

Lu, Xueying, Qingyang Wang, Xinliang Cai, Yupei Qu, Zhiqiang Li, Chenglong Li, and Yue Wang. "Exciplex Hosts for Constructing Green Multiple Resonance Delayed Fluorescence OLEDs with High Color Purity And Low Efficiency Roll‐Offs." Advanced Functional Materials, January 20, 2024. http://dx.doi.org/10.1002/adfm.202313897.

Full text
Abstract:
Abstract A carbazole‐based hole‐transport‐type (p‐type) host, BPhCz, is developed using a nonsymmetrical connection strategy between two identical groups. Two benzimidazole–triazine‐based electron acceptor materials with superior electron transport abilities, namely SFX‐PIM‐TRZ and DSFX‐PIM‐TRZ, are designed to fabricate exciplex‐host systems combined with BPhCz. Exciplexes exhibit excellent carrier transport characteristics and appropriate energy levels and can serve as hosts for green multiple resonance‐induced thermally activated delayed fluorescence (MR‐TADF) devices. Efficient green MR‐TADF organic light‐emitting diodes with high color purity and low efficiency roll‐offs are successfully fabricated using the exciplexes prepared from BPhCz:SFX‐PIM‐TRZ and BPhCz:DSFX‐PIM‐TRZ as hosts, which show low driving voltages of 2.6 and 2.7 V, high maximum external quantum efficiencies (EQEs) of 35.7% and 35.5%, ultrapure green emission with Commission Internationale de L'Eclairage coordinates of (0.27, 0.69) and (0.28, 0.69), and high EQEs of 31% and 30.5% at 1000 cd m−2, respectively.
APA, Harvard, Vancouver, ISO, and other styles
29

Ab Wahid, Roslina, and Nigel Peter Grigg. "A draft framework for quality management system auditor education: findings from the initial stage of a Delphi study." TQM Journal ahead-of-print, ahead-of-print (December 15, 2020). http://dx.doi.org/10.1108/tqm-08-2020-0193.

Full text
Abstract:
Purpose Changes in the structure and conceptual underpinnings of ISO 9001 mean that quality management system (QMS) auditors require a wide knowledge base and skill set to effectively evaluate contemporary QMS and add value to the process. Hence, this study presents an open curriculum framework of the knowledge, skills and attributes for quality auditor education. Design/methodology/approach This study describes the first two phases of a three-phase study examining the educational requirements for external quality auditors (EQAs). Phase 1 involved a review of relevant international literature on auditor competence and education; Phase 2 involved the collection of qualitative data from a panel of experts, combined with the initial round of a Delphi study. A thematic analysis was used to analyze the findings from the questionnaire. Findings The findings of this study suggest there is a need to improve EQA education, as most experts reported the quality of audits to be variable, inconsistent, poor and diminishing in value. The most important improvements to auditor education are to update and improve the auditors' knowledge of the requirements of the ISO 9001 standard and of technology in business, and skills such as report writing, communication, IT understanding and analytical ability. Some of the attributes reported as being desirable to instill in EQAs include the following: objectivity, integrity, ethics and professionalism; being observant, perceptive, articulate and confident; having good judgment; flexibility, adaptivity and diplomacy; fairness and open-mindedness. Originality/value This study highlights the need for wider EQA education based on the gap identified in its performance. The resulting framework can be adopted by accreditation and certification bodies to evaluate and improve their auditors' audit performance.
APA, Harvard, Vancouver, ISO, and other styles
30

Song, Zhenzhen, Jiajia Zhang, Bing Liu, Hao Wang, Lijun Bi, and Qingxia Xu. "Practical application of European biological variation combined with Westgard Sigma Rules in internal quality control." Clinical Chemistry and Laboratory Medicine (CCLM), August 29, 2022. http://dx.doi.org/10.1515/cclm-2022-0327.

Full text
Abstract:
Abstract Objectives Westgard Sigma Rules is a statistical tool available for quality control. Biological variation (BV) can be used to set analytical performance specifications (APS). The European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) regularly updates BV data. However, few studies have used robust BV data to determine quality goals and design a quality control strategy for tumor markers. The aim of this study was to derive APS for tumor markers from EFLM BV data and apply Westgard Sigma Rules to establish internal quality control (IQC) rules. Methods Precision was calculated from IQC data, and bias was obtained from the relative deviation between the external quality assurance scheme (EQAS) group mean values and the laboratory-measured values. Total allowable error (TEa) was derived using EFLM BV data. After calculating sigma metrics, the IQC strategy for each tumor marker was determined according to Westgard Sigma Rules. Results The sigma metrics achieved for each analyte varied with the level of TEa. Most of these tumor markers, except neuron-specific enolase, reached 3σ or better based on TEa_min. With TEa_des and TEa_opt set as the quality goals, almost all analytes had sigma values below 3. With TEa_min set as the quality goal, each analyte was matched to IQC multirules and numbers of control measurements according to its sigma value. Conclusions Quality goals from the EFLM BV database and Westgard Sigma Rules can be used to develop an IQC strategy for tumor markers.
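The sigma metric behind Westgard Sigma Rules is computed as sigma = (TEa − |bias|) / CV, with all three terms expressed in percent. A minimal sketch, using hypothetical bias and imprecision values rather than the study's data:

    def sigma_metric(tea_pct, bias_pct, cv_pct):
        # Westgard sigma metric: sigma = (TEa% - |bias%|) / CV%.
        return (tea_pct - abs(bias_pct)) / cv_pct

    # Hypothetical tumor marker with 2.1% bias and 4.0% imprecision, evaluated
    # against three BV-derived allowable-error levels (values are illustrative).
    for label, tea in [("TEa_min", 33.3), ("TEa_des", 22.2), ("TEa_opt", 11.1)]:
        print(label, "->", round(sigma_metric(tea, 2.1, 4.0), 1), "sigma")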
APA, Harvard, Vancouver, ISO, and other styles
31

Pilipenko, Vyacheslav, and K. Shiokawa. "A Closer Cooperation between Space and Seismology Communities – a Way to Avoid Errors in Hunting for Earthquake Precursors." Russian Journal of Earth Sciences, February 29, 2024, 1–22. http://dx.doi.org/10.2205/2024es000899.

Full text
Abstract:
Space physicists and the earthquake (EQ) prediction community use the same instruments – magnetometers – but for different tasks: space physicists try to comprehend the global electrodynamics of near-Earth space on various time scales, whereas the seismic community develops electromagnetic methods of short-term EQ prediction. The lack of deep collaboration between these communities may sometimes result in erroneous conclusions. In this critical review, we demonstrate some incorrect results caused by neglect of the specifics of geomagnetic field evolution during space weather activations. The examples considered comprise: magnetic storms as a trigger of EQs; ULF waves as a global EQ precursor; geomagnetic impulses before seismic shocks; long-period geomagnetic disturbances generated by strong EQs; discrimination of underground ULF sources by amplitude-phase gradients; depression of ULF power as a short-term EQ precursor; and detection of seismogenic emissions by satellites. To verify the reliability of the above widely disseminated results, data from available arrays of fluxgate and search-coil magnetometers have been re-analyzed. In all considered events, the "anomalous" geomagnetic field behavior can be explained by global geomagnetic activity and is apparently not associated with seismic activity. This critical review does not claim that the ULF electromagnetic field cannot be used as a sensitive indicator of EQ preparation processes, but we suggest that both communities cooperate more tightly through data exchange, combined use of magnetometer networks, organization of CDAWs for unique events, etc.
APA, Harvard, Vancouver, ISO, and other styles
32

Li, Saifei, Xiongping Xu, Qingli Lin, Jiahui Sun, Han Zhang, Huaibin Shen, Lin Song Li, and Lei Wang. "Bright and Stable Yellow Quantum Dot Light‐Emitting Diodes Through Core–Shell Nanostructure Engineering." Small, December 28, 2023. http://dx.doi.org/10.1002/smll.202306859.

Full text
Abstract:
Abstract Solution‐processed, efficient yellow quantum dot light‐emitting diodes (QLEDs) are considered key optoelectronic devices for lighting, display, and signal indication. However, limited synthesis routes for yellow quantum dots (QDs), combined with inferior stress relaxation at the core–shell interface, pose challenges to their commercialization. Herein, a nanostructure tailoring strategy for high‐quality yellow CdZnSe/ZnSe/ZnS core/shell QDs using a "stepwise high‐temperature nucleation–shell growth" method is introduced. The synthesized CdZnSe‐based QDs effectively relieved the stress at the core–shell interface and exhibited a near‐unity photoluminescence quantum yield, with nonblinking behavior and well‐matched energy levels, which promoted radiative recombination and charge‐injection balance during device operation. Consequently, the yellow CdZnSe‐based QLEDs exhibited a peak external quantum efficiency of 23.7%, a maximum luminance of 686 050 cd m−2, and a current efficiency of 103.2 cd A−1, along with an operating half‐lifetime of 428 523 h at 100 cd m−2. To the best of our knowledge, the luminance and operational stability of the device are the highest values reported for yellow LEDs. Moreover, devices with electroluminescence (EL) peaks at 570–605 nm exhibited excellent EQEs, surpassing 20%. This work is expected to significantly advance the development of RGBY‐based display panels and white LEDs.
APA, Harvard, Vancouver, ISO, and other styles
33

Li, Mingyang, Jindi Wang, Jisong Yao, Shalong Wang, Leimeng Xu, and Jizhong Song. "Trade‐Off Between Efficiency and Stability of CsPbBr3 Perovskite Quantum Dot‐Based Light‐Emitting Diodes by Optimized Passivation Ligands for Br/Pb." Advanced Functional Materials, October 3, 2023. http://dx.doi.org/10.1002/adfm.202308341.

Full text
Abstract:
Abstract Although significant progress has been made in improving the external quantum efficiencies (EQEs) of perovskite quantum dot (QD) light‐emitting diodes (QLEDs), understanding the degradation mechanism and enhancing stability remain a challenge. Herein, increasing the content of Br‐based passivation ligands is shown to enhance the EQE up to 16.1% by reducing the defects of CsPbBr3 QDs in a Br‐rich environment. However, the operational lifetimes of perovskite QLEDs gradually decrease with the increase of halide content, owing to the intensified ion migration under a continuous electric field, confirmed by the current behavior of QLEDs and time‐of‐flight secondary‐ion mass spectrometry. Furthermore, a thorough analysis of the relationship between electricity and luminance of QLEDs suggests that a small amount of residual oleic acid ligands could weaken ion migration. Accordingly, a halide‐ and acid‐hybrid (HAH) co‐passivation strategy is proposed to optimize the content of Br‐ and acid‐based ligands, and achieve a maximum EQE of 18.6% and an operational lifetime (T50, extrapolated) of 213 h for CsPbBr3 QLEDs. This approach for passivating QDs combines the high efficiency of Br‐based ligands with the improved stability of acid‐based ligands. The study elucidates the correlation between ligands and device performance, highlighting the significance of two or even multiple ligands for efficient and stable perovskite QLEDs.
APA, Harvard, Vancouver, ISO, and other styles
34

Thakur, Ramendra, and Dhoha AlSaleh. "Drivers of managers’ affect (emotions) and corporate website usage: a comparative analysis between a developed and developing country." Journal of Business & Industrial Marketing ahead-of-print, ahead-of-print (October 14, 2020). http://dx.doi.org/10.1108/jbim-02-2020-0118.

Full text
Abstract:
Purpose Existing literature reveals a general lack of research on business-to-business (B2B) ecommerce showcasing how managers’ affect plays a role in enhancing their attitude toward the businesses they work with. The purpose of this study is to fill that void by ascertaining whether managers’ corporate website knowledge, corporate website expertise and affect toward a corporate site influence their attitude toward the corporate website. It also investigates whether managers’ attitude guides corporate website usage intention in the context of two culturally diverse countries. Design/methodology/approach Data were collected from managers from the USA and Kuwait using an online survey method. Structural equation modeling using EQS 6.2 software was used for analysis. Findings The results indicate that corporate Web knowledge influences Web expertise and affect in the US sample; in the Kuwaiti sample, Web knowledge influences Web expertise but does not influence affect. The findings in both studies reveal that managers’ knowledge about the Web has a positive effect on their attitude toward a business website. For Kuwaiti managers, Web expertise has a positive influence on affect. However, Web expertise does not influence managers’ affect in the US sample. The results further suggest that affect influences a manager’s attitude toward corporate websites in the US and Kuwaiti samples. Originality/value Self-efficacy and affect infusion theories serve as the foundation for this study. This research adds to these two theories in three ways. First, it examines the combined influence of affect and attitude on B2B managers’ intent to use a corporate website. Second, it proposes a single model that examines the combined relationships among managers’ knowledge and managers’ Web expertise that elicit managerial affect toward corporate websites. Third, the proposed model was tested using samples from two diverse countries (developed, the USA, and developing, Kuwait).
APA, Harvard, Vancouver, ISO, and other styles
35

Zhao, Lei, Panfei Chen, Peng Liu, Yuepeng Song, and Deqiang Zhang. "Genetic Effects and Expression Patterns of the Nitrate Transporter (NRT) Gene Family in Populus tomentosa." Frontiers in Plant Science 12 (May 13, 2021). http://dx.doi.org/10.3389/fpls.2021.661635.

Full text
Abstract:
Nitrate is an important source of nitrogen for poplar trees. The nitrate transporter (NRT) gene family is generally responsible for nitrate absorption and distribution. However, few analyses of the genetic effects and expression patterns of NRT family members have been conducted in woody plants. Here, using poplar as a model, we identified and characterized 98 members of the PtoNRT gene family. We calculated the phylogenetic and evolutionary relationships of the PtoNRT family and identified poplar-specific NRT genes and their expression patterns. To construct a core triple genetic network (association - gene expression - phenotype) for leaf nitrogen content, a candidate gene family association study, weighted gene co-expression network analysis (WGCNA), and mapping of expression quantitative trait nucleotides (eQTNs) were combined, using data from 435 unrelated Populus tomentosa individuals. PtoNRT genes exhibited distinct expression patterns across twelve tissues, circadian time points, and stress responses. The association study showed that genotype combinations of allelic variations in three PtoNRT genes had a strong effect on leaf nitrogen content. WGCNA produced two co-expression modules containing PtoNRT genes. We also found that four PtoNRT genes defined thousands of eQTL signals. WGCNA and eQTL mapping provided a comprehensive analysis of poplar nitrogen-related regulatory factors, including MYB17 and WRKY21. NRT genes were found to be regulated by five plant hormones, among which abscisic acid was the main regulator. Our study provides new insights into the NRT gene family in poplar and enables the exploitation of novel genetic factors to improve the nitrate use efficiency of trees.
APA, Harvard, Vancouver, ISO, and other styles
36

Wagner, Jennifer, Greeley G, and Sieber S. "Operating room environmental improvements with Venturi valves and Environmental Quality Indicator risk prediction may help reduce Surgical Site Infections." Medical Research Archives 11, no. 1 (2023). http://dx.doi.org/10.18103/mra.v11i1.3475.

Full text
Abstract:
Introduction: There is mounting evidence supporting the connection between the operating room (OR) airborne environment and Surgical Site Infections (SSIs). Environmental Quality Indicators (EQI) can be measured to determine the risk of microbial contamination in the OR by room sector. This risk picture is used to inform educated improvements to the aseptic environment. When improvements based on the EQI risk picture are combined with precise control of the airborne environment using Venturi technology, the asepsis of the OR is maintained and the risk of contamination is lower. This study sought to determine if precisely maintained asepsis in the OR, based on the EQI risk picture, lowered the SSI rate. Methods: The environmental quality indicators in a craniotomy OR were measured and a risk picture, by room sector, was created. The EQIs measured included air change rates, humidity, temperature, pressure, particle counts, occupancy, traffic patterns, air flow and directionality, and door openings, among others. Improvements to OR performance were made based on the risk picture, and Venturi technology was used to precisely control the airborne environment. SSIs were tracked for 17 months prior to the improvements and then for 10 months following the improvements. Results: The asepsis of the OR airborne environment was improved, and an EQI risk picture was developed after the changes to document the improvement. In the 17 months prior to the improvements to the OR, there were 14 SSIs out of 430 total surgeries and the SSI rate was 3.9%. In the 10 months following the improvements, there was 1 SSI out of 180 surgeries, and the SSI rate was 0.5%. The reduction in SSIs was statistically significant at p=.0377 following the improvements. Conclusion: Improvement of the airborne environment in ORs improves asepsis and may help reduce SSIs.
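The abstract does not state which statistical test produced p = .0377. As an illustration of how two SSI proportions such as 14/430 and 1/180 could be compared, here is a small sketch using Fisher's exact test, which may return a somewhat different p-value than the test the authors actually used.

    from scipy.stats import fisher_exact

    # 14 SSIs in 430 surgeries before the changes vs. 1 SSI in 180 afterwards.
    table = [[14, 430 - 14],
             [1, 180 - 1]]
    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")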
APA, Harvard, Vancouver, ISO, and other styles
37

Kortenkamp, Andreas, Michael Faust, Thomas Backhaus, Rolf Altenburger, Martin Scholze, Christin Müller, Sibylle Ermler, Leo Posthuma, and Werner Brack. "Mixture risks threaten water quality: the European Collaborative Project SOLUTIONS recommends changes to the WFD and better coordination across all pieces of European chemicals legislation to improve protection from exposure of the aquatic environment to multiple pollutants." Environmental Sciences Europe 31, no. 1 (September 30, 2019). http://dx.doi.org/10.1186/s12302-019-0245-6.

Full text
Abstract:
Abstract Evidence is mounting that chemicals can produce joint toxicity even when combined at levels that singly do not pose risks. Environmental Quality Standards (EQS) defined for single pollutants under the Water Framework Directive (WFD) do not protect from mixture risks, nor do they enable prioritization of management options. Despite some provisions for mixtures of specific groups of chemicals, the WFD is not fit for purpose for protecting against or managing the effects of coincidental mixtures of water-borne pollutants. The conceptual tools for conducting mixture risk assessment are available and ready for use in regulatory and risk assessment practice. Extension towards impact assessment using cumulative toxic unit and mixture toxic pressure analysis based on chemical monitoring data or modelling has been suggested by the SOLUTIONS project. Problems exist in the availability of the data necessary for mixture risk assessments. Mixture risk assessments cannot be conducted without essential input data about exposures to chemicals and their toxicity. If data are missing, mixture risk assessments will be biassed towards underestimating risks. The WFD itself is not intended to provide toxicity data. Data gaps can only be closed if proper feedback links between the WFD and other EU regulations for industrial chemicals (REACH), pesticides (PPPR), biocides (BPR) and pharmaceuticals are implemented. Changes of the WFD alone cannot meet these requirements. Effect-based monitoring programmes developed by SOLUTIONS should be implemented as they can capture the toxicity of complex mixtures and provide leads for new candidate chemicals that require attention in mixture risk assessment. Efforts of modelling pollutant levels and their anticipated mixture effects in surface water can also generate such leads. New pollutant prioritization schemes conceived by SOLUTIONS, applied in the context of site prioritization, will help to focus mixture risk assessments on those chemicals and sites that make substantial contributions to mixture risks.
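The "cumulative toxic unit" analysis mentioned above rests on concentration addition: each chemical's toxic unit is its measured concentration divided by an effect concentration, and the units are summed across the co-occurring chemicals. A minimal sketch with made-up concentrations and EC50 values, not data from the SOLUTIONS project:

    # Toxic unit of each chemical: measured concentration / effect concentration.
    measured_ug_per_l = {"chem_A": 2.5, "chem_B": 0.1, "chem_C": 15.0}   # hypothetical
    ec50_ug_per_l     = {"chem_A": 10.0, "chem_B": 0.4, "chem_C": 50.0}  # hypothetical

    toxic_units = {c: measured_ug_per_l[c] / ec50_ug_per_l[c] for c in measured_ug_per_l}
    print("individual TUs:", {c: round(tu, 3) for c, tu in toxic_units.items()})
    print("sum of toxic units:", round(sum(toxic_units.values()), 3))
    # The mixture sum here reaches ~0.8 even though every single TU is well
    # below 1 -- the core argument for mixture-aware quality standards.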
APA, Harvard, Vancouver, ISO, and other styles
38

Xia, Fangxiao, Wenke Hao, Jinxiu Liang, Yanhua Wu, Feng Yu, Wenxue Hu, Zhi Zhao, and Wei Liu. "Applicability of Creatinine-based equations for estimating glomerular filtration rate in elderly Chinese patients." BMC Geriatrics 21, no. 1 (September 4, 2021). http://dx.doi.org/10.1186/s12877-021-02428-y.

Full text
Abstract:
Abstract Background The accuracy of the estimated glomerular filtration rate (eGFR) in elderly patients is debatable. In 2020, a new creatinine-based equation applicable to all age groups was published by the European Kidney Function Consortium (EKFC). The objective of this study was to assess the appropriateness of the new EKFC equation, together with the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI), Lund-Malmö Revised (LMR), Berlin Initiative Study 1 (BIS1), and full age spectrum (FAS) equations based on serum creatinine (SCR), for elderly Chinese patients. Methods A total of 612 elderly patients with a glomerular filtration rate (mGFR) measured by the dual plasma sample clearance method with Technetium-99m-diethylenetriamine-pentaacetic acid (Tc-99m-DTPA) were divided into four subgroups based on age, sex, mGFR, and the presence of diabetes. The performance of each equation was assessed in terms of bias, precision, accuracy, and root-mean-square error (RMSE). Bland-Altman plots, concordance correlation coefficients (CCCs), and correlation coefficients were applied to evaluate the validity of the eGFR. Results The median age of the 612 participants was 73 years, and 386 (63.1%) were male. Relative to the mGFR (42.1 ml/min/1.73 m2), the CKD-EPI, LMR, BIS1, FAS, and EKFC equations estimated GFR at 44.4, 41.1, 43.6, 41.8 and 41.9 ml/min/1.73 m2, respectively. Overall, the smallest bias was found for the BIS1 equation (−0.050 vs. a range of −3.015 to 0.795, P<0.05, vs. the CKD-EPI equation). Regarding P30, interquartile range (IQR), RMSE, and GFR category misclassification, the BIS1 equation generally performed more accurately than the other equations (73.9%, 12.7, 12.9, and 35.3%, respectively). Nevertheless, no equation achieved optimal performance for the mGFR ≥ 60 ml/min/1.73 m2 subgroup. Bland-Altman analysis showed the smallest mean difference (−0.3 ml/min/1.73 m2) for the BIS1 equation compared with the other equations. Conclusions This study suggested that the BIS1 equation was the most applicable for estimating GFR in elderly Chinese patients with moderate to severe renal impairment.
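The agreement metrics used above are straightforward to reproduce: bias as the median difference between eGFR and mGFR, P30 as the share of eGFR values within ±30% of mGFR, and RMSE. A minimal sketch with made-up example values (not the study data):

    import numpy as np

    # Made-up example values: measured vs. estimated GFR (ml/min/1.73 m^2).
    mgfr = np.array([25.0, 38.0, 42.0, 55.0, 61.0, 70.0])
    egfr = np.array([28.0, 36.5, 45.0, 50.0, 66.0, 62.0])

    bias = np.median(egfr - mgfr)                              # median difference
    p30  = np.mean(np.abs(egfr - mgfr) / mgfr <= 0.30) * 100   # % within +/-30% of mGFR
    rmse = np.sqrt(np.mean((egfr - mgfr) ** 2))
    print(f"bias = {bias:.1f}, P30 = {p30:.0f}%, RMSE = {rmse:.1f} ml/min/1.73 m^2")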
APA, Harvard, Vancouver, ISO, and other styles