Academic literature on the topic 'Final volumes method'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Final volumes method.'

Next to every source in the list of references is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Final volumes method"

1

Hokkinen, Lasse, Teemu Mäkelä, Sauli Savolainen, and Marko Kangasniemi. "Computed tomography angiography-based deep learning method for treatment selection and infarct volume prediction in anterior cerebral circulation large vessel occlusion." Acta Radiologica Open 10, no. 11 (November 2021): 205846012110603. http://dx.doi.org/10.1177/20584601211060347.

Full text
Abstract:
Background: Computed tomography perfusion (CTP) is the mainstay to determine possible eligibility for endovascular thrombectomy (EVT), but there is still a need for alternative methods in patient triage. Purpose: To study the ability of a computed tomography angiography (CTA)-based convolutional neural network (CNN) method in predicting final infarct volume in patients with large vessel occlusion successfully treated with endovascular therapy. Materials and Methods: The accuracy of the CTA source image-based CNN in final infarct volume prediction was evaluated against follow-up CT or MR imaging in 89 patients with anterior circulation ischemic stroke successfully treated with EVT as defined by Thrombolysis in Cerebral Infarction category 2b or 3, using Pearson correlation coefficients and intraclass correlation coefficients. Convolutional neural network performance was also compared to a commercially available CTP-based software (RAPID, iSchemaView). Results: A correlation with final infarct volumes was found for both CNN and CTP-RAPID in patients presenting 6–24 h from symptom onset or last known well, with r = 0.67 (p < 0.001) and r = 0.82 (p < 0.001), respectively. Correlations with final infarct volumes in the early time window (0–6 h) were r = 0.43 (p = 0.002) for the CNN and r = 0.58 (p < 0.001) for CTP-RAPID. Compared to CTP-RAPID predictions, CNN estimated eligibility for thrombectomy according to ischemic core size in the late time window with a sensitivity of 0.38 and specificity of 0.89. Conclusion: A CTA-based CNN method had moderate correlation with final infarct volumes in the late time window in patients successfully treated with EVT.
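The performance figures in this abstract (Pearson r, sensitivity, specificity) are standard computations. A minimal sketch of how such figures are obtained, using hypothetical volume and eligibility data rather than the study's own:

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation coefficient between paired samples,
    # e.g. predicted vs. final infarct volumes.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def sens_spec(predicted_eligible, reference_eligible):
    # Sensitivity and specificity of binary eligibility calls
    # against a reference standard (here, CTP-RAPID-style calls).
    pairs = list(zip(predicted_eligible, reference_eligible))
    tp = sum(1 for p, r in pairs if p and r)
    fn = sum(1 for p, r in pairs if (not p) and r)
    tn = sum(1 for p, r in pairs if (not p) and (not r))
    fp = sum(1 for p, r in pairs if p and (not r))
    return tp / (tp + fn), tn / (tn + fp)
```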
2

Batista, Elsa, Isabel Godinho, George Rodrigues, and Doreen Rumery. "Validation of the Photometric Method Used for Micropipette Calibration." NCSL International measure 13, no. 1 (2021): 40–45. http://dx.doi.org/10.51843/measure.13.1.4.

Full text
Abstract:
There are two methods generally used for calibration of micropipettes: the gravimetric method described in ISO 8655-6:2002 and the photometric method described in ISO 8655-7:2005. In order to validate the photometric method, several micropipettes of different capacities from 0.1 µL to 1000 µL were calibrated using both methods (gravimetric and photometric) in two different laboratories, IPQ (Portuguese Institute for Quality) and Artel. These tests were performed by six different operators. The uncertainty for both methods was determined, and it was verified that the uncertainty component with the largest contribution to the final uncertainty budget depends on the volume delivered. In the photometric method, for small volumes the repeatability of the pipette is the largest uncertainty component, but for volumes larger than 100 µL the photometric instrument is the most significant source of uncertainty. Based on all the results obtained in this study, one may consider the photometric method validated.
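For context, the gravimetric reference method against which the photometric method is validated converts a balance reading to volume via a tabulated Z conversion factor (ISO 8655-6). The sketch below is illustrative only: the Z value is an assumed figure for water near 20 °C and 101.3 kPa, and the second function shows the kind of repeatability (CV) computation that dominates small-volume uncertainty:

```python
def delivered_volume_ul(mass_mg, z_factor_ul_per_mg=1.0032):
    # Gravimetric method: convert a balance reading (mg of delivered water)
    # to volume (uL) via the Z conversion factor, which folds in water
    # density and air buoyancy. The default Z is an assumed illustrative
    # value for ~20 C / 101.3 kPa, not a value from the cited study.
    return mass_mg * z_factor_ul_per_mg

def repeatability_cv(volumes_ul):
    # Coefficient of variation (%) across replicate deliveries -- per the
    # abstract, the dominant uncertainty component for small volumes.
    n = len(volumes_ul)
    mean = sum(volumes_ul) / n
    var = sum((v - mean) ** 2 for v in volumes_ul) / (n - 1)
    return 100.0 * (var ** 0.5) / mean
```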
3

Koopman, Miou S., Olvert A. Berkhemer, Ralph R. E. G. Geuskens, Bart J. Emmer, Marianne A. A. van Walderveen, Sjoerd F. M. Jenniskens, Wim H. van Zwam, et al. "Comparison of three commonly used CT perfusion software packages in patients with acute ischemic stroke." Journal of NeuroInterventional Surgery 11, no. 12 (June 15, 2019): 1249–56. http://dx.doi.org/10.1136/neurintsurg-2019-014822.

Full text
Abstract:
Background and purpose: CT perfusion (CTP) might support decision making in patients with acute ischemic stroke by providing perfusion maps of ischemic tissue. Currently, the reliability of CTP is hampered by varying results between different post-processing software packages. The purpose of this study is to compare ischemic core volumes estimated by IntelliSpace Portal (ISP) and syngo.via with core volumes as estimated by RAPID. Methods: Thirty-five CTP datasets from patients in the MR CLEAN trial were post-processed. Core volumes were estimated with ISP using default settings and with syngo.via using three different settings: default settings (method A); additional smoothing filter (method B); and adjusted settings (method C). The results were compared with RAPID. Agreement between methods was assessed using Bland–Altman analysis and intraclass correlation coefficient (ICC). Accuracy for detecting volumes up to 25 mL, 50 mL, and 70 mL was assessed. Final infarct volumes were determined on follow-up non-contrast CT. Results: Median core volume was 50 mL with ISP, 41 mL with syngo.via method A, 20 mL with method B, 36 mL with method C, and 11 mL with RAPID. Agreement ranged from poor (ISP: ICC 0.41; method A: ICC 0.23) to good (method B: ICC 0.83; method C: ICC 0.85). The bias (1.8 mL) and limits of agreement (−27, 31 mL) were the smallest with syngo.via with additional smoothing (method B). Agreement for detecting core volumes ≤25 mL was 54% with ISP, and 57%, 85%, and 74% for syngo.via methods A, B, and C, respectively. Conclusion: Best agreement with RAPID software is provided by syngo.via default settings with additional smoothing. Moreover, this method has the highest agreement in categorizing patients with small core volumes.
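The Bland–Altman statistics quoted in this abstract (bias and 95% limits of agreement) are straightforward to compute. A minimal sketch with hypothetical paired core-volume estimates, not the trial data:

```python
def bland_altman(a_ml, b_ml):
    # Bland-Altman agreement between two measurement methods:
    # bias = mean paired difference; 95% limits of agreement
    # = bias +/- 1.96 * SD of the differences.
    diffs = [a - b for a, b in zip(a_ml, b_ml)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```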
4

Wu, Chen-Sen, Lawrence L. Latour, and Steven Warach. "Clinically Meaningful MRI Perfusion Abnormalities in Acute Stroke: Comparison of Analytic Techniques." Stroke 32, suppl_1 (January 2001): 339. http://dx.doi.org/10.1161/str.32.suppl_1.339-a.

Full text
Abstract:
Background: MRI perfusion imaging (PWI) can demonstrate hemodynamic abnormalities in acute stroke. The volume of hypoperfusion derived from calculated perfusion parameter maps has been used to predict tissue at risk for infarction and to identify presumptive ischemic penumbra. It is unclear how best to distinguish true tissue at risk from benign hypoperfusion. A first step toward this goal is identifying clinically significant PWI abnormalities in stroke patients. Our purpose was to evaluate four different perfusion parameter maps to determine which algorithm best correlates with clinical severity. Methods: Twenty patients were retrospectively selected from our database. Selection criteria included 1) acute hemispheric lesion, 2) MRI within 24 hours of symptom onset, and 3) no history of prior stroke. Perfusion maps were derived using four different algorithms to estimate relative mean transit time (rMTT): 1) cerebral blood volume (CBV) / cerebral blood flow (CBF), 2) CBV / peak of the concentration-time curve, 3) time to peak (TTP), and 4) ratio of the 1st/0th moment of the transfer function (first moment method). Abnormal perfusion volumes were derived from ever-increasing thresholds of rMTT delay relative to normal contralateral tissue. The volumes at each delay threshold were correlated with National Institutes of Health Stroke Scale (NIHSS) for each algorithm. Results: Significant correlations between hypoperfusion volumes and NIHSS were found for all algorithms. The first moment method had the highest correlation (r = 0.76) and the correlations for this method were independent of the delay threshold used to derive the volumes. For the other algorithms, the best correlations were observed for volumes including only voxels with delays of 4 seconds or greater. Conclusions: This analysis suggests that the first moment method may have advantages over the others in determining the correlation of hypoperfusion volume to NIHSS.
Further analyses correlating acute hypoperfusion volumes to final infarct volumes may help refine the choice of best analytic method for determining clinically relevant PWI abnormalities.
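The first moment method the authors favour estimates rMTT as the ratio of the first to the zeroth moment of the concentration-time curve. An illustrative sketch using trapezoidal integration on a synthetic curve, not the study's data:

```python
def first_moment_mtt(times_s, conc):
    # rMTT estimate as the ratio of the first to the zeroth moment of a
    # concentration-time curve, integrated with the trapezoidal rule.
    def trapz(ys):
        return sum((ys[i] + ys[i + 1]) / 2 * (times_s[i + 1] - times_s[i])
                   for i in range(len(ys) - 1))
    m0 = trapz(conc)                                   # zeroth moment
    m1 = trapz([t * c for t, c in zip(times_s, conc)]) # first moment
    return m1 / m0
```

For a symmetric bolus the estimate lands on the curve's centre of mass, which is the intuition behind using it as a transit-time surrogate.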
5

Ward, Jeffrey S., George R. Stephens, and Francis J. Ferrandino. "Influence of Cutting Method on Stand Growth in Sawtimber Oak Stands." Northern Journal of Applied Forestry 22, no. 1 (March 1, 2005): 59–67. http://dx.doi.org/10.1093/njaf/22.1.59.

Full text
Abstract:
Abstract Many upland oak forests in the eastern United States are approaching economic and biological maturity. A study was established in 1981–1984 in three central Connecticut forests to examine the effects of six distinct cutting methods (shelterwood, diameter limit, multiaged crop tree, high grading, silvicultural clearcut, forest preserve) on stand growth and dynamics in sawtimber oak stands. Board-foot volumes (International 1/4) averaged 8.4 mbf/ac before the initial harvest. Sixty-nine percent of sawtimber trees had butt-log grades of 2 or better. Volume growth was significantly lower on high grading plots (36 bf/ac/year) than on the forest preserve, diameter limit, shelterwood, and multiaged crop tree plots (∼214 bf/ac/year) through two cutting cycles. Total board-foot yield (final volumes plus harvested volumes) for the silvicultural clearcut plots (7.3 mbf/ac) was significantly lower than for uncut, shelterwood, and diameter limit cuts, 12.3, 12.5, and 13.0 mbf/ac, respectively. This study showed that three distinct cutting methods: shelterwood, multiaged crop tree, and forest preserve resulted in similar stand volume growth rates in sawtimber oak stands. The first two methods can be used by landowners who wish to generate income to offset expenses. The choice will depend on the aesthetic and regeneration goals of the landowner. Diameter-limit cutting also had similar volume rates, but it was necessary to lower the diameter limits for the second cutting cycle to maintain economically viable harvests. As a consequence, residual stand structure after the second cutting cycle was similar to that for the high grading plots. Although high grading had the highest harvested volume during the first cutting cycle, low quality of residual trees and depressed stand growth rates indicate it is not a viable option for long-term forest management. North. J. Appl. For. 22(1):59–67.
6

Jeong, Jaecheol, Suyeon Jeon, and Yong Seok Heo. "An Efficient Stereo Matching Network Using Sequential Feature Fusion." Electronics 10, no. 9 (April 28, 2021): 1045. http://dx.doi.org/10.3390/electronics10091045.

Full text
Abstract:
Recent stereo matching networks adopt 4D cost volumes and 3D convolutions for processing those volumes. Although these methods show good performance in terms of accuracy, they have an inherent disadvantage in that they require great deal of computing resources and memory. These requirements limit their applications for mobile environments, which are subject to inherent computing hardware constraints. Both accuracy and consumption of computing resources are important, and improving both at the same time is a non-trivial task. To deal with this problem, we propose a simple yet efficient network, called Sequential Feature Fusion Network (SFFNet) which sequentially generates and processes the cost volume using only 2D convolutions. The main building block of our network is a Sequential Feature Fusion (SFF) module which generates 3D cost volumes to cover a part of the disparity range by shifting and concatenating the target features, and processes the cost volume using 2D convolutions. A series of the SFF modules in our SFFNet are designed to gradually cover the full disparity range. Our method prevents heavy computations and allows for efficient generation of an accurate final disparity map. Various experiments show that our method has an advantage in terms of accuracy versus efficiency compared to other networks.
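The shift-and-concatenate cost-volume construction described here can be sketched with plain array operations. The function below is an illustrative approximation of the idea (build per-disparity slices by shifting the target features and concatenating channels, keeping everything 2D-convolution friendly), not the SFFNet implementation:

```python
import numpy as np

def shift_concat_cost_volume(left_feat, right_feat, max_disp):
    # Build a cost volume by shifting the right (target) feature map by each
    # candidate disparity and concatenating it channel-wise with the left
    # feature map. Inputs are (C, H, W); output is (max_disp, 2C, H, W).
    vols = []
    for d in range(max_disp):
        shifted = np.zeros_like(right_feat)
        if d == 0:
            shifted[:] = right_feat
        else:
            # Shift right-image features rightward by d pixels; the newly
            # exposed left border has no match and stays zero.
            shifted[:, :, d:] = right_feat[:, :, :-d]
        vols.append(np.concatenate([left_feat, shifted], axis=0))
    return np.stack(vols, axis=0)
```

In a network each (2C, H, W) slice would then be processed by 2D convolutions, avoiding the 3D convolutions that make 4D cost volumes expensive.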
7

Wiig, H., M. DeCarlo, L. Sibley, and E. M. Renkin. "Interstitial exclusion of albumin in rat tissues measured by a continuous infusion method." American Journal of Physiology-Heart and Circulatory Physiology 263, no. 4 (October 1, 1992): H1222—H1233. http://dx.doi.org/10.1152/ajpheart.1992.263.4.h1222.

Full text
Abstract:
Steady-state 125I-labeled rat serum albumin (125I-labeled RSA) concentration in plasma was maintained by intravenous infusion of tracer for 72-168 h with an implanted osmotic pump. At the end of the infusion period, the rat was anesthetized and nephrectomized, and extracellular fluid was equilibrated with intravenous 51Cr-labeled EDTA for 4 h. Five minutes before final plasma and tissue sampling, 131I-labeled bovine serum albumin (131I-labeled BSA) was injected intravenously as a plasma volume marker. Samples of skin, muscle, tendon, and intestine were assayed for all three tracers. Apparent distribution volumes were calculated as tissue tracer content/plasma tracer concentration. Interstitial fluid volume (Vi) was calculated as V51Cr-EDTA-V131I-BSA. Steady-state extravascular distribution of 125I-labeled RSA as plasma equivalent volume (Va,p) was calculated as V125I-RSA-V131I-BSA. Steady-state interstitial fluid concentrations of 125I-labeled RSA in skin, muscles, and tendon were measured with nylon wicks implanted postmortem, and steady-state interstitial albumin distribution volumes were recalculated as wick-fluid equivalent volumes (Va,w). Relative albumin exclusion fraction (Ve/Vi) was calculated as 1-Va,w/Vi. For skin and muscle, steady-state 125I-labeled RSA tissue concentrations were reached at 72 h. Ve/Vi for albumin averaged 26% in hindlimb muscle, 41% in hindlimb skin, 30% in back skin, 39% in tail skin, and 54% in tail tendon. For muscle, Ve/Vi corresponds to expectation if all tissue collagen and hyaluronan is dispersed in the interstitium. However, for skin and tendon, albumin exclusion is considerably lower than expected on this basis, suggesting that much of their collagen is organized into dense bundles of fibers containing no fluid accessible to 51Cr-labeled EDTA or 125I-labeled RSA.
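The volume arithmetic in this abstract reduces to simple differences and ratios. A minimal sketch with hypothetical tracer distribution volumes (assumed values, mL per g of tissue), following the definitions Vi = V(51Cr-EDTA) − V(131I-BSA) and Ve/Vi = 1 − Va,w/Vi:

```python
def exclusion_fraction(v_cr_edta, v_131i_bsa, v_wick_equiv):
    # Interstitial fluid volume: Vi = V(51Cr-EDTA) - V(131I-BSA),
    # i.e. extracellular marker space minus plasma marker space.
    vi = v_cr_edta - v_131i_bsa
    # Relative albumin exclusion fraction: Ve/Vi = 1 - Va,w / Vi,
    # where Va,w is the wick-fluid-equivalent albumin distribution volume.
    return 1.0 - v_wick_equiv / vi
```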
8

Hrušková, M., and J. Skvrnová. "Use of maturograph and spring oven for the determination of wheat flour baking characteristics." Czech Journal of Food Sciences 21, No. 2 (November 18, 2011): 71–77. http://dx.doi.org/10.17221/3479-cjfs.

Full text
Abstract:
Quality characteristics of 30 commercial wheat flour samples from Czech industrial mills and 30 wheat flour samples prepared from wheat varieties cultivated in experimental fields (all from the 2000 wheat harvest) were analysed in detail using a maturograph and a spring oven (both from Brabender, Germany) as well as a bread baking test (Czech method). Specific bread volumes of all flour samples were compared with the bread volumes determined by the oven spring test. The correlation analysis expressing the relations between wheat flour rheological characteristics and bread volume is reported. The maturograph parameters correlate significantly with the specific bread volume and the final volume obtained by means of the spring oven. All the correlations with the baking test values are high. Both instruments are suitable for the prediction of flour baking quality.
9

Calbo, Adonal Gimenez, and Amauri Alves Nery. "Methods for Measurement of Gas Volume of Fruits and Vegetables." Journal of the American Society for Horticultural Science 120, no. 2 (March 1995): 217–21. http://dx.doi.org/10.21273/jashs.120.2.217.

Full text
Abstract:
Theory is presented for a differential mass-volume technique to non-destructively measure gas volume (Vg) changes, based only on the initial and final masses and volumes of an organ. Volume was measured using Archimedes' principle, but a non-invasive image analysis procedure could be an improvement. A reduction in Vg during the ripening of 'Kada' tomato (Lycopersicon esculentum Mill.) fruits, and irreversible Vg changes of 0.02, 0.29, 0.66, 1.2, and 1.3 ml for mature-green fruits compressed by 0, 2.5, 5.0, 7.5, and 10 mm for 5 minutes, indicate the potential of this procedure. The method was compared with other methodologies using sweetpotato (Ipomoea batatas L.) root segments subjected to vacuum water infiltration. The results were similar to the pycnometric method. The gasometric method underestimated Vg for roots in which the intercellular air volumes were blocked by the water used for infiltration, and large overestimation occurred with the traditional infiltration technique without correction for water absorption. Absolute Vg values were also estimated by semi-pycnometry (defined as the difference between the organ volume measured by water immersion and the organ volume without Vg measured with a pycnometer, after its maceration and elimination of gas bubbles with vacuum). Semi-pycnometry applied to tomato and bell pepper (Capsicum annuum L.) fruits, where the use of tissue segments limits the pycnometric method, and in sweetpotatoes, where the gasometric method overestimates Vg, generated results consistently similar to the differential mass-volume method.
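The differential mass-volume and semi-pycnometry calculations described here are simple enough to state directly. A sketch with hypothetical measurements, assuming water density 1.0 g/mL:

```python
def gas_volume_change_ml(m0_g, v0_ml, m1_g, v1_ml, rho_g_per_ml=1.0):
    # Differential mass-volume method: the internal gas volume change is
    # the observed organ volume change minus the volume of liquid taken up
    # (mass change / liquid density, assumed 1.0 g/mL for water here).
    return (v1_ml - v0_ml) - (m1_g - m0_g) / rho_g_per_ml

def semi_pycnometry_vg(v_immersion_ml, v_pycnometer_ml):
    # Semi-pycnometry: Vg = whole-organ volume by water immersion minus
    # gas-free tissue volume by pycnometer after maceration and degassing.
    return v_immersion_ml - v_pycnometer_ml
```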
10

Santos, R. S., D. P. A. Peña, D. D. S. Diniz, G. A. Costa, J. G. A. Queiroz, and S. R. F. Neto. "Brick Drying Simulations by Finite Volume Method." Materials Science Forum 930 (September 2018): 115–19. http://dx.doi.org/10.4028/www.scientific.net/msf.930.115.

Full text
Abstract:
There are numerous studies on the application of ceramic materials, such as bricks, in various engineering and manufacturing fields. Ceramic bricks are manufactured from humidified clay and are classified as structural ceramics. When the drying process is not precisely controlled, defects such as cracks, deformations, and warping can arise, which compromise the final physical and structural properties of the product. Seeking to address this through simulations, this work presents a numerical study of brick drying. A three-dimensional transient model is presented to predict the temperature of a holed ceramic brick and the distribution of its moisture content during drying inside a temperature-controlled oven, where both heat and mass transfer phenomena are present. The simulations were done in the ANSYS CFX® program, which uses the Finite Volume Method, and presented satisfactory results when compared with experimental work.
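Although the study used ANSYS CFX, the finite volume discretisation it relies on can be illustrated with a minimal 1D explicit conduction step on a uniform grid with insulated (zero-flux) boundaries. This is a teaching sketch, not the authors' 3D coupled heat-and-mass model:

```python
def fv_heat_step(T, alpha, dx, dt):
    # One explicit time step of 1D transient conduction on a uniform
    # finite-volume grid. Zero-flux (insulated) boundaries are imposed by
    # mirroring the boundary cell value. Explicit stability requires
    # dt <= dx**2 / (2 * alpha).
    Tn = T[:]
    for i in range(len(T)):
        left = T[i - 1] if i > 0 else T[i]            # insulated left face
        right = T[i + 1] if i < len(T) - 1 else T[i]  # insulated right face
        Tn[i] = T[i] + alpha * dt / dx ** 2 * (left - 2 * T[i] + right)
    return Tn
```

Because each inter-cell flux appears with opposite signs in the two cells it connects, the scheme conserves total energy on an insulated domain, which is the defining property of finite volume methods.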

Dissertations / Theses on the topic "Final volumes method"

1

Резинкіна, Марина Михайлівна. "Розрахунок тривимірних електричних полів в неоднорідних середовищах методом скінченних об'ємів." Thesis, Інститут електродинаміки НАН України, 2005. http://repository.kpi.kharkov.ua/handle/KhPI-Press/30862.

Full text
Abstract:
The dissertation is dedicated to the development of the theory of electric field calculation: methods and mathematical models for the numerical calculation of three-dimensional quasi-stationary electric fields in heterogeneous dielectric and weakly conducting media with inclusions of complex configuration that may change in time. The finite volume and absorbing boundary layer methods are used for these calculations. Application of the elaborated methods to the calculation of the parameters of electric processes occurring under strong electric fields in heterogeneous weakly conducting inclusions with complex configuration, which may change under the action of high voltage, made it possible to determine safe operating regimes for the grounding systems of power facilities, for solid polymeric insulation, and for people located in the zone of electric field action.

Books on the topic "Final volumes method"

1

Mitchell, Janet B. Methods for tracking volume/intensity change: Final report. Waltham, Mass: Health Policy Research Consortium, Heller Graduate School, Brandeis University, 1994.

Find full text
2

Caughey, D. A. Multigrid methods for aerodynamic problems in complex geometries: Final report. [Washington, D.C.]: National Aeronautics and Space Administration, 1995.

Find full text
3

Sonsini, Alessandro, ed. Interazione e mobilità per la ricerca. Florence: Firenze University Press, 2007. http://dx.doi.org/10.36253/978-88-8453-627-3.

Full text
Abstract:
Interazione e mobilità per la ricerca – Materiali del 2° seminario Osdotta 2006. This is the second volume of the DOTTA series dealing with research in Architectural Technology doctorates. It documents the 2nd seminar of the Italian PhDs in Architectural Technology, held in Pescara on 14-15-16 September 2006, comprising an account of the event, the materials elaborated in the course of the seminar and the addresses made at the final round table. This reconstruction makes it possible to identify the fields of interest, providing a synoptic overview of the current directions of research trends in our sector, and to compare and confront the contents and methods of the various thematic ambits, underscoring the fundamental research themes most active in this scientific disciplinary sector. Moreover, it also makes it possible to confirm the educational and communication project pursued by Osdotta, both as an educational-administrative structure of an interactive kind, designed to foster a fertile and intense exchange on the lines of research activated within the framework of the doctoral studies in this ambit, and also as an opportunity to identify the problems and expectations of the area, breaking them down into issues concerning the visibility of the scientific community and research into actions useful for the pursuit of even more efficacious results.
4

Andò, Valeria. Euripide, Ifigenia in Aulide. Venice: Fondazione Università Ca’ Foscari, 2021. http://dx.doi.org/10.30687/978-88-6969-513-1.

Full text
Abstract:
This volume contains the first Italian critical edition with introduction, translation and commentary of Euripides’ Iphigenia in Aulis. The tragedy, exhibited posthumously in 405 BCE, stages the first mythical segment of the Trojan War, namely the sacrifice of Iphigenia, daughter of king Agamemnon, head of the Greek army, in order to propitiate the winds that should lead the navy to Troy. A tragedy of intrigue and unveiling, in which all the characters try to oppose the sacrifice, judged to be an impiety despite its sacred essence. It is therefore a tragedy without gods, in which characters of modest moral stature move, unstable, ready to sudden changes of mind, and among whom the protagonist stands out: the girl who, having overcome the dismay for the destiny awaiting her, voluntarily moves towards death on the altar, for a flimsy patriotic ideal and with the illusion of achieving immortal glory. Since the end of the eighteenth century, the text of this tragedy, handed over to us by the manuscript tradition, has been exposed more than others to a rigorous philological criticism that has broken its unity, through considerable expunctions of entire sections and sequences of verses. The volume traces the phases of this critical work, showing its methods – and sometimes its excesses – and choosing a balance line in the constitution of the text. The overall exegesis of the tragedy, which I propose in this study, consists in the belief that, despite the exodus being spurious, the finale, in view of which the entire dramaturgy was composed, still had to contemplate Iphigenia’s salvation. In fact, if the Panhellenic ideal of defence against the barbarians is now meaningless, and if a war of destruction, to begin with, needs the death of an innocent person, then this death must be transcended and the horror of human sacrifice must dissolve. 
It therefore seems that, once political current events become opaque, the poet’s research tends to create situations of great patheticism in an aesthetic setting of refined beauty.
5

United States. National Aeronautics and Space Administration., ed. Particle/continuum hybrid simulation in a parallel computing environment: Final report for the period August 1, 1994 to September 30, 1996 ... grant no. NCC2-5072. Stanford, Calif: Department of Aeronautics and Astronautics, Stanford University, 1991.

Find full text
6

Ostrov, Jamie M., and Sarah M. Coyne. The Future of Relational Aggression, and Final Remarks. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190491826.003.0019.

Full text
Abstract:
The rapid escalation of research on the development of relational aggression and related constructs has been truly remarkable. Our volume is designed to fill a void in the literature and focus on the development of relational aggression. We conclude this volume by first reviewing some of the key points and implications from the prior chapters. Next, we discuss five future directions for the field: (1) conducting long-term longitudinal studies and adopting a lifespan perspective, (2) striving for advances in methods and technology, (3) using advanced statistics to address collinearity and co-occurrence among aggression subtypes, (4) exploring the role of other forms of aggression, and (5) embracing replication. Finally, we provide some concluding thoughts.
7

United States. National Aeronautics and Space Administration., ed. An experimental and numerical investigation of turbulent vortex breakdown and aircraft wakes: Final report, contract NAG 1-1775. [Washington, DC]: National Aeronautics and Space Administration, 1996.

Find full text
8

M, Sindir Munir, and United States. National Aeronautics and Space Administration., eds. Comparative study of advanced turbulence models for turbomachinery: Contract NAS8-38860, final report. [Washington, DC]: National Aeronautics and Space Administration, 1996.

Find full text
9

Development of an effective method of detecting and identifying foreign odors in grain samples: Final report : Volume 1. 1986.

Find full text
10

P, Chen C., Ziebarth J. P, and University of Alabama. Dept. of Computer Science., eds. A combined Eulerian-volume of fraction-Lagrangian method for atomization simulation: Final report, contract no: NAS 8-38609 D.O. 56. Huntsville, Ala: University of Alabama, 1994.

Find full text

Book chapters on the topic "Final volumes method"

1

Machado, Cristian Rivera, and Hiroshan Hettiarachchi. "Composting as a Municipal Solid Waste Management Strategy: Lessons Learned from Cajicá, Colombia." In Organic Waste Composting through Nexus Thinking, 17–38. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-36283-6_2.

Full text
Abstract:
Municipal solid waste (MSW) generated in developing countries usually contains a high percentage of organic material. When not properly managed, organic waste is known for creating many environmental issues. Greenhouse gas (GHG) emissions, soil and water contamination, and air pollution are a few examples. On the other hand, proper and sustainable management of organic waste can not only bring economic gains but also reduce the waste volume that is sent for final disposal. Composting is one such recovery method, in which the end product – compost – eventually helps the agricultural industry, and other sectors, making the process an excellent example of nexus thinking in integrated management of environmental resources. The aim of this chapter is to discuss how Cajicá, a small city in Colombia, approached this issue in a methodical way to eventually become one of the leading organic waste composting examples in the whole world, as recognised by the United Nations Environment Programme in 2017. Cajicá launched a source separation and composting initiative called Green Containers Program (GCP) in 2008, based on a successful pilot project conducted in 2005. The organic waste separated at source and collected from households, commercial entities, schools, and universities is brought to a privately operated composting plant chosen by the city to produce compost. The compost plant sells compost to the agricultural sector. The participants in the GCP could also receive a bag of compost every 2 months as a token of appreciation. The Cajicá case presents us with many lessons of good practice, not only in the sustainable management of waste but also in stakeholder engagement. It specifically shows how stakeholders should be brought together for long-lasting collaboration and the benefits to society.
Finding the correct business model for the project, efforts made in educating the future generation, and technology adaptation to local conditions are also seen as positive experiences that others can learn from in the case of Cajicá’s GCP. Some of the concerns and potential threats observed include the high dependency GCP has on two institutions: the programme financially depends completely on the municipality, and the composting operation depends completely on one private facility. GCP will benefit from having contingency plans to reduce the risk of having these high dependencies.
2

Holloway, Ralph L., and Michael S. Yuan. "Endocranial Morphology of A.L. 444-2." In The Skull of Australopithecus afarensis. Oxford University Press, 2004. http://dx.doi.org/10.1093/oso/9780195157062.003.0007.

Full text
Abstract:
The original endocast of A.L. 444-2 consisted of a single plastic cast, colored to show the original fragments (light brown) and the reconstructed missing parts (black). This we label the Rak-Kimbel endocast, which was based on the reconstruction of cranial and facial fragments. Because distortion was severe enough to interfere with morphological description and measurements, and especially the assessment of endocranial capacity, a plaster endocast was received from Yoel Rak in 1998 for purposes of modification. This newer plaster endocast formed the basis for the original endocast reconstruction done by R.L.H., who based the reconstruction on the less distorted side (left) and then doubled its water-displaced volume to achieve the final endocranial volume. As will become clear in our descriptions, this first method required several additions and subtractions to compensate for missing portions, for flash lines left from the casting process, and for distortion remaining in the reconstruction. We concluded that a more accurate reconstruction would result if the portions of the original endocast were separated from reconstructed elements and approximated on a plasticene “core” so that distortion could be effectively eliminated. The second method, which was accomplished mostly by M.S.Y. with minimal guidance from R.L.H., permitted a range of possible reconstructions of the actual brain endocast pieces and provided a range of endocast volumes. This reconstruction methodology, referred to as the “dissection method,” eliminated most of the distortion and obviated the need to correct for flash lines. Although both methods provide a final endocranial capacity very close to what must have been the actual living brain volume of A.L. 444-2, we consider the dissection method to be the more accurate one. Distortion of the endocranial cast mirrors that of the cranium. 
While the right parietotemporal area appears to be depressed, the left parietotemporal area shows signs of bulging in compensation. In addition, due to a gap that runs anteroposteriorly along the left temporal lobe, there is an artificial increase in the distance between the base of the endocast and its apex of about 3–8 mm on the left side.
APA, Harvard, Vancouver, ISO, and other styles
3

Gavrilova, M. L. "Adaptive Algorithms for Intelligent Geometric Computing." In Machine Learning, 97–104. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-60960-818-7.ch109.

Full text
Abstract:
This chapter spans topics from such important areas as Artificial Intelligence, Computational Geometry, and Biometric Technologies. The primary focus is on the proposed Adaptive Computation Paradigm and its applications to surface modeling and biometric processing. The availability of much more affordable storage and high-resolution image-capture devices has contributed significantly over the past few years to the accumulation of very large datasets of collected data (such as GIS maps, biometric samples, videos, etc.). On the other hand, it has also created significant challenges, driven by the higher-than-ever volume and complexity of the data, that can no longer be resolved through acquisition of more memory, faster processors, or optimization of existing algorithms. These developments justified the need for radically new concepts for massive data storage, processing, and visualization. To address this need, the current chapter presents an original methodology based on the paradigm of Adaptive Geometric Computing. The methodology enables storing complex data in a compact form, providing efficient access to it, preserving a high level of detail, and visualizing dynamic changes in a smooth and continuous manner. The first part of the chapter discusses adaptive algorithms in real-time visualization, specifically in GIS (Geographic Information Systems) applications. Data structures such as the Real-time Optimally Adapting Mesh (ROAM) and Progressive Mesh (PM) are briefly surveyed. The adaptive method Adaptive Spatial Memory (ASM), developed by R. Apu and M. Gavrilova, is then introduced. This method allows fast and efficient visualization of complex data sets representing terrains, landscapes, and Digital Elevation Models (DEM). Its advantages are briefly discussed. The second part of the chapter presents the application of the adaptive computation paradigm and evolutionary computing to missile simulation. As a result, patterns of complex behavior can be developed and analyzed. 
The final part of the chapter marries the concept of adaptive computation with topology-based techniques and discusses their application to the challenging area of biometric computing.
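The adaptive refinement idea surveyed in this chapter (ROAM, PM, ASM) can be illustrated with a minimal quadtree sketch: a cell is subdivided only where a caller-supplied error metric exceeds a tolerance, so flat regions of a terrain stay coarse while detailed regions refine. This is a sketch of the general principle, not the ASM algorithm itself; `error_fn`, `tol`, and `min_size` are hypothetical parameters.

```python
def refine(x, y, size, error_fn, tol, min_size=1):
    """Return the leaf cells (x, y, size) of an adaptively refined quadtree.

    A cell stays a leaf when its error metric is within tolerance or it has
    reached the minimum size; otherwise it splits into four children.
    """
    if size <= min_size or error_fn(x, y, size) <= tol:
        return [(x, y, size)]
    half = size / 2
    leaves = []
    for dx in (0, half):
        for dy in (0, half):
            leaves.extend(refine(x + dx, y + dy, half, error_fn, tol, min_size))
    return leaves
```

A flat region (zero error) stays a single cell, while a rough region refines until the error metric falls within tolerance — the same storage/detail trade-off the chapter attributes to adaptive geometric computing.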
APA, Harvard, Vancouver, ISO, and other styles
4

Myers, Alicia D. "Introduction." In An Introduction to the Gospels and Acts, 1–15. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780190926809.003.0001.

Full text
Abstract:
This chapter introduces the method and key terms and provides initial background. After introducing the canonical Gospels and Acts, this chapter gives an overview of the book’s methods and assumptions. This book uses a synchronic and ancient audience-oriented approach but incorporates a significant amount of historical background into the conversation. Next, the chapter defines the literary genres of the canonical Gospels as ancient biographies, and Acts as an ancient historiography, noting the questions of historicity these writings also raise. The final major section of the introduction investigates possible literary relationships between the canonical writings, incorporating recent scholarship in an accessible manner. The chapter ends with a brief overview of the flow of the rest of the volume.
APA, Harvard, Vancouver, ISO, and other styles
5

Giles, Bretton T., and Shawn P. Lambert. "Introduction." In New Methods and Theories for Analyzing Mississippian Imagery, 1–32. University Press of Florida, 2021. http://dx.doi.org/10.5744/florida/9781683402121.003.0001.

Full text
Abstract:
Our introduction provides the background for the subsequent case studies in this edited volume by reviewing the historical development and significance of stylistic and iconographic approaches to Mississippian imagery. It also explores the three topics that organize the larger volume: (1) the use of style in Mississippian iconographic studies, (2) interpreting Mississippian imagery, and (3) situating and historicizing Mississippian symbols. The first section considers how style has been conceptualized in archaeological studies, and the way it has been employed to develop and infer regional styles in Mississippian imagery. The second section discusses how iconographic analyses can be employed to assess the referents of Mississippian imagery, as well as how the (ethno)historic narratives, beliefs, and traditions of Native American peoples can be used to infer their significance. The third section examines the way that historical approaches and assessments of Mississippian symbols’ depositional contexts can enhance archaeological perspectives. In the final portion of this chapter, we discuss the subsequent chapters and how they advance these topics through a series of well-contextualized case studies.
APA, Harvard, Vancouver, ISO, and other styles
6

Ottersen, Trygve, Joseph Millum, Jennifer Prah Ruger, Stéphane Verguet, Kjell Arne Johansson, Ezekiel J. Emanuel, Dean T. Jamison, and Ole F. Norheim. "The Future of Priority-Setting in Global Health." In Global Health Priority-Setting, 317–26. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780190912765.003.0018.

Full text
Abstract:
This book has sought to inform efforts to improve systematic, evidence-based priority-setting by assessing the state-of-the-art of methods for priority-setting, engaging with the fundamental normative issues at stake, and providing specific recommendations for improving current practice. This final chapter, written by the eight editors of this volume, provides seven key recommendations for future priority-setting in global health: (1) A more systematic approach to priority-setting in health is needed; (2) Information on cost-effectiveness is essential; (3) Distributional impact needs to be integrated; (4) Stillbirths need to be integrated; (5) Non-health effects need to be integrated; (6) Process needs to be emphasized alongside substantive criteria; and (7) New methods and tools need to be used and further developed.
APA, Harvard, Vancouver, ISO, and other styles
7

Nagaraj, Nagendra, and Chandra J. "Sentence Classification using Machine Learning with Term Frequency–Inverse Document Frequency with N-Gram." In New Frontiers in Communication and Intelligent Systems, 337–46. Soft Computing Research Society, 2021. http://dx.doi.org/10.52458/978-81-95502-00-4-35.

Full text
Abstract:
Automatic text classification has proven to be a vital method for managing and processing the very large volume of digital text that is spreading and growing on a daily basis. In general, text plays an important role in classifying, extracting, and summarizing information, searching for text, and answering questions. This paper demonstrates how machine learning techniques are used for the text classification process. With the rapid growth of text analysis in all areas, the demand for automatic text classification has increased day by day, and text classification has been the subject of much recent research and development in natural language processing. This paper presents a text classification technique using term frequency-inverse document frequency (TF-IDF) with N-grams, and compares the performance of different models. The recommended model is evaluated with four different algorithms, and the results generated by the algorithms are compared. The linear support vector machine is the most relevant to this work with our proposed model. The final results show a significant improvement in accuracy compared with earlier methods.
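As a rough sketch of the kind of pipeline this abstract describes, TF-IDF weights over word unigrams and bigrams can be built in plain Python and paired with a simple cosine-similarity classifier. This does not reproduce the paper's models (including its linear SVM); the toy corpus, labels, and smoothed-idf formula are illustrative assumptions.

```python
import math
from collections import Counter

def ngrams(text):
    """Word unigrams and bigrams of a lowercased text."""
    t = text.lower().split()
    return t + [t[i] + " " + t[i + 1] for i in range(len(t) - 1)]

def fit_idf(docs):
    """Smoothed inverse document frequency for every n-gram in the corpus."""
    n = len(docs)
    df = Counter(g for d in docs for g in set(ngrams(d)))
    return {g: math.log((1 + n) / (1 + c)) + 1.0 for g, c in df.items()}

def vectorize(text, idf):
    """Sparse TF-IDF vector; n-grams unseen in training are dropped."""
    tf = Counter(ngrams(text))
    return {g: tf[g] * idf[g] for g in tf if g in idf}

def cosine(u, v):
    dot = sum(u[g] * v.get(g, 0.0) for g in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def classify(query, docs, labels, idf):
    """Label of the training document most similar to the query."""
    qv = vectorize(query, idf)
    sims = [cosine(qv, vectorize(d, idf)) for d in docs]
    return labels[max(range(len(docs)), key=sims.__getitem__)]

docs = [
    "the match ended with a draw",
    "the striker scored a late goal",
    "the new phone has a faster processor",
    "the laptop ships with more memory",
]
labels = ["sports", "sports", "tech", "tech"]
idf = fit_idf(docs)
print(classify("a late goal won the match", docs, labels, idf))  # "sports"
```

The bigrams ("a late", "late goal") are what lift the query's similarity to the correct class here — the same reason the paper pairs TF-IDF with N-grams rather than unigrams alone.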
APA, Harvard, Vancouver, ISO, and other styles
8

Manning, Jane. "IGOR STRAVINSKY (1882–1971)The Owl and the Pussy-Cat (1966)." In Vocal Repertoire for the Twenty-First Century, Volume 1, 300–302. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780199391028.003.0084.

Full text
Abstract:
This chapter studies Igor Stravinsky's final work, The Owl and the Pussy-Cat. With typical sly relish, Stravinsky seems to be mocking any pomposity in his admirers, confounding everyone by leaving the stage with a brief, light-hearted coda to his cherished large-scale achievements. This well-loved nonsense verse by Edward Lear was the first poem his wife Vera got to know, and the piece is dedicated to her. This chapter reveals the song's strong connection with both the English and Russian schools of absurdist humour. To add to the fun, Stravinsky even here adheres to a strict twelve-tone system, affectionately lampooning the method he favoured for his last works.
APA, Harvard, Vancouver, ISO, and other styles
9

Church, Allan H., David W. Bracken, John W. Fleenor, and Dale S. Rose. "The Handbook of Strategic 360 Feedback." In Handbook of Strategic 360 Feedback, 517–30. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780190879860.003.0031.

Full text
Abstract:
In the final chapter, personal observations of the handbook editors are given, including key themes, predictions, career highlights, and reflections on editing this volume. As noted in the case of 360, there have been so many permutations of the method over the years that very few implementations will look similar; as a result, there can be confusion about what exactly 360 Feedback is. By focusing on Strategic 360 Feedback as an organizing framework, practitioners can now examine a cohesive set of guidelines to shape the direction of this growing practice. Successfully leveraging 360 Feedback to align talent with strategy requires attention to all facets of the process. One of the themes that emerges from this volume is the potential use of self–other rating agreement as a measure of self-awareness and leader effectiveness. Another emerging theme from this volume is the effect that technology is having on the 360 Feedback process. In a different direction, vendors and practitioners should not claim the benefits of collecting feedback under the label of “360 degree” and then fail to follow the practices that make it effective and appropriate for the advertised use. The bottom line remains that for 360 Feedback to be effective and ethically defensible it must meet the basic criteria presented. Each of the levers of strategic intent, measurement quality, integration, and inclusion must be amped up or modified to achieve the best results in the right contexts and with the fully aligned purpose of the effort in mind.
APA, Harvard, Vancouver, ISO, and other styles
10

Sanguigno, Luigi, Marcello Antonio Lepore, and Angelo Rosario Maligno. "Characterization of Titanium Metal Matrix Composites (Ti-MMC) Made Using Different Manufacturing Routes." In Advances in Transdisciplinary Engineering. IOS Press, 2021. http://dx.doi.org/10.3233/atde210030.

Full text
Abstract:
The mechanical and morphological properties of a unidirectional metal matrix composite (MMC) in titanium alloy reinforced with continuous silicon carbide (SiC) fibres are investigated. The lay-up manufacturing process known as Foil/Fibre (FF) lay-up was compared with the matrix-coated-fibre (CF) method, which promises a better final shape of the reinforcing fibre net. Tensile tests were performed to measure the mechanical performance of the manufactured MMCs both longitudinally and transversely with respect to the direction of the SiC fibres. The elastic behaviour of the investigated MMCs was assumed orthotropic and related to the mechanical properties and spatial distribution of the MMC constituents: SiC fibres and titanium (Ti) matrix. This was achieved using micromechanical modelling based on Finite Element (FE) calculations. FE micromechanical modelling was carried out on the Representative Elementary Volume (REV) of the MMC microstructure, resolved by non-destructive analysis such as X-ray tomography. The analysis highlighted and justified the difference in mechanical performance between composite laminates containing the same amount of SiC reinforcement fibres per unit volume but made following different manufacturing routes. To compute the overall orthotropic behaviour of the MMC laminate, each constituent was treated as an elastic isotropic heterogeneity during the averaging. This simplifying assumption was validated by comparison with experimental data during the mechanical characterization of the investigated MMC composites.
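As a back-of-envelope complement to the FE homogenization described in this abstract, the classical Voigt and Reuss rules of mixtures bound the longitudinal and transverse stiffness of a unidirectional composite. The moduli and fibre volume fraction below are typical handbook-style values for SiC/Ti systems, not the paper's measurements.

```python
def voigt_reuss(e_fiber_gpa, e_matrix_gpa, vf):
    """Rule-of-mixtures bounds on Young's modulus for fibre fraction vf:
    Voigt (iso-strain) along the fibres, Reuss (iso-stress) across them."""
    e_long = vf * e_fiber_gpa + (1.0 - vf) * e_matrix_gpa
    e_trans = 1.0 / (vf / e_fiber_gpa + (1.0 - vf) / e_matrix_gpa)
    return e_long, e_trans

# SiC fibre ~400 GPa, Ti alloy matrix ~110 GPa, 35% fibre volume fraction
e_long, e_trans = voigt_reuss(400.0, 110.0, 0.35)
```

The large gap between the two bounds mirrors the orthotropy the abstract describes: a unidirectional laminate is far stiffer along the fibres than transverse to them, even at equal fibre content.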
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Final volumes method"

1

Al-Meer, S. H., M. A. Amr, A. I. Helal, and A. T. Al-Kinani. "Ultratrace Determination of Strontium-90 in Environmental Soil Samples From Qatar by Collision/Reaction Cell-Inductively Coupled Plasma Mass Spectrometry (CRC-ICP-MS/MS)." In ASME 2013 15th International Conference on Environmental Remediation and Radioactive Waste Management. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/icem2013-96160.

Full text
Abstract:
Because the level of 90Sr in environmental soil samples is very low and its determination by beta counting may take several weeks, we developed a procedure for ultratrace determination of 90Sr using collision/reaction cell-inductively coupled plasma tandem mass spectrometry (CRC-ICP-MS/MS, Agilent 8800). Soil samples were dried at 105 °C and then heated in a furnace at 550 °C to remove any organics present. 500 g of each soil sample was aliquoted into a 2000 ml glass beaker. Each soil sample was soaked in a 2 ppm Sr carrier solution to allow determination of the chemical yield; the solid-to-liquid ratio was 1:1. The soil samples were then dried at 105 °C. Five hundred milliliters of concentrated nitric acid and 250 ml of hydrochloric acid were added to each 500 g soil sample. The samples were digested on a hot plate at 80 °C, with continuous manual mixing to prevent spraying. The leachate solution was separated. The solids were rinsed with 500 ml of deionized water and warmed on a hot plate; this leachate was combined with the previous leachate, filtered, and reduced to a total volume of 500 ml by evaporation. The final leachate was transferred to centrifuge tubes, which were centrifuged at 3,500 rpm for 10 min. The leachate was then transferred to a 1 L beaker and evaporated to dryness on a hot plate. The residue was re-dissolved in 100 ml of 2% HNO3 and reduced by evaporation to 10 ml. The solution was measured directly by CRC-ICP-MS/MS by setting the first quadrupole analyzer to m/z 90 and introducing oxygen gas into the reaction cell to eliminate the isobaric interference from zirconium-90. The method was validated by measurements of standard reference materials and applied to environmental soil samples. The overall time requirement for the measurement of strontium-90 by CRC-ICP-MS/MS is 2 days, significantly shorter than any radioanalytical protocol currently available.
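The stable Sr carrier added above supports a straightforward chemical-yield correction: the measured 90Sr result is divided by the fraction of carrier recovered through the procedure. A minimal sketch with illustrative numbers (the abstract reports no specific yields):

```python
def yield_corrected(measured_sr90, carrier_added_ug, carrier_recovered_ug):
    """Correct a measured 90Sr result for losses during sample preparation,
    using recovery of the stable Sr carrier as the chemical yield."""
    chemical_yield = carrier_recovered_ug / carrier_added_ug  # ideally ~1.0
    return measured_sr90 / chemical_yield
```

For example, if only 80% of the carrier survives leaching and evaporation, a raw result of 8.0 (in any activity or concentration unit) corrects to 10.0.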
APA, Harvard, Vancouver, ISO, and other styles
2

Bergstro¨m, Lena, Maria Lindberg, Anders Lindstro¨m, Bo Wirendal, and Joachim Lorenzen. "Proven Concepts for LLW-Treatment of Large Components for Free-Release and Recycling." In The 11th International Conference on Environmental Remediation and Radioactive Waste Management. ASMEDC, 2007. http://dx.doi.org/10.1115/icem2007-7218.

Full text
Abstract:
This paper describes Studsvik’s technical concept for LLW treatment of large, retired components from nuclear installations in operation or in decommissioning. Many turbines, heat exchangers, and other LLW components have been treated at Studsvik during the last 20 years. This also includes development of techniques and tools, especially our latest experience gained in the pilot project for treatment of one full-size PWR steam generator from the Ringhals NPP, Sweden. The ambition of this pilot project was to minimize the waste volumes for disposal and to maximize material recycling. Another objective, respecting ALARA, was the successful minimization of the dose exposure to the personnel. The treatment concept for large, retired components comprises the whole sequence of preparations, from road and sea transports and the management of the metallic LLW by segmentation, decontamination, and sorting using specially devised tools and a shielded treatment cell, to the decision criteria for recycling of the metals, radiological analyses, and conditioning of the residual waste into final packages suitable for customer-related disposal. For turbine rotors, for example, with their huge number of blades, the crucial step is segmentation; cold segmentation is the preferred method, keeping the focus on minimizing secondary waste volumes. A variety of decontamination techniques using blasting cabinets or blasting tumbling machines also keeps secondary waste production to a minimum. The technical challenge of treating more complicated components like steam generators also begins with segmentation. The first step is separation of the steam dome in order to dock the rest of the steam generator to a specially built treatment cell. Thereafter, decontamination of the tube bundle is performed using a remotely controlled manipulator. 
After decontamination is concluded, the cutting of the tubes as well as of the shell is performed in the same cell with remotely controlled tools. Some sections of the steam dome shell or turbine shafts can be cleared directly for unconditional reuse without melting, after decontamination and a sampling program. Experience shows that the amount of material that can be cleared for unconditional use is between 95% and 97% for conventional metallic scrap. For components like turbines, heat exchangers, or steam generators, the recycling ratio can vary, at about 80–85% of the initial weight.
APA, Harvard, Vancouver, ISO, and other styles
3

Franke, Daniel, Michael Zinn, Shiva Rudraraju, and Frank E. Pfefferkorn. "Influence of Tool Runout on Force Measurement During Internal Void Monitoring for Friction Stir Welding of 6061-T6 Aluminum." In ASME 2021 16th International Manufacturing Science and Engineering Conference. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/msec2021-64320.

Full text
Abstract:
Abstract The goal of this research was to examine how altering the amount of friction stir tool eccentricity, while controlling the amount of slant in the tool shoulder (drivers of oscillatory process forces), affects the generation of process force transients during sub-surface void interaction. The knowledge gained will help improve the accuracy of force-based void monitoring methods that have the potential to reduce the need for post-weld inspection. Process force transients during sub-surface void formation were examined for multiple tools with varying magnitudes of kinematic runout. The eccentric motion of the tool produced oscillations in the process forces at the tool’s rotational frequency that became distorted when features (flats) on the tool probe interacted with voided volumes, generating an amplitude in the force signals at three times the tool rotational frequency (for three-flat tools). A larger tool eccentricity generates a larger amplitude in the force signals at the tool’s rotational frequency, which holds a larger potential to create a distortion during void interaction. It was determined that once a void becomes large enough to produce an interaction that generates an amplitude at the third harmonic larger than 30% of the amplitude at the rotational frequency in a weld with no interaction (amplitude solely at the rotational frequency), the trailing edge of the tool shoulder cannot fully consolidate the void, i.e., it will remain in the final weld. Additionally, once the void exceeds a certain size, the amplitudes of the third harmonics saturate at 70% of the amplitude at the rotational frequency during full consolidation. The interaction between the eccentric probe and the sub-surface void was isolated by ensuring that any geometric imperfection in the shoulder (slant) with respect to the rotational axis was removed. 
The results suggest that geometric imperfections (eccentricity and slant) with respect to the tool’s rotational axis must be known when developing a void monitoring method from force transients of this nature.
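The 30% third-harmonic criterion described above can be sketched as a single-frequency DFT check on the force signal. The synthetic signal, threshold wiring, and function names below are illustrative, not the authors' monitoring code:

```python
import cmath
import math

def amplitude_at(signal, dt, freq_hz):
    """Amplitude of one frequency component of a uniformly sampled signal
    (single-bin DFT; exact when the record spans whole periods of freq_hz)."""
    n = len(signal)
    s = sum(signal[k] * cmath.exp(-2j * math.pi * freq_hz * k * dt)
            for k in range(n))
    return 2.0 * abs(s) / n

def void_flagged(force, dt, f_rot_hz, baseline_amp, ratio=0.30):
    """Flag an unconsolidated void when the amplitude at 3x the rotational
    frequency exceeds `ratio` of the no-interaction baseline amplitude."""
    return amplitude_at(force, dt, 3.0 * f_rot_hz) > ratio * baseline_amp
```

For a 10 Hz tool rotation sampled at 1 kHz, a force trace carrying a third harmonic at half the fundamental amplitude would be flagged, while a pure fundamental would not.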
APA, Harvard, Vancouver, ISO, and other styles
4

Lu, Yumin, Marc Madou, William Crumly, and Eric Jensen. "Non-Silicon Mass Production of Micromachined Biosensors." In ASME 2000 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2000. http://dx.doi.org/10.1115/imece2000-1159.

Full text
Abstract:
Abstract Sophisticated integrated circuit processing of silicon materials has been promoted as the optimum methodology for miniaturized mechanical and chemical sensors for over two decades. However, in micro biomedical applications, advantages of using Si are often not as clear as in mechanical sensors. The high cost of Si, difficulties in packaging, and the need for modularity and biocompatibility all encourage investigation of non-silicon materials and semi-continuous processing. Metal/polymer hybrid structures are preferred as the substrate materials in this application. The two manufacturing techniques demonstrated here involve non-silicon materials and a modular methodology. They are based on robust printed circuit board and flexible printed circuit [FPC] high volume fabrication techniques using polyimide base films for millimeter scale devices and photoreactive dry film resists for sub-millimeter devices. The core concept shared between the designs is a structure having sensor sites and their electrical contacts on opposing sides of the substrate. This separation of fluid chemistry on one side and dry electrical contact on the other improves reliability, ultimate packaging simplicity, and ease of use. Chemical sensors fabricated by each of these processes as well as the corresponding sensor performance are presented. The two-sided paradigm improves upon single-sided devices by allowing simpler, smaller, and higher-yield fabrication of multi-purpose sensor arrays. Finished product yield is additionally enhanced by modularity. That is, each sensor type is created on its own sheet, independent of the other sensor types. At final assembly, many sensors of different types can be cut from different sheets and combined into any desired array configuration using contemporary pick and place equipment. A schematic of the two-sided sensor structure is shown in Figure 1. The first approach by Packard-Hughes Interconnect, OSU and Microbionics Inc. 
for creating these structures is shown in Figure 2. In this case a method of fabricating high volume FPC with integrated contacts (Gold Dot Technology) is leveraged to create a biosensor that can be produced in volumes of greater than a quarter million per batch and at a low assembled cost. A second approach explored by OSU, and Microbionics Inc. has been demonstrated to reduce the sensor size to below 100 μm at similar costs. This is accomplished as shown in Figure 3 by defining the sensor geometry in dry photoresist sheets using a simplified two-layer alignment technique that requires exposure from only one side. Selection of the appropriate technique is application dependent. Preliminary results of potassium ion and chloride sensors are presented.
APA, Harvard, Vancouver, ISO, and other styles
5

Kumar, Saket, Hemanta Sarma, and Brij Maini. "Effect of Temperature on Gas/Oil Relative Permeability in Viscous Oil Reservoirs." In SPE Canadian Energy Technology Conference. SPE, 2022. http://dx.doi.org/10.2118/208897-ms.

Full text
Abstract:
Abstract Oil displacement tests were carried out in a 45-cm long sand-pack at temperatures ranging from 64 to 217 °C using a viscous oil (PAO-100), deionized water and nitrogen gas. It was found that the unsteady-state method was susceptible to several experimental artifacts in viscous oil systems due to a very adverse mobility ratio. However, despite such experimental artifacts, a careful analysis of the displacement data led to meaningful two-phase gas/oil relative permeability curves. These curves were used to assess the effect of temperature on gas/oil relative permeability in viscous oil systems. We employed a new systematic algorithm to successfully implement a history-matching scheme to infer the two-phase gas/heavy oil relative permeabilities from the core-flood data. We noted that at the end of the gas flooding, the "final" residual oil saturation still eluded us even after tens of pore volumes of gas injection. This rendered the experimentally determined endpoint gas relative permeability (krge) and Sor unreliable. In contrast, the irreducible water saturation (Swir) and the endpoint oil relative permeability (kroe) were experimentally achievable. A history-matching technique was used to determine the uncertain parameters of the oil/gas relative permeability curves, including the two exponents of the extended Corey equation (No and Ng), Sor and krge. The history match showed that kroe and Swir were experimentally achievable and reliably interpreted. The remaining four parameters (i.e., the two Corey exponents, the true residual oil saturation and the gas endpoint relative permeability) were obtained from history-matched simulations rather than from experiments. Based on our findings, a new correlation has been proposed to model the effect of temperature on two-phase gas/heavy oil relative permeability.
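The Corey-type parameterization the abstract history-matches can be written down directly. The sketch below uses illustrative placeholder values for Swir, Sor, the endpoints, and the exponents, not the paper's matched results:

```python
def corey_gas_oil(Sg, Swir=0.15, Sor=0.25, kroe=0.85, krge=0.40, No=2.5, Ng=1.8):
    """Two-phase gas/oil relative permeabilities at gas saturation Sg
    (extended-Corey form; all parameter defaults are illustrative)."""
    Sg_max = 1.0 - Swir - Sor              # maximum mobile gas saturation
    S = min(max(Sg / Sg_max, 0.0), 1.0)    # normalized gas saturation
    kro = kroe * (1.0 - S) ** No           # oil curve: equals kroe at Sg = 0
    krg = krge * S ** Ng                   # gas curve: equals krge at Sg_max
    return kro, krg
```

History matching would then adjust No, Ng, Sor, and krge until simulated production reproduces the core-flood data, which is how the authors recover the parameters the experiment alone could not pin down.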
APA, Harvard, Vancouver, ISO, and other styles
6

Dankowski, Hendrik, Philipp Russell, and Stefan Krüger. "New Insights Into the Flooding Sequence of the Costa Concordia Accident." In ASME 2014 33rd International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/omae2014-23323.

Full text
Abstract:
The tragic accident of the Costa Concordia in January 2012 was one of the most severe large passenger ship accidents in Europe in recent times, followed by tremendous public interest. We present the results of an in-depth technical investigation of the flooding sequence which led to the heeling and grounding of the ship. A fast and explicit numerical flooding simulation method has been developed in recent years to better understand accidents like this one, caused by complex and large-scale flooding events. The flooding simulation is validated with the help of results from model tests and has been successfully applied to the investigation of several other severe ship accidents. It is based on a quasi-static approach in the time domain which evaluates the hydrostatic equilibrium at each time step. The water fluxes through the openings are computed by a hydraulic model based on the Bernoulli equation. Large and partly flooded openings are taken into account, as well as conditional openings like the opening, closing, and breaking of doors. The fluxes are integrated in the time domain by a predictor-corrector integration scheme to obtain the water volumes in each compartment involved in the flooding sequence. Because the accident happened in calm water at moderate wind speeds close to the shore of the island of Giglio, this quasi-static numerical flooding simulation can be applied. The results of the technical investigation of the Costa Concordia accident obtained with the help of the developed method are presented. These results match well with the heel and trim motions observed during the accident and the chain of events which led to the final position of the vessel on the rocks in front of the island of Giglio. The explicit and direct approach of the method leads to a fast computational run-time. 
This makes it possible to study several possible accident scenarios within a short period, for example to investigate the influence of the opening and closing of watertight doors, and to identify the most likely flooding scenario which led to this tragic accident.
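The flux and integration steps the abstract describes can be sketched in a few lines: an orifice-type Bernoulli flux driven by the pressure head across an opening, advanced with a Heun (predictor-corrector) step. The discharge coefficient, geometry, and single-compartment head model below are illustrative assumptions, not the authors' implementation:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def bernoulli_flux(head_m, area_m2, cd=0.6):
    """Signed volume flux (m^3/s) through a small opening,
    Q = cd * A * sqrt(2 g |h|), flowing in the direction of the head."""
    return math.copysign(cd * area_m2 * math.sqrt(2.0 * G * abs(head_m)), head_m)

def step_heun(volume, head_fn, area_m2, dt):
    """One predictor-corrector (Heun) step of dV/dt = Q(h(V))."""
    q1 = bernoulli_flux(head_fn(volume), area_m2)   # predictor slope
    v_pred = volume + dt * q1
    q2 = bernoulli_flux(head_fn(v_pred), area_m2)   # corrector slope
    return volume + 0.5 * dt * (q1 + q2)

# Toy case: one compartment of 10 m^2 floor area flooding through a
# 0.01 m^2 opening from a constant 2 m external water level.
floor_area, h_ext = 10.0, 2.0
volume = 0.0
for _ in range(3000):  # dt = 1 s; volume tends to floor_area * h_ext = 20 m^3
    volume = step_heun(volume, lambda v: h_ext - v / floor_area, 0.01, 1.0)
```

The signed flux lets water flow back out if a compartment overfills, which is what makes the quasi-static scheme settle at hydrostatic equilibrium.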
APA, Harvard, Vancouver, ISO, and other styles
7

Xia, Chunmei, and Jayathi Y. Murthy. "A Finite-Volume Based Time-Splitting Scheme for Computation of Electrodeposition." In ASME 2001 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2001. http://dx.doi.org/10.1115/imece2001/htd-24138.

Full text
Abstract:
Abstract The final step in the LIGA process for manufacturing metallic microstructures is the electroplating of the metal part. The electroplating process can be described by homogeneous dissociation reactions in the electrolyte bath and finite rate surface reactions on the electrode. In this paper we develop a finite volume method for computing electrodeposition in the LIGA process. We employ a time splitting algorithm for coupling the bulk and surface reactions. As an alternative, the species concentration field is solved over the whole domain using a direct method. The two methods are compared to establish the validity and efficiency of the time-splitting algorithm.
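A minimal 1D sketch of the time-splitting idea (not the paper's scheme; the diffusivity, rate constant, and grid are illustrative): each step first advances finite-volume diffusion of the species concentration, then applies a first-order surface-reaction sink in the electrode-adjacent cell.

```python
def split_step(c, dx, dt, D=1e-9, k=1e-5):
    """One Lie-splitting step on cell-averaged concentrations c.

    Step 1: explicit finite-volume diffusion (zero-flux outer boundaries).
    Step 2: first-order surface reaction consuming species in cell 0.
    Stability requires dt <= dx**2 / (2 * D) for the diffusion sub-step.
    """
    n = len(c)
    flux = [0.0] * (n + 1)                     # face fluxes; boundary faces 0
    for i in range(1, n):
        flux[i] = -D * (c[i] - c[i - 1]) / dx  # Fick's law at interior faces
    c = [c[i] - dt / dx * (flux[i + 1] - flux[i]) for i in range(n)]
    c[0] -= dt * k * c[0] / dx                 # surface sink at the electrode
    return c

# Illustrative run: 10 cells of 10 um, uniform initial concentration.
conc = [1.0] * 10
for _ in range(100):
    conc = split_step(conc, 1e-5, 0.01)
```

Splitting keeps the stiff surface chemistry local to the boundary cell while the bulk update stays a plain conservative flux balance — the coupling the paper compares against a direct whole-domain solve.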
APA, Harvard, Vancouver, ISO, and other styles
8

Ferreira, Vitor Hugo de Sousa, and Rosangela B. Z. L. Moreno. "Workflow for Oil Recovery Design by Polymer Flooding." In ASME 2018 37th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/omae2018-78359.

Full text
Abstract:
Polymer flooding dates from the 1960s. Early applications targeted onshore medium-to-heavy oils up to 100 cP, with limited reservoir temperature and formation water salinity. The number of implemented polymer flooding projects has followed oil prices. Since its early days, polymer flooding has overcome many technical obstacles. Advances in polymer manufacturing technology, cost reduction and the use of horizontal wells have pushed polymer flooding as a feasible EOR method. A better understanding of the physical phenomena associated with polymer flow through porous media, along with technological advancement, has extended polymer flooding applications to more viscous oils, higher salinity and temperature levels, as well as to offshore prospects. Meaningful advantages of polymer flooding over conventional methods are consolidated in the literature, such as oil recovery anticipation, incremental oil recovery and reduced volumes of injected and produced water to reach a target recovery factor. Despite all technological advances, polymer flooding needs to be tailored to the specific conditions of the target reservoir. Collecting and integrating laboratory, simulation, and field information is essential for a successful polymer flooding application. This paper aims to correlate critical information across the various stages necessary for polymer flooding evaluation and production forecast. First, successfully implemented field cases allow the establishment of ranges for the method's application. Once the applicability of polymer flooding is certified, the polymer solution to be injected is designed according to the reservoir characteristics and target conditions. Laboratory tests are performed to determine phase mobilities, polymer retention, and polymer degradation. These parameters are assessed through different experiments, and normalized variables provide data integration. Once the required parameters are determined, it is possible to build a base simulation model. 
History matching this base model to the laboratory data certifies its validity. An upsized analysis of this model is required to include some degradation phenomena. The 1D laboratory model is extended to a 3D model that incorporates permo-porosity distributions to analyze well characteristics within their radius of influence. The final step is large-scale simulation and production forecast. Data integration along each stage, and among them all, allows the tailoring of polymer flooding for EOR. The use of normalized parameters to evaluate the results is useful for analysis at different scales, from the laboratory to the reservoir. The proposed workflow can contribute to the design, planning, evaluation, and implementation of polymer flooding in a target field.
APA, Harvard, Vancouver, ISO, and other styles
9

Ladmia, Abdelhak Mohamed, Hamdan A. Alhammadi, Dr Elyes Draoui, Dr Kristian Mogensen, Fahad Mohamed M. Al Hosani, Graham Edmonstone, Ahmed Mohamed Aldhanhani, Faisal Jamaan Ballaith, Ayman Mohamed, and Navindran Juvarajah. "First Successful Smart Liner Deployed in a Side-Tracked Gas Well, Offshore Abu Dhabi." In Abu Dhabi International Petroleum Exhibition & Conference. SPE, 2021. http://dx.doi.org/10.2118/207527-ms.

Full text
Abstract:
This paper presents a summary of the first deployment of a Smart Liner (SL), equivalent to a Limited Entry Liner (LEL), as the lower completion in a sidetracked gas well offshore Abu Dhabi. The R-1 reservoir is subdivided into several sub-layers, and its properties are characterized by low porosity and low permeability (tight). Reservoir quality is better in the upper part than in the lower part in terms of porosity and permeability; gas production comes mainly from the top of the R-1 reservoir, with no contribution from the lower part. In 2017, data gathering (coring, logging, and pressure points) was conducted on well A-1. Offset wells are restricted from optimal production by well-integrity issues (sustained annulus pressure); to compensate for these restricted, aged wells, gas production can be increased up to three times using the SL as a stimulation method. The Smart Liner was selected as the lower completion and as a stimulation method for better flow distribution, improved well performance, and effective acid stimulation. It also ensures hole accessibility, allows aggressive bullhead stimulation at high rate and pressure with high acid concentration in less time (about 1.5 days per job), and eliminates the high-risk, high-cost coiled tubing (CT) intervention for stimulation. The first step was to design the SL completion workflow: a representative well trajectory for the selected well is fed in, reservoir properties are extracted from the dynamic model, and a representative stimulation model is created using proprietary numerical software, covering all relevant scenarios: an open hole that represents the PPL, and the suggested SL compartmentalization and hole distribution based on reservoir parameters along the lateral.
Once the well model is created, different completion designs are run against different acid concentrations and volumes until the optimum result is achieved from a stimulation point of view, within formation and facility limitations. Drilling operations were very challenging; fortunately, the SL was successfully deployed after a final adjustment based on FMI natural-fracture data. The Smart Liner has proved to be a cost-effective stimulation solution for gas wells compared with advanced stimulation methods; in addition, eliminating the high-risk, high-cost coiled tubing (CT) intervention for stimulation yields substantial savings in well construction while maximizing performance.
APA, Harvard, Vancouver, ISO, and other styles
10

Zhou, Yuan, Jingtan Chen, Qingyu Huang, and Houjun Gong. "Study on Final Depth Under Hydraulic Coolant Penetration Condition." In 2017 25th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/icone25-66267.

Full text
Abstract:
Coolant injection mode of molten fuel-coolant interaction (FCI) is a key issue during a steam generator tube rupture (SGTR) accident in liquid metal reactors. In the present study, the hydraulic breakup behavior of the coolant injection mode is investigated using experimental and numerical approaches. Visual experiments are conducted using low-density gasoline as the coolant jet and high-density water as the denser liquid. The gasoline jet is released into a transparent water tank through a nozzle. The jet breakup behaviors are captured by a DSLR (digital single-lens reflex) camera. Images of the jet behavior and data on gasoline jet penetration depth are obtained and analyzed. By changing the nozzle diameter and nozzle height, the effects of jet diameter and jet inlet velocity on the final penetration depth are studied. Based on FLUENT 15.0, the hydraulic breakup behaviors of the gasoline jet are simulated. A 3D axisymmetric model is built and the Volume of Fluid (VOF) method is used. The numerical simulation results agree with the experimental results quantitatively and qualitatively. These experimental images and data help substantiate the understanding of coolant jet breakup behavior and the pattern of jet penetration depth.
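Parameter studies like the one above (jet diameter and inlet velocity versus final penetration depth) are often summarized with a densimetric Froude-number scaling. The sketch below shows one plausible form of such a correlation, not the paper's result; the constant C and all fluid properties are illustrative assumptions:

```python
import math

def penetration_depth(v, d, rho_jet, rho_amb, c=2.0):
    """Estimate the final penetration depth of a buoyant jet released into a
    denser liquid from a Froude-number scaling  L / d = C * Fr, where
    Fr = v / sqrt(g' * d)  and  g' = g * (rho_amb - rho_jet) / rho_amb.
    C is an empirical constant fitted to experiments (hypothetical here)."""
    g_reduced = 9.81 * (rho_amb - rho_jet) / rho_amb
    fr = v / math.sqrt(g_reduced * d)
    return c * fr * d

# Gasoline jet (~700 kg/m^3) into water (1000 kg/m^3), 5 mm nozzle, 2 m/s
depth = penetration_depth(v=2.0, d=0.005, rho_jet=700.0, rho_amb=1000.0)
```

In this form the depth grows linearly with inlet velocity and with the square root of the nozzle diameter, which is the kind of parameter dependence the experiments above were designed to quantify.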
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Final volumes method"

1

Delwiche, Michael, Boaz Zion, Robert BonDurant, Judith Rishpon, Ephraim Maltz, and Miriam Rosenberg. Biosensors for On-Line Measurement of Reproductive Hormones and Milk Proteins to Improve Dairy Herd Management. United States Department of Agriculture, February 2001. http://dx.doi.org/10.32747/2001.7573998.bard.

Full text
Abstract:
The original objectives of this research project were to: (1) develop immunoassays, photometric sensors, and electrochemical sensors for real-time measurement of progesterone and estradiol in milk, (2) develop biosensors for measurement of caseins in milk, and (3) integrate and adapt these sensor technologies to create an automated electronic sensing system for operation in dairy parlors during milking. The overall direction of research was not changed, although the work was expanded to include other milk components such as urea and lactose. A second-generation biosensor for on-line measurement of bovine progesterone was designed and tested. Anti-progesterone antibody was coated on small disks of nitrocellulose membrane, which were inserted in the reaction chamber prior to testing, and a real-time assay was developed. The biosensor was designed using micropumps and valves under computer control, and assayed fluid volumes on the order of 1 ml. An automated sampler was designed to draw a test volume of milk from the long milk tube using a 4-way pinch valve. The system could execute a measurement cycle in about 10 min. Progesterone could be measured at concentrations low enough to distinguish luteal-phase from follicular-phase cows. The potential of the sensor to detect actual ovulatory events was compared with standard methods of estrus detection, including human observation and an activity monitor. The biosensor correctly identified all ovulatory events during its test period, but the variability at low progesterone concentrations triggered some false positives. Direct on-line measurement and intelligent interpretation of reproductive hormone profiles offers the potential for substantial improvement in reproductive management. A simple potentiometric method for measurement of milk protein was developed and tested. The method was based on the fact that proteins bind iodine.
When proteins are added to a solution of the redox couple iodine/iodide (I-I2), the concentration of free iodine is changed and, as a consequence, the potential between two electrodes immersed in the solution is changed. The method worked well with analytical casein solutions and accurately measured concentrations of analytical caseins added to fresh milk. When tested with actual milk samples, the correlation between the sensor readings and the reference lab results (of both total proteins and casein content) was inferior to that of analytical casein. A number of different technologies were explored for the analysis of milk urea, and a manometric technique was selected for the final design. In the new sensor, urea in the sample was hydrolyzed to ammonium and carbonate by the enzyme urease, and subsequent shaking of the sample with citric acid in a sealed cell allowed urea to be estimated as a change in partial pressure of carbon dioxide. The pressure change in the cell was measured with a miniature piezoresistive pressure sensor, and effects of background dissolved gases and vapor pressures were corrected for by repeating the measurement of pressure developed in the sample without the addition of urease. Results were accurate in the physiological range of milk, the assay was faster than the typical milking period, and no toxic reagents were required. A sampling device was designed and built to passively draw milk from the long milk tube in the parlor. An electrochemical sensor for lactose was developed starting with a three-cascaded-enzyme sensor, evolving into two enzymes and CO2[Fe (CN)6] as a mediator, and then into a microflow injection system using poly-osmium modified screen-printed electrodes. The sensor was designed to serve multiple milking positions, using a manifold valve, a sampling valve, and two pumps. Disposable screen-printed electrodes with enzymatic membranes were used. 
The sensor was optimized for electrode coating components, flow rate, pH, and sample size, and the results correlated well (r2 = 0.967) with known lactose concentrations.
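The manometric urea assay described above lends itself to a back-of-the-envelope check with the ideal gas law. The sketch below uses illustrative numbers, not the report's calibration, and ignores CO2 dissolution and the blank (no-urease) correction the report applies:

```python
R = 8.314  # universal gas constant, J/(mol*K)

def urea_conc_mmol_per_l(dp_pa, headspace_m3, sample_ml, temp_k=298.15):
    """Back out urea concentration from the CO2 pressure rise in a sealed
    cell, assuming ideal-gas behaviour, complete enzymatic hydrolysis, and
    one mole of CO2 released per mole of urea. A real measurement would
    subtract the pressure developed without urease, as the report does."""
    n_co2 = dp_pa * headspace_m3 / (R * temp_k)   # moles of CO2, n = pV/RT
    return n_co2 / (sample_ml / 1000.0) * 1000.0  # mmol per litre of milk

# 500 Pa rise in a 10 mL headspace from a 1 mL milk sample
conc = urea_conc_mmol_per_l(dp_pa=500.0, headspace_m3=10e-6, sample_ml=1.0)
```

These illustrative numbers give about 2 mmol/L, inside the physiological range of milk urea, which is consistent with the report's claim that a small sealed cell and a miniature pressure sensor suffice for the assay.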
APA, Harvard, Vancouver, ISO, and other styles
2

Baral, Aniruddha, Jeffery Roesler, and Junryu Fu. Early-age Properties of High-volume Fly Ash Concrete Mixes for Pavement: Volume 2. Illinois Center for Transportation, September 2021. http://dx.doi.org/10.36501/0197-9191/21-031.

Full text
Abstract:
High-volume fly ash concrete (HVFAC) is more cost-efficient, sustainable, and durable than conventional concrete. This report presents a state-of-the-art review of HVFAC properties and different fly ash characterization methods. The main challenges identified for HVFAC for pavements are its early-age properties such as air entrainment, setting time, and strength gain, which are the focus of this research. Five fly ash sources in Illinois have been repeatedly characterized through x-ray diffraction, x-ray fluorescence, and laser diffraction over time. The fly ash oxide compositions from the same source but different quarterly samples were overall consistent with most variations observed in SO3 and MgO content. The minerals present in various fly ash sources were similar over multiple quarters, with the mineral content varying. The types of carbon present in the fly ash were also characterized through x-ray photoelectron spectroscopy, loss on ignition, and foam index tests. A new computer vision–based digital foam index test was developed to automatically capture and quantify a video of the foam layer for better operator and laboratory reliability. The heat of hydration and setting times of HVFAC mixes for different cement and fly ash sources as well as chemical admixtures were investigated using an isothermal calorimeter. Class C HVFAC mixes had a higher sulfate imbalance than Class F mixes. The addition of chemical admixtures (both PCE- and lignosulfonate-based) delayed the hydration, with the delay higher for the PCE-based admixture. Both micro- and nano-limestone replacement were successful in accelerating the setting times, with nano-limestone being more effective than micro-limestone. A field test section constructed of HVFAC showed the feasibility and importance of using the noncontact ultrasound device to measure the final setting time as well as determine the saw-cutting time. 
Moreover, field implementation of the maturity method based on wireless thermal sensors demonstrated its viability for early opening strength, and only a few sensors with pavement depth are needed to estimate the field maturity.
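The maturity method mentioned above is commonly implemented with the Nurse-Saul index. The sketch below shows the mechanics with illustrative temperature readings; the datum temperature and the maturity-to-strength calibration curve are assumptions, not values from the report:

```python
def nurse_saul_maturity(temps_c, dt_hours, datum_c=0.0):
    """Nurse-Saul maturity index M = sum((T - T0) * dt), in deg C * hours,
    accumulated from periodic in-place temperature readings such as those
    logged by wireless thermal sensors. Temperatures below the datum
    contribute nothing. Strength is then read off a pre-established
    maturity-to-strength calibration curve for the mix."""
    return sum(max(t - datum_c, 0.0) * dt_hours for t in temps_c)

# Hourly readings (deg C) over the first 6 h after placement
m = nurse_saul_maturity([22, 25, 29, 33, 34, 32], dt_hours=1.0)
print(m)  # 175.0 deg C * h
```

Mapping this index to an early opening strength only requires the calibration curve for the specific HVFAC mix, which is why a few in-pavement sensors are enough in the field.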
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Z. J. Final Report - High-Order Spectral Volume Method for the Navier-Stokes Equations On Unstructured Tetrahedral Grids. Office of Scientific and Technical Information (OSTI), December 2012. http://dx.doi.org/10.2172/1056665.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Baral, Aniruddha, Jeffrey Roesler, M. Ley, Shinhyu Kang, Loren Emerson, Zane Lloyd, Braden Boyd, and Marllon Cook. High-volume Fly Ash Concrete for Pavements Findings: Volume 1. Illinois Center for Transportation, September 2021. http://dx.doi.org/10.36501/0197-9191/21-030.

Full text
Abstract:
High-volume fly ash concrete (HVFAC) has improved durability and sustainability properties at a lower cost than conventional concrete, but its early-age properties like strength gain, setting time, and air entrainment can present challenges for application to concrete pavements. This research report helps with the implementation of HVFAC for pavement applications by providing guidelines for HVFAC mix design, testing protocols, and new tools for better quality control of HVFAC properties. Calorimeter tests were performed to evaluate the effects of fly ash sources, cement–fly ash interactions, chemical admixtures, and limestone replacement on the setting times and hydration reaction of HVFAC. To better target the initial air-entraining agent dosage for HVFAC, a calibration curve between air-entraining dosage for achieving 6% air content and fly ash foam index test has been developed. Further, a digital foam index test was developed to make this test more consistent across different labs and operators. For a more rapid prediction of hardened HVFAC properties, such as compressive strength, resistivity, and diffusion coefficient, an oxide-based particle model was developed. An HVFAC field test section was also constructed to demonstrate the implementation of a noncontact ultrasonic device for determining the final set time and ideal time to initiate saw cutting. Additionally, a maturity method was successfully implemented that estimates the in-place compressive strength of HVFAC through wireless thermal sensors. An HVFAC mix design procedure using the tools developed in this project such as the calorimeter test, foam index test, and particle-based model was proposed to assist engineers in implementing HVFAC pavements.
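A calibration curve like the foam-index-to-AEA-dosage relationship described above can be built with a simple least-squares line. The points below are hypothetical, purely to show the mechanics of fitting and then predicting a dosage for a new fly ash:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = m*x + c through calibration points."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

# Hypothetical calibration points: (foam index, AEA dosage for 6% air)
foam_index = [10.0, 20.0, 30.0, 40.0]
dosage = [1.0, 1.8, 2.6, 3.4]   # oz per 100 lb cementitious (illustrative)
m, c = fit_line(foam_index, dosage)

# Target initial dosage for a new ash with a measured foam index of 25
predicted = m * 25.0 + c
```

In practice the curve would be refit as the digital foam index test accumulates data across sources and quarters, which is exactly the consistency problem that test was developed to address.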
APA, Harvard, Vancouver, ISO, and other styles
5

Kauffman, R. Accelerated screening methods for determining chemical and thermal stability of refrigerant-lubricant mixtures. Part II: Experimental comparison and verification of methods. Final report, volume I. Office of Scientific and Technical Information (OSTI), September 1995. http://dx.doi.org/10.2172/196493.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Saito, Mitsuru, and Kumares Sinha. The Development of Optimal Strategies for Maintenance, Rehabilitation, and Replacement of Highway Bridges: Volume 5 - Priority Ranking Method : Executive Summary and Final Report. West Lafayette, IN: Purdue University, 1989. http://dx.doi.org/10.5703/1288284314171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kuropiatnyk, D. I. Actuality of the problem of parametric identification of a mathematical model. [б. в.], December 2018. http://dx.doi.org/10.31812/123456789/2885.

Full text
Abstract:
The purpose of the article is to study the possibilities of increasing the efficiency of a mathematical model by identifying the parameters of an object. A key factor in parametrization is consideration of the model's values at a specific time point, which allows a deeper analysis of data dependencies and the correlations between them. However, this technique does not always work, because it is impossible to predict in advance whether the parameters can be substantially optimized. In addition, it must be taken into account that minimization reduces the values of the parameters without regard to their real physical properties. The correctness of the final values rests on dynamically selected parameters, which makes it possible to modify the operating conditions of the system in real time. During development, experimentally obtained data are compared with the model, which indicates the accuracy of the minimization. When choosing the most relevant parameters, various minimization functions are used, covering a wide range of theoretical initial situations. The correctness of the solution is verified with a quality function, which assesses the accuracy and correctness of the optimized parameters. Different types of quality functionals can be chosen, depending on the characteristics of the initial data. Having such tools during parametrization allows varied analysis of the model, testing it on various algorithms, data volumes, and conditions for guaranteed convergence of the functional methods.
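As a concrete, deliberately simple illustration of the minimization-based identification the abstract describes, the Python sketch below fits two parameters of a linear model by gradient descent on a least-squares quality function. The model, data, learning rate, and step count are all hypothetical choices, not the article's method:

```python
def quality(params, xs, ys, model):
    """Quality function: sum of squared residuals between model
    predictions and experimental data (smaller is better)."""
    return sum((model(x, params) - y) ** 2 for x, y in zip(xs, ys))

def identify(xs, ys, model, p0, lr=0.01, steps=2000, eps=1e-6):
    """Identify model parameters by minimizing the quality function with
    coordinate-wise gradient descent; gradients are approximated by
    forward finite differences so any model function can be plugged in."""
    p = list(p0)
    for _ in range(steps):
        for i in range(len(p)):
            p_hi = p[:]
            p_hi[i] += eps
            grad = (quality(p_hi, xs, ys, model)
                    - quality(p, xs, ys, model)) / eps
            p[i] -= lr * grad
    return p

# Recover a and b of y = a*x + b from noise-free synthetic data
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]              # generated with a = 2, b = 1
a, b = identify(xs, ys, lambda x, p: p[0] * x + p[1], p0=[0.0, 0.0])
```

Evaluating the quality function at the fitted parameters then plays the verification role the abstract assigns to it: a residual near zero indicates the identified parameters reproduce the data.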
APA, Harvard, Vancouver, ISO, and other styles
8

Prindle, N. H., D. M. Boak, and R. F. Weiner. The second iteration of the Systems Prioritization Method: A systems prioritization and decision-aiding tool for the Waste Isolation Pilot Plant: Volume 3, Analysis for final programmatic recommendations. Office of Scientific and Technical Information (OSTI), May 1996. http://dx.doi.org/10.2172/251433.

Full text
APA, Harvard, Vancouver, ISO, and other styles