Academic literature on the topic 'Cut-point finding'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Cut-point finding.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Cut-point finding"

1. CHEITLIN, MELVIN D., and HASSAN KHAYAM-BASHI. "Biomarkers of Myocardial Infarction: Finding the Right Cut-off Point." Cardiology in Review 9, no. 6 (November 2001): 323–24. http://dx.doi.org/10.1097/00045415-200111000-00007.

2. Rota, Matteo, and Laura Antolini. "Finding the optimal cut-point for Gaussian and Gamma distributed biomarkers." Computational Statistics & Data Analysis 69 (January 2014): 1–14. http://dx.doi.org/10.1016/j.csda.2013.07.015.

3. Unal, Ilker. "Defining an Optimal Cut-Point Value in ROC Analysis: An Alternative Approach." Computational and Mathematical Methods in Medicine 2017 (2017): 1–14. http://dx.doi.org/10.1155/2017/3762651.

Abstract:
ROC curve analysis is often applied to measure the diagnostic accuracy of a biomarker. The analysis yields two gains: the diagnostic accuracy of the biomarker and the optimal cut-point value. Many methods have been proposed in the literature to obtain the optimal cut-point value. In this study, a new approach, an alternative to these methods, is proposed. The proposed approach is based on the value of the area under the ROC curve. It defines the optimal cut-point as the value whose sensitivity and specificity are closest to the value of the area under the ROC curve and whose absolute difference between sensitivity and specificity is minimal. This approach is very practical. In this study, the results of the proposed method are compared with those of the standard approaches, using simulated data with different distribution and homogeneity conditions as well as a real dataset. According to the simulation results, the proposed method is recommended for finding the true cut-point.
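To make the criterion concrete, here is a minimal sketch in plain NumPy; the function name and the way the two conditions are folded into a single cost are our assumptions, not the paper's exact objective.

```python
import numpy as np

def auc_anchored_cutpoint(scores, labels):
    """Pick the threshold whose sensitivity and specificity lie closest
    to the AUC while |sensitivity - specificity| stays small."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # Empirical AUC via the Mann-Whitney statistic.
    auc = (np.mean(pos[:, None] > neg[None, :])
           + 0.5 * np.mean(pos[:, None] == neg[None, :]))
    best_t, best_cost = None, np.inf
    for t in np.unique(scores):
        sens = np.mean(pos >= t)   # true positive fraction at threshold t
        spec = np.mean(neg < t)    # true negative fraction at threshold t
        cost = abs(sens - auc) + abs(spec - auc) + abs(sens - spec)
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, auc
```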
4. Baran, Mehmet, and Sıtkı Sönmezer. "HOW TO GROUP FINANCIAL DATA WITH MAXIMUM HOMOGENEITY?" EMAJ: Emerging Markets Journal 3, no. 1 (February 7, 2013): 13–19. http://dx.doi.org/10.5195/emaj.2013.36.

Abstract:
Grouping may be an obstacle in itself, or it may have to be improved to extract better information out of a data stream. Finding trends and dividing a population into parts may be crucial for analyses. This paper offers a modified version of the Fisher method that may smooth the cut-point transitions and give better results. The methodology is demonstrated through a comparison with the original method. The method may be helpful in forming subgroups in financial data, possibly in technical analyses.

Keywords: Grouping, Fisher Method, Trends, Cut points
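For readers unfamiliar with the baseline being modified, below is an illustrative sketch of Fisher's classical exact grouping (our own code, not the paper's modified variant): a dynamic programme that partitions a sorted 1-D series into k contiguous groups with minimal within-group sum of squares.

```python
import numpy as np

def fisher_groups(x, k):
    """Partition sorted 1-D data into k contiguous groups minimising the
    total within-group sum of squares (Fisher's exact grouping)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)

    def ssq(i, j):                      # within-group sum of squares of x[i:j]
        seg = x[i:j]
        return float(((seg - seg.mean()) ** 2).sum())

    cost = np.full((k + 1, n + 1), np.inf)
    split = np.zeros((k + 1, n + 1), dtype=int)
    cost[0, 0] = 0.0
    for g in range(1, k + 1):
        for j in range(g, n + 1):
            for i in range(g - 1, j):
                c = cost[g - 1, i] + ssq(i, j)
                if c < cost[g, j]:
                    cost[g, j], split[g, j] = c, i
    cuts, j = [], n                     # backtrack the optimal cut points
    for g in range(k, 0, -1):
        j = split[g, j]
        cuts.append(j)
    return sorted(cuts[:-1])            # indices where groups 2..k begin
```

With k = 3, for instance, the two returned indices mark where the second and third groups start in the sorted series.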
5. Ural, S., and J. Shan. "MIN-CUT BASED SEMANTIC BUILDING LABELING FOR AIRBORNE LIDAR DATA." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2020 (August 3, 2020): 305–12. http://dx.doi.org/10.5194/isprs-annals-v-2-2020-305-2020.

Abstract:
Classification and segmentation of buildings from airborne lidar point clouds commonly involve point features calculated within a local neighborhood. The relative change of the features in the immediate surrounding of each point, as well as the spatial relationships between neighboring points, also needs to be examined to account for spatial coherence. In this study we formulate the point labeling problem under a global graph-cut optimization solution. We construct the energy function through a graph representing a Markov Random Field (MRF). The solution to the labeling problem is obtained by finding the minimum cut on this graph. We have employed this framework for three different labeling tasks on airborne lidar point clouds: ground filtering, building classification, and roof-plane segmentation. As a follow-up to our previous ground-filtering work, this paper examines our building extraction approach on two airborne lidar datasets with different point densities, containing approximately 930K points in one dataset and 750K points in the other. Test results for building vs. non-building point labeling show a 97.9% overall accuracy with a kappa value of 0.91 for the dataset with 1.18 pts/m2 average point density, and a 96.8% accuracy with a kappa value of 0.90 for the dataset with 8.83 pts/m2 average point density. We achieve 91.2% overall average accuracy in roof-plane segmentation with respect to the reference segmentation of 20 building roofs involving 74 individual roof planes. In summary, the presented framework can successfully label points in airborne lidar point clouds with different characteristics for all three labeling problems we have introduced. It is robust to noise in the calculated features due to the use of global optimization. Furthermore, the framework achieves these results with a small training sample size.
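As a toy illustration of this kind of s-t min-cut labelling (generic binary MRF machinery with a Potts-style smoothness term; the graph construction below is textbook material, not the authors' implementation):

```python
import networkx as nx

def binary_mincut_labels(unary, edges, smoothness):
    """unary[i] = (cost of label 0, cost of label 1) for point i;
    edges = pairs of neighbouring points; smoothness = penalty paid
    whenever two neighbours receive different labels."""
    g = nx.DiGraph()
    for i, (c0, c1) in enumerate(unary):
        g.add_edge("s", i, capacity=c1)  # severed (pay c1) if i lands on the sink side, label 1
        g.add_edge(i, "t", capacity=c0)  # severed (pay c0) if i stays on the source side, label 0
    for i, j in edges:
        g.add_edge(i, j, capacity=smoothness)
        g.add_edge(j, i, capacity=smoothness)
    _, (source_side, _) = nx.minimum_cut(g, "s", "t")
    return [0 if i in source_side else 1 for i in range(len(unary))]

# e.g. binary_mincut_labels([(0.2, 1.0), (0.9, 0.3)], [(0, 1)], 0.5) -> [0, 1]
```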
6. Rao, Rahul. "Cognitive impairment in older people with alcohol use disorders in a UK community mental health service." Advances in Dual Diagnosis 9, no. 4 (November 21, 2016): 154–58. http://dx.doi.org/10.1108/add-06-2016-0014.

Abstract:
Purpose: The assessment of cognitive impairment in community services for older people remains under-explored; this paper aims to discuss the issue. Design/methodology/approach: Cognitive impairment was examined in 25 people aged 65 and over with alcohol use disorders, on the caseload of community mental health services over a six-month period. All subjects were assessed using Addenbrooke's Cognitive Examination (ACE-III). Findings: In total, 76 per cent of the group scored below the cut-off point for likely dementia, but only 45 per cent scored below the cut-off point for tests of language, compared with 68-84 per cent in the other domains. Research limitations/implications: This finding has implications for the detection of alcohol-related cognitive impairment in clinical settings. Practical implications: Standardised cognitive testing is common within mental health services for older people, but may also have utility within addiction services. Social implications: The early detection of alcohol-related cognitive impairment can improve social outcomes in both drinking behaviour and the social consequences of alcohol-related dementia. Originality/value: This may be the first published study of cognitive impairment in patients under a mental health team for older people with alcohol use disorders, and it offers some unique findings within this sampling frame.
7. Camarda, Massimo, Antonino La Magna, Patrick Fiorenza, Gaetano Izzo, and Francesco La Via. "Theoretical Monte Carlo Study of the Formation and Evolution of Defects in the Homoepitaxial Growth of SiC." Materials Science Forum 600-603 (September 2008): 135–38. http://dx.doi.org/10.4028/www.scientific.net/msf.600-603.135.

Abstract:
A novel Monte Carlo kinetic model has been developed and implemented to predict growth rate regimes and defect formation for the homo-epitaxial growth of various SiC polytypes on different substrates. Using this model, we have studied the generation of both point-like and extended defects in terms of the growth rate and off-cut angle, finding qualitative agreement with both electrical and optical characterization and with analytical results.
8. Polewski, P., W. Yao, M. Heurich, P. Krzystek, and U. Stilla. "Detection of fallen trees in ALS point clouds by learning the Normalized Cut similarity function from simulated samples." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-3 (August 7, 2014): 111–18. http://dx.doi.org/10.5194/isprsannals-ii-3-111-2014.

Abstract:
Fallen trees participate in several important forest processes, which motivates the need for information about their spatial distribution in forest ecosystems. Several studies have shown that airborne LiDAR is a valuable tool for obtaining such information. In this paper, we propose an integrated method of detecting fallen trees from ALS point clouds based on merging small segments into entire fallen stems via the Normalized Cut algorithm. A new approach to specifying the segment similarity function for the clustering algorithm is introduced, where the attribute weights are learned from labeled data instead of being determined manually. We note the relationship between Normalized Cut's similarity function and a class of regression models, which leads us to the idea of approximating the task of learning the similarity function with the simpler task of learning a classifier. Moreover, we set up a virtual fallen-tree generation scheme to simulate complex forest scenarios with multiple overlapping fallen stems. The classifier trained on this simulated data yields a similarity function for Normalized Cut. Tests on two sample plots from the Bavarian Forest National Park with manually labeled reference data show that the trained function leads to high-quality segmentations. Our results indicate that the proposed data-driven approach can be a successful alternative to time-consuming trial-and-error or grid-search methods of finding good feature weights for graph cut algorithms. Also, the methodology can be generalized to other applications of graph cut clustering in remote sensing.
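A bare-bones sketch of the core idea, learning pairwise affinities with a classifier and reusing them as Normalized Cut edge weights; the feature set, the logistic model, and the random training data are stand-ins, not the paper's setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
pair_features = rng.random((500, 4))   # per-pair features, e.g. gap, angle... (invented)
same_stem = rng.integers(0, 2, 500)    # 1 if a simulated segment pair shares a stem

clf = LogisticRegression().fit(pair_features, same_stem)

def affinity(f_ij):
    """Edge weight W_ij for Normalized Cut: the classifier's probability
    that segments i and j belong to the same fallen stem."""
    return clf.predict_proba(np.atleast_2d(f_ij))[0, 1]
```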
9. Mustafa, Faisal, and Roderick Julian Robillos. "PROPER SAMPLE SIZES FOR ENGLISH LANGUAGE TESTING: A SIMPLE STATISTICAL ANALYSIS." Humanities & Social Sciences Reviews 8, no. 4 (August 13, 2020): 442–52. http://dx.doi.org/10.18510/hssr.2020.8443.

Abstract:
Purpose of study: Small sample size is the most common limitation restricting the generalization of research results, and this is true in many fields, including language testing. The current study sought to show the predictive power of sample sizes over the population mean, to decide what minimum size can be considered a proper sample size for a language test. Methodology: The data for this quantitative research were 5,250 paper-based TOEFL test scores, treated as the population, covering the listening, structure, and reading tests; the TOEFL is the standardized test most familiar to EFL researchers, and its objective nature leaves little room for biased scores. The scores ranged from 30.7% (417 on the TOEFL scale) to 95.7% (653). The standard error was used as the parameter for deciding the proper sample size: the cut-off point was reached when the parameter showed no obvious change as the sample size increased. We used hierarchical agglomerative clustering with three clusters, determined using 30 indices through the majority rule, to find the cut-off point. Main Findings: The cut-off point was found at a sample size of 52, with a range between 46 and 59. Therefore, it can be concluded that the minimum proper sample size for a research study involving a language test is n = 46. Application of this study: The results of this study apply to the area of English language teaching and testing; however, they may also apply to tests in other languages. Novelty/Originality of this study: The result of this study should be treated as statistical evidence of the proper sample size, to avoid inaccurate or conflicting research results in language teaching where a test is used for analysis.
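A simplified numerical sketch of the underlying logic: the standard error of the mean flattens as n grows, and a cut-off is declared where further observations barely help. We substitute a crude relative-change rule for the study's 30-index hierarchical clustering, and the population is simulated rather than the actual TOEFL scores:

```python
import numpy as np

rng = np.random.default_rng(1)
population = rng.normal(60.0, 12.0, 5250)     # stand-in for the 5,250 test scores

sizes = np.arange(10, 301)
se = population.std(ddof=1) / np.sqrt(sizes)  # standard error of the sample mean

# Cut-off: first n where one extra observation improves the SE by < 0.5%.
rel_gain = -np.diff(se) / se[:-1]
cutoff = int(sizes[1:][rel_gain < 0.005][0])  # about 100 under this toy rule
```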
10. Sharma, Raghav, and Navneet Agarwal. "Comparison of CT scan and intraoperative findings of cervical lymph node metastasis in oral squamous cell carcinoma with post-operative histopathology." International Journal of Otorhinolaryngology and Head and Neck Surgery 7, no. 6 (May 26, 2021): 950. http://dx.doi.org/10.18203/issn.2454-5929.ijohns20212114.

Abstract:
Background: In oral cancer, 90% of cases are squamous cell carcinomas (SCC), and lymphatic metastasis influences prognosis. Using contrast CT scan findings obtained preoperatively and intraoperative findings during neck dissection, we tried to generate a scoring system with which cervical lymph node metastasis can be predicted systematically.

Methods: Biopsy-proven oral SCC cases underwent surgery between May 2012 and December 2018. Contrast-enhanced computerized tomography (CECT) of the neck, intraoperative findings, and post-operative histopathology (HPR) were compared for the largest node in the neck. Sensitivity, specificity, PPV, NPV, and accuracy were calculated using the HPR findings in the neck dissection specimen as the control. Out of 68 cases, supraomohyoid neck dissection was done in 16 cases and radical neck dissection in 52 cases. Scores were entered into the Open Epi screening test software, and the best cut-off point was calculated using the Youden index.

Results: The best cut-off score (using the Youden index and ROC curve) for CT scan was >1 out of 7 features (size >10 mm, central necrosis of lymph node, matting of lymph node, shape, extracapsular spread, vascular invasion, central hypodensity). The best cut-off for intraoperative palpation was ≥3 out of 4 features (size, feel on palpation, adherence to surrounding structures, shape).

Conclusions: Intraoperative findings can change the extent of surgery, being both highly sensitive and specific, whereas CT scan is highly sensitive but has low specificity. A scoring system can be generated preoperatively and intraoperatively to predict whether a node is malignant.

Dissertations / Theses on the topic "Cut-point finding"

1. ROTA, MATTEO. "Cut-point finding methods for continuous biomarkers." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2013. http://hdl.handle.net/10281/40114.

Abstract:
My PhD dissertation deals with statistical methods for cut-point finding for continuous biomarkers. Categorization is often needed for clinical decision making when dealing with diagnostic (or prognostic) biomarkers and a dichotomous or censored failure time outcome. This allows the definition of two or more prognostic risk groups, or patient stratifications for inclusion in randomized clinical trials (RCTs). We investigate the following cut-point finding methods: minimum P-value, Youden index, concordance probability, and the point closest to the (0,1) corner in the ROC plane. We compare them by assuming both Normal and Gamma biomarker distributions, showing whether they lead to the identification of the same true cut-point, and further investigating their performance by simulation. Within the framework of censored survival data, we consider new estimation approaches for the optimal cut-point, which use a conditional weighting method to estimate the true positive and false positive fractions. Motivating examples on real datasets are discussed within the dissertation for both the dichotomous and the censored failure time outcome. In all simulation scenarios, the point closest to the (0,1) corner in the ROC plane and the concordance probability approaches outperformed the other methods. Both showed good performance in the estimation of the optimal cut-point of a biomarker. However, to improve the communicability of results, the Youden index or the concordance probability associated with the estimated cut-point could be reported to summarize the associated classification accuracy. The use of the minimum P-value approach for cut-point finding is not recommended, because its objective function is computed under the null hypothesis of no association between the true disease status and X. This is in contrast with the presence of some discrimination potential of the biomarker X, which leads to the dichotomization issue in the first place. The investigated cut-point finding methods are based on measures, i.e., sensitivity and specificity, defined conditionally on the outcome. My PhD dissertation opens the question of whether these methods could be applied starting from predictive values, which typically represent the most useful information for clinical decisions on treatments. However, while sensitivity and specificity are invariant to disease prevalence, predictive values vary across populations with different disease prevalence. This is an important drawback of the use of predictive values for cut-point finding. More generally, great care should be taken when establishing a biomarker cut-point for clinical use. Methods for categorizing new biomarkers are often essential in clinical decision making, even though categorization of a continuous biomarker comes at a considerable loss of power and information. In the future, new methods involving the study of the functional form between the biomarker and the outcome through regression techniques, such as fractional polynomials or spline functions, should be considered to define cut-points for clinical use. Moreover, in spite of the aforementioned drawback related to the use of predictive values, we also think that additional new methods for cut-point finding should be developed starting from predictive values.
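For orientation, a compact sketch of two of the cut-point rules compared in the dissertation, computed on an empirical ROC curve (these are the standard textbook definitions, not the dissertation's weighted estimators for censored data):

```python
import numpy as np

def youden_and_closest(scores, labels):
    """Two classical rules on the empirical ROC curve:
    Youden index  : maximise J = sensitivity + specificity - 1;
    closest-(0,1) : minimise (1 - sensitivity)^2 + (1 - specificity)^2."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    ts = np.unique(scores)
    sens = np.array([np.mean(pos >= t) for t in ts])
    spec = np.array([np.mean(neg < t) for t in ts])
    youden = ts[np.argmax(sens + spec - 1.0)]
    closest = ts[np.argmin((1.0 - sens) ** 2 + (1.0 - spec) ** 2)]
    return youden, closest
```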

Book chapters on the topic "Cut-point finding"

1. Ganesan, P., D. Rosy Salomi Victoria, Arun Singh Chouhan, D. Saravanan, Rekha Baghel, and K. Saikumar. "Enhancing the Protection of Information in Digital Voting Using the Fraud Application of Blockchain Technology." In Advances in Multimedia and Interactive Technologies, 134–46. IGI Global, 2023. http://dx.doi.org/10.4018/978-1-6684-6060-3.ch011.

Abstract:
Elections are conducted electronically instead of using paper ballots to cut down on mistakes and discrepancies. It has recently been found that paper-based balloting fails owing to security and privacy difficulties, and the electronic balloting approach has been recommended as a replacement. To keep data safe, the authors designed and developed a hashing algorithm based on SHA-256. Incorporating the concept of sealing blocks aids the blockchain's adaptability. Consortium blockchain technology is employed to ensure that only the election commission has access to the blockchain database, which candidates and other outside parties cannot modify. When used in the polling method, the methodology discussed in this chapter can yield reliable findings. The authors used a hashing algorithm (SHA-256), block generation, data collection, and result declaration to achieve this.
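As a generic illustration of sealing a block of ballots with SHA-256 (field names and block structure are invented for the sketch; the chapter's consortium-blockchain scheme is not reproduced here):

```python
import hashlib
import json

def seal_block(votes, prev_hash):
    """Seal a block of ballots: the SHA-256 digest covers the votes and
    the previous block's hash, chaining the blocks together."""
    block = {"votes": votes, "prev": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = seal_block(["ballot-001"], prev_hash="0" * 64)
nxt = seal_block(["ballot-002", "ballot-003"], prev_hash=genesis["hash"])
# Tampering with genesis["votes"] would invalidate nxt["prev"] on re-hashing.
```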
2. Fleming, Zachariah, Terry Pavlis, and Ghislain Trullenque. "Unraveling the multi-phase history of southern Death Valley geology." In Field Excursions from Las Vegas, Nevada: Guides to the 2022 GSA Cordilleran and Rocky Mountain Joint Section Meeting, 67–83. Geological Society of America, 2022. http://dx.doi.org/10.1130/2022.0063(04).

Abstract:
This field trip is designed to highlight recent findings regarding the tectonic history of the southern Death Valley region. During the first day, stops take place in the Ibex Hills and the adjacent Ibex Pass area. These stops were chosen to emphasize recent work that supports multiple phases of extension in the region, recorded by the interactions of complexly overprinted normal faults. Mapping of the Ibex Hills revealed an older set of normal faults with a down-to-the-SW sense of movement that are cross-cut by down-to-the-NW normal faults. Additionally, the Ibex Pass basin poses a number of questions regarding its stratigraphy and how it relates to the timing and kinematics of the region. Multiple stops within the basin show the variation of volcanic and sedimentary units across Ibex Pass. The second day of the field trip focuses on the more recent transtensional and strike-slip history of southern Death Valley. In particular, recent mapping has correlated features in the Avawatz and Owlshead Mountains that indicate ~40 km of offset along the Southern Death Valley Fault Zone (SDVFZ). Stops take place along traces of the SDVFZ in the Avawatz Mountains and the Noble Hills. The final stop of the trip is in the Mormon Point turtleback, where the implications of the SDVFZ offset are discussed alongside the metamorphic rocks at the stop, which suggest restoration of the Panamint Range partially atop the Black Mountains.

Conference papers on the topic "Cut-point finding"

1. Mohammadi, Hossein, and John A. Patten. "Scratch Tests on Granite Using Micro-Laser Assisted Machining Technique." In ASME 2015 International Manufacturing Science and Engineering Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/msec2015-9327.

Abstract:
In this study, the micro-laser assisted machining (μ-LAM) technique is used to perform scratch tests on a granite sample. Rocks are generally considered brittle materials with poor machinability, and severe fracture can result when cutting them due to their low fracture toughness. With increasing demand for these materials in industry across many applications, finding a fast and cost-effective process with higher product quality is essential. Past research in our group demonstrated that machining brittle materials such as semiconductors and ceramics in the ductile regime is possible due to the high-pressure phase transformation (HPPT) occurring in the material, caused by the high compressive and shear stresses induced by a single-point diamond tool tip. Scratch tests were performed on the granite sample, and to further augment the process, traditional cutting is coupled with the laser to soften the material and obtain a higher depth of cut. In this research, the results of scratch tests on granite with and without laser heating are compared. The effect of laser heating was studied by comparing the depths of cut for scratch tests with varying laser power. Microscopic images and three-dimensional profiles of the cuts, taken with a white-light interferometer, were investigated. Results show that using the laser can increase the depth of cut; with 15 W laser power, the increase ranges from 25% to 95% across different regions of the granite sample.
2. Akkoyunlu, Sule, and Boriss Siliverstovs. "Does the Law of One Price Hold in a High-Inflation Environment? A Tale of Two Cities in Turkey." In International Conference on Eurasian Economies. Eurasian Economists Association, 2010. http://dx.doi.org/10.36880/c01.00120.

Abstract:
This study addresses price convergence in two cities in Turkey (Istanbul and Ankara) using annual data over three quarters of the 20th century (1922–1998), a period characterized by prevailing high inflation rates. In contrast to the rest of the literature addressing convergence in price levels, with a typical result of extremely slow convergence rates at best, we argue that convergence is much more easily detected in growth rates rather than in levels of prices. We suggest using the bounds testing procedure of Pesaran et al. (2001) for this purpose. We find clear-cut evidence of the existence of a common driving force behind inflation dynamics in Istanbul and Ankara, a finding that is intuitively appealing from the point of view of economic theory.
3. Uzol, Oğuz, Cengiz Camcı, and Boris Glezer. "Aerodynamic Loss Characteristics of a Turbine Blade With Trailing Edge Coolant Ejection: Part 1 — Effect of Cut-Back Length, Spanwise Rib Spacing, Free Stream Reynolds Number and Chordwise Rib Length on Discharge Coefficients." In ASME Turbo Expo 2000: Power for Land, Sea, and Air. American Society of Mechanical Engineers, 2000. http://dx.doi.org/10.1115/2000-gt-0258.

Abstract:
The internal fluid mechanics losses generated between the blade plenum chamber and a reference point located just downstream of the trailing edge are investigated for a turbine blade trailing edge cooling system. The discharge coefficient Cd is presented as a function of the free stream Reynolds number, cut-back length, spanwise rib spacing, and chordwise rib length. The results are presented over a wide range of coolant to free stream mass flow rate ratios. The losses from the cooling system show strong free stream Reynolds number dependency, especially at low ejection rates, when they are correlated against the coolant to free stream pressure ratio. However, when Cd is correlated against the coolant to free stream mass flow rate ratio, the Reynolds number dependency is eliminated. The current data clearly show that internal viscous losses due to varying rib lengths do not differ significantly. The interaction of the external wall jet in the cut-back region with the free stream fluid is also a strong contributor to the losses. Since the discharge coefficients have no Reynolds number dependency at high ejection rates, Cd experiments can be performed at a low free stream Reynolds number. Running a discharge coefficient experiment at low Reynolds number (or even in still air) will sufficiently define the high blowing rate portion of the curve. This approach is extremely time-efficient and economical in finding the worst possible Cd value for a given trailing edge coolant system.
4. Matsuda, Kiyofumi, and Tomoaki Eiju. "Determination of the central position of a rotating object by laser Doppler velocimetry." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1985. http://dx.doi.org/10.1364/oam.1985.tuf2.

Abstract:
Determination of the central position of rotation is an important technique, useful, for example, in the operation of a machine tool: if the height of the tool is not adjusted to the central position of rotation, the central portion of a workpiece is left uncut. Here a new method using laser Doppler velocimetry (LDV) is proposed for this determination. The principle is based on the fact that the tangential velocity of the rotating object is proportional to the distance between a measuring point and the central position. The tangential velocity is therefore measured by differential LDV with a microscope, and the central position is determined by finding the place where the tangential velocity becomes zero. In the experiments, the velocities at two points along each axis of the Cartesian coordinates were measured, with the separation of the two points measured in advance. By a simple calculation with the measured values, the central position where the tangential velocities become zero could be determined to an accuracy of 1–2 μm.
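The zero-crossing idea reduces to a short calculation per axis; below is an illustrative version with invented numbers, assuming the measured tangential velocity varies linearly with position as the abstract states:

```python
import numpy as np

def centre_from_points(positions, velocities):
    """Tangential velocity is proportional to the distance from the rotation
    centre, so a line fitted to (position, velocity) pairs crosses zero there."""
    slope, intercept = np.polyfit(positions, velocities, 1)
    return -intercept / slope

# Two measuring points 50 um apart along one axis (velocities made up):
centre = centre_from_points(np.array([0.0, 50.0]), np.array([-3.1, 4.9]))
```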
5. Agrawal, Gaurav, Ajit Kumar, Vibhor Verma, Alok Mishra, Vikram Gualti, and Varun Mahajan. "A Curious Case of Addressing Well Integrity Issue with Production Logging." In SPE Annual Technical Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/206036-ms.

Abstract:
The productive life of a well can be affected by deterioration of well integrity, which can be due to casing/tubing corrosion, casing damage during drilling/workover, packer failure, plug failures, cement integrity issues, etc. Remedial measures can be executed if the nature of the problem is diagnosed. One can receive early warning of a potential problem and obtain data for determining a specific restoration program by running well integrity diagnostic tools. Various well integrity tools are available to cater to present evaluation needs, with different working principles and targeting different well integrity problems such as casing/tubing and cement integrity. However, in challenging situations and complex environments, these tools may not provide complete diagnostics. Production logging can be an effective tool in such scenarios: by mapping the flow behaviour in the wellbore, it can provide a better idea of the wellbore problems. Well "A" is a development well which was drilled (max angle ~27°) and completed in the intervals X345-X349 m and X362-X368 m to exploit gas from reservoir ABC. During initial testing it produced 167,000 m3/d with an FTHP of 2,348 psi. After an acid job the rate increased to 220,000 m3/d, but later the well had frequent water-loading problems and required frequent activation/stimulation. The well has good reservoir zones as identified on the open-hole logs; hence, to diagnose the reason for water production, PLT was planned in the well in 2015. The well was producing 100,000 m3/d of gas at an FTHP of 1,181 psi at that time. Annulus pressure build-up was also observed, suggesting an integrity issue with the packer/tubing. During the PL run, it was observed that the packer had fallen and settled across one of the perforations, and flow was ongoing from both inside and outside the packer element, making the flow interpretation tricky. A proper interpretation was carried out taking into consideration all available data, and the water entry point was confirmed. Based on the results, a well intervention was carried out; after the job, the well started producing 150,000 m3/d of gas with 0% water cut, whereas before it was producing 100,000 m3/d of gas with 1,900 BPD of water. Thus, the intervention resulted in production enhancement by 50% and a 100% reduction in water cut. This paper highlights the proper analysis of the recorded data for diagnosing the flow condition in an adverse and complex scenario, and for finding the water entry point for proper remediation of the well integrity and production issue.
6. Heikkilä, Mikko, Mikko Huova, Jyrki Tammisto, Matti Linjama, and Jussi Tervonen. "Fuel Efficiency Optimization of a Baseline Wheel Loader and its Hydraulic Hybrid Variants Using Dynamic Programming." In BATH/ASME 2018 Symposium on Fluid Power and Motion Control. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/fpmc2018-8853.

Abstract:
In this paper, the fuel consumption of a 5.7-ton municipal tractor in a wheel loader application is studied, and methods for improving fuel efficiency are compared with each other. Experimental data from the baseline machine with load-sensing hydraulics were gathered during a y-pattern cycle, and the data are input to an optimization function with realistic loss models for the hydraulic pump and diesel engine. Dynamic programming is used to analyze different system configurations in order to determine the optimal control sequence for each system. Besides optimization of variable engine rotational speed on the baseline system during the working cycle (considering the point of operation), three hybrid supply systems are studied: 1) a hydraulic flywheel, 2) parallel supply pumps, and 3) a throttled accumulator. These systems utilize a hydraulic accumulator as an energy source/sink alongside the diesel engine. The optimal sequence for charging and discharging the accumulator is examined in order to minimize the fuel consumption of the machine. The idea is to use the lowest acceptable constant engine rotational speed to cut down the diesel losses. In addition, the study covers an analysis of adjusting the engine rotational speed for each point of operation for the hybrid systems as well. The results show that finding an advantageous engine rotational speed for each loading condition can decrease the fuel consumption of the baseline machine by around 14%, whereas hybridization of the supply system can further improve the result by a couple of percentage points. Hybrid systems also reduce the engine's maximum load by making it more uniform, which is expected to reduce emissions. The possibility of engine downsizing to further improve the fuel efficiency of hybrid systems is not considered, because the maximum engine power is usually determined by the hydrostatic transmission of a municipal tractor. However, the study assumes that actuators are controlled using traditional 4/3 proportional control valves; hence, there is still potential for greater fuel savings. For example, applying independent metering valves for actuator control could further decrease the system losses.
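A stripped-down sketch of the dynamic-programming backbone for the hybrid variants (the demand trace, convex fuel map, and state-of-charge grid are invented, units are lumped into energy per cycle step, and the study's actual loss models are far richer):

```python
import numpy as np

demand = np.array([5.0, 12.0, 3.0, 18.0, 7.0])  # hydraulic energy per cycle step (invented)
soc = np.linspace(0.0, 10.0, 21)                # accumulator state-of-charge grid

def fuel(e_engine):
    """Toy convex fuel map: efficiency worsens with load."""
    return 0.2 * e_engine + 0.01 * e_engine ** 2

cost_to_go = np.zeros_like(soc)                 # terminal cost: any final charge is fine
for e_dem in demand[::-1]:                      # sweep the cycle backwards
    new = np.full_like(soc, np.inf)
    for i, s in enumerate(soc):
        for j, s_next in enumerate(soc):
            e_engine = e_dem + (s_next - s)     # engine covers demand plus any charging
            if e_engine >= 0.0:                 # the engine cannot absorb energy
                new[i] = min(new[i], fuel(e_engine) + cost_to_go[j])
    cost_to_go = new

best_initial_soc = soc[np.argmin(cost_to_go)]   # cheapest state to start the cycle in
```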
7. Aminnaji, Morteza, Alfred Hase, and Laura Crombie. "Anti-Agglomerants: Study of Hydrate Structural, Gas Composition, Hydrate Amount, and Water Cut Effect." In International Petroleum Technology Conference. IPTC, 2023. http://dx.doi.org/10.2523/iptc-22765-ms.

Abstract:
Kinetic hydrate inhibitors (KHIs) and anti-agglomerants (AAs), known as low dosage hydrate inhibitors (LDHIs), have been used widely for gas hydrate prevention in oil and gas operations. They offer significant advantages over thermodynamic inhibitors (e.g., methanol and glycols). While significant work has been done on evaluating KHIs, AAs remain under-evaluated in terms of hydrate structural effects, gas composition, water cut, and hydrate amount, which are the main objectives of this work. A shut-in/restart procedure was carried out to experimentally evaluate (using a visual rocking cell) various commercial AAs in different gas compositions, from a simple methane system to multicomponent natural gas systems. The kinetics of the hydrate growth rate and the amount of hydrate formation in the presence of AAs were also analysed using the recorded pressure-temperature data. The amount of hydrate formation (WCH: percentage of water converted to hydrate) was also calculated from the pressure drop and by establishing the pressure-temperature hydrate flash. The experimental results from the step-heating equilibrium point measurement suggest the formation of multiple hydrate structures or phases in order of thermodynamic stability, rather than the formation of simple structure II hydrate in the multicomponent natural gas system. The initial findings of the experimental studies show that the performance of AAs is not identical for different gas compositions, potentially due to the hydrate structural effect on AA performance. For example, while a commercially available AA (as tested here) could not prevent hydrate agglomeration/blockage in the methane system (plugging occurred after 2% hydrate formed in the system), it showed much better performance in the natural gas systems. In addition, while hydrate plugging was not observed in the visual rocking cell in the rich natural gas system with AA (at a high subcooling temperature of ∼15°C), some hydrate agglomeration and hydrate plugging were observed for the lean natural gas system at the same subcooling temperature. It is speculated that methane hydrate structure I is potentially the main reason for hydrate plugging and the failure of AAs. Finally, the results indicate that water cut, gas composition, and AA concentration have a significant effect on hydrate growth rate and hydrate plugging. In addition to increasing confidence in AA field use, the findings potentially have novel applications with respect to the hydrate structural effect on plugging and hydrate plug calculation. A robust pressure-temperature hydrate flash calculation is required to calculate the percentage of water converted to hydrate during hydrate growth in the presence of AAs.
8. Mohd Sahak, Muhammad Zakwan, Maung Maung Myo Thant, Shazleen Saadon, Thomas Krebs, Paul Verbeek, Mohamed Reda Akdim, and Loreen Villacorte. "Acceleration of Novel Technology Development for Stabilized Emulsion Treatment in EOR Applications." In Abu Dhabi International Petroleum Exhibition & Conference. SPE, 2021. http://dx.doi.org/10.2118/207383-ms.

Abstract:
Separation of the stable emulsions produced by chemical enhanced oil recovery (CEOR) in a brownfield production system using conventional 3-phase separators is almost impossible, requiring large quantities of chemical demulsifiers to meet oil production specifications. A new and novel high-voltage high-frequency (HVHF) electro-coalescence (EC) technology has been identified as a potential method to enhance separation of EOR produced fluids and improve the feasibility of CEOR implementation. This paper presents results and findings from the recent EC technology development against success criteria and parameters associated with fast-track field application. Electrostatic coalescers are used as emulsion breakers, crude dehydrators, or desalters in production systems and refineries. However, significant developments are required to use this EC technology as a potential treatment for tight emulsions/rag layers in CEOR applications. A new prototype of the inline EC was developed and tested in a batch test setup to evaluate the separation efficiency using real crude-brine samples and a cocktail of alkaline-surfactant-polymer (ASP) chemicals. The sensitivities of separation efficiency to different water cuts, demulsifier concentration, EC voltage/exposure time, and concentrations of alkaline, surfactant, and polymer in the brine were measured, and optimal process conditions were assessed. The results and findings were evaluated against defined success criteria and parameters associated with separation efficiency, such as the volume fractions of the emulsion and the oil-in-water (OIW) and water-in-oil (WIO) concentrations. In one PETRONAS CEOR field case study, the test results show that EC reduced the tight emulsion by 90%. In conclusion, EC leads to a substantial improvement in separation efficiency relative to the case without EC for water cuts below the inversion point. It is also found that EC treatment without an added demulsifier is as effective in breaking the emulsion as adding a demulsifier without EC treatment, and that EC can potentially minimise or eliminate the application of demulsifiers in the production system.
9. Yang, Zeyuan, Lele Ming, Si Wu, Yadong Wu, Jie Tian, and Hua Ouyang. "On the Mode Characteristics of Rotating Instability With Different Tip Clearances." In ASME Turbo Expo 2022: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/gt2022-82072.

Abstract:
Rotating Instability (RI) is an unsteady aerodynamic phenomenon occurring at off-design conditions in axial compressors, featuring side-by-side peaks below the blade passing frequency (BPF) in pressure spectra. Especially when the mode orders of RI are close to the blade number, the interaction effect can generate intense tip clearance noise that is not cut off by the duct. Moreover, RI is regarded as a potential indicator for stall and surge. According to previous studies, the mechanism of RI can be classified under the unsteady vortex system theory and the shear layer instability theory. This paper presents an experiment in a low-pressure single-rotor compressor comparing the RI characteristics for two different tip clearances at different operating conditions. A total of 28 Kulite transducers were circumferentially mounted on the casing wall to measure the pressure fluctuation in the two configurations. Utilizing the compressive sensing (CS) method based on the double-uniform sampling point (DUSP) technique, the mode orders of RI could be accurately determined. Through throttling, the RI phenomenon was observed over different ranges of flow rates under both tip clearance configurations. The evolution pattern of RI is profoundly affected by the tip clearance size: the RI phenomenon presents only a two-stage "strengthen-weaken" pattern with the nominal gap, while it shows a three-stage "strengthen-weaken-strengthen" pattern with the larger gap. Moreover, the mode characteristics of RI in the frequency domain are analyzed. Based on the experimental results, several discussions on the mechanism of RI are proposed. These findings do not conform with the unsteady vortex system theory, but provide evidence for the shear layer instability theory.
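The mode-order determination rests on spatial Fourier decomposition around the casing; a minimal uniform-array version is sketched below. The paper's CS/DUSP method exists precisely because a plain DFT like this one aliases mode orders above the sensor count:

```python
import numpy as np

M = 28                                    # circumferential transducers, equally spaced here
theta = 2.0 * np.pi * np.arange(M) / M    # sensor angles
m = 25                                    # hypothetical RI mode order
p = np.exp(1j * m * theta)                # complex pressure amplitudes at one frequency

amps = np.abs(np.fft.fft(p)) / M          # spatial DFT: unit peak at bin m (mod M)
detected = int(np.argmax(amps))           # 25 here, but order 53 would alias to the same bin
```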
10. Van der Vegte, Wilhelm F., and Imre Horváth. "Including Human Behavior in Product Simulations for the Investigation of Use Processes in Conceptual Design: A Survey." In ASME 2006 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2006. http://dx.doi.org/10.1115/detc2006-99541.

Abstract:
In this paper, approaches for behavioral simulation of humans and human-artifact systems are reviewed. The objective was to explore available knowledge for the development of a new method and system for the simulation of use processes of consumer durables in conceptual design. A key issue is to resolve the trade-off between minimizing the modeling and computing effort on the one hand, and maximizing the amount of valuable information obtained from simulations to facilitate improving the product on the other. After drawing up review criteria, we reviewed existing simulation approaches, which we characterized based on their simulation models. We found that the surveyed approaches can only address limited, largely unconnected subsets of the various behaviors that can be simulated. For the most advanced approaches, the subsets can be clustered into three main groups: (i) kinematics and rigid-body kinetics simulated with non-discretized object models, (ii) mechanical-deformation behavior and non-mechanical physical behavior simulated with discretized object models, and (iii) interpreted physical behavior (information processing) simulated with finite-state machines. No clear-cut solutions for integrated behavioral simulation of use processes have been found; however, we could identify opportunities to bridge the gaps between the three groups of behavior, which can help us resolve the aforementioned trade-off. In the first place, it seems that the possibilities for using discretized models in kinematics simulation (especially with consideration of the large deformations that are common in biomechanics) have not been fully explored. Alternatively, a completely new uniform modeling paradigm, possibly based on particles, might also help to resolve the gap between the two distinct groups of physical behaviors. Finally, hybrid simulation techniques can bridge the gap between the observed physical behaviors and interpreted physical behaviors. Here, the combination with the object models commonly used for simulations in groups (i) and (ii) seems to be largely unexplored. Our findings offered valuable insights as a starting point for developing an integrated method and system for modeling and simulating use processes. We expect that other researchers dealing with similar issues in combining seemingly disconnected simulation approaches could benefit as well.

Reports on the topic "Cut-point finding"

1. Brandl, Maria T., Shlomo Sela, Craig T. Parker, and Victor Rodov. Salmonella enterica Interactions with Fresh Produce. United States Department of Agriculture, September 2010. http://dx.doi.org/10.32747/2010.7592642.bard.

Abstract:
The emergence of food-borne illness outbreaks linked to the contamination of fruits and vegetables is a great concern in industrialized countries. The current lack of control measures and effective sanitization methods prompts the need for new strategies to reduce contamination of produce. Our ability to assess the risk associated with produce contamination and to devise innovative control strategies depends on the identification of critical determinants that affect the growth and persistence of human pathogens on plants. Salmonella enterica, a common causal agent of illness linked to produce, has the ability to colonize and persist on plants. Thus, our main objective was to identify plant-inducible genes that have a role in the growth and/or persistence of S. enterica on postharvest lettuce. Our findings suggest that in-vitro biofilm formation tests may provide a suitable model to predict the initial attachment of Salmonella to cut romaine lettuce leaves, and confirm that Salmonella can persist on lettuce during shelf-life storage. Importantly, we found that Salmonella association with lettuce increases its acid tolerance, a trait which might be correlated with an enhanced ability of the pathogen to pass through the acidic barrier of the stomach. We have demonstrated that Salmonella can internalize leaves of iceberg lettuce through open stomata. We found for the first time that internalization is an active bacterial process mediated by chemotaxis and motility toward nutrients produced in the leaf by photosynthesis. These findings may provide a partial explanation for the failure of sanitizers to efficiently eradicate foodborne pathogens in leafy greens, and may point to a novel mechanism utilized by foodborne and perhaps plant pathogens to colonize leaves. Using resolvase in vivo expression technology (RIVET), we have managed to identify multiple Salmonella genes, some with no assigned function, that are involved in attachment to and persistence of Salmonella on lettuce leaves. The precise function of these genes in Salmonella-leaf interactions is yet to be elucidated. Taken together, our findings have advanced the understanding of how Salmonella persists in the plant environment, as well as the potential consequences upon ingestion by humans. The emerging knowledge opens new research directions which should ultimately be useful in developing new strategies and approaches to reduce leaf contamination and enhance the safety of fresh produce.