Academic literature on the topic 'Feature-set-difference'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Feature-set-difference.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Feature-set-difference"

1

Susan, Seba, and Madasu Hanmandlu. "Difference theoretic feature set for scale-, illumination- and rotation-invariant texture classification." IET Image Processing 7, no. 8 (November 1, 2013): 725–32. http://dx.doi.org/10.1049/iet-ipr.2012.0527.

2

Bharti, Puja, Deepti Mittal, and Rupa Ananthasivan. "Characterization of chronic liver disease based on ultrasound images using the variants of grey-level difference matrix." Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 232, no. 9 (September 2018): 884–900. http://dx.doi.org/10.1177/0954411918796531.

Abstract:
Chronic liver diseases are the fifth leading cause of fatality in developing countries. Early diagnosis is important for timely treatment and for saving lives. Ultrasound imaging is frequently used to examine abnormalities of the liver. However, ambiguity lies in the visual interpretation of liver stages on ultrasound images. This difficult visualization problem can be addressed by analysing textural features extracted from the images. The grey-level difference matrix, a texture feature extraction method, can provide information about the roughness of the liver surface, the sharpness of liver borders and the echotexture of the liver parenchyma. In this article, the behaviour of variants of the grey-level difference matrix in characterizing liver stages is investigated. Texture feature sets are extracted using variants of the grey-level difference matrix based on two, three, five and seven neighbouring pixels. Thereafter, to take advantage of complementary information from the extracted feature sets, feature fusion schemes are implemented. In addition, hybrid feature selection (a combination of the ReliefF filter method and the sequential forward selection wrapper method) is used to obtain an optimal feature set for characterizing liver stages. Finally, a computer-aided system is designed with the optimal feature set to classify liver health as normal, chronic liver disease, cirrhosis or hepatocellular carcinoma evolved over cirrhosis. In the proposed work, experiments are performed to (1) identify the best approximation of the derivative (forward, central or backward); (2) analyse the performance of the individual feature sets of the grey-level difference matrix variants; (3) obtain an optimal feature set by exploiting the complementary information from the variants and (4) compare the performance of the proposed method with existing feature extraction methods. These experiments are carried out on a database of 754 segmented regions of interest formed from clinically acquired ultrasound images. The results show that a classification accuracy of 94.5% is obtained by the optimal feature set carrying complementary information from the grey-level difference matrix variants.
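
Seen as code, the grey-level difference idea reduces to histogram statistics of pixel differences. Below is a minimal sketch for a single displacement vector, offered as an illustration only: it does not reproduce the paper's variants (two to seven neighbouring pixels, forward/central/backward derivative approximations) or its fusion and selection pipeline.

```python
import numpy as np

def gld_features(img, dx=1, dy=0, levels=256):
    """Texture features from the grey-level difference histogram
    for a single displacement (dx, dy)."""
    a = img.astype(np.int64)
    h, w = a.shape
    # Absolute difference between each pixel and its displaced neighbour
    diff = np.abs(a[:h - dy, :w - dx] - a[dy:, dx:])
    hist = np.bincount(diff.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()  # probability of each difference level
    d = np.arange(levels)
    return {
        "mean": float((d * p).sum()),                # average local contrast
        "second_moment": float((d ** 2 * p).sum()),
        "entropy": float(-(p[p > 0] * np.log2(p[p > 0])).sum()),
    }
```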
3

Sun, C., D. Guo, H. Gao, L. Zou, and H. Wang. "A method of version merging for computer-aided design files based on feature extraction." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 225, no. 2 (June 20, 2010): 463–71. http://dx.doi.org/10.1243/09544062jmes2159.

Abstract:
In order to manage version files and maintain the latest version of computer-aided design (CAD) files in asynchronous collaborative systems, a method of version merging for CAD files based on feature extraction is proposed. First, the feature information is extracted based on the feature attributes of the CAD files and stored in an XML feature file. Then the feature file is analysed, and the feature difference set is obtained by the given algorithm. Finally, the difference set is merged with the master file using application programming interface (API) functions, realizing the version merging of the CAD files. An application in Catia validated that the proposed method is feasible and valuable in engineering.
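
A short sketch can picture the core step, extracting each version's features and forming their difference set. The XML layout below (feature elements with name and type attributes) is an assumed schema for illustration, not the paper's actual file format.

```python
import xml.etree.ElementTree as ET

def feature_set(xml_path):
    """Read one version's feature file into a set of hashable records."""
    root = ET.parse(xml_path).getroot()
    # Identify each feature by its name, type and serialized parameters
    # (hypothetical <feature name="..." type="..."> elements).
    return {(f.get("name"), f.get("type"),
             ET.tostring(f, encoding="unicode").strip())
            for f in root.iter("feature")}

def feature_difference(master_xml, version_xml):
    """Features present in the new version but not in the master file,
    i.e. the difference set to merge back through the CAD system's API."""
    return feature_set(version_xml) - feature_set(master_xml)
```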
4

Li, Ying Jie, and Mongi A. Abidi. "The Comparative Study between Difference Actions and Full Actions." Applied Mechanics and Materials 373-375 (August 2013): 500–503. http://dx.doi.org/10.4028/www.scientific.net/amm.373-375.500.

Abstract:
An appearance-based feature set is proposed. With a Hidden Markov Model (HMM) handling temporal variance, the contributions of features from the full foreground sequence and from the temporal difference sequence are compared in detail using methods based on feature selection and feature voting. The experimental analysis shows that comparable contributions to human action identification can be achieved from the two data sources. This introduces the opportunity to analyze human behavior based on the temporal difference sequence instead of the full foreground sequence, and validates the far-reaching significance of this work.
5

Antoniuk, Izabella, Jarosław Kurek, Artur Krupa, Grzegorz Wieczorek, Michał Bukowski, Michał Kruk, and Albina Jegorowa. "Advanced Feature Extraction Methods from Images of Drillings in Melamine Faced Chipboard for Automatic Diagnosis of Drill Wear." Sensors 23, no. 3 (January 18, 2023): 1109. http://dx.doi.org/10.3390/s23031109.

Abstract:
In this paper, a novel approach to the evaluation of feature extraction methodologies is presented. In the case of machine learning algorithms, extracting and using the most efficient features is one of the key problems that can significantly influence overall performance. This is especially the case with parameter-heavy problems, such as tool condition monitoring. In the presented case, images of drilled holes are considered, where the state of the edge and the overall size of imperfections have a high influence on product quality. Finding and using a set of features that accurately describes the difference between an edge that is acceptable and one that is too damaged is not always straightforward. The presented approach focuses on a detailed evaluation of various feature extraction approaches. Each chosen method produced a set of features, which was then used to train a selected set of classifiers. Five initial feature sets were obtained, and additional ones were derived from them. Different voting methods were used for the ensemble approaches. In total, 38 versions of the classifiers were created and evaluated. The best accuracy was obtained by the ensemble approach based on the weighted voting methodology. A significant difference was shown between the feature extraction methods, with a total difference of 11.14% between the worst and best feature sets, as well as a further 0.2% improvement achieved by using the best voting approach.
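
Weighted voting over heterogeneous classifiers is straightforward to express with scikit-learn. The sketch below is a generic illustration: the estimators, weights and training data are placeholders, not the paper's actual 38-model setup.

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Soft weighted voting: class probabilities are averaged with per-model
# weights (e.g. proportional to each model's validation accuracy).
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("svm", SVC(probability=True)),  # probability=True enables soft voting
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="soft",
    weights=[3, 2, 1],
)
# ensemble.fit(X_train, y_train); ensemble.score(X_test, y_test)
```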
6

Agrahari, Rahul, Matthew Nicholson, Clare Conran, Haytham Assem, and John D. Kelleher. "Assessing Feature Representations for Instance-Based Cross-Domain Anomaly Detection in Cloud Services Univariate Time Series Data." IoT 3, no. 1 (January 29, 2022): 123–44. http://dx.doi.org/10.3390/iot3010008.

Abstract:
In this paper, we compare and assess the efficacy of a number of time-series instance feature representations for anomaly detection. To assess whether there are statistically significant differences between different feature representations for anomaly detection in a time series, we calculate and compare confidence intervals on the average performance of different feature sets across a number of different model types and cross-domain time-series datasets. Our results indicate that the catch22 time-series feature set augmented with features based on rolling mean and variance performs best on average, and that the difference in performance between this feature set and the next best feature set is statistically significant. Furthermore, our analysis of the features used by the most successful model indicates that features related to mean and variance are the most informative for anomaly detection. We also find that features based on model forecast errors are useful for anomaly detection for some but not all datasets.
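
As a rough sketch of the winning representation, one can augment the catch22 features of a series with rolling-statistics features. The snippet assumes the third-party pycatch22 package, and the window size and the particular summaries of the rolling series are illustrative assumptions, not the paper's exact feature definitions.

```python
import pandas as pd
import pycatch22  # assumed third-party implementation of the catch22 set

def instance_features(values, window=24):
    """catch22 features of one time-series instance, augmented with
    rolling mean/variance summaries (illustrative choices)."""
    s = pd.Series(values, dtype=float)
    res = pycatch22.catch22_all(list(s))
    feats = dict(zip(res["names"], res["values"]))
    feats["rolling_mean_avg"] = s.rolling(window).mean().mean()
    feats["rolling_var_avg"] = s.rolling(window).var().mean()
    return feats
```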
7

Yan, Jun, and Yan Piao. "Research on the Harris Algorithm of Feature Extraction for Moving Targets in the Video." Applied Mechanics and Materials 741 (March 2015): 378–81. http://dx.doi.org/10.4028/www.scientific.net/amm.741.378.

Abstract:
This paper studies the rapid acquisition of feature information for moving targets in a video sequence. The Harris algorithm is selected as the feature extraction algorithm, but it has the problem of running slowly. A 3 × 3 window is chosen as the detection window, and the similarity between the pixels in the window and the central pixel is analysed, with the grey-level difference used to measure this similarity: when the difference is greater than a set threshold, the pixels are considered different; otherwise they are considered similar. After that, the screening of characteristic regions is completed. The target areas are divided into feature regions and non-feature regions, and the response functions are calculated only for the feature regions, which improves the algorithm's execution efficiency.
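
The screening idea can be sketched as follows: mark pixels whose 3 × 3 neighbourhood differs enough from the centre, and evaluate the Harris response only there. The thresholds are assumptions, edge handling is simplified with wrap-around shifts, and the Gaussian windowing of a full Harris detector is omitted for brevity.

```python
import numpy as np

def screened_harris(img, diff_thresh=15, min_different=3, k=0.04):
    a = img.astype(np.float64)
    # Count how many of the 8 neighbours differ from the centre pixel
    # by more than diff_thresh (the similarity test on the 3x3 window).
    count = np.zeros_like(a)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neigh = np.roll(np.roll(a, dy, axis=0), dx, axis=1)
            count += np.abs(neigh - a) > diff_thresh
    feature_region = count >= min_different
    # Harris response from image gradients; a full implementation would
    # smooth Ixx, Iyy and Ixy over a local window before this step.
    Iy, Ix = np.gradient(a)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    response = (Ixx * Iyy - Ixy ** 2) - k * (Ixx + Iyy) ** 2
    # Keep the response only in feature regions, as in the screening scheme
    return np.where(feature_region, response, 0.0)
```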
8

Li, Chunzhong, and Zongben Xu. "Structure Identification-Based Clustering According to Density Consistency." Mathematical Problems in Engineering 2011 (2011): 1–14. http://dx.doi.org/10.1155/2011/890901.

Abstract:
The structure of a data set is of critical importance in identifying clusters, especially the density difference feature. In this paper, we present a clustering algorithm based on density consistency, a filtering process that identifies features with the same structure and classifies them into the same cluster. This method is not restricted by cluster shape or by high-dimensional data sets, and it is robust to noise and outliers. Extensive experiments on synthetic and real-world data sets validate the proposed clustering algorithm.
9

Jing, Xiao Yuan, Xiang Long Ge, Yong Fang Yao, and Feng Nan Yu. "Feature Extraction Algorithm Based on Sample Set Reconstruction." Applied Mechanics and Materials 347-350 (August 2013): 2241–45. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.2241.

Abstract:
When the number of labeled training samples is very small, the sample information available is very limited, and the recognition rates of traditional image recognition methods are unsatisfactory. However, other databases often contain related information that is helpful for feature extraction. It is therefore worth taking full advantage of the data in other databases through transfer learning. In this paper, the idea of transferring samples is employed, and we further propose a feature extraction approach based on sample set reconstruction. We realize the approach by reconstructing the training sample set using the difference information among the samples of other databases. Experimental results on three widely used face databases, AR, FERET and CAS-PEAL, demonstrate the efficacy of the proposed approach in classification performance.
10

Ji, Linna, Fengbao Yang, and Xiaoming Guo. "Image Fusion Algorithm Selection Based on Fusion Validity Distribution Combination of Difference Features." Electronics 10, no. 15 (July 21, 2021): 1752. http://dx.doi.org/10.3390/electronics10151752.

Abstract:
Existing image fusion models cannot reflect the demands that the diverse attributes (e.g., type or amplitude) of difference features place on algorithms, which leads to poor or invalid fusion effects. To address this, this paper puts forward the construction and combination of difference-feature fusion validity distributions based on intuition-possible sets, to deal with the selection of the algorithms with the better fusion effect for dual-mode infrared images. Firstly, the distances between the amplitudes of the difference features in the fused images and the source images are calculated. These distances can be divided into three levels according to the fusion result of each algorithm, which are regarded as intuition-possible sets of the fusion validity of the difference features, and a novel construction method for the fusion validity distribution based on intuition-possible sets is proposed. Secondly, in view of the multiple amplitude intervals of each difference feature, this paper proposes a distribution combination method based on intuition-possible set ordering. The difference-feature score results are aggregated by a fuzzy operator, and the joint drop shadows of the score results are obtained. Finally, the experimental results indicate that the proposed method can optimally select the algorithms that fuse difference features comparatively better across varied feature amplitudes.

Book chapters on the topic "Feature-set-difference"

1

Nastenko, Ievgen Arnoldovich, Oleksandra Olegivna Konoval, Olena Konstantinovna Nosovets, and Volodymyr Anatolevich Pavlov. "Set Classification." In Techno-Social Systems for Modern Economical and Governmental Infrastructures, 44–83. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-5586-5.ch003.

Abstract:
The classification problem where each object is given by a set of multidimensional measurements associated with an unknown dependence is considered. Intersection of the sets that define objects from different classes is allowed. In this case, it is natural to base classification algorithms on the difference between the dependencies for objects belonging to different classes. Two algorithms are proposed to convert the solution of the set classification problem from the initial feature space into (1) the parameter space of a common model structure for all the objects and (2) the parameter spaces of the best structures for each class, along with a classification algorithm based on the accuracy of object representation by the models built on the structures found for each class. If the objects are described by big data, the approach can be used to transform the data into a compact form (model parameters) that preserves the characteristics necessary to separate the classes. An approach to solving the problem of clustering sets is also proposed, and examples are given.
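
The central transformation, replacing each measurement set by the parameters of a model fitted to it, can be sketched briefly. The polynomial structure and the nearest-centroid decision below are illustrative assumptions, not the chapter's actual model structures or its representation-accuracy classifier.

```python
import numpy as np

def object_to_params(x, y, degree=2):
    """Fit a common polynomial structure y ~ f(x) to one object's
    measurement set; the coefficients become its compact feature vector."""
    A = np.vander(np.asarray(x, dtype=float), degree + 1)
    coef, *_ = np.linalg.lstsq(A, np.asarray(y, dtype=float), rcond=None)
    return coef

def classify(x, y, class_centroids, degree=2):
    """Assign an object to the class whose mean parameter vector is
    nearest (a simplified stand-in for representation-accuracy scoring)."""
    p = object_to_params(x, y, degree)
    return min(class_centroids,
               key=lambda c: np.linalg.norm(p - class_centroids[c]))
```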
2

Nesaragi, Naimahmed, and Shivnarayan Patidar. "An Explainable Machine Learning Model for Early Prediction of Sepsis Using ICU Data." In Infectious Diseases and Sepsis [Working Title]. IntechOpen, 2021. http://dx.doi.org/10.5772/intechopen.98957.

Abstract:
Early identification of individuals with sepsis is very useful in assisting clinical triage and decision-making, resulting in early intervention and improved outcomes. This study aims to develop an explainable machine learning model with clinical interpretability to predict sepsis onset 6 hours ahead and to validate improved prediction risk power for every time interval since admission to the ICU. The retrospective observational cohort study is carried out using PhysioNet Challenge 2019 ICU data from three distinct hospital systems, viz. A, B, and C. Data from A and B were shared publicly for training and validation, while sequestered data from all three cohorts were used for scoring. However, this study is limited to the publicly available training data. The training data contain 1,552,210 patient records of 40,336 ICU patients with up to 40 clinical variables (sourced for each hour of their ICU stay), divided into two datasets based on hospital systems A and B. The clinical feature exploration and interpretation for early prediction of sepsis is achieved using the proposed framework, viz. the explainable Machine Learning model for Early Prediction of Sepsis (xMLEPS). A total of 85 features, comprising the given 40 clinical variables augmented with 10 derived physiological features and 35 time-lag difference features, are fed to xMLEPS for the said prediction task of sepsis onset. A ten-fold cross-validation scheme is employed wherein an optimal prediction risk threshold is searched for each of the 10 LightGBM models. These optimum threshold values are later used by the corresponding models to refine the predictive power, in terms of utility score, for the prediction of labels in each fold. The entire framework is designed via Bayesian optimization and trained with the resultant feature set of 85 features, yielding an average normalized utility score of 0.4214 and an area under the receiver operating characteristic curve of 0.8591 on the publicly available training data. This study establishes a practical and explainable sepsis onset prediction model for ICU data using an applied ML approach, mainly gradient boosting. The study highlights the clinical significance of the physiological inter-relations among the given and proposed clinical signs via feature importance and SHapley Additive exPlanations (SHAP) plots for visualized interpretation.
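
The 35 time-lag difference features augment each hourly record with changes over preceding hours. The sketch below illustrates the construction with pandas; the variable names and lag choices are placeholders, not the paper's actual 35-feature specification.

```python
import pandas as pd

def add_lag_difference_features(df, cols=("HR", "Temp", "MAP"), lags=(1, 6)):
    """df holds one patient's hourly ICU records (one row per hour);
    each new column is the change in a clinical variable over a given lag."""
    out = df.copy()
    for col in cols:
        for lag in lags:
            out[f"{col}_diff_{lag}h"] = out[col] - out[col].shift(lag)
    return out
```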
3

Sasaki, Shiori, Koji Murakami, Yasushi Kiyoki, and Asako Uraki. "Global & Geographical Mapping and Visualization Method for Personal/Collective Health Data with 5D World Map System." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2020. http://dx.doi.org/10.3233/faia200825.

Abstract:
This paper presents a new knowledge base creation method for personal/collective health data, with knowledge of preemptive care and potential-risk inspection, using the global and geographical mapping and visualization functions of the 5D World Map System. The final goal of this research project is the realization of a system to analyze personal health/bio data and potential-risk inspection data and to provide a set of appropriate coping strategies and alerts with semantic computing technologies. The main feature of the 5D World Map System is to provide a collaborative work platform for users to perform a global analysis of sensing data in a physical space along with the related multimedia data in a cyber space, on a single view of time-series maps based on spatiotemporal and semantic correlation calculations. In this application, the concrete target data for world-wide evaluation are (1) multi-parameter personal health/bio data, such as blood pressure, blood glucose, BMI, uric acid level, etc., and daily habit data, such as food, smoking and drinking, for health monitoring, and (2) time-series multi-parameter collective health/bio data at the national/regional level for global analysis of potential causes of disease. This application realizes a new multidimensional data analysis and knowledge sharing for health monitoring and disease analysis at both the personal and global levels. The results can be analyzed by the time-series difference of the value at each spot, the differences between the values of multiple places in a focused area, and the time-series differences between the values of multiple locations, to detect and predict potential risks of disease.
4

Gribetz, Sarit Kattan. "Men’s and Women’s Time." In Time and Difference in Rabbinic Judaism, 135–87. Princeton University Press, 2020. http://dx.doi.org/10.23943/princeton/9780691192857.003.0004.

Abstract:
This chapter discusses the construction of a gendered temporality by examining a set of daily rituals mandated in rabbinic sources, some of which applied to men and others that were only required of women. It begins with the first ritual discussed in rabbinic sources, the recitation of the Shema prayer. Timing became an essential component of the Shema's recitation, and thus the tractate includes numerous debates about ritual time. One's time, it is suggested, ought to be marked first and foremost by this regularized declaration of devotion to God each morning and evening. Another feature of the rabbinic Shema is that only men became obligated in its recitation. While women are excluded from positive time-bound commandments, an entire set of rituals related to the laws of menstrual purity applies only to women and constructs a woman's time in ways that were markedly different from the time of men. The chapter then traces the development of the laws of bodily purity from biblical texts to rabbinic texts, which focus far greater attention on laws related to the menstruant woman.
5

Wong, Andrew K. C., Yang Wang, and Gary C. L. Li. "Pattern Discovery as Event Association." In Machine Learning, 1924–32. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-60960-818-7.ch804.

Abstract:
A basic task of machine learning and data mining is to automatically uncover patterns that reflect regularities in a data set. When dealing with a large database, especially when domain knowledge is not available or very weak, this can be a challenging task. The purpose of pattern discovery is to find non-random relations among events from data sets. For example, the "exclusive OR" (XOR) problem concerns 3 binary variables, A, B and C = A ⊕ B, i.e. C is true when either A or B, but not both, is true. Suppose, not knowing that it is the XOR problem, we would like to check whether or not the occurrence of the compound event [A=T, B=T, C=F] is just a random happening. If we could estimate its frequency of occurrences under the random assumption, then we know that it is not random if the observed frequency deviates significantly from that assumption. We refer to such a compound event as an event association pattern, or simply a pattern, if its frequency of occurrences significantly deviates from the default random assumption in the statistical sense. For instance, suppose that an XOR database contains 1000 samples and each primary event (e.g. [A=T]) occurs 500 times. The expected frequency of occurrences of the compound event [A=T, B=T, C=F] under the independence assumption is 0.5 × 0.5 × 0.5 × 1000 = 125. Suppose that its observed frequency is 250; we would like to see whether or not the difference between the observed and expected frequencies (i.e. 250 - 125) is significant enough to indicate that the compound event is not a random happening.

In statistics, contingency tables with the chi-squared statistic (Mills, 1955) are widely used to test the correlation between random variables. Instead of investigating variable correlations, pattern discovery shifts the traditional correlation analysis in statistics at the variable level to association analysis at the event level, offering an effective method to detect statistical associations among events.

In the early 90's, this approach was established for second-order event associations (Chan & Wong, 1990). A higher-order pattern discovery algorithm was devised in the mid 90's for discrete-valued data sets (Wong & Yang, 1997). In our methods, patterns inherent in data are defined as statistically significant associations of two or more primary events of different attributes if they pass a statistical test for deviation significance based on residual analysis. The discovered high-order patterns can then be used for classification (Wang & Wong, 2003). With continuous data, events are defined as Borel sets and the pattern discovery process is formulated as an optimization problem which recursively partitions the sample space for the best set of significant events (patterns) in the form of high-dimension intervals, from which probability density can be estimated by a Gaussian kernel fit (Chau & Wong, 1999). Classification can then be achieved using Bayesian classifiers. For data with a mixture of discrete and continuous values (Wong & Yang, 2003), the latter are categorized based on a global optimization discretization algorithm (Liu, Wong & Yang, 2004). As demonstrated in numerous real-world and commercial applications (Yang, 2002), pattern discovery is an ideal tool to uncover subtle and useful patterns in a database.

In pattern discovery, three open problems are addressed. The first concerns learning where noise and uncertainty are present. In our method, noise is taken as inconsistent samples against statistically significant patterns. Missing attribute values are also considered as noise. Using standard statistical hypothesis testing to confirm statistical patterns from the candidates, this method is a less ad hoc approach to discovering patterns than most of its contemporaries. The second problem concerns the detection of polythetic patterns without relying on exhaustive search. Efficient systems for detecting monothetic patterns between two attributes exist (e.g. Chan & Wong, 1990). However, for detecting polythetic patterns, an exhaustive search is required (Han, 2001). In many problem domains, polythetic assessments of feature combinations (or higher-order relationship detection) are imperative for robust learning. Our method resolves this problem by directly constructing polythetic concepts while screening out non-informative pattern candidates, using statistics-based heuristics in the discovery process. The third problem concerns the representation of the detected patterns. Traditionally, if-then rules and graphs, including networks and trees, are the most popular ones. However, they have shortcomings when dealing with multilevel and multiple-order patterns due to the non-exhaustive and unpredictable hierarchical nature of the inherent patterns. We adopt the attributed hypergraph (AHG) (Wang & Wong, 1996) as the representation of the detected patterns. It is a data structure general enough to encode information at many levels of abstraction, yet simple enough to quantify the information content of its organized structure. It is able to encode both the qualitative and the quantitative characteristics and relations inherent in the data set.
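
The XOR example reduces to simple arithmetic: compare the observed frequency of the compound event with its expectation under independence. The sketch below uses a plain standardized residual for illustration; the residual analysis in the cited work is more elaborate (adjusted residuals).

```python
import math

n = 1000
p_a = p_b = p_c_false = 500 / n        # each primary event occurs 500 times
expected = p_a * p_b * p_c_false * n   # 0.5 * 0.5 * 0.5 * 1000 = 125
observed = 250                         # frequency of [A=T, B=T, C=F]

residual = (observed - expected) / math.sqrt(expected)  # about 11.2
print(expected, residual)  # far beyond ~1.96, so not a random happening
```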

Conference papers on the topic "Feature-set-difference"

1

Kim, Yong Se. "Form Feature Recognition by Convex Decomposition." In ASME 1991 International Computers in Engineering Conference and Exposition. American Society of Mechanical Engineers, 1991. http://dx.doi.org/10.1115/cie1991-0009.

Abstract:
A convex decomposition method, called Alternating Sum of Volumes (ASV), uses convex hulls and set difference operations. ASV decomposition may not converge, which severely limits the domain of geometric objects that can be handled. By combining ASV decomposition and remedial partitioning for the non-convergence, we have proposed a convergent convex decomposition called Alternating Sum of Volumes with Partitioning (ASVP). In this article, we describe how ASVP decomposition is used for recognition of form features. ASVP decomposition can be viewed as a hierarchical volumetric representation of form features. Adjacency and interaction between form features are inherently represented in the decomposition in a hierarchical way. Several methods to enhance the feature information obtained by ASVP decomposition are also discussed.
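
The alternating pattern of convex hulls and set differences can be pictured in 2D with shapely. This is only an analogue sketch: the actual method operates on 3D boundary representations, treats each deficiency component separately, and adds remedial partitioning (ASVP) when the alternation does not converge.

```python
from shapely.geometry import Polygon

def asv_terms(shape, max_depth=10):
    """Return [(sign, hull)] terms with shape = hull0 - hull1 + hull2 - ...;
    stops when the deficiency (hull minus shape) becomes empty."""
    terms, sign = [], +1
    for _ in range(max_depth):  # guard: ASV may not converge in general
        hull = shape.convex_hull
        terms.append((sign, hull))
        deficiency = hull.difference(shape)
        if deficiency.is_empty:  # the shape was convex: converged
            break
        shape, sign = deficiency, -sign
    return terms

# Example: an L-shaped polygon decomposes as its hull minus a corner block.
L_shape = Polygon([(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)])
for sign, hull in asv_terms(L_shape):
    print("+" if sign > 0 else "-", hull.wkt)
```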
2

Lindberg, Ole, Harry B. Bingham, and Allan Peter Engsig Karup. "A Coupled Finite Difference and Weighted Least Squares Simulation of Violent Breaking Wave Impact." In ASME 2012 31st International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/omae2012-83823.

Abstract:
Two models for the simulation of free-surface flow are presented. The first is a finite difference based potential flow model with non-linear kinematic and dynamic free surface boundary conditions. The second is a weighted least squares based incompressible and inviscid flow model. A special feature of this model is a generalized finite point set method, which is applied to the solution of the Poisson equation on an unstructured point distribution. The presented finite point set method is generalized to arbitrary order of approximation. The two models are applied to the simulation of steep and overturning wave impacts on a vertical breakwater. Wave groups with five different wave heights are propagated from offshore to the vicinity of the breakwater, where the waves are steep, but still smooth and non-overturning. These waves are used as the initial condition for the weighted least squares based incompressible and inviscid model, and the wave impacts on the vertical breakwater are simulated with this model. The resulting maximum pressures and forces on the breakwater are relatively high compared with other studies, which is due to the incompressible nature of the present model.
3

Parienté, Frédéric, and Yong Se Kim. "Augmented Convex Decomposition Using Incremental Update for Recognition of Form Features." In ASME 1996 Design Engineering Technical Conferences and Computers in Engineering Conference. American Society of Mechanical Engineers, 1996. http://dx.doi.org/10.1115/96-detc/cie-1342.

Abstract:
Alternating Sum of Volumes with Partitioning (ASVP) decomposition is a volumetric representation of a part obtained from its boundary representation that organizes the faces of the part in an outside-in hierarchy. ASVP decomposition combines Alternating Sum of Volumes (ASV) decomposition, using convex hulls and set difference operations, and remedial partitioning, using cutting operations and concave edges. A Form Feature Decomposition (FFD), which can serve as a central feature representation for various applications, is obtained from ASVP decomposition. The incremental update of the convex decomposition is achieved by exploiting its hierarchical structure. For a connected incremental design change, the active components that need to be updated are localized in a subtree of the decomposition tree called the active subtree. The new decomposition is then obtained by updating only the active subtree in the old decomposition. In this paper, we present a new decomposition, called Augmented Alternating Sum of Volumes with Partitioning (AASVP) decomposition, that is incrementally constructed using ASV incremental update as a local operation on a decomposition tree. AASVP provides an improved feature recognition capability, as it localizes the effect of the change in the decomposition tree, avoids excessive remedial partitioning and captures the designer's intent in feature editing. AASVP differs from ASVP at the remedial-partitioning nodes by partitioning less. The current remedial partitioning method could be improved so that AASVP decomposition can be constructed directly from the solid model.
4

Li, Hongkun, Chaoge Wang, and Jiayu Ou. "Incipient Fault Diagnosis of the Planetary Gearbox Based on Improved Variational Mode Decomposition and Frequency-Weighted Energy Operator." In ASME Turbo Expo 2019: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/gt2019-90572.

Abstract:
Planetary gearboxes are widely used in large and complex mechanical equipment such as wind power generation, helicopters and the petrochemical industry. Gear failures occur frequently under working conditions of low speed, high service load and harsh operating environments. Incipient fault diagnosis can avoid major accidents and the loss of personnel and property. Because incipient faults of a planetary gearbox are difficult to recognize, and because the number of intrinsic mode functions (IMFs) decomposed by variational mode decomposition (VMD) must be set in advance and cannot be selected adaptively, an improved VMD algorithm is proposed that uses the energy difference as an evaluation parameter to automatically determine the decomposition level K. On this basis, a new method for early fault feature extraction of planetary gearboxes based on the improved VMD and a frequency-weighted energy operator is proposed. Firstly, the vibration signal is pre-decomposed by VMD, and the energy difference between the component signals and the original signal is calculated for different values of K. The optimal decomposition level K is determined according to the energy difference curve. Then, according to the kurtosis criterion, sensitive components are selected from the K modal components obtained by the decomposition and reconstructed. Finally, a new frequency-weighted energy operator is used to demodulate the reconstructed signal. The fault characteristic frequency information of the planetary gearbox can be accurately extracted from the energy spectrum. The method is applied to simulated fault data and actual data from a planetary gearbox; the weak fault characteristics are extracted effectively and the early fault characteristics are distinguished. The results show that the new method has application value and practical significance.
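
A sketch of the K-selection loop follows, together with the classic discrete Teager-Kaiser energy operator as a simplified stand-in for the paper's frequency-weighted operator. It assumes the third-party vmdpy package for the VMD step (called positionally with illustrative alpha/tau/tolerance settings); the flattening criterion on the energy difference curve is likewise an assumption.

```python
import numpy as np
from vmdpy import VMD  # assumed third-party VMD implementation

def select_k_by_energy_difference(signal, k_max=10, flat_ratio=0.01):
    """Increase K until the energy difference curve flattens out."""
    e_signal = np.sum(signal ** 2)
    prev = None
    for k in range(2, k_max + 1):
        # Positional arguments: f, alpha, tau, K, DC, init, tol
        u, _, _ = VMD(signal, 2000, 0.0, k, 0, 1, 1e-7)
        diff = abs(e_signal - np.sum(u ** 2))  # energy difference at this K
        if prev is not None and abs(prev - diff) < flat_ratio * e_signal:
            return k - 1  # further modes add little energy
        prev = diff
    return k_max

def teager_energy(x):
    """Discrete Teager-Kaiser operator x[n]^2 - x[n-1]*x[n+1], a simplified
    stand-in for the paper's frequency-weighted energy operator."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]
```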
5

Gebremariam, Mebrahitom Asmelash, Seow Xiang Yuan, Azmir Azhari, and Tamiru Alemu Lemma. "Remaining Tool Life Prediction Based on Force Sensors Signal During End Milling of Stavax ESR Steel." In ASME 2017 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/imece2017-70058.

Abstract:
This paper focuses on the prediction of the Remaining Useful Life (RUL) of a carbide insert end mill. As tool life degradation due to wear is the main limitation on machining productivity and part quality, prediction and periodic assessment of the condition of the tool is very helpful for the machining industry. The RUL prediction of tools is demonstrated based on force sensor signal values using the Support Vector Regression (SVR) method and Neural Network (NN) techniques. End milling tests were performed on a stainless steel workpiece at constant machining parameters, and the cutting force signal data were collected using a force dynamometer for feature extraction and further analysis. The SVR and NN models were compared on the same set of experimental data for prediction performance. Results have shown good agreement between the predicted and actual RUL of the tools for both models. The difference in the prognostic metrics, such as accuracy, precision and prediction horizon, between the two models is discussed.
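
Regressing RUL on force-signal features is easy to prototype with scikit-learn. The sketch shows the SVR half only; the feature definitions (e.g. RMS and peak force per pass) and the hyperparameter values are illustrative assumptions, and the data are fabricated placeholders.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# X: one row of force-signal features per machining pass
# (e.g. RMS force, peak force, mean feed force); y: remaining life (min).
X = np.array([[210.0, 480.0, 95.0],
              [260.0, 560.0, 120.0],
              [330.0, 700.0, 150.0]])
y = np.array([120.0, 80.0, 30.0])

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, y)
print(model.predict([[300.0, 650.0, 140.0]]))  # predicted remaining life
```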
6

Donley, Mark G., and Glen C. Steyer. "Dynamic Analysis of a Planetary Gear System." In ASME 1992 Design Technical Conferences. American Society of Mechanical Engineers, 1992. http://dx.doi.org/10.1115/detc1992-0015.

Abstract:
Noise reduction in geared systems is usually achieved by minimizing transmission error or by changing the gear train's dynamic response. While considerable research has been directed in the past to understanding and controlling the transmission error, the same cannot be said of the system dynamic response. Recent efforts at modifying the dynamic response to reduce the sensitivity to transmission error have proven to be very rewarding for parallel shaft gearing applications. In this paper, these efforts are extended to planetary gear set applications. A major difference between planetary gear sets and parallel shaft gears is that in planetary gear sets many gear meshes carry load instead of just one. This feature poses a modeling problem as to how to combine responses due to transmission errors at each loaded mesh to determine the total response. A method is proposed in this paper in which transmission errors at different gear meshes are combined into net vertical, net lateral and net tangential transmission errors. A methodology for computing the dynamic mesh force response due to these net transmission errors and for identifying the critical components that control the gear train system dynamics is presented. These techniques are useful in understanding the effects of system dynamics on gear noise and in developing quiet gear designs. To demonstrate the salient features of the proposed method, an example analysis of a transmission with a planetary gear set is presented.
7

Paliwal, Manish, Brian Kern, and D. Gordon Allan. "Evaluation of the Effect of Cement Viscosity on Cement Mantle in Total Knee Arthroplasty." In ASME 2008 International Mechanical Engineering Congress and Exposition. ASMEDC, 2008. http://dx.doi.org/10.1115/imece2008-67967.

Abstract:
Aseptic loosening of the tibial implant remains one of the major reasons for failure in Total Knee Arthroplasty (TKA). The cement viscosity at the time of application to the bone is of great importance in ensuring the long-term success of the arthroplasty, as it influences cement penetration and the stability of the prosthesis. Currently, there are a number of cements available with a wide range of viscosities and set times. High-viscosity, faster-setting cements may significantly reduce operating room times. However, the concern is that this positive feature may come at the expense of decreased penetration into the bone, and hence reduced stability of the construct. Four cement types (DePuy II (DePuy Inc., Warsaw, IN), Endurance (DePuy Inc., Warsaw, IN), Simplex-P (Stryker Corp., Kalamazoo, MI), and Palacos (Zimmer, Inc., Warsaw, IN)) were compared and evaluated during TKA using surrogate tibiae, with respect to the depth of cement penetration according to the Knee Society Total Knee Arthroplasty Roentgenographic Evaluation System. On radiographic analysis of the implanted surrogate tibiae, it was found that Simplex had the maximum cumulative penetration of 19.2 mm over seven zones in the mediolateral view, and 12.7 mm over three zones in the anteroposterior view. In Zone 7, the difference was statistically significant when comparing Simplex with Palacos (11 mm vs 4.6 mm, two-tailed P value = 0.035), somewhat significant with DePuy II (11 mm vs 6 mm, two-tailed P value = 0.08), but not significant when compared with Endurance (11 mm vs 10 mm, two-tailed P value = 0.6345). In Zone 5, the difference was statistically significant for Simplex vs Endurance (0.3 mm vs 2.2 mm, P = 0.028) and for Simplex vs DePuy II (0.3 mm vs 2.17 mm, P = 0.012). This study enhances the understanding of the relation between cement viscosity and cement penetration into cancellous bone during TKA.
8

Schwartz, Mark P. "McKay Bay Refuse-to-Energy Facility Split-Range Control System." In 13th Annual North American Waste-to-Energy Conference. ASMEDC, 2005. http://dx.doi.org/10.1115/nawtec13-3162.

Abstract:
The McKay Bay Refuse-to-Energy Facility underwent a three-year retrofit program completed in 2001. The major portion of this work involved the replacement of all four combustion trains. The existing turbine generator set, rated at 22.5 MW, was retained. Each of the four boilers had a maximum continuous rating (MCR) of 62,186 lb/hr of steam; i.e., 248,744 lb/hr with all boilers in operation. The turbine generator operated at about 220,000 lb/hr (about 93% capacity), thus allowing for load swings due to fuel inconsistencies. As a result of the difference between the boiler MCR and the rate into the turbine, excess steam (over 26,000 lb/hr) was sent to an existing bypass (dump) condenser. Upon installing a new automatic governor control system for the turbine during the retrofit, the potential for providing additional control capabilities was realized. Utilizing an available second control feature on the governor control system, a major portion of the bypassed steam could be sent to the turbine via an innovative split-range control system configuration. The formerly bypassed steam was now added to the energy recovered, and had a positive effect on the net kwh/ton of waste. This paper discusses the research, design and installation of the split-range control system, as well as the economics of the project. The capital cost of this system enhancement was recovered in the first three months of operation, and the process continues to operate successfully.
9

Zhang, Xiao, Vignesh Suresh, Yi Zheng, Shaodong Wang, Qing Li, Hao Lyu, Beiwen Li, and Hantang Qin. "Surface Roughness Measurement of Additive Manufactured Parts Using Focus Variation Microscopy and Structured Light System." In ASME 2019 14th International Manufacturing Science and Engineering Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/msec2019-2874.

Abstract:
Surface roughness is a significant parameter when evaluating the quality of products in the additive manufacturing (AM) industry. AM parts are fabricated layer by layer, which is quite different from traditional formative or subtractive methods. A uniform feature can be obtained along the direction of AM printhead movement on the surface of manufactured components, while a large waviness value is found in the direction perpendicular to printhead movement. This unique characteristic differentiates additively manufactured parts from cast or machined parts in the way surface roughness is measured and defined. Therefore, it is necessary to set up new standards to measure the surface roughness of AM parts and analyze the variation in the topographical profile. The most widely used instruments for measuring surface roughness are the profilometer and the laser scanner, but they cannot generate 3D topographical surfaces in real time. In this work, two non-contact optical methods, based on Focus Variation Microscopy (FVM) and a Structured Light System (SLS), were adopted to measure the surface topography of the target components. The FVM captures images of objects at different focus levels; by translating the object's position based on the focus profile, a 3D image is obtained by data fusion. The lab-made microscopic SLS was used to perform simultaneous whole-surface scanning, with the potential to achieve real-time 3D surface reconstruction. The two optical metrology systems generated two entirely different point cloud data sets, and limited research has been conducted to verify whether point cloud data sets generated from different optical systems follow the same distribution. In this paper, a statistical method was applied to test the difference between the two systems. Using data analytics approaches for comparison, it was found that surface roughness estimates based on the FVM and SLS systems show no significant difference from a data fusion point of view, even though the point cloud data generated were completely different in values. In addition, this paper provides a standard measurement approach for a real-time, non-contact method to estimate the surface roughness of AM parts. The two metrology techniques can be applied for in-situ real-time surface analysis and process planning for AM.
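
The comparison can be pictured as a two-sample test on matched roughness estimates. Welch's t-test below is one reasonable choice, and the Ra values are fabricated placeholders; the paper's exact test and data are not reproduced here.

```python
import numpy as np
from scipy import stats

# Illustrative Ra estimates (um) of the same sample regions from each system
ra_fvm = np.array([3.1, 3.4, 2.9, 3.2, 3.3])
ra_sls = np.array([3.0, 3.5, 3.1, 3.2, 3.4])

t_stat, p_value = stats.ttest_ind(ra_fvm, ra_sls, equal_var=False)  # Welch
if p_value > 0.05:
    print(f"No significant difference between FVM and SLS (p = {p_value:.2f})")
```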
10

Werth, David, and Matthew Havice. "A Review of Common Problems Observed in Cooling Water Intakes and the Use of Physical Models to Develop Effective Solutions." In ASME/JSME/KSME 2015 Joint Fluids Engineering Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/ajkfluids2015-33776.

Abstract:
Pump intake structures are a necessary component of the cooling water systems for power plants, process and manufacturing facilities, flood control and water/wastewater applications. Large cooling water systems often use substantial sea/river water intakes or cooling towers to provide the required cooling of the process or circulating water. These structures can be very large and often house multiple pumps with capacities ranging from a few hundred m3/hr to 60,000 m3/hr or more. With such large flow rates, care must be taken to ensure uniform flow to the pumps to limit vortex activity, vibration, flow-induced cavitation and performance problems. In many cases, a physical hydraulic model study is conducted to evaluate the overall approach flow and the performance of the intake. This paper presents a synopsis of several recent physical model studies and a review of recurring problems associated with common intake design features. It takes a closer look at stop log support walls, an intake design feature common to seawater intakes. This wall is often used to minimize the height of the stop logs. In applications with large variations of water level, such as a seawater intake, there are times when the support walls are submerged significantly, resulting in significant flow disturbances. A feature common to cooling towers is the use of 90-degree suction elbows to supply horizontal pumps; a review of short-radius vs long-radius elbow performance is presented. Cooling towers often have another common feature, which is a significant difference in depth between the cooling tower basin and the pump sump, resulting in typically shallow basins and deeper sumps. A common problem is the use of minimum pump submergence to set the water levels without reference to the basin invert elevation. A discussion of choked flow conditions in cooling towers is presented. A final discussion is presented regarding cross-flow and the use of concentrated supply channels in cooling tower applications to facilitate the isolation of individual tower cells. The results of several model studies are presented to demonstrate the negative impacts that these common intake features have on approach flow conditions. The intent of the paper is to provide the design engineer some additional guidance not offered in industry guidelines or standards, with the hope of avoiding common problems that can be costly and difficult to remediate after the intake has been constructed.