Academic literature on the topic 'IDEAL DISTANCE MINIMIZATION METHOD'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'IDEAL DISTANCE MINIMIZATION METHOD.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "IDEAL DISTANCE MINIMIZATION METHOD"

1

Khan, Abdullah, Hashim Hizam, Noor Izzri Abdul-Wahab, and Mohammad Lutfi Othman. "Solution of Optimal Power Flow Using Non-Dominated Sorting Multi Objective Based Hybrid Firefly and Particle Swarm Optimization Algorithm." Energies 13, no. 16 (August 18, 2020): 4265. http://dx.doi.org/10.3390/en13164265.

Full text
Abstract:
In this paper, a multi-objective hybrid firefly and particle swarm optimization (MOHFPSO) was proposed for different multi-objective optimal power flow (MOOPF) problems. Optimal power flow (OPF) was formulated as a non-linear problem with various objectives and constraints. Pareto optimal front was obtained by using non-dominated sorting and crowding distance methods. Finally, an optimal compromised solution was selected from the Pareto optimal set by applying an ideal distance minimization method. The efficiency of the proposed MOHFPSO technique was tested on standard IEEE 30-bus and IEEE 57-bus test systems with various conflicting objectives. Simulation results were also compared with non-dominated sorting based multi-objective particle swarm optimization (MOPSO) and different optimization algorithms reported in the current literature. The achieved results revealed the potential of the proposed algorithm for MOOPF problems.
APA, Harvard, Vancouver, ISO, and other styles
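
Entry 1 above selects a compromise solution from the Pareto set by minimizing its distance to an ideal point. The sketch below is a generic, minimal illustration of that selection step rather than the authors' implementation; the min-max normalization and the plain Euclidean distance are assumptions.

```python
import numpy as np

def ideal_distance_compromise(pareto_front):
    """Pick a compromise solution from a Pareto front (rows = solutions,
    columns = minimized objectives) by ideal distance minimization.

    Generic sketch: objectives are min-max normalized, the ideal point is
    the per-objective minimum, and the solution closest to it (Euclidean
    distance) is returned.
    """
    f = np.asarray(pareto_front, dtype=float)
    # Min-max normalize each objective so distances are comparable (assumption).
    span = f.max(axis=0) - f.min(axis=0)
    span[span == 0] = 1.0
    fn = (f - f.min(axis=0)) / span
    # Ideal point: best (smallest) normalized value of every objective.
    ideal = fn.min(axis=0)
    # Euclidean distance of each non-dominated solution to the ideal point.
    d = np.linalg.norm(fn - ideal, axis=1)
    return int(np.argmin(d)), d

# Illustrative example: three non-dominated solutions, two minimized objectives.
front = [[802.4, 0.21], [830.1, 0.18], [870.9, 0.16]]
best_idx, dists = ideal_distance_compromise(front)
print(best_idx, dists)
```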
2

Restifo, Richard J. "The Pedicled Robertson Mammaplasty: Minimization of Complications in Obese Patients With Extreme Macromastia." Aesthetic Surgery Journal 40, no. 12 (March 16, 2020): NP666–NP675. http://dx.doi.org/10.1093/asj/sjaa073.

Full text
Abstract:
Abstract Background Breast reduction for extreme macromastia in obese patients is a potentially high-risk endeavor. Free nipple grafting as well as a variety of pedicled techniques have been advocated for large reductions in obese patients, but the number of different approaches suggests that no single method is ideal. This paper suggests the Robertson Mammaplasty, an inferior pedicle technique characterized by a curvilinear skin extension onto the pedicle, as a potentially favorable approach to this clinical situation. Objectives The author sought to determine the safety of the Pedicled Robertson Mammaplasty for extreme macromastia in obese patients. Methods The records of a single surgeon’s practice over a 15-year period were retrospectively reviewed. Inclusion criteria were a Robertson Mammaplasty performed with a >3000-g total resection and a patient weight at least 20% above ideal body weight. Records were reviewed for patient characteristics, operative times, and complications. Results The review yielded 34 bilateral reduction patients that met inclusion criteria. The mean resection weight was 1859.2 g per breast, the mean body mass index was 36.4 kg/m2, and the mean sternal notch-to-nipple distance was 41.4 cm. Mean operative time was 122 minutes. There were no cases of nipple necrosis and no major complications that required reoperation under general anesthesia. A total 26.4% of patients had minor complications that required either local wound care or small office procedures, and 4.4% received small revisions under local anesthesia. Conclusions The Pedicled Robertson Mammaplasty is a fast and safe operation that yields good aesthetic results and a relative minimum of complications in the high-risk group of obese patients with extreme macromastia. Level of Evidence: 4
APA, Harvard, Vancouver, ISO, and other styles
3

Aljohani, Khalid. "Optimizing the Distribution Network of a Bakery Facility: A Reduced Travelled Distance and Food-Waste Minimization Perspective." Sustainability 15, no. 4 (February 16, 2023): 3654. http://dx.doi.org/10.3390/su15043654.

Full text
Abstract:
There are many logistics nuances specific to bakery factories, making the design of their distribution network especially complex. In particular, bakery products typically have a shelf life of under a week. To ensure that products are delivered to end-customers with freshness, speed, quality, health, and safety prioritized, the distribution network, facility location, and ordering system must be optimally designed. This study presents a multi-stage framework for a bakery factory comprised of a selection methodology of an optimum facility location, an effective distribution network for delivery operations, and a practical ordering system used by related supply chain actors. The operations function and distribution network are optimized using a multi-criteria decision-making method comprised of the Analytic Hierarchy Process (AHP) to establish optimization criteria and Technique of Order Preference Similarity to the Ideal Solution (TOPSIS) to select the optimal facility location. The optimal distribution network strategy was found using an optimization technique. This framework was applied to a real-life problem for a bakery supply chain in the Western Region, Saudi Arabia. Using a real-life, quantitative dataset and incorporating qualitative feedback from key stakeholders in the supply chain, the developed framework enabled a reduction in overall distribution costs by 14%, decreasing the total travel distance by 16%, and decreasing estimated food waste by 22%. This result was primarily achieved by solving the facility location problem in favor of operating two factories without dedicated storage facilities and implementing the distribution network strategy of direct shipment of products from the bakery to customers.
APA, Harvard, Vancouver, ISO, and other styles
4

GÖÇMEN POLAT, Elifcan. "Distribution Centre Location Selection for Disaster Logistics with Integrated Goal Programming-AHP based TOPSIS Method at the City Level." Afet ve Risk Dergisi 5, no. 1 (June 20, 2022): 282–96. http://dx.doi.org/10.35341/afet.1071343.

Full text
Abstract:
The importance of disaster logistics and its share in the logistics sector are increasing significantly. Most disasters are difficult to predict; therefore, a set of measures is necessary to reduce the risks. Thus, disaster logistics needs to be designed with both pre-disaster and post-disaster measures. Such disasters are experienced intensely in Turkey, making the importance of these measures even more evident. Therefore, accurate models are required to develop an effective disaster preparedness system. One of the most important decisions for increasing preparedness is locating the centres for handling material inventory. In this context, this paper analyses the response phase by designing the disaster distribution centres in Turkey at the provincial level. An integration of the AHP (Analytical Hierarchy Process) based TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) method and a goal programming model is used to decide among alternative locations of distribution centres. The TOPSIS method is employed for ranking the locations based on hazard scores, total area, population, and distance to centre. Two conflicting objectives are proposed in the goal programming formulation: maximization of the TOPSIS scores and minimization of the number of distribution centres covering all demands (a set covering model). Although Gecimli has the highest priority with a score of 0.8 in the TOPSIS ranking, Altincevre (0.77) and Buzlupınar (0.75) satisfy both the TOPSIS score and coverage of the demand nodes. The results confirm that the computational analysis provides disaster prevention insights, especially in regions with limited data.
APA, Harvard, Vancouver, ISO, and other styles
5

Lopez-Perez, Jose J., Uriel H. Hernandez-Belmonte, Juan-Pablo Ramirez-Paredes, Marco A. Contreras-Cruz, and Victor Ayala-Ramirez. "Distributed Multirobot Exploration Based on Scene Partitioning and Frontier Selection." Mathematical Problems in Engineering 2018 (June 20, 2018): 1–17. http://dx.doi.org/10.1155/2018/2373642.

Full text
Abstract:
In mobile robotics, the exploration task consists of navigating through an unknown environment and building a representation of it. The mobile robot community has developed many approaches to solve this problem. These methods are mainly based on two key ideas. The first one is the selection of promising regions to explore and the second is the minimization of a cost function involving the distance traveled by the robots, the time it takes for them to finish the exploration, and others. An option to solve the exploration problem is the use of multiple robots to reduce the time needed for the task and to add fault tolerance to the system. We propose a new method to explore unknown areas, by using a scene partitioning scheme and assigning weights to the frontiers between explored and unknown areas. Energy consumption is always a concern during the exploration, for this reason our method is a distributed algorithm, which helps to reduce the number of communications between robots. By using this approach, we also effectively reduce the time needed to explore unknown regions and the distance traveled by each robot. We performed comparisons of our approach with state-of-the-art methods, obtaining a visible advantage over other works.
APA, Harvard, Vancouver, ISO, and other styles
6

Shu Xuan, Leong, Tengku Juhana Tengku Hashim, and Muhamad Najib Kamarudin. "Optimal location and sizing of distributed generation to minimize losses using whale optimization algorithm." Indonesian Journal of Electrical Engineering and Computer Science 29, no. 1 (January 1, 2022): 15. http://dx.doi.org/10.11591/ijeecs.v29.i1.pp15-23.

Full text
Abstract:
Conventional power plants often introduce power quality concerns to the network, for instance high power losses and a poor voltage profile, because the plants are located a distance away from the loads. With proper planning and systematic allocation, the introduction of distributed generation (DG) into the network will enhance the performance and condition of the power system. This paper utilizes the whale optimization algorithm (WOA) to search for the most ideal location and size of DG while ensuring the reduction of power losses and the minimization of voltage deviation. The WOA is implemented in the IEEE 33-bus radial distribution system (RDS) using MATPOWER and MATLAB software for the cases of no DG, one DG, and two DGs installed. The outcome obtained using the WOA was compared to other well-known optimization methods, and the WOA showed its competency; its optimal locations are almost the same as those found by the other methods. The best result was the system with two DGs installed, because its losses were the lowest compared to the one-DG and no-DG cases.
APA, Harvard, Vancouver, ISO, and other styles
7

He, Chun, Ke Guo, and Huayue Chen. "An Improved Image Filtering Algorithm for Mixed Noise." Applied Sciences 11, no. 21 (November 4, 2021): 10358. http://dx.doi.org/10.3390/app112110358.

Full text
Abstract:
In recent years, image filtering has been a hot research direction in the field of image processing. Experts and scholars have proposed many methods for noise removal in images, and these methods have achieved quite good denoising results. However, most methods are performed on single noise, such as Gaussian noise, salt and pepper noise, multiplicative noise, and so on. For mixed noise removal, such as salt and pepper noise + Gaussian noise, although some methods are currently available, the denoising effect is not ideal, and there are still many places worthy of improvement and promotion. To solve this problem, this paper proposes a filtering algorithm for mixed noise with salt and pepper + Gaussian noise that combines an improved median filtering algorithm, an improved wavelet threshold denoising algorithm and an improved Non-local Means (NLM) algorithm. The algorithm makes full use of the advantages of the median filter in removing salt and pepper noise and demonstrates the good performance of the wavelet threshold denoising algorithm and NLM algorithm in filtering Gaussian noise. At first, we made improvements to the three algorithms individually, and then combined them according to a certain process to obtain a new method for removing mixed noise. Specifically, we adjusted the size of window of the median filtering algorithm and improved the method of detecting noise points. We improved the threshold function of the wavelet threshold algorithm, analyzed its relevant mathematical characteristics, and finally gave an adaptive threshold. For the NLM algorithm, we improved its Euclidean distance function and the corresponding distance weight function. In order to test the denoising effect of this method, salt and pepper + Gaussian noise with different noise levels were added to the test images, and several state-of-the-art denoising algorithms were selected to compare with our algorithm, including K-Singular Value Decomposition (KSVD), Non-locally Centralized Sparse Representation (NCSR), Structured Overcomplete Sparsifying Transform Model with Block Cosparsity (OCTOBOS), Trilateral Weighted Sparse Coding (TWSC), Block Matching and 3D Filtering (BM3D), and Weighted Nuclear Norm Minimization (WNNM). Experimental results show that our proposed algorithm is about 2–7 dB higher than the above algorithms in Peak Signal-Noise Ratio (PSNR), and also has better performance in Root Mean Square Error (RMSE), Structural Similarity (SSIM), and Feature Similarity (FSIM). In general, our algorithm has better denoising performance, better restoration of image details and edge information, and stronger robustness than the above-mentioned algorithms.
APA, Harvard, Vancouver, ISO, and other styles
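
Entry 7 improves the Euclidean distance and weight functions inside the Non-local Means (NLM) filter. For orientation, here is a minimal sketch of the standard NLM patch weight that such improvements start from; the filtering parameter h and the patch values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def nlm_weight(patch_i, patch_j, h=10.0):
    """Standard (baseline) Non-local Means weight between two image patches.

    Uses the mean squared Euclidean distance between patches; the cited paper
    replaces this distance and the corresponding weight function with
    improved variants that are not reproduced here.
    """
    d2 = np.mean((patch_i.astype(float) - patch_j.astype(float)) ** 2)
    return np.exp(-d2 / (h * h))

# Illustrative 3x3 patches: similar patches get a weight near 1.0.
p1 = np.array([[10, 12, 11], [9, 10, 12], [11, 10, 10]])
p2 = np.array([[40, 42, 41], [39, 40, 42], [41, 40, 40]])
print(nlm_weight(p1, p1), nlm_weight(p1, p2))
```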
8

Jawak, Shridhar D., Sagar F. Wankhede, Alvarinho J. Luis, and Keshava Balakrishna. "Impact of Image-Processing Routines on Mapping Glacier Surface Facies from Svalbard and the Himalayas Using Pixel-Based Methods." Remote Sensing 14, no. 6 (March 15, 2022): 1414. http://dx.doi.org/10.3390/rs14061414.

Full text
Abstract:
Glacier surface facies are valuable indicators of changes experienced by a glacial system. The interplay of accumulation and ablation facies, followed by intermixing with dust and debris, as well as the local climate, all induce observable and mappable changes on the supraglacial terrain. In the absence or lag of continuous field monitoring, remote sensing observations become vital for maintaining a constant supply of measurable data. However, remote satellite observations suffer from atmospheric effects, resolution disparity, and use of a multitude of mapping methods. Efficient image-processing routines are, hence, necessary to prepare and test the derivable data for mapping applications. The existing literature provides an application-centric view for selection of image processing schemes. This can create confusion, as it is not clear which method of atmospheric correction would be ideal for retrieving facies spectral reflectance, nor are the effects of pansharpening examined on facies. Moreover, with a variety of supervised classifiers and target detection methods now available, it is prudent to test the impact of variations in processing schemes on the resultant thematic classifications. In this context, the current study set its experimental goals. Using very-high-resolution (VHR) WorldView-2 data, we aimed to test the effects of three common atmospheric correction methods, viz. Dark Object Subtraction (DOS), Quick Atmospheric Correction (QUAC), and Fast Line-of-Sight Atmospheric Analysis of Hypercubes (FLAASH); and two pansharpening methods, viz. Gram–Schmidt (GS) and Hyperspherical Color Sharpening (HCS), on thematic classification of facies using 12 supervised classifiers. The conventional classifiers included: Mahalanobis Distance (MHD), Maximum Likelihood (MXL), Minimum Distance to Mean (MD), Spectral Angle Mapper (SAM), and Winner Takes All (WTA). The advanced/target detection classifiers consisted of: Adaptive Coherence Estimator (ACE), Constrained Energy Minimization (CEM), Matched Filtering (MF), Mixture-Tuned Matched Filtering (MTMF), Mixture-Tuned Target-Constrained Interference-Minimized Filter (MTTCIMF), Orthogonal Space Projection (OSP), and Target-Constrained Interference-Minimized Filter (TCIMF). This experiment was performed on glaciers at two test sites, Ny-Ålesund, Svalbard, Norway; and Chandra–Bhaga basin, Himalaya, India. The overall performance suggested that the FLAASH correction delivered realistic reflectance spectra, while DOS delivered the least realistic. Spectra derived from HCS sharpened subsets seemed to match the average reflectance trends, whereas GS reduced the overall reflectance. WTA classification of the DOS subsets achieved the highest overall accuracy (0.81). MTTCIMF classification of the FLAASH subsets yielded the lowest overall accuracy of 0.01. However, FLAASH consistently provided better performance (less variable and generally accurate) than DOS and QUAC, making it the more reliable and hence recommended algorithm. While HCS-pansharpened classification achieved a lower error rate (0.71) in comparison to GS pansharpening (0.76), neither significantly improved accuracy nor efficiency. The Ny-Ålesund glacier facies were best classified using MXL (error rate = 0.49) and WTA classifiers (error rate = 0.53), whereas the Himalayan glacier facies were best classified using MD (error rate = 0.61) and WTA (error rate = 0.45). 
The final comparative analysis of classifiers based on the total error rate across all atmospheric corrections and pansharpening methods yielded the following reliability order: MXL > WTA > MHD > ACE > MD > CEM = MF > SAM > MTMF = TCIMF > OSP > MTTCIMF. The findings of the current study suggested that for VHR visible near-infrared (VNIR) mapping of facies, FLAASH was the best atmospheric correction, while MXL may deliver reliable thematic classification. Moreover, an extensive account of the varying exertions of each processing scheme is discussed, and could be transferable when compared against other VHR VNIR mapping methods.
APA, Harvard, Vancouver, ISO, and other styles
9

H. Mohammed, Abbas, and Khattab S. Abdul-Razzaq. "Optimum Design of Steel Trapezoidal Box-Girders Using Finite Element Method." International Journal of Engineering & Technology 7, no. 4.20 (November 28, 2018): 325. http://dx.doi.org/10.14419/ijet.v7i4.20.26130.

Full text
Abstract:
The objective of structural design is to select member sizes that give the optimal proportioning of the overall structural geometry. Steel trapezoidal box-girders have been used widely in various engineering fields. The objective of this study is to develop a three-dimensional finite element model for the size optimization of steel trapezoidal box-girders. The finite element software package ANSYS was used to determine the optimal cross-section dimensions for the steel trapezoidal box-girder. Two objective functions were considered in this study: minimization of the strain energy and minimization of the volume. The design variables are the width of the top flange, the width of the bottom flange, the thickness of the top flange, the thickness of the bottom flange, the height of the girder, and the thickness of the webs. The constraints considered are the normal and shear stresses in the steel girder and the deflection at mid-length of the girder. Optimization results for the steel girder show that the optimal cross-section area for strain energy minimization is greater than that for volume minimization by 6%. Since the minimum cross-section area gives the most economical structure, volume minimization is the more relevant objective for steel girder optimization.
APA, Harvard, Vancouver, ISO, and other styles
10

Rozylowicz, Laurentiu, Florian P. Bodescu, Cristiana M. Ciocanea, Athanasios A. Gavrilidis, Steluta Manolache, Marius L. Matache, Iulia V. Miu, Ionut C. Moale, Andreea Nita, and Viorel D. Popescu. "Empirical analysis and modeling of Argos Doppler location errors in Romania." PeerJ 7 (January 31, 2019): e6362. http://dx.doi.org/10.7717/peerj.6362.

Full text
Abstract:
Background Advances in wildlife tracking technology have allowed researchers to understand the spatial ecology of many terrestrial and aquatic animal species. Argos Doppler is a technology that is widely used for wildlife tracking owing to the small size and low weight of the Argos transmitters. This allows them to be fitted to small-bodied species. The longer lifespan of the Argos units in comparison to units outfitted with miniaturized global positioning system (GPS) technology has also recommended their use. In practice, large Argos location errors often occur due to communication conditions such as transmitter settings, local environment, and the behavior of the tracked individual. Methods Considering the geographic specificity of errors and the lack of benchmark studies in Eastern Europe, the research objectives were: (1) to evaluate the accuracy of Argos Doppler technology under various environmental conditions in Romania, (2) to investigate the effectiveness of straightforward destructive filters for improving Argos Doppler data quality, and (3) to provide guidelines for processing Argos Doppler wildlife monitoring data. The errors associated with Argos locations in four geographic locations in Romania were assessed during static, low-speed and high-speed tests. The effectiveness of the Douglas Argos distance angle filter algorithm was then evaluated to ascertain its effect on the minimization of localization errors. Results Argos locations received in the tests had larger associated horizontal errors than those indicated by the operator of the Argos system, including under ideal reception conditions. Positional errors were similar to those obtained in other studies outside of Europe. The errors were anisotropic, with larger longitudinal errors for the vast majority of the data. Errors were mostly related to speed of the Argos transmitter at the time of reception, but other factors such as topographical conditions and orientation of antenna at the time of the transmission also contributed to receiving low-quality data. The Douglas Argos filter successfully excluded the largest errors while retaining a large amount of data when the threshold was set to the local scale (two km). Discussion Filter selection requires knowledge about the movement patterns and behavior of the species of interest, and the parametrization of the selected filter typically requires a trial and error approach. Selecting the proper filter reduces the errors while retaining a large amount of data. However, the post-processed data typically includes large positional errors; thus, we recommend incorporating Argos error metrics (e.g., error ellipse) or use complex modeling approaches when working with filtered data.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "IDEAL DISTANCE MINIMIZATION METHOD"

1

CHOUHAN, CHETAN. "MULTIOBJECTIVE ECONOMIC LOAD DISPATCH USING WEIGHTING METHOD." Thesis, 2012. http://dspace.dtu.ac.in:8080/jspui/handle/repository/13939.

Full text
Abstract:
M.TECH
In general, a large-scale power system possesses multiple objectives to be achieved. The ideal power system operation is achieved when various objectives, such as cost of generation, system transmission loss, environmental pollution, and security, are simultaneously attained at their minimum values. Since these objectives are conflicting in nature, it is impossible to achieve this ideal power system operation. In this thesis work, three objectives of the Multiobjective Economic Load Dispatch (MOELD) problem are considered: cost of generation, system transmission loss, and environmental pollution. The MOELD problem is formulated as a multiobjective optimization problem using the weighting method, and a number of noninferior solutions are generated in 3D space. The optimal power system operation is attained by the Ideal Distance Minimization method. This method employs the concept of an 'Ideal Point' (IP) to scalarize problems having multiple objectives, and it minimizes the Euclidean distance between the IP and the set of noninferior solutions. The method has been applied to the IEEE 30-bus system.
APA, Harvard, Vancouver, ISO, and other styles
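
The thesis above applies the Ideal Distance Minimization method to the noninferior set generated by the weighting method. A compact statement of that scalarization, with notation assumed here (the thesis's exact symbols are not given in the abstract), is:

```latex
% Ideal Distance Minimization over the noninferior set N generated by the
% weighting method. F^* = (F_1^*, F_2^*, F_3^*) collects the individual
% minima of generation cost, transmission loss, and emission; the compromise
% operating point x^c is the noninferior solution closest to F^*.
F_k^{*} = \min_{x \in N} F_k(x), \qquad k = 1, 2, 3,
\qquad
x^{c} = \arg\min_{x \in N} \left\| F(x) - F^{*} \right\|_{2}
      = \arg\min_{x \in N} \sqrt{\sum_{k=1}^{3} \bigl( F_k(x) - F_k^{*} \bigr)^{2}} .
```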
2

Hsieh, Hsun (謝洵). "Automatic tumor segmentation of breast ultra-sound images using a distance-regularized level-set evolution method with initial contour obtained by guided image filter, L0 gradient minimization smoothing pre-processing, and morphological features." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/t6z6cs.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Electrical Engineering
Academic year 105 (2016–2017)
Due to the speckle noise and low contrast in breast ultrasound (BUS) images, it is hard to locate the contour of a tumor using a single method. In this thesis, a new method for finding an initial contour is proposed, which can improve the result of distance-regularized level-set evolution (DRLSE) on the segmentation of BUS images. The new method focuses on improving the algorithm proposed by Tsai-Wen Niu, which searches for an initial contour based on local minima in the images. When BUS images contain calcification, that algorithm may fail to find an initial contour, leading to a poor segmentation result because the initial contour is placed in the wrong location. Therefore, we acquire a larger initial contour by using a series of image smoothing methods and binarization, which eliminate weak edges and adjust the contrast in BUS images. In addition, some images without local minima can be handled successfully by the proposed method. However, the pixel values in these images are similar, so it might be hard to accurately separate the tumor region from the non-tumor region by the difference in pixel values alone. These obstacles are overcome by calculating the differences in length and pixel value within the suspect regions, and the ranking outcome is improved by using morphological features. After applying DRLSE, our initial contour can reach the tumor region more accurately. To evaluate the segmentation results, they are compared with the outcomes of DRLSE obtained from different initial contours proposed by Tsai-Wen Niu, the expansion DRLSE method, and the contraction DRLSE method, using three evaluation metrics: ME, RFAE, and MHD. The experimental results indicate that the proposed method generally performs better than the other methods. The initial contour might contain non-tumor regions when the edge of the tumor boundary is too ambiguous; even so, the proposed method drastically reduces the number of DRLSE iterations and the computation time. According to the experimental results, the proposed method has three advantages over the other methods. First, it sets the initial contour automatically, which is more efficient than setting it manually. Second, the region of the initial contour is much bigger than those obtained by the other methods, which reduces the computation time and the number of DRLSE iterations. Third, if the tumor boundary is distinct, the new initial contour can improve the segmentation result of DRLSE.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "IDEAL DISTANCE MINIMIZATION METHOD"

1

Xu, Ruo-ning, and Xiao-yan Zhai. "An Improved Method for Ranking Fuzzy Numbers by Distance Minimization." In Advances in Intelligent and Soft Computing, 147–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-28592-9_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chakraborty, Shankar, Prasenjit Chatterjee, and Partha Protim Das. "Compromise Ranking of Alternatives from Distance to Ideal Solution (CRADIS) Method." In Multi-Criteria Decision-Making Methods in Manufacturing Environments, 343–47. New York: Apple Academic Press, 2023. http://dx.doi.org/10.1201/9781003377030-32.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chen, Angela Yu Yen, and Yutaka Karasawa. "The Simulation for the Ideal Optimum Site Model in the Actual Distance Method." In Smart and Sustainable Supply Chain and Logistics — Challenges, Methods and Best Practices, 63–112. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-15412-6_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lieberman, Elliot R. "Salukvadze's Ideal Distance Minimization Method." In Multi-Objective Programming in the USSR, 23–25. Elsevier, 1991. http://dx.doi.org/10.1016/b978-0-12-449660-6.50011-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Nan, Jiangxia, Ting Wang, and Jingjing An. "Intuitionistic Fuzzy Distance-Based Intuitionistic Fuzzy TOPSIS Method and Application to MADM." In Theoretical and Practical Advancements for Fuzzy System Integration, 72–96. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-1848-8.ch004.

Full text
Abstract:
In this paper, an intuitionistic fuzzy (IF) distance measure between two triangular intuitionistic fuzzy numbers (TIFNs) is developed. The metric properties of the proposed IF distance measure are also studied. Then, based on the IF distance, an extended TOPSIS is developed to solve multi-attribute decision making (MADM) problems in which the ratings of alternatives on attributes are TIFNs. In this methodology, the IF distances between each alternative and the TIFN positive ideal solution, as well as the TIFN negative ideal solution, are calculated. The relative closeness degrees obtained for each alternative with respect to the TIFN positive ideal solution are themselves TIFNs, and the alternatives are ranked using ranking methods for TIFNs. A numerical example is examined to illustrate the validity and practicability of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles
6

Taşabat, Semra Erpolat, and Tuğba Kıral Özkan. "Modified TOPSIS Method With Banking Case Study." In Multi-Criteria Decision Analysis in Management, 189–224. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-2216-5.ch009.

Full text
Abstract:
In this chapter, an alternative to the Euclidean distance measure is proposed for calculating the distances to the positive and negative ideal solutions in the traditional TOPSIS method. Distance measures from the Lp Minkowski family and the L1 family were used for this purpose. By taking the averages of the distance measures in the Lp and L1 families, more general and accurate distance values were sought. It was thereby shown that the TOPSIS method can give different results depending on the distance measure used, and the importance of the distance measure for ranking the alternatives correctly was emphasized. The proposed method was implemented and evaluated on the financial performance of deposit banks operating in the Turkish banking sector. The rankings of the alternatives were seen to change according to the distance measures used, showing that the rank of the alternatives can vary according to the preferred distance measure in TOPSIS.
APA, Harvard, Vancouver, ISO, and other styles
7

Aruldoss, Martin, Miranda Lakshmi Travis, and Prasanna Venkatesan Venkatasamy. "A Study and Estimation of Different Distance Measures in Generalized Fuzzy TOPSIS to Improve Ranking Order." In Advanced Fuzzy Logic Approaches in Engineering Science, 207–36. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-5709-8.ch010.

Full text
Abstract:
Multi-criteria decision making (MCDM) is used to solve problems with multiple conflicting criteria. There are different methods available in MCDM, of which TOPSIS is a well-known method for handling precise and imprecise information. In this chapter, triangular fuzzy TOPSIS is considered, which has several steps: normalization, weighting, finding the positive ideal solution (PIS) and negative ideal solution (NIS), computing distances to the PIS and NIS, calculating the relative closeness coefficient (RCC) value, and ranking the alternatives. Of these steps, the distance step is studied. The distance measures are used to find the distance between the target alternative and the best and the worst alternatives. The most commonly used distance method is Euclidean distance, but many other distance methods are available, such as Manhattan, Bit-vector, Hamming, and Chebyshev distance. These methods are evaluated to obtain the appropriate distance. The proposed approach is applied in the banking domain to find the suitable user for multi-criteria reporting (MCR).
APA, Harvard, Vancouver, ISO, and other styles
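
The chapter above studies how the choice of distance measure inside (fuzzy) TOPSIS affects the ranking. As a plain, non-fuzzy illustration of that sensitivity, the sketch below computes the TOPSIS relative closeness coefficient under Euclidean, Manhattan, and Chebyshev distances for a made-up weighted normalized decision matrix; it is not the chapter's fuzzy formulation.

```python
import numpy as np

def topsis_closeness(weighted_matrix, metric="euclidean"):
    """Relative closeness of each alternative to the positive ideal solution
    (PIS) for a weighted normalized matrix of benefit criteria.

    Generic sketch: PIS/NIS are the column-wise max/min; swapping the
    distance metric can change the closeness values and the ranking.
    """
    v = np.asarray(weighted_matrix, dtype=float)
    pis, nis = v.max(axis=0), v.min(axis=0)
    if metric == "euclidean":
        d_plus = np.linalg.norm(v - pis, axis=1)
        d_minus = np.linalg.norm(v - nis, axis=1)
    elif metric == "manhattan":
        d_plus = np.abs(v - pis).sum(axis=1)
        d_minus = np.abs(v - nis).sum(axis=1)
    else:  # chebyshev
        d_plus = np.abs(v - pis).max(axis=1)
        d_minus = np.abs(v - nis).max(axis=1)
    return d_minus / (d_plus + d_minus)

# Illustrative weighted normalized matrix: 3 alternatives, 3 benefit criteria.
v = [[0.12, 0.30, 0.08], [0.20, 0.25, 0.10], [0.15, 0.28, 0.12]]
for m in ("euclidean", "manhattan", "chebyshev"):
    print(m, np.round(topsis_closeness(v, m), 3))
```

Swapping the metric changes the closeness values and can change the resulting ranking, which is what motivates evaluating the distance step separately.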
8

Zarandi, Mohammad Hossein Fazel, and Milad Avazbeigi. "A New Optimization Approach to Clustering Fuzzy Data for Type-2 Fuzzy System Modeling." In Cross-Disciplinary Applications of Artificial Intelligence and Pattern Recognition, 499–508. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-61350-429-1.ch025.

Full text
Abstract:
This chapter presents a new optimization method for clustering fuzzy data to generate Type-2 fuzzy system models. For this purpose, first, a new distance measure for calculating the (dis)similarity between fuzzy data is proposed. Then, based on the proposed distance measure, the Fuzzy c-Means (FCM) clustering algorithm is modified. Next, the Xie-Beni cluster validity index is modified to be able to evaluate the Type-2 fuzzy clustering approach. In this index, all operations are fuzzy and the minimization method is fuzzy ranking with Hamming distance. The proposed Type-2 fuzzy clustering method is used for the development of an indirect approach to Type-2 fuzzy modeling, where the rules are extracted from clustering fuzzy numbers (Zadeh, 1965). Then, the Type-2 fuzzy system is tuned by an inference algorithm for optimization of the main parameters of the Type-2 parametric system. In this case, the parameters are: the Schweizer and Sklar t-norm and s-norm, the α-cut of the rule bases, the combination of FATI and FITA inference approaches, and Yager parametric defuzzification. Finally, the proposed Type-2 fuzzy system model is applied to prediction of the steel additives in the steelmaking process. It is shown that the proposed Type-2 fuzzy system model is superior to multiple regression and a Type-1 fuzzy system model in terms of minimizing the effect of uncertainty in rule-based fuzzy system models and reducing error.
APA, Harvard, Vancouver, ISO, and other styles
9

Yu, Gao-Feng, Deng-Feng Li, and Jin-Ming Qiu. "Interval-Valued Intuitionistic Fuzzy Multi-Attribute Decision Making Based on Satisfactory Degree." In Theoretical and Practical Advancements for Fuzzy System Integration, 49–71. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-1848-8.ch003.

Full text
Abstract:
The aim of this paper is to propose a satisfactory degree method using nonlinear programming for solving multi-attribute decision making (MADM) problems in which the ratings of alternatives on attributes are expressed via interval-valued intuitionistic fuzzy (IVIF) sets and preference information on attributes is incomplete. Concretely, a nonlinear programming model is first explored to determine the satisfactory degree, which is the ratio of the square of the weighted Euclidean distance between an alternative and the IVIF negative ideal solution (IVIFNIS) to the sum of the squares of the weighted Euclidean distances to the IVIF negative ideal solution (IVIFNIS) and the IVIF positive ideal solution (IVIFPIS). Another nonlinear programming model is also developed to obtain satisfactory intuitionistic fuzzy sets, and the general satisfactory degrees of the satisfactory intuitionistic fuzzy sets are then used to generate the ranking order of the alternatives. Finally, a real example is employed to verify the applicability of the proposed approach and illustrate its practicality and effectiveness.
APA, Harvard, Vancouver, ISO, and other styles
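
In the standard TOPSIS-style reading of the abstract above, the satisfactory degree of an alternative x_i under a weighted Euclidean distance d_w can be written as follows; the symbols are assumed here, since the chapter's exact notation is not reproduced in the abstract.

```latex
% Satisfactory degree of alternative x_i: squared weighted Euclidean
% distance d_w to the negative ideal solution, relative to the total over
% both ideal solutions; values closer to 1 indicate better alternatives.
S(x_i) = \frac{d_w^{2}\bigl(x_i, \mathrm{IVIFNIS}\bigr)}
              {d_w^{2}\bigl(x_i, \mathrm{IVIFNIS}\bigr) + d_w^{2}\bigl(x_i, \mathrm{IVIFPIS}\bigr)},
\qquad 0 \le S(x_i) \le 1 .
```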
10

"Optimum Design of Post-Tensioned Axially-Symmetric Cylindrical Reinforced Concrete Walls." In Metaheuristic Approaches for Optimum Design of Reinforced Concrete Structures, 195–209. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-2664-4.ch009.

Full text
Abstract:
In this chapter, an optimization methodology for design of post-tensioned axially symmetric cylindrical reinforced concrete (RC) walls is presented. The objective of optimization is the minimization of total material cost of the wall including concrete, reinforced bars, post-tensioned cables, and form work required for wall and application of the post-tensioning. The optimized values are wall thickness, compressive strength of the concrete, locations and intensities of the post-tensioned loads, the diameter of the reinforcement bars (rebars), and distance between rebars. The optimization process employs the superposition method (SPM) for the analyses of the wall, and the design constraints are defined according to ACI-318: Building code requirements for structural concrete.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "IDEAL DISTANCE MINIMIZATION METHOD"

1

Wu, Guang, Yifan Chen, Rui Wang, and Zhongwen Li. "Localization error minimization method with Regulated Neighborhood Distance." In 2014 IEEE International Conference on Consumer Electronics – China. IEEE, 2014. http://dx.doi.org/10.1109/icce-china.2014.7029887.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Liang, Guanghui, Shangjie Ren, and Feng Dong. "An EIT image segmentation method based on projection distance minimization." In 2017 IEEE International Conference on Imaging Systems and Techniques (IST). IEEE, 2017. http://dx.doi.org/10.1109/ist.2017.8261447.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Huang, Shangrong, Jian Zhang, Xinwang Liu, and Lei Wang. "A Method of Discriminative Information Preservation and In-Dimension Distance Minimization Method for Feature Selection." In 2014 22nd International Conference on Pattern Recognition (ICPR). IEEE, 2014. http://dx.doi.org/10.1109/icpr.2014.286.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Göktolga, Ziya Gökalp, Engin Karakış, and Hakan Türkay. "Comparison of the Economic Performance of Turkish Republics in Central Asia with TOPSIS Method." In International Conference on Eurasian Economies. Eurasian Economists Association, 2015. http://dx.doi.org/10.36880/c06.01270.

Full text
Abstract:
The aim of this study is to compare the economic performance of the Turkish Republics in Central Asia with Multi-Criteria Decision Making (MCDM) methods. The Turkish Republics have been experiencing a transition from a centrally planned economy towards a market economy since their independence. In this study, important macroeconomic indicators are used to determine economic performance. Economic performance evaluation of a country is an important issue for economic management, investors, creditors, and stock investors. The Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method ranks the countries according to their proximity to the positive ideal solution and their distance from the negative ideal solution. The economic performance of the Turkish Republics in Central Asia (Azerbaijan, Turkmenistan, Kazakhstan, Kyrgyzstan, and Uzbekistan) is compared with the TOPSIS method. Each country's best and worst economic performance years during the period under study are identified with TOPSIS, and the results are analyzed.
APA, Harvard, Vancouver, ISO, and other styles
5

Merina, Calysta, Nenny Anggraini, and Nashrul Hakiem. "A Comparative Analysis of Test Automation Frameworks Performance for Functional Testing in Android-Based Applications using the Distance to the Ideal Alternative Method." In 2018 Third International Conference on Informatics and Computing (ICIC). IEEE, 2018. http://dx.doi.org/10.1109/iac.2018.8780548.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Barbiero, Alessandro, and Asmerilda Hitaj. "A new method for building a discrete analogue to a continuous random variable based on minimization of a distance between distribution functions." In 2021 International Conference on Data Analytics for Business and Industry (ICDABI). IEEE, 2021. http://dx.doi.org/10.1109/icdabi53623.2021.9655904.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Liu, Kai, Lodewijk Brand, Hua Wang, and Feiping Nie. "Learning Robust Distance Metric with Side Information via Ratio Minimization of Orthogonally Constrained L21-Norm Distances." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/417.

Full text
Abstract:
Metric Learning, which aims at learning a distance metric for a given data set, plays an important role in measuring the distance or similarity between data objects. Due to its broad usefulness, it has attracted a lot of interest in machine learning and related areas in the past few decades. This paper proposes to learn the distance metric from the side information in the forms of must-links and cannot-links. Given the pairwise constraints, our goal is to learn a Mahalanobis distance that minimizes the ratio of the distances of the data pairs in the must-links to those in the cannot-links. Different from many existing papers that use the traditional squared L2-norm distance, we develop a robust model that is less sensitive to data noise or outliers by using the not-squared L2-norm distance. In our objective, the orthonormal constraint is enforced to avoid degenerate solutions. To solve our objective, we have derived an efficient iterative solution algorithm. We have conducted extensive experiments, which demonstrated the superiority of our method over state-of-the-art.
APA, Harvard, Vancouver, ISO, and other styles
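
The abstract above minimizes the ratio of must-link distances to cannot-link distances under an orthonormally constrained, not-squared L2 metric. A compact statement of such an objective is given below; the projection matrix W and the pair sets M and C are assumed notation based on the abstract, not the paper's exact formulation.

```latex
% Ratio minimization with side information: M = must-link pairs,
% C = cannot-link pairs, W has orthonormal columns (W^T W = I).
% Not-squared L2 distances reduce sensitivity to noise and outliers.
\min_{W:\; W^{\top} W = I}\;
\frac{\sum_{(x_i, x_j) \in \mathcal{M}} \bigl\| W^{\top} (x_i - x_j) \bigr\|_{2}}
     {\sum_{(x_i, x_j) \in \mathcal{C}} \bigl\| W^{\top} (x_i - x_j) \bigr\|_{2}} .
```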
8

Akman, Gülşen, F. Mine Ötkür, and Gül E. Okudan. "A Distance-Based Multi-Criteria Decision Making Approach to Problem of Supplier Involvement in New Product Development." In ASME 2010 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/detc2010-29087.

Full text
Abstract:
Because of rising global competition and more rapid technological changes, the need for faster development of products with higher quality and reliability has increased, also elevating the importance of supplier involvement. Accordingly, companies give high priority to development of relationships with their suppliers, including collaborative product development. This paper focuses on evaluating current suppliers, which are to be involved in design decisions and product development processes. First, an overview of the supplier involvement in product development process is described. Then, a questionnaire form is introduced, which was administered to 40 automotive suppliers to determine the supplier selection criteria’s importance levels. Survey results were evaluated using statistical means for reliability and suitability. Finally, in order to select the best supplier, results were evaluated using a method integrating Analytical Network Process (ANP) and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The evaluation criteria were weighted with ANP, and then supplier companies were ranked using TOPSIS methodology.
APA, Harvard, Vancouver, ISO, and other styles
9

Tzannes, N. S., John S. Bodenscharz, and M. A. Tzannes. "Image reconstruction using entropy, relative entropy, and the discrete cosine transform." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1987. http://dx.doi.org/10.1364/oam.1987.mo3.

Full text
Abstract:
Previously [1] we introduced the use of the maximum entropy principle (MEP) for image reconstruction of moment-coded images. The MEP and the minimum relative entropy principle (MREP) are used in reconstructing images previously compressed by retaining a subset of the coefficients of their discrete cosine transform (DCT), a popular method of image compression since it closely approximates the ideal Karhunen-Loeve transform compression. The normal way to reconstruct such compressed images is by using the inverse DCT (IDCT). The reconstructed image under the MEP is the one that maximizes the entropy of the image subject to constraints reflecting the retained DCT coefficients. Under the MREP, the reconstructed image is obtained by successive minimization of its relative entropy subject to one constraint at a time, with each solution serving as a prior for the next minimization. Both methods were applied to images compressed by DCT, and the results were compared to normal inverse DCT reconstruction. It is concluded that MEP and MREP are better than IDCT, and the improvement is attributed to the fact that these methods make no assumptions about the values of the unretained coefficients, while IDCT assumes them to be zero.
APA, Harvard, Vancouver, ISO, and other styles
10

Aizawa, Kengo, Masahiro Ueda, Teppei Shimada, Hideki Aoyama, and Kazuo Yamazaki. "High Efficiency Molding by Real-Time Control of Distance Between Nozzle and Melt Pool in Directed Energy Deposition Process." In JSME 2020 Conference on Leading Edge Manufacturing/Materials and Processing. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/lemp2020-8598.

Full text
Abstract:
Abstract Laser metal deposition (LMD) is an additive manufacturing technique, whose performance can be influenced by a considerable number of factors and parameters. Typically, a powder is carried by an inert gas and sprayed by a nozzle, with a coaxial laser beam passing through the nozzle and overlapping the powder flow, thereby generating a molten material pool on a substrate. Monitoring the evolution of this process allows for a better comprehension and control of the process, thereby enhancing the deposition quality. As the metal additive manufacturing mechanism has not yet been elucidated, it is not clear how process parameters affect material properties, molding accuracy, and molding efficiency. When cladding is performed under uncertain conditions, a molded part with poor material properties and dimensional accuracy is created. In this paper, we propose a method for high efficiency molding by controlling the distance between the head nozzle and the molten pool in real time. The distance is identified by an originally developed sensor based on a triangulation method. According to the distance, the head nozzle is automatically controlled into the optimum position. As a result, an ideal molding process can be generated, so that high efficiency molding and high-quality material properties can be obtained. Experimental results show that continuing deposition at the optimum distance assists in achieving deposition efficiency and dimensional accuracy. According to the specific experimental results of this method, the modeling efficiency was increased by 27% compared to the method without correction, and the modeling was successful with an error within 1 mm.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "IDEAL DISTANCE MINIMIZATION METHOD"

1

Anderson, Gerald L., and Kalman Peleg. Precision Cropping by Remotely Sensed Prototype Plots and Calibration in the Complex Domain. United States Department of Agriculture, December 2002. http://dx.doi.org/10.32747/2002.7585193.bard.

Full text
Abstract:
This research report describes a methodology whereby multi-spectral and hyperspectral imagery from remote sensing is used for deriving predicted field maps of selected plant growth attributes which are required for precision cropping. A major task in precision cropping is to establish areas of the field that differ from the rest of the field and share a common characteristic. Yield distribution maps can be prepared by yield monitors, which are available for some harvester types. Other field attributes of interest in precision cropping, e.g. soil properties, leaf nitrate, biomass etc., are obtained by manual sampling of the field in a grid pattern. Maps of various field attributes are then prepared from these samples by the "Inverse Distance" interpolation method or by Kriging. An improved interpolation method was developed which is based on minimizing the overall curvature of the resulting map. Such maps are the ground truth reference, used for training the algorithm that generates the predicted field maps from remote sensing imagery. Both the reference and the predicted maps are stratified into "Prototype Plots", e.g. 15x15 blocks of 2 m pixels whereby the block size is 30x30 m. This averaging reduces the datasets to manageable size and significantly improves the typically poor repeatability of remote sensing imaging systems. In the first two years of the project we used the Normalized Difference Vegetation Index (NDVI) for generating predicted yield maps of sugar beets and corn. The NDVI was computed from image cubes of three spectral bands, generated by an optically filtered three-camera video imaging system. A two-dimensional FFT-based regression model Y=f(X) was used, wherein Y was the reference map and X=NDVI was the predictor. The FFT regression method applies the "Wavelet Based", "Pixel Block" and "Image Rotation" transforms to the reference and remote images, prior to the Fast Fourier Transform (FFT) regression method with the "Phase Lock" option. A complex-domain-based map Yfft is derived by least squares minimization between the amplitude matrices of X and Y, via the 2D FFT. For one-time predictions, the phase matrix of Y is combined with the amplitude matrix of Yfft, whereby an improved predicted map Yplock is formed. Usually, the residuals of Yplock versus Y are about half of the values of Yfft versus Y. For long-term predictions, the phase matrix of a "field mask" is combined with the amplitude matrices of the reference image Y and the predicted image Yfft. The field mask is a binary image of a pre-selected region of interest in X and Y. The resultant maps Ypref and Ypred are modified versions of Y and Yfft respectively. The residuals of Ypred versus Ypref are even lower than the residuals of Yplock versus Y. The maps Ypref and Ypred represent a close consensus of two independent imaging methods which "view" the same target. In the last two years of the project our remote sensing capability was expanded by the addition of a CASI II airborne hyperspectral imaging system and an ASD hyperspectral radiometer. Unfortunately, the cross-noise and poor repeatability problem we had in multi-spectral imaging was exacerbated in hyperspectral imaging. We have been able to overcome this problem by over-flying each field twice in rapid succession and developing the Repeatability Index (RI). The RI quantifies the repeatability of each spectral band in the hyperspectral image cube.
Thereby, it is possible to select the bands of higher repeatability for inclusion in the prediction model while bands of low repeatability are excluded. Further segregation of high and low repeatability bands takes place in the prediction model algorithm, which is based on a combination of a "Genetic Algorithm" and "Partial Least Squares" (PLS-GA). In summary, a modus operandi was developed for deriving important plant growth attribute maps (yield, leaf nitrate, biomass and sugar percent in beets) from remote sensing imagery, with sufficient accuracy for precision cropping applications. This achievement is remarkable, given the inherently high cross-noise between the reference and remote imagery as well as the highly non-repeatable nature of remote sensing systems. The above methodologies may be readily adopted by commercial companies, which specialize in providing remotely sensed data to farmers.
APA, Harvard, Vancouver, ISO, and other styles
