Academic literature on the topic 'Random mapping algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Random mapping algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Random mapping algorithm"

1

Wang, Y., R. A. Prade, J. Griffith, W. E. Timberlake, and J. Arnold. "A fast random cost algorithm for physical mapping." Proceedings of the National Academy of Sciences 91, no. 23 (November 8, 1994): 11094–98. http://dx.doi.org/10.1073/pnas.91.23.11094.

2

Fang, Zhu, and Zhengquan Xu. "Dynamic Random Graph Protection Scheme Based on Chaos and Cryptographic Random Mapping." Information 13, no. 11 (November 14, 2022): 537. http://dx.doi.org/10.3390/info13110537.

Abstract:
Advances in network technology have heightened concern for network security issues. To address the problem that hopping graphs are vulnerable to external attacks (e.g., the changing rules of fixed graphs are more easily grasped by attackers) and the challenge of achieving both interactivity and randomness in a network environment, this paper proposes a scheme for a dynamic graph based on chaos and cryptographic random mapping. The scheme allows hopping nodes to compute and obtain dynamically random and uncorrelated graphs of other nodes independently of each other, without additional interaction, after the computational process of synchronous mirroring. We first iterate the chaos algorithm to generate random seed parameters, which are used as input parameters for the encryption algorithm; secondly, we execute the encryption algorithm to generate a ciphertext of a specified length, which is converted into a fixed-point number; and finally, the fixed-point number is mapped to the network parameters corresponding to each node. The hopping nodes are independently updated with the same hopping map at each hopping period, and the configuration of their own network parameters is updated, so that the updated graph can effectively prevent external attacks. Finally, we carried out simulation experiments and related tests on the proposed scheme and demonstrated that the performance requirements of the random graphs can be satisfied in both general and extreme cases.
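The pipeline this abstract describes — a chaotic iteration producing seed material, a cryptographic primitive producing a fixed-length value, and a final mapping into a network-parameter range — can be sketched in a few lines. This is a hedged illustration only: the logistic map, SHA-256 (standing in for the paper's encryption algorithm), and a port number as the hopped parameter are all assumptions, not the authors' actual choices.

```python
import hashlib

def logistic_map(x: float, r: float = 3.99, n: int = 100) -> float:
    """Iterate the logistic map; the orbit serves as a chaotic seed in [0, 1)."""
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

def hop_port(shared_secret: float, period: int, lo: int = 1024, hi: int = 65535) -> int:
    """Derive this period's network parameter (a port, for illustration):
    chaotic seed -> hash digest (stand-in for the cipher's ciphertext) ->
    fixed-point number -> value in the parameter range. Nodes holding the
    same secret compute the same port without interacting."""
    seed = logistic_map((shared_secret + period * 0.001) % 1.0)
    digest = hashlib.sha256(repr(seed).encode()).digest()
    fixed_point = int.from_bytes(digest[:8], "big")  # ciphertext -> fixed-point number
    return lo + fixed_point % (hi - lo + 1)
```

Because every step is deterministic given the shared secret, two nodes compute identical hopping parameters for the same period, yet successive periods look uncorrelated to an outside observer.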
3

Xia, Xun, and Ling Chen. "Elastic Optical Network Service-Oriented Architecture (SOA) Used for Cloud Computing and Its Resource Mapping Optimization Scheme." Journal of Nanoelectronics and Optoelectronics 15, no. 4 (April 1, 2020): 442–49. http://dx.doi.org/10.1166/jno.2020.2783.

Abstract:
In this study, starting from the elastic optical network, the layered and function isolated service-oriented architecture (SOA) is introduced, so as to propose an elastic optical network SOA for cloud computing, and further study the resource mapping of optical network. Linear mapping model, random routing mapping algorithm, load balancing mapping algorithm and link separation mapping algorithm are introduced respectively, and the resource utilization effect of different mapping algorithms for the proposed optical network is compared. During the experiment, firstly, the elastic optical network is tested. It is found that the node utilization and spectrum utilization of the underlying optical fiber level network are significantly improved. Within the average service time of 0.312 s∼0.416 s, the corresponding node utilization and spectrum utilization are 90% and 80% respectively. In the resource mapping experiment, load balancing algorithm and link separation algorithm can effectively improve the mapping success rate of services. Among them, the link separation mapping algorithm can improve the spectrum resource utilization of optical network by 15.6%. The elastic optical network SOA proposed in this study is helpful to improve the use of network resources.
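Of the mapping algorithms this abstract compares, the load-balancing one is the simplest to sketch: place each virtual node on the physical node with the most spare capacity. The greedy rule, the unit-cost requests, and the dictionary representation below are illustrative assumptions, not the paper's exact algorithm.

```python
def load_balancing_mapping(vnodes, capacity):
    """Greedy load-balancing resource mapping. `capacity` maps physical node ->
    free units; each virtual node consumes one unit. Returns the placement,
    or None when some request cannot be satisfied (a failed mapping)."""
    placement = {}
    free = dict(capacity)
    for v in vnodes:
        node = max(free, key=free.get)   # most spare capacity first
        if free[node] <= 0:
            return None                  # no capacity left anywhere
        placement[v] = node
        free[node] -= 1
    return placement
```

Spreading requests this way is what raises the mapping success rate the experiment measures; the link-separation variant additionally forbids two virtual links from sharing a fiber.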
4

Zhu, Sheng Jie, and Xin Huan Feng. "Mountain Mapping Relaying Communication Positioning Algorithm." Applied Mechanics and Materials 336-338 (July 2013): 1804–8. http://dx.doi.org/10.4028/www.scientific.net/amm.336-338.1804.

Abstract:
For mountain mapping, the communication system's power is naturally limited. In practice, base stations should be built as high as possible to eliminate terrain shielding and reach more distant areas, although the number of users and other factors must also be considered. In this paper, we introduce new methods to determine improved locations for base stations in mountain areas; with this method, fewer stations cover the largest area in the given situation. For a random mountain terrain, the coverage rate of this model is as high as 95.1%.
5

Lin, Ke, and Rui Man. "A Similarity Algorithm Based on Hash Feature and Random Mapping." Journal of Physics: Conference Series 1865, no. 4 (April 1, 2021): 042020. http://dx.doi.org/10.1088/1742-6596/1865/4/042020.

6

Meng, Lingqi, and Naoki Masuda. "Analysis of node2vec random walks on networks." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 476, no. 2243 (November 2020): 20200447. http://dx.doi.org/10.1098/rspa.2020.0447.

Abstract:
Random walks have been proven to be useful for constructing various algorithms to gain information on networks. Algorithm node2vec employs biased random walks to realize embeddings of nodes into low-dimensional spaces, which can then be used for tasks such as multi-label classification and link prediction. The performance of the node2vec algorithm in these applications is considered to depend on properties of random walks that the algorithm uses. In the present study, we theoretically and numerically analyse random walks used by the node2vec. Those random walks are second-order Markov chains. We exploit the mapping of its transition rule to a transition probability matrix among directed edges to analyse the stationary probability, relaxation times in terms of the spectral gap of the transition probability matrix, and coalescence time. In particular, we show that node2vec random walk accelerates diffusion when walkers are designed to avoid both backtracking and visiting a neighbour of the previously visited node but do not avoid them completely.
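The second-order transition rule the abstract analyses is easy to state concretely: the probability of moving to a neighbour x of the current node depends on x's distance from the previously visited node, weighted by the return parameter p and in-out parameter q. The sketch below follows the standard node2vec rule on an unweighted graph; the graph representation is an assumption.

```python
import random

def node2vec_step(graph, prev, curr, p=1.0, q=2.0):
    """One step of a node2vec second-order random walk on an undirected graph
    given as {node: set(neighbors)}. Neighbours of `curr` are weighted 1/p if
    they backtrack to `prev`, 1 if they are also neighbours of `prev`
    (distance 1), and 1/q otherwise (distance 2)."""
    neighbors = list(graph[curr])
    weights = []
    for x in neighbors:
        if x == prev:
            weights.append(1.0 / p)   # backtracking move
        elif x in graph[prev]:
            weights.append(1.0)       # stays near the previous node
        else:
            weights.append(1.0 / q)   # moves farther away
    return random.choices(neighbors, weights=weights)[0]
```

Large p and large q jointly discourage both backtracking and revisiting the previous node's neighbourhood, which is exactly the regime in which the paper shows the walk accelerates diffusion.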
7

Liu, Yian-Kui, and Baoding Liu. "Expected Value Operator of Random Fuzzy Variable and Random Fuzzy Expected Value Models." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 11, no. 02 (April 2003): 195–215. http://dx.doi.org/10.1142/s0218488503002016.

Abstract:
A random fuzzy variable is a mapping from a possibility space to a collection of random variables. This paper first presents a new definition of the expected value operator of a random fuzzy variable, and proves the linearity of the operator. Then, a random fuzzy simulation approach, which combines fuzzy simulation and random simulation, is designed to estimate the expected value of a random fuzzy variable. Based on the new expected value operator, three types of random fuzzy expected value models are presented to model decision systems where fuzziness and randomness appear simultaneously. In addition, random fuzzy simulation, neural networks and a genetic algorithm are integrated to produce a hybrid intelligent algorithm for solving those random fuzzy expected value models. Finally, three numerical examples are provided to illustrate the feasibility and effectiveness of the proposed algorithm.
8

Peng, Weiping, Shuang Cui, and Cheng Song. "One-time-pad cipher algorithm based on confusion mapping and DNA storage technology." PLOS ONE 16, no. 1 (January 20, 2021): e0245506. http://dx.doi.org/10.1371/journal.pone.0245506.

Abstract:
In order to solve the problems of low computational security in the encoding mapping and difficulty in practical operation of biological experiments in DNA-based one-time-pad cryptography, we proposed a one-time-pad cipher algorithm based on confusion mapping and DNA storage technology. In our constructed algorithm, the confusion mapping methods such as chaos map, encoding mapping, confusion encoding table and simulating biological operation process are used to increase the key space. Among them, the encoding mapping and the confusion encoding table provide the realization conditions for the transition of data and biological information. By selecting security parameters and confounding parameters, the algorithm realizes a more random dynamic encryption and decryption process than similar algorithms. In addition, the use of DNA storage technologies including DNA synthesis and high-throughput sequencing ensures a viable biological encryption process. Theoretical analysis and simulation experiments show that the algorithm provides both mathematical and biological security, which not only has the difficult advantage of cracking DNA biological experiments, but also provides relatively high computational security.
9

Hussain, Muhammad Afaq, Zhanlong Chen, Run Wang, Safeer Ullah Shah, Muhammad Shoaib, Nafees Ali, Daozhu Xu, and Chao Ma. "Landslide Susceptibility Mapping using Machine Learning Algorithm." Civil Engineering Journal 8, no. 2 (February 1, 2022): 209–24. http://dx.doi.org/10.28991/cej-2022-08-02-02.

Abstract:
Landslides are natural disasters that have resulted in the loss of economies and lives over the years. The landslides caused by the 2005 Muzaffarabad earthquake heavily impacted the area, and slopes in the region have become unstable. This research was carried out to find out which areas, as in Muzaffarabad district, are sensitive to landslides and to define the relationship between landslides and geo-environmental factors using three tree-based classifiers, namely, Extreme Gradient Boosting (XGBoost), Random Forest (RF), and k-Nearest Neighbors (KNN). These machine learning models are innovative and can assess environmental problems and hazards for any given area on a regional scale. The research consists of three steps: Firstly, for training and validation, 94 historical landslides were randomly split into a proportion of 7/3. Secondly, topographical and geological data as well as satellite imagery were gathered, analyzed, and built into a spatial database using GIS Environment. Nine layers of landslide-conditioning factors were developed, including Aspect, Elevation, Slope, NDVI, Curvature, SPI, TWI, Lithology, and Landcover. Finally, the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) value were used to estimate the model's efficiency. The area under the curve values for the RF, XGBoost, and KNN models are 0.895 (89.5%), 0.893 (89.3%), and 0.790 (79.0%), respectively. Based on the three machine learning techniques, the innovative outputs show that the performance of the Random Forest model has a maximum AUC value of 0.895, and it is more efficient than the other tree-based classifiers. Elevation and Slope were determined as the most important factors affecting landslides in this research area. The landslide susceptibility maps were classified into four classes: low, moderate, high, and very high susceptibility. 
The result maps are useful for future generalized construction operations, such as selecting and conserving new urban and infrastructural areas.
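The AUC values that rank the three classifiers in this study have a simple rank-based definition: the probability that a randomly chosen positive (landslide) sample is scored above a randomly chosen negative one. A minimal stdlib sketch of that metric, independent of any particular classifier:

```python
def roc_auc(y_true, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the fraction of positive/negative pairs where the positive outscores
    the negative, counting ties as 1/2. y_true holds 0/1 labels."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

On this scale the study's RF score of 0.895 means a randomly drawn landslide cell outranks a randomly drawn stable cell about 89.5% of the time.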
10

Purwanto, Anang Dwi, Ketut Wikantika, Albertus Deliar, and Soni Darmawan. "Decision Tree and Random Forest Classification Algorithms for Mangrove Forest Mapping in Sembilang National Park, Indonesia." Remote Sensing 15, no. 1 (December 21, 2022): 16. http://dx.doi.org/10.3390/rs15010016.

Abstract:
Sembilang National Park, one of the best and largest mangrove areas in Indonesia, is very vulnerable to disturbance by community activities. Changes in the dynamic condition of mangrove forests in Sembilang National Park must be quickly and easily accompanied by mangrove monitoring efforts. One way to monitor mangrove forests is to use remote sensing technology. Recently, machine-learning classification techniques have been widely used to classify mangrove forests. This study aims to investigate the ability of decision tree (DT) and random forest (RF) machine-learning algorithms to determine the mangrove forest distribution in Sembilang National Park. The satellite data used are Landsat-7 ETM+ acquired on 30 June 2002 and Landsat-8 OLI acquired on 9 September 2019, as well as supporting data such as SPOT 6/7 image acquired in 2020–2021, MERIT DEM and an existing mangrove map. The pre-processing includes radiometric and atmospheric corrections performed using the semi-automatic classification plugin contained in Quantum GIS. We applied decision tree and random forest algorithms to classify the mangrove forest. In the DT algorithm, threshold analysis is carried out to obtain the most optimal threshold value in distinguishing mangrove and non-mangrove objects. Here, the use of DT and RF algorithms involves several important parameters, namely, the normalized difference moisture index (NDMI), normalized difference soil index (NDSI), near-infrared (NIR) band, and digital elevation model (DEM) data. The results of DT and RF classification from Landsat-7 ETM+ and Landsat-8 OLI images show similarities regarding mangrove spatial distribution. The DT classification algorithm with the parameter combination NDMI+NDSI+DEM is very effective in classifying Landsat-7 ETM+ image, while the parameter combination NDMI+NIR is very effective in classifying Landsat-8 OLI image. 
The RF classification algorithm with the parameter Image (6 bands), the number of trees = 100, the number of predictor variables (mtry) set to square root, and the minimum node size = 6 provides the highest overall accuracy for the Landsat-7 ETM+ image, while combining Image (7 bands) + NDMI + NDSI + DEM parameters with the number of trees = 100, mtry = all variables, and the minimum node size = 6 provides the highest overall accuracy for the Landsat-8 OLI image. The overall classification accuracy is higher when using the RF algorithm (99.12%) instead of DT (92.82%) for the Landsat-7 ETM+ image, but it is slightly higher when using the DT algorithm (98.34%) instead of the RF algorithm (97.79%) for the Landsat-8 OLI image. The overall RF classification algorithm outperforms DT because all RF classification model parameters provide a higher producer accuracy in mapping mangrove forests. This development of the classification method should support the monitoring and rehabilitation programs of mangroves more quickly and easily, particularly in Indonesia.
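The NDMI and NDSI parameters named above are both normalized-difference band ratios. The sketch below uses the standard NDMI formula (NIR vs. SWIR1); note that "normalized difference soil index" has several band pairings in the literature, so treat the NDSI pairing as an assumption rather than the paper's definition.

```python
def normalized_difference(a: float, b: float) -> float:
    """Generic normalized-difference index: (a - b) / (a + b), in [-1, 1]."""
    return (a - b) / (a + b)

# Made-up surface reflectances for one pixel (illustrative values only).
nir, swir1 = 0.40, 0.20
ndmi = normalized_difference(nir, swir1)    # moister vegetation -> higher NDMI
ndsi = normalized_difference(swir1, nir)    # assumed band pairing for NDSI
```

Thresholding such indices (plus a DEM mask, since mangroves sit near sea level) is what the paper's decision-tree branch does; the random forest instead feeds the raw bands and indices to an ensemble of trees.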

Dissertations / Theses on the topic "Random mapping algorithm"

1

Katzfuss, Matthias. "Hierarchical Spatial and Spatio-Temporal Modeling of Massive Datasets, with Application to Global Mapping of CO2." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1308316063.

2

Liu, Yutong. "Formation Control and Reconfiguration Strategy of Multi-Agent Systems." Thesis, 2020. http://hdl.handle.net/2440/124414.

Abstract:
Multi-agent systems consist of multiple agents, which detect and interact with their local environments. The formation control strategy is studied to drive multi-agent systems to predefined formations. The process is important because the objective formation is designed such that the group achieves more than the sum of its individuals. In this thesis, we consider formation control strategies and a reconfiguration strategy for multi-agent systems. The main research contents are as follows. A formation control scheme is proposed for a group of elliptical agents to achieve a predefined formation. The agents are assumed to have the same dynamics, and communication among the agents is limited. The desired formation is realized based on the reference formation and the mapping decision. In the controller design, searching algorithms for both cases of minimum distance and tangents are established for each agent and its neighbors. In order to avoid collision, an optimal path planning algorithm based on collision angles and a self-center-based rotation algorithm are also proposed. Moreover, a randomized method is used to provide the optimal mapping decision for the underlying system. To optimize the former formation control scheme, an adaptive formation control strategy is developed. The multiple elliptical agents can form a predefined formation in any 2D space. The controller is based on the neighborhood of each agent and the optimal mapping decision for the whole group. The collision-free algorithm is built based on the direction and distance of each agent's avoidance group. The controller for each agent is adaptive based on the number of elements in its avoidance group, the minimum distance it has, and its desired moving distance. The proposed adaptive mapping scheme calculates the repetition rate of optimal mappings in a screening group of mapping decisions.
The new optimal mapping is constructed from the fixed repeating elements in former mappings and the reorganized elements which are not the same in each optimal mapping based on the screening group. An event-triggered probability-driven control scheme is also investigated for a group of elliptical agents to achieve a predefined formation. The agents are assumed to have the same dynamics, and the control law for each agent is only updated at its event sequence based on its own minimum collision time and deviation time. The collision time of each agent is obtained based on the position and velocity of the others, and the deviation time is linked with the distance between its current position and desired position. The probability-driven controller is designed to prevent the stuck problem among agents. The stuck problem for the group means that when the distance between agents is too close and their moving directions are crossed, the control input with deterministic direction will cause the agents not to move or to move slowly. To optimize the event-triggered probability-driven controller, a mapping-adaptive strategy and an angle-adaptive scheme are also developed. The mapping-adaptive strategy is used to find the optimal mapping to decrease the sum of the moving distance for the whole group, while the angle-adaptive scheme is employed to keep the distance between any two elliptical agents large enough to further ensure that no collision occurs during execution. A reconfiguration strategy is considered for multiple predefined formations. A two-stage reconfiguration strategy is proposed for a group of agents to find its special formation, which can be seen as a transition between the predefined formations, during idle time in order to minimize the reconfiguration time. The basic reconfiguration strategy combines with a random mapping algorithm to find the optimal special formation. To meet practical requirements, agents are modeled as circles or ellipses.
The anti-overlapping strategies are built to construct the achievable special formation based on the geometric properties of circles and ellipses.
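The "random mapping algorithm" invoked in this abstract, stripped to its core, is a randomized search over agent-to-slot assignments that minimizes total moving distance. The sketch below shows only that core idea under stated assumptions (point agents, Euclidean cost, plain random sampling); the thesis's screening-group and repetition-rate refinements are omitted.

```python
import math
import random

def random_mapping(agents, slots, trials=2000, rng=random):
    """Randomized search for an agent-to-slot mapping minimizing the total
    moving distance. `agents` and `slots` are equal-length lists of (x, y)
    points; each trial samples a random permutation and keeps the best."""
    n = len(agents)
    best_perm, best_cost = None, math.inf
    for _ in range(trials):
        perm = rng.sample(range(n), n)                       # random assignment
        cost = sum(math.dist(agents[i], slots[perm[i]]) for i in range(n))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm, best_cost
```

For large groups an exact assignment solver (e.g. the Hungarian algorithm) would replace the sampling loop; the randomized version trades optimality guarantees for simplicity during the idle-time search.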
Thesis (Ph.D.) -- University of Adelaide, School of Electrical & Electronic Engineering, 2020

Books on the topic "Random mapping algorithm"

1

Sheehan, Daniel Dean. Interpolating a regular grid of elevations from random points using three algorithms: Kriging, splines, and polynomial surfaces. 1987.


Book chapters on the topic "Random mapping algorithm"

1

Gill, Prabhleen Kaur, Shaik Jani Babu, Sonal Singhal, and Nilesh Goel. "FPGA Implementation of Random Feature Mapping in ELM Algorithm for Binary Classification." In Lecture Notes in Electrical Engineering, 504–10. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-4775-1_54.

2

Parihar, Swapnil Singh, and Shafique Matin. "Flood Inundation Mapping Using C-band Synthetic-aperture Radar and Random Forest Algorithm: A Methodological Basis." In 5th World Congress on Disaster Management, 149–58. London: Routledge, 2022. http://dx.doi.org/10.4324/9781003341932-17.

3

Gera, Michael H. "Learning with Mappings and Input-Orderings using Random Access Memory — based Neural Networks." In Artificial Neural Nets and Genetic Algorithms, 183–89. Vienna: Springer Vienna, 1993. http://dx.doi.org/10.1007/978-3-7091-7533-0_28.

4

Bouarara, Hadj Ahmed, Reda Mohamed Hamou, Abdelmalek Amine, and Amine Rahmani. "A Fireworks Algorithm for Modern Web Information Retrieval with Visual Results Mining." In Business Intelligence, 649–68. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9562-7.ch034.

Abstract:
The popularization of computers, the number of electronic documents available online /offline and the explosion of electronic communication have deeply rocked the relationship between man and information. Nowadays, we are awash in a rising tide of information where the web has impacted on almost every aspect of our life. Merely, the development of automatic tools for an efficient access to this huge amount of digital information appears as a necessity. This paper deals on the unveiling of a new web information retrieval system using fireworks algorithm (FWA-IR). It is based on a random explosion of fireworks and a set of operators (displacement, mapping, mutation, and selection). Each explosion of firework is a potential solution for the need of user (query). It generates a set of sparks (documents) with two locations (relevant and irrelevant). The authors experiments were performed on the MEDLARS dataset and using the validation measures (recall, precision, f-measure, silence, noise and accuracy) by studying the sensitive parameters of this technique (initial location number, iteration number, mutation probability, fitness function, selection method, text representation, and distance measure), aimed to show the benefit derived from using such approach compared to the results of others methods existed in literature (taboo search, simulated annealing, and naïve method). Finally, a result-mining tool was achieved for the purpose to see the outcome in graphical form (3d cub and cobweb) with more realism using the functionalities of zooming and rotation.
5

Ellis, Graham. "Cellular Homology." In An Invitation to Computational Homotopy, 127–212. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198832973.003.0002.

Abstract:
This chapter introduces more basic concepts of algebraic topology and describes datatypes and algorithms for implementing them on a computer. The basic concepts include: chain complex, chain mapping, chain homotopy, homology of a (simplicial or cubical or permutahedral or CW-) space, persistent homology of a filtered space, cohomology ring of a space, van Kampen diagrams, excision. These are illustrated using computer examples involving digital images, protein backbones, high-dimensional point cloud data, knot complements, discrete groups, and random simplicial complexes.
6

Suragala, Ashok, and PapaRao A. V. "Demystifying Disease Identification and Diagnosis Using Machine Learning Classification Algorithms." In Handbook of Research on Emerging Trends and Applications of Machine Learning, 200–249. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-5225-9643-1.ch011.

Abstract:
The exponential surge in healthcare data is providing new opportunities to discover meaningful data-driven characteristics and patterns of diseases. Machine learning and deep learning models have been employed for many computational phenotyping and healthcare prediction tasks. However, machine learning models are crucial for wide adaption in medical research and clinical decision making. In this chapter, the authors introduce demystifying diseases identification and diagnosis of various disease using machine learning algorithms like logistic regression, naive Bayes, decision tree, MLP classifier, random forest in order to cure liver disease, hepatitis disease, and diabetes mellitus. This work leverages the initial discoveries made through data exploration, statistical analysis of data to reduce the number of explanatory variables, which led to identifying the statistically significant attributes, mapping those significant attributes to the response, and building classification techniques to predict the outcome and demystify clinical diseases.
7

Liu, Zhigang, Wenzhong Shi, Deren Li, and Qianqing Qin. "Partially Supervised Classification." In Data Warehousing and Mining, 1216–30. IGI Global, 2008. http://dx.doi.org/10.4018/978-1-59904-951-9.ch069.

Abstract:
This paper addresses a new classification technique: Partially Supervised Classification (PSC), which is used to identify a specific land-cover class of interest from a remotely sensed image using unique training samples that belongs to a specified class. This paper also presents and discusses a newly proposed novel Support Vector Machine (SVM) algorithm for PSC. Accordingly, its training set includes labeled samples that belong to the class of interest and unlabeled samples of all classes randomly selected from a remotely sensed image. Moreover, all unlabeled samples are assumed to be training samples of other classes and each of them is assigned a weight factor indicating the likelihood of this assumption; hence, the algorithm is called ‘Weighted Unlabeled Sample SVM’ (WUS-SVM). Based on the WUS-SVM, a PSC method is proposed. Experimental results with both simulated and real datasets indicate that the proposed PSC method can achieve encouraging accuracy and is more robust than the 1-SVM and the Spectral Angle Mapping (SAM) method.
8

Hu, Dongping, Aihua Yin, Huaying Yan, and Tao Long. "Order-Preserving and Efficient One-to-Many Search on Encrypted Data." In Machine Learning and Artificial Intelligence. IOS Press, 2020. http://dx.doi.org/10.3233/faia200807.

Abstract:
Order-preserving encryption (OPE) is a useful tool in cloud computing, as it allows an untrustworthy server to execute range queries or exact keyword search directly on ciphertexts. It requires only sub-linear time in the data size when queries occur. This advantage is well suited to the cloud, where the data volume is huge. However, order-preserving encryption is deterministic and leaks the plaintexts' order and distribution. In this paper, we propose a one-to-many OPE that takes into account both security and efficiency. For a given plaintext, the encryption algorithm first determines the corresponding ciphertext gap by performing binary search on the ciphertext space and the plaintext space at the same time. An exact sampling algorithm for the negative hypergeometric distribution is used to fix the size of the gap. Lastly, a value in the gap is randomly chosen as the mapping of the given plaintext. It is proven that our scheme is more secure than deterministic OPE while realizing efficient search. In particular, a practical and exact sampling algorithm for the negative hypergeometric distribution (NHGD) is first proposed.
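The one-to-many idea in this abstract — each plaintext owns a gap of ciphertexts and encryption picks a random point inside it — can be shown with a deliberately insecure toy. Here the gaps are fixed-width and keyless, whereas the paper sizes them by keyed binary search with negative hypergeometric sampling; the sketch only illustrates the order-preserving, one-to-many mapping.

```python
import random

def ope_encrypt(m: int, expansion: int = 1000, rng=random) -> int:
    """Toy one-to-many order-preserving encryption: plaintext m owns the
    ciphertext gap [m*expansion, (m+1)*expansion). Equal plaintexts map to
    many ciphertexts, yet m1 < m2 always implies c1 < c2."""
    return m * expansion + rng.randrange(expansion)

def ope_decrypt(c: int, expansion: int = 1000) -> int:
    """Recover the plaintext as the index of the gap containing c."""
    return c // expansion
```

Because the gap layout here is public, an observer can read m straight off c; randomizing gap sizes under a secret key, as the paper does, is what hides the plaintext distribution.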
9

Qiu, Xiaomin, Dexuan Sha, and Xuelian Meng. "Optimal Methodology for Detecting Land Cover Change in a Forestry, Lakeside Environment Using NAIP Imagery." In Research Anthology on Ecosystem Conservation and Preserving Biodiversity, 617–40. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-5678-1.ch032.

Abstract:
Mapping land cover change is useful for various environmental and urban planning applications, e.g. land management, forest conservation, ecological assessment, transportation planning, and impervious surface control. As the optimal change detection approaches, algorithms, and parameters often depend on the phenomenon of interest and the remote sensing imagery used, the goal of this study is to find the optimal procedure for detecting urban growth in rural, forestry areas using one-meter, four-band NAIP images. Focusing on different types of impervious covers, the authors test the optimal segmentation parameters for object-based image analysis, and conclude that the random tree classifier, among the six classifiers compared, is most optimal for land use/cover change detection analysis with a satisfying overall accuracy of 87.7%. With continuous free coverage of NAIP images, the optimal change detection procedure concluded in this study is valuable for future analyses of urban growth change detection in rural, forestry environments.
10

Homère Ngandam Mfondoum, Alfred, Igor Casimir Njombissie Petcheu, Frederic Chamberlain Lounang Tchatchouang, Luc Moutila Beni, Mesmin Tchindjang, and Jean Valery Mefire Mfondoum. "Dynamics, Anomalies and Boundaries of the Forest-Savanna Transition: A Novel Remote Sensing-Based Multi-Angles Methodology Using Google Earth Engine." In GIS and Spatial Analysis [Working Title]. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.105074.

Abstract:
This chapter proposes a remote sensing multi-angles methodology to assess the transition at the interface of the forest-savanna land cover. On Sentinel2-A median images of successive dry seasons, three referential and nine analytical spectral indices were computed. The change vector analysis (CVA) was performed, further selecting one magnitude per index. The averaged moving standard deviation index (aMSDI) was proposed to compare the spatial intensity of anomalies among the selected CVA, and then statistically assessed through spatial and non-spatial autoregression tests. The cross-correlation and simple linear combination (SCL) computations spotted the overall anomaly extent. Three machine learning algorithms, i.e., classification and regression trees (CART), random forest (RF), and support vector machine (SVM), helped map the distribution of each species. As a result, the CVA confirmed each index's ability to add new information. The aMSDI gave the harmonized interval [0–0.083] among CVA, confirmed with all p-values = 0, z-scores > 2.5, clustering of anomaly pixels, and adjusted R² ≤ 0.19. Three trends of vegetation distribution were distinguished with 88.7% overall accuracy and a 0.86 kappa coefficient. Finally, extremely affected areas were spotted in upper latitudes towards the Sahel and desert.
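The building block of the aMSDI named above is a moving standard deviation. The 1-D stdlib sketch below shows only that building block; the chapter's 2-D spatial windowing and the averaging across CVA magnitudes are omitted, and any normalisation it applies is an assumption not reproduced here.

```python
import statistics

def moving_std(values, window=3):
    """Population standard deviation over a sliding window. High values flag
    locally heterogeneous (anomalous) stretches of the input series, which is
    the role the moving std plays inside the aMSDI."""
    return [statistics.pstdev(values[i:i + window])
            for i in range(len(values) - window + 1)]
```

Averaging such windows across the per-index change magnitudes would then yield one comparable anomaly-intensity score per location.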

Conference papers on the topic "Random mapping algorithm"

1

Kasetkasem, T., P. Aonpong, P. Rakwatin, T. Chanwimaluang, and I. Kumazawa. "A novel land cover mapping algorithm based on random forest and Markov random field models." In IGARSS 2016 - 2016 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2016. http://dx.doi.org/10.1109/igarss.2016.7730334.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sun, Tao, Dongsheng Li, Zhe Quan, Hao Jiang, Shengguo Li, and Yong Dou. "Heavy-ball Algorithms Always Escape Saddle Points." In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/488.

Full text
Abstract:
Nonconvex optimization algorithms with random initialization have attracted increasing attention recently. It has been shown that many first-order methods almost always avoid saddle points when started from random points. In this paper, we answer a question: can nonconvex heavy-ball algorithms with random initialization avoid saddle points? The answer is yes! Directly applying the existing proof technique to heavy-ball algorithms is hard because each heavy-ball iteration depends on both the current and the previous point, so the algorithm cannot be formulated as an iteration of the form x_{k+1} = g(x_k) under a single mapping g. To this end, we design a new mapping on a new space. With suitable transformations, the heavy-ball algorithm can be interpreted as iterations of this mapping. Theoretically, we prove that heavy-ball gradient descent enjoys a larger stepsize than gradient descent while still escaping saddle points. The heavy-ball proximal point algorithm is also considered, and we prove that it always escapes saddle points as well.
APA, Harvard, Vancouver, ISO, and other styles
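The product-space reformulation described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code, and the saddle function f(x, y) = (x² - y²)/2 is a hypothetical example chosen for its strict saddle at the origin.

```python
import numpy as np

def heavy_ball_map(grad, step, beta):
    """One-step mapping G on the product space: the two-point recursion
    x_{k+1} = x_k - step*grad(x_k) + beta*(x_k - x_{k-1})
    becomes a single iteration z_{k+1} = G(z_k) with z_k = (x_k, x_{k-1})."""
    def G(z):
        x, x_prev = z
        x_next = x - step * grad(x) + beta * (x - x_prev)
        return (x_next, x)
    return G

# f(x, y) = (x**2 - y**2) / 2 has a strict saddle at the origin:
# gradient (x, -y), curvature +1 along x and -1 along y.
grad_f = lambda p: np.array([p[0], -p[1]])
G = heavy_ball_map(grad_f, step=0.1, beta=0.9)

z = (np.array([0.01, 0.01]), np.array([0.01, 0.01]))
for _ in range(50):
    z = G(z)
# The iterate contracts along the stable x-direction and escapes along y.
```

Doubling the state vector this way is a standard trick for momentum methods: it turns a second-order recursion into a first-order dynamical system, so that fixed-point and stable-manifold arguments become applicable.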
3

Prasatthong, N., T. Kasetkasem, Y. Tipsuwan, T. Isshiki, T. Chanwimaluang, and P. Hoonsuwan. "An Elevation Mapping Algorithm Based on a Markov Random Field Model for Underwater Exploration." In 2019 16th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON). IEEE, 2019. http://dx.doi.org/10.1109/ecti-con47248.2019.8955260.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Aonpong, Panyanat, Teerasit Kasetkasem, Preesan Rakwatin, Itsuo Kumazawa, and Thitiporn Chanwimaluang. "Combining a random forest algorithm and a level set method for land cover mapping." In 2016 13th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON). IEEE, 2016. http://dx.doi.org/10.1109/ecticon.2016.7561339.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

He, Ze, Shihua Li, Pengfei Zhai, and Yuchuan Deng. "Mapping Rice Planting Area Using Multi-Temporal Quad-Pol Radarsat-2 Datasets and Random Forest Algorithm." In IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2020. http://dx.doi.org/10.1109/igarss39084.2020.9324017.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zhafarina, Zhafirah, and P. Wicaksono. "Benthic habitat mapping on different coral reef types using random forest and support vector machine algorithm." In Sixth International Symposium on LAPAN-IPB Satellite, edited by Tien Dat Pham, Kasturi D. Kanniah, Kohei Arai, Gay Jane P. Perez, Yudi Setiawan, Lilik B. Prasetyo, and Yuji Murayama. SPIE, 2019. http://dx.doi.org/10.1117/12.2540727.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kasetkasem, T., P. Phuhinkong, P. Rakwatin, T. Chanwimaluang, and I. Kumazawa. "A flood mapping algorithm from cloud contaminated MODIS time-series data using a Markov random field model." In IGARSS 2014 - 2014 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2014. http://dx.doi.org/10.1109/igarss.2014.6946982.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zhu, Liang, and David Kazmer. "An Extensive Simplex Method Mapping the Global Feasibility." In ASME 2002 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2002. http://dx.doi.org/10.1115/detc2002/dac-34115.

Full text
Abstract:
Understanding the global feasibility of engineering decision-making problems is fundamental to the synthesis of rational engineering decisions. An Extensive Simplex Method is presented to map the global feasibility of a linear decision model relating multiple decision variables to multiple performance measures, each constrained by corresponding limits. The developed algorithm effectively traverses all extreme points in the feasible space and establishes the graph structure reflecting the active constraints and their connectivity. The algorithm demarcates basic and nonbasic variables at each extreme point, which is exploited to traverse the active constraints and merge degenerate extreme points. Finally, a random model generator is presented with the capability to control the matrix sparseness and the model degeneracy for an arbitrary number of decision variables and performance measures. The results indicate that all these model properties are significant factors affecting the total number of extreme points, their connected graph, and the global feasibility.
APA, Harvard, Vancouver, ISO, and other styles
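As a hypothetical illustration of the random model generator described above, the sketch below draws a random linear decision model y = A x with controllable sparseness in A and symmetric performance limits around zero. The function names and the feasibility check are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def random_lp_model(n_vars, n_perf, sparseness=0.5, seed=0):
    """Generate a random linear decision model y = A x with
    performance limits lo <= y <= hi.  `sparseness` is the expected
    fraction of zero entries in A."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n_perf, n_vars))
    mask = rng.random((n_perf, n_vars)) < sparseness
    A[mask] = 0.0
    # Limits straddle zero so the model is feasible at the origin.
    lo = -np.abs(rng.standard_normal(n_perf)) - 1.0
    hi = np.abs(rng.standard_normal(n_perf)) + 1.0
    return A, lo, hi

def is_feasible(A, lo, hi, x):
    """Check whether a decision vector x satisfies all performance limits."""
    y = A @ x
    return bool(np.all(y >= lo) and np.all(y <= hi))
```

Varying `sparseness` in such a generator is one way to study how matrix structure drives the number of extreme points and the shape of their connectivity graph.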
9

Benbahria, Z., I. Sebari, H. Hajji, and M. F. Smiej. "Automatic Mapping of Irrigated Areas in Mediteranean Context Using Landsat 8 Time Series Images and Random Forest Algorithm." In IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2018. http://dx.doi.org/10.1109/igarss.2018.8517810.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Chen, Jun, and Joseph Katz. "Advances of the Correlation Mapping Method to Eliminate the Peak-Locking Effect in PIV Analysis." In ASME 2005 Fluids Engineering Division Summer Meeting. ASMEDC, 2005. http://dx.doi.org/10.1115/fedsm2005-77437.

Full text
Abstract:
The peak-locking effect causes mean bias in most existing correlation-based algorithms for PIV data analysis. This phenomenon is inherent to Sub-pixel Curve Fitting (SPCF) through discrete correlation values, which is used to obtain the sub-pixel part of the displacement. A new technique for obtaining sub-pixel accuracy, the Correlation Mapping Method (CMM), was proposed by Chen & Katz [1, 2]. The new method works effectively, and peak-locking disappears in all previous test cases, covering both synthetic and experimental images. The random errors are also significantly reduced. In this paper, an optimization of the algorithm is reported. Using sub-pixel interpolation, the cross-correlation function between image 1 and image 2 is expressed as a polynomial function of the unknown displacement, whose coefficients are determined by the autocorrelation function of image 1. This virtual correlation function can be matched with the exact correlation value at every point in the vicinity of the discrete correlation peak (a 5×5 pixel area is chosen in the present study). A least-squares method is used to find the optimal displacement components that minimize the difference between the real and virtual correlation values. The performance of this method in the presence of background noise and out-of-plane motion is investigated using synthetic images, together with the influence of under-resolved particle images, and the results are compared with those of the SPCF method. The advantage of the CMM over SPCF is demonstrated in these studies.
APA, Harvard, Vancouver, ISO, and other styles
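For contrast with the CMM, the three-point Gaussian fit below is a minimal sketch of the conventional SPCF approach whose bias toward integer displacements produces peak-locking. It is an illustrative reconstruction, assuming a strictly positive correlation map whose peak lies away from the borders, not code from the paper.

```python
import numpy as np

def gaussian_subpixel(corr):
    """Classic three-point Gaussian sub-pixel fit (an SPCF variant).
    Returns the sub-pixel refinement of the integer peak location of a
    2-D correlation map; fitting only three discrete samples per axis
    is what biases estimates toward integer shifts (peak-locking)."""
    i, j = np.unravel_index(np.argmax(corr), corr.shape)

    def refine(cm, c0, cp):
        # Vertex of the parabola fitted through log of three samples.
        lm, l0, lp = np.log(cm), np.log(c0), np.log(cp)
        return (lm - lp) / (2.0 * (lm - 2.0 * l0 + lp))

    di = refine(corr[i - 1, j], corr[i, j], corr[i + 1, j])
    dj = refine(corr[i, j - 1], corr[i, j], corr[i, j + 1])
    return i + di, j + dj
```

On a perfectly Gaussian correlation peak this fit is exact; the mean bias appears for real particle-image correlations, which is the regime where the CMM's least-squares match over a 5×5 neighborhood pays off.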
