To see the other types of publications on this topic, follow the link: Field Code Forest Algorithm.

Journal articles on the topic 'Field Code Forest Algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Field Code Forest Algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Halladin-Dąbrowska, Anna, Adam Kania, and Dominik Kopeć. "The t-SNE Algorithm as a Tool to Improve the Quality of Reference Data Used in Accurate Mapping of Heterogeneous Non-Forest Vegetation." Remote Sensing 12, no. 1 (December 20, 2019): 39. http://dx.doi.org/10.3390/rs12010039.

Abstract:
Supervised classification methods, used for many applications including vegetation mapping, require accurate “ground truth” data to be effective. Nevertheless, it is common for the quality of these data to be poorly verified before they are used for the training and validation of classification models. The fact that noisy or erroneous parts of the reference dataset are not removed is usually explained by the relatively high resistance of some algorithms to errors. The objective of this study was to demonstrate the rationale for cleaning the reference dataset used for the classification of heterogeneous non-forest vegetation, and to present a workflow based on the t-distributed stochastic neighbor embedding (t-SNE) algorithm for the better integration of reference data with remote sensing data in order to improve outcomes. The proposed analysis is a new application of the t-SNE algorithm. The effectiveness of this workflow was tested by classifying three heterogeneous non-forest Natura 2000 habitats: Molinia meadows (Molinion caeruleae; code 6410), species-rich Nardus grassland (code 6230) and dry heaths (code 4030), employing two commonly used algorithms: random forest (RF) and AdaBoost (AB), which, according to the literature, differ in their resistance to errors in reference datasets. Polygons collected in the field (on-ground reference data) in 2016 and 2017, containing no intentional errors, were used as the on-ground reference dataset. The remote sensing data used in the classification were obtained in 2017 during the peak growing season by a HySpex sensor consisting of two imaging spectrometers covering spectral ranges of 0.4–0.9 μm (VNIR-1800) and 0.9–2.5 μm (SWIR-384). The on-ground reference dataset was gradually cleaned by verifying candidate polygons selected by visual interpretation of t-SNE plots. Around 40–50% of candidate polygons were ultimately found to contain errors. Altogether, 15% of reference polygons were removed. As a result, the quality of the final map, as assessed by the Kappa and F1 accuracy measures as well as by visual evaluation, was significantly improved. The global map accuracy increased by about 6% (in the Kappa coefficient), relative to the baseline classification obtained using random removal of the same number of reference polygons.
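A minimal sketch of this kind of t-SNE screening, assuming hypothetical per-polygon spectral features (the paper's actual workflow and HySpex data are richer): embed the reference samples with scikit-learn's t-SNE and inspect the plot for polygons that fall among the wrong class.

```python
# Minimal sketch (not the authors' code): embed per-polygon spectral features
# with t-SNE so that mislabeled reference polygons show up as points that
# cluster with the wrong habitat class. X and y are hypothetical placeholders.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))          # 300 polygons x 50 spectral features
y = rng.integers(0, 3, size=300)        # habitat classes, e.g. 6410, 6230, 4030

emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

plt.scatter(emb[:, 0], emb[:, 1], c=y, cmap="viridis", s=10)
plt.title("t-SNE of reference polygons: outliers are cleaning candidates")
plt.show()
```

Points that land inside another class's cluster correspond to the "candidate polygons" one would verify and possibly remove.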
2

Zhang, Wenyu, Yaqun Zhao, and Sijie Fan. "Cryptosystem Identification Scheme Based on ASCII Code Statistics." Security and Communication Networks 2020 (December 15, 2020): 1–10. http://dx.doi.org/10.1155/2020/8875864.

Abstract:
In the field of information security, block ciphers are widely used in the protection of messages, and their security naturally attracts attention. Identifying the cryptosystem is a prerequisite for encrypted data analysis; it belongs to the category of attack analysis in cryptanalysis and has important theoretical significance and application value. This paper focuses on the extraction of ciphertext features and the construction of cryptosystem identification classifiers. The main contents and innovations of this paper are as follows. Firstly, inspired by language processing, we propose a feature extraction scheme based on ASCII statistics of ciphertexts, which decreases the data dimension during preprocessing. Secondly, on the basis of previous work, we increase the number of block cipher types to eight, encrypt plaintexts of the same size as experimental objects, and identify the cryptosystem. Thirdly, we use two machine learning classifiers, random forest and SVM, to perform classification experiments. The experimental results show that our scheme can not only improve the identification accuracy for the eight typical block cipher algorithms but also shorten the experimental time and reduce the computational load by greatly reducing the dimension of the feature vector. The various evaluation indicators obtained by the scheme are also greatly improved compared with the existing published literature.
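A minimal sketch of the byte-frequency idea under stated assumptions (random bytes stand in for real ciphertexts, and a 256-bin histogram is our reading of "ASCII statistics"; the paper's features and data differ):

```python
# Minimal sketch (assumptions, not the paper's code): turn each ciphertext
# into a 256-bin byte-frequency histogram and classify the generating cipher
# with random forest and SVM, mirroring the ASCII-statistics idea.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def byte_histogram(ciphertext: bytes) -> np.ndarray:
    counts = np.bincount(np.frombuffer(ciphertext, dtype=np.uint8), minlength=256)
    return counts / max(len(ciphertext), 1)   # normalized 256-dim feature

# Hypothetical dataset: random bytes standing in for ciphertexts of 8 ciphers.
rng = np.random.default_rng(1)
texts = [rng.bytes(1024) for _ in range(800)]
labels = rng.integers(0, 8, size=800)
X = np.stack([byte_histogram(t) for t in texts])

Xtr, Xte, ytr, yte = train_test_split(X, labels, random_state=0)
for clf in (RandomForestClassifier(n_estimators=200), SVC(kernel="rbf")):
    print(type(clf).__name__, clf.fit(Xtr, ytr).score(Xte, yte))
```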
3

Müller, Hendrik, Christoph Behrens, and David J. E. Marsh. "An optimized Ly α forest inversion tool based on a quantitative comparison of existing reconstruction methods." Monthly Notices of the Royal Astronomical Society 497, no. 4 (August 6, 2020): 4937–55. http://dx.doi.org/10.1093/mnras/staa2225.

Abstract:
We present a same-level comparison of the most prominent inversion methods for the reconstruction of the matter density field in the quasi-linear regime from the Ly α forest flux. Moreover, we present a pathway for refining the reconstruction in the framework of numerical optimization, and we apply this approach to construct a novel hybrid method. The methods used so far for matter reconstructions are the Richardson–Lucy algorithm, an iterative Gauss–Newton method, and a statistical approach assuming a one-to-one correspondence between matter and flux. We study these methods for high spectral resolutions such that thermal broadening becomes relevant. The inversion methods are compared on synthetic data (generated with the lognormal approach) with respect to their performance, accuracy, stability against noise, and robustness against systematic uncertainties. We conclude that the iterative Gauss–Newton method offers the most accurate reconstruction, in particular at small S/N, but also has the largest numerical complexity and requires the strongest assumptions. The other two algorithms are faster, comparably precise at small noise levels, and, in the case of the statistical approach, more robust against inaccurate assumptions on the thermal history of the intergalactic medium (IGM). We use these results to refine the statistical approach using regularization. Our new approach has low numerical complexity, makes few assumptions about the history of the IGM, and is shown to be the most accurate reconstruction at small S/N, even if the thermal history of the IGM is not known. Our code will be made publicly available.
4

Taghlabi, Faycal, Laila Sour, and Ali Agoumi. "Prelocalization and leak detection in drinking water distribution networks using modeling-based algorithms: a case study for the city of Casablanca (Morocco)." Drinking Water Engineering and Science 13, no. 2 (September 21, 2020): 29–41. http://dx.doi.org/10.5194/dwes-13-29-2020.

Abstract:
The role of a drinking water distribution network (DWDN) is to supply high-quality water at the necessary pressure at various times of the day for several consumption scenarios. Locating and identifying water leakage areas has become a major concern for managers of the water supply who seek to optimize and improve constancy of supply. In this paper, we present the results of field research conducted to detect and locate leaks in the DWDN, focusing on the resolution of the Fixed And Variable Area Discharge (FAVAD) equation by using prediction algorithms in conjunction with hydraulic modeling and a geographical information system (GIS). The leak localization method is applied in the oldest part of Casablanca. In this research, we used two methodologies in different leak episodes: (i) the first episode is based on a simulation of artificial leaks on the MATLAB platform using the EPANET code to establish a database of pressures that describes the network's behavior in the presence of leaks. The data thus established were fed into a machine learning algorithm called random forest, which forecasts the leakage rate and its location in the network; (ii) the second was a field test simulating real artificial leaks by opening and closing hydrants at different locations, with leak sizes of 6 and 17 L s−1. The two methods converged to comparable results: the leak position is located within a 100 m radius of the actual leaks.
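A minimal sketch of the learning step, with random numbers standing in for the EPANET-simulated pressure database (the variable names and data shapes are assumptions, not the authors' code):

```python
# Minimal sketch (assumed data layout): learn leak rate and leak node from
# nodal pressure vectors, the way the paper feeds simulated pressure
# snapshots to a random forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier

rng = np.random.default_rng(2)
n_nodes = 120
P = rng.normal(50, 2, size=(1000, n_nodes))       # pressures under 1000 leak scenarios
leak_node = rng.integers(0, n_nodes, size=1000)   # where each simulated leak was
leak_rate = rng.uniform(1, 20, size=1000)         # leak size in L/s

loc_model = RandomForestClassifier(n_estimators=300).fit(P, leak_node)
rate_model = RandomForestRegressor(n_estimators=300).fit(P, leak_rate)

p_obs = P[0:1]                                    # a new observed pressure snapshot
print("predicted node:", loc_model.predict(p_obs)[0],
      "predicted rate:", rate_model.predict(p_obs)[0])
```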
5

Zhou, Shuni, Guangxing Wu, Yehong Dong, Yuanxiang Ni, Yuheng Hao, Yunhe Jiang, Chuang Zhou, and Zhiyu Tao. "Evaluations on supervised learning methods in the calibration of seven-hole pressure probes." PLOS ONE 18, no. 1 (January 23, 2023): e0277672. http://dx.doi.org/10.1371/journal.pone.0277672.

Abstract:
Machine learning has become a popular, convenient, and efficient computing tool applied in many industries. The multi-hole pressure probe is an important technique widely used in flow vector measurement, and integrating machine learning into multi-hole probe measurement is a new attempt. In this work, six typical supervised learning methods from the scikit-learn library are first selected for parameter tuning. Based on the optimal parameters, a comprehensive evaluation is conducted from four aspects: prediction accuracy, prediction efficiency, feature sensitivity, and robustness to the failure of individual hole ports. As a result, the random forest and k-nearest neighbors algorithms show the best overall prediction performance. Compared with the in-house traditional algorithm, the machine learning algorithms have great advantages in computational efficiency and convenience of writing code. Multi-layer perceptron and support vector machines are the most time-consuming of the six algorithms. The prediction accuracy of all the algorithms is very sensitive to the features, and using features based on physical knowledge yields highly accurate predictions. Finally, the KNN algorithm is successfully applied to field measurements of the angle of attack of wind turbine blades. These findings provide a new reference for the application of machine learning methods in multi-hole probe calibration and measurement.
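A minimal sketch of this kind of scikit-learn comparison on toy calibration data (the feature construction and model list here are illustrative assumptions; the paper tunes six methods on real probe data):

```python
# Minimal sketch (hypothetical calibration data): compare several scikit-learn
# regressors for mapping seven hole-pressure coefficients to a flow angle.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(2000, 7))                     # 7 pressure-ratio features
y = X @ rng.normal(size=7) + 0.1 * rng.normal(size=2000)   # toy angle of attack

for model in (RandomForestRegressor(n_estimators=100),
              KNeighborsRegressor(n_neighbors=5),
              MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)):
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(type(model).__name__, round(score, 3))
```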
6

Mohan, Midhun, Rodrigo Vieira Leite, Eben North Broadbent, Wan Shafrina Wan Mohd Jaafar, Shruthi Srinivasan, Shaurya Bajaj, Ana Paula Dalla Corte, et al. "Individual tree detection using UAV-lidar and UAV-SfM data: A tutorial for beginners." Open Geosciences 13, no. 1 (January 1, 2021): 1028–39. http://dx.doi.org/10.1515/geo-2020-0290.

Abstract:
Applications of unmanned aerial vehicles (UAVs) have proliferated in the last decade due to technological advancements on various fronts such as structure-from-motion (SfM), machine learning, and robotics. An important preliminary step with regard to forest inventory and management is individual tree detection (ITD), which is required to calculate forest attributes such as stem volume, forest uniformity, and biomass. However, users may find adopting the UAVs and algorithms for their specific projects challenging due to the plethora of information available. Herein, we provide a step-by-step tutorial for performing ITD using (i) low-cost UAV-derived imagery and (ii) UAV-based high-density lidar (light detection and ranging). Functions from open-source R packages were implemented to develop a canopy height model (CHM) and perform ITD utilizing the local maxima (LM) algorithm. ITD accuracy assessment statistics and validation were derived through manual visual interpretation of high-resolution imagery and field-data-based accuracy assessment. As the intended audience is beginners in remote sensing, we have adopted a very simple methodology and chosen study plots with relatively open canopies to demonstrate our proposed approach; the respective R codes and sample plot data are available as supplementary materials.
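The tutorial itself uses R packages; as a language-neutral illustration, here is a minimal local-maxima sketch on a synthetic CHM in Python (the window size and height cutoff are illustrative assumptions):

```python
# Minimal sketch (synthetic CHM, not the tutorial's R code): local-maxima tree
# detection on a canopy height model using a fixed moving window, analogous
# to the lidR-style LM algorithm described in the tutorial.
import numpy as np
from scipy.ndimage import maximum_filter

rng = np.random.default_rng(4)
chm = rng.uniform(0, 1, size=(200, 200))          # low "ground" noise
chm[50, 60] = chm[120, 80] = 25.0                 # two synthetic tree apexes

window = 5                                        # window size in pixels (assumed)
is_peak = (chm == maximum_filter(chm, size=window)) & (chm > 2.0)  # 2 m cutoff
rows, cols = np.nonzero(is_peak)
print(f"detected {len(rows)} candidate tree tops")
```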
7

Yi, Weilin, and Hongliang Cheng. "Investigation on the Optimal Design and Flow Mechanism of High Pressure Ratio Impeller with Machine Learning Method." International Journal of Aerospace Engineering 2020 (November 29, 2020): 1–11. http://dx.doi.org/10.1155/2020/8855314.

Abstract:
The optimization of a high-pressure-ratio impeller with splitter blades is difficult because of the large number of design parameters, the high time cost, and the complex flow field, so few related works have been published. In this paper, an engineering-applied centrifugal impeller with an ultrahigh pressure ratio of 9 was selected as the datum geometry. An advanced optimization strategy, including parameterization of the impeller with 41 parameters, high-quality CFD simulation, machine learning models based on SVR (support vector regression) and random forest, and a multipoint genetic algorithm (MPGA), was set up by combining commercial software and in-house Python code. The optimization objective is to maximize peak efficiency, with constraints on the pressure ratio at the near-stall point and the choked mass flow. Results show that the peak efficiency increases by 1.24% and the overall performance is improved simultaneously. By comparing the details of the flow field, it is found that the weakening of the shock wave, the reduction of the tip leakage flow rate near the leading edge, the smaller separation region near the root of the leading edge, and more homogeneous outlet flow distributions are the main reasons for the performance improvement. This verifies the reliability of the SVR-MPGA model for multiparameter optimization of highly aerodynamically loaded impellers and reveals a probable pattern of performance improvement.
8

Gupta, Surya, Tomislav Hengl, Peter Lehmann, Sara Bonetti, and Dani Or. "SoilKsatDB: global database of soil saturated hydraulic conductivity measurements for geoscience applications." Earth System Science Data 13, no. 4 (April 15, 2021): 1593–612. http://dx.doi.org/10.5194/essd-13-1593-2021.

Abstract:
The saturated soil hydraulic conductivity (Ksat) is a key parameter in many hydrological and climate models. Ksat values are primarily determined from basic soil properties and may vary over several orders of magnitude. Despite the availability of Ksat datasets in the literature, significant efforts are required to combine the data before they can be used for specific applications. In this work, a total of 13,258 Ksat measurements from 1908 sites were assembled from the published literature and other sources, standardized (i.e., units made identical), and quality checked in order to obtain a global database of soil saturated hydraulic conductivity (SoilKsatDB). The SoilKsatDB covers most regions across the globe, with the highest number of Ksat measurements from North America, followed by Europe, Asia, South America, Africa, and Australia. In addition to Ksat, other soil variables such as soil texture (11,584 measurements), bulk density (11,262 measurements), soil organic carbon (9787 measurements), moisture content at field capacity (7382 measurements), and wilting point (7411 measurements) are also included in the dataset. To show an application of SoilKsatDB, we derived Ksat pedotransfer functions (PTFs) for temperate regions and laboratory-based soil properties (sand and clay content, bulk density). Accurate models can be fitted using a random forest machine learning algorithm (best concordance correlation coefficient (CCC) equal to 0.74 and 0.72 for temperate areas and laboratory measurements, respectively). However, when these Ksat PTFs are applied to soil samples from tropical climates and field measurements, respectively, the model performance is significantly lower (CCC = 0.49 for tropical and CCC = 0.10 for field measurements). These results indicate that there are significant differences between Ksat data collected in temperate and tropical regions and Ksat measured in the laboratory or field. The SoilKsatDB dataset is available at https://doi.org/10.5281/zenodo.3752721 (Gupta et al., 2020), and the code used to extract the data from the literature and the applied random forest machine learning approach are publicly available under an open data license.
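A minimal sketch of fitting such a PTF and scoring it with Lin's concordance correlation coefficient, on toy data (the real predictors and samples are those of SoilKsatDB):

```python
# Minimal sketch (toy data): fit a random forest pedotransfer function from
# sand, clay and bulk density, and score it with Lin's concordance
# correlation coefficient (CCC), the metric used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def ccc(y_true, y_pred):
    mx, my = y_true.mean(), y_pred.mean()
    vx, vy = y_true.var(), y_pred.var()
    cov = ((y_true - mx) * (y_pred - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

rng = np.random.default_rng(5)
X = rng.uniform(size=(1500, 3))                   # sand, clay, bulk density (scaled)
y = 2 * X[:, 0] - 3 * X[:, 1] - X[:, 2] + 0.2 * rng.normal(size=1500)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=300).fit(Xtr, ytr)
print("CCC:", round(ccc(yte, rf.predict(Xte)), 3))
```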
9

Jamali, A., M. Mahdianpari, and İ. R. Karaş. "A COMPARISON OF TREE-BASED ALGORITHMS FOR COMPLEX WETLAND CLASSIFICATION USING THE GOOGLE EARTH ENGINE." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-4/W5-2021 (December 23, 2021): 313–19. http://dx.doi.org/10.5194/isprs-archives-xlvi-4-w5-2021-313-2021.

Abstract:
Wetlands are endangered ecosystems that need to be systematically monitored. Wetlands contribute significantly to the well-being of humans, fauna, and fungi. They provide vital services, including water storage, carbon sequestration, food security, and protecting shorelines from floods. Remote sensing is preferred over conventional earth observation methods such as field surveying, as it provides the necessary tools for a systematic and standardized method of large-scale wetland mapping. At the same time, new cloud computing technologies for the storage and processing of large-scale remote sensing big data, such as the Google Earth Engine (GEE), have emerged. As such, for complex wetland classification in the pilot site of Avalon, Newfoundland, Canada, we compare the results of three tree-based classifiers, Decision Tree (DT), Random Forest (RF), and Extreme Gradient Boosting (XGB), available in the GEE code editor, using Sentinel-2 images. Based on the results, the XGB classifier with an overall accuracy of 82.58% outperformed the RF (82.52%) and DT (77.62%) classifiers.
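A minimal sketch of the corresponding Google Earth Engine workflow in the Python API (the paper works in the GEE code editor; the asset ID, band list, and parameters below are hypothetical):

```python
# Minimal sketch of a GEE random forest classification; assumes the
# earthengine-api package and prior ee.Authenticate().
import ee
ee.Initialize()

s2 = (ee.ImageCollection("COPERNICUS/S2_SR")
        .filterDate("2020-06-01", "2020-09-01")
        .median()
        .select(["B2", "B3", "B4", "B8"]))

# Hypothetical labeled wetland points with a 'class' property.
samples = s2.sampleRegions(
    collection=ee.FeatureCollection("users/example/wetland_points"),
    properties=["class"], scale=10)

rf = ee.Classifier.smileRandomForest(100).train(
    features=samples, classProperty="class",
    inputProperties=["B2", "B3", "B4", "B8"])

classified = s2.classify(rf)   # wetland map; export or visualize as needed
```

The DT and XGB runs in the paper would swap in the analogous tree-based classifiers offered by GEE in the same pipeline.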
10

Rusyn, B. P., O. A. Lutsyk, R. Ya Kosarevych, and Yu V. Obukh. "RECOGNITION OF DAMAGED FOREST WITH THE HELP OF CONVOLUTIONAL MODELS IN REMOTE SENSING." Ukrainian Journal of Information Technology 3, no. 1 (2021): 1–7. http://dx.doi.org/10.23939/ujit2021.03.001.

Abstract:
The article provides a detailed review of the problem of deforestation, which in recent years has become uncontrolled. The main causes of forest damage are analyzed, among which the best known are climate change, diseases, and pests. Forestry losses resulting from tree diseases, which are large-scale and widespread in other countries, are given. Solving these problems is possible given high-quality monitoring involving automated remote sensing tools and modern methods of image analysis, including artificial intelligence approaches such as neural networks and deep learning. The article proposes an approach to the automatic localization and recognition of trees affected by drought, which is of great practical importance for environmental monitoring and forestry. A deep convolutional model built with the TensorFlow and Keras libraries has been developed for localization and recognition; it consists of a detector network and a separate classifier network. To train and test the proposed network on images obtained by remote sensing, a training database containing 8500 images was created. The proposed model is compared with existing methods on characteristics such as accuracy and speed, evaluated on a validation sample of 1700 images. The model has been optimized for practical use on CPU and GPU through pseudo-quantization during training: this shapes the values of the weights during learning and brings their distribution closer to a uniform law, which in turn allows more efficient quantization of the original model. The average running time of the algorithm is also determined. In the Visual C++ environment, an expert program based on the proposed model has been created that performs ecological monitoring and analysis of dry forests in the field in real time. Libraries such as OpenCV and Direct were used in the software development, and the code follows object-oriented programming standards. The results of the work and the developed software can be used in remote monitoring and classification systems for environmental monitoring and in applied tasks of forestry.
11

Małaszek, Maciej, Andrzej Zembrzuski, and Krzysztof Gajowniczek. "ForestTaxator: A tool for detection and approximation of cross-sectional area of trees in a cloud of 3D points." Machine Graphics and Vision 31, no. 1/4 (December 15, 2022): 19–48. http://dx.doi.org/10.22630/mgv.2022.31.1.2.

Abstract:
In this paper we propose novel software, named ForestTaxator, supporting terrestrial laser scanning data processing, in which dendrometric tree analysis is divided into two main processes: tree detection in the point cloud and the development of three-dimensional models of individual trees. The use of genetic algorithms to solve the problem of tree detection in a 3D point cloud and to approximate the cross-sectional area with an ellipse-based model is also presented. The detection and approximation algorithms are proposed and tested using various variants of genetic algorithms. The work shows that the genetic algorithms perform very well: the obtained results are consistent with the reference data to a large extent, and the time of the genetic calculations is very short. The attractiveness of the presented software lies in the fact that it provides all the functionalities needed in the forest inventory field. The software is written in C# and runs on the .NET Core platform, which ensures its full portability between Windows, macOS, and Linux. It provides a number of interfaces, thus ensuring a high level of modularity. The software and its code are made freely available.
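A minimal sketch of the ellipse-fitting idea with a toy genetic algorithm (axis-aligned ellipse and a simple mutation-only GA; the paper's model and GA variants are more sophisticated):

```python
# Minimal sketch: a tiny genetic algorithm that fits an axis-aligned ellipse
# (cx, cy, a, b) to a horizontal slice of stem points.
import numpy as np

rng = np.random.default_rng(6)
t = rng.uniform(0, 2 * np.pi, 200)
pts = np.c_[1.0 + 0.30 * np.cos(t), 2.0 + 0.22 * np.sin(t)]  # synthetic stem slice

def fitness(ind):
    cx, cy, a, b = ind
    # algebraic residual of the ellipse equation, averaged over points
    r = ((pts[:, 0] - cx) / a) ** 2 + ((pts[:, 1] - cy) / b) ** 2 - 1.0
    return -np.mean(r ** 2)

pop = rng.uniform([0, 1, 0.05, 0.05], [2, 3, 0.6, 0.6], size=(60, 4))
for _ in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]                  # selection of the fittest
    children = parents[rng.integers(0, 20, 60)] + rng.normal(0, 0.01, (60, 4))
    children[:, 2:] = np.clip(children[:, 2:], 1e-3, None)   # keep semi-axes positive
    pop = children

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("cx, cy, a, b ≈", np.round(best, 3))
```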
12

Liu, Yuansheng, and Jinyan Li. "Hamming-shifting graph of genomic short reads: Efficient construction and its application for compression." PLOS Computational Biology 17, no. 7 (July 19, 2021): e1009229. http://dx.doi.org/10.1371/journal.pcbi.1009229.

Abstract:
Graphs such as de Bruijn graphs and OLC (overlap-layout-consensus) graphs have been widely adopted for the de novo assembly of genomic short reads. This work studies another important problem in the field: how graphs can be used for high-performance compression of large-scale sequencing data. We present a novel graph definition named the Hamming-shifting graph to address this problem. The definition originates from the technological characteristics of next-generation sequencing machines, aiming to link all pairs of distinct reads that have a small Hamming distance or a small shifting offset or both. We compute multiple lexicographically minimal k-mers to index the reads for an efficient search of the lightest-weight edges, and we prove a very high probability of successfully detecting these edges. The resulting graph creates a full mutual reference between the reads to cascade a code-minimized transfer of every child read for optimal compression. We conducted compression experiments on the minimum spanning forest of this extremely sparse graph and achieved a 10–30% greater file size reduction compared to the best compression results using existing algorithms. As future work, the separation and connectivity degrees of these giant graphs can be used as economical measurements or protocols for quick quality assessment of wet-lab machines, for sufficiency control of genomic library preparation, and for accurate de novo genome assembly.
13

Wu, Yi, and Yuwen Pan. "Application Analysis of Credit Scoring of Financial Institutions Based on Machine Learning Model." Complexity 2021 (October 22, 2021): 1–12. http://dx.doi.org/10.1155/2021/9222617.

Abstract:
Credit scores are the basis for financial institutions to make credit decisions. With the development of science and technology, big data technology has penetrated the financial field, and personal credit investigation has entered a new era. Personal credit evaluation based on big data is one of the hot research topics, and this paper completes three main pieces of work. Firstly, according to the application scenario of credit evaluation of personal credit data, the experimental dataset is cleaned, the discrete data are one-hot encoded, and the data are standardized. Because of the high dimension of personal credit data, the pdC-RF algorithm is adopted to optimize the correlation of the data features and reduce the 145-dimensional data to 22 dimensions. On this basis, weight-of-evidence (WOE) encoding is carried out on the dataset, which is then applied to random forest, support vector machine, and logistic regression models, and their performance is compared. Logistic regression is found to be the most suitable for a personal credit evaluation model based on the Lending Club dataset. Finally, based on the logistic regression model with the best parameters, the user samples are graded and the final scorecard is output.
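A minimal sketch of weight-of-evidence (WOE) encoding on toy data, the step applied before the logistic-regression scorecard (the bins and the smoothing constant are illustrative choices):

```python
# Minimal sketch: WOE for one binned feature.
# WOE_i = ln( (good_i / good_total) / (bad_i / bad_total) ), smoothed to avoid /0.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "income_bin": ["low", "low", "mid", "mid", "high", "high", "high", "low"],
    "default":    [1,     0,     0,     1,     0,      0,      1,      1],
})

good = (df["default"] == 0).groupby(df["income_bin"]).sum()
bad = (df["default"] == 1).groupby(df["income_bin"]).sum()
woe = np.log(((good + 0.5) / good.sum()) / ((bad + 0.5) / bad.sum()))
df["income_woe"] = df["income_bin"].map(woe)   # replace the category by its WOE
print(woe)
```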
14

Kimura, F., and M. Shridhar. "Segmentation-recognition algorithm for zip code field recognition." Machine Vision and Applications 5, no. 3 (June 1992): 199–210. http://dx.doi.org/10.1007/bf02626998.

15

Butar-Butar, Juli Loisiana, and Misa Br Bukit. "Metode Reversible Self-Dual untuk Konstruksi Kode DNA atas Lapangan Hingga GF(4)" [The Reversible Self-Dual Method for DNA Code Construction over the Finite Field GF(4)]. Jambura Journal of Mathematics 4, no. 2 (June 1, 2022): 188–99. http://dx.doi.org/10.34312/jjom.v4i2.13583.

Abstract:
The DNA molecule chain consists of two complementary strands composed of a sequence of four nucleotide bases, namely adenine (A), cytosine (C), guanine (G), and thymine (T). A DNA code is a set of fixed-length codewords over the alphabet {A, C, T, G}. DNA coding is one application of coding theory over a finite field: the set {A, C, T, G} is identified with the finite field GF(4) = {0, 1, w, w^2}, where w^2 + w + 1 = 0. A reversible self-dual (RSD) code over the finite field GF(4) is a code that is equal to its dual and contains the reverse of each codeword in the code. This study aims to obtain an algorithm, called the Reversible Self-Dual Method, that constructs a DNA code from an RSD code C over GF(4). The aspects studied include the characteristics that form the theoretical basis for the DNA code algorithm built on RSD codes over GF(4). The compiled algorithm is a DNA code construction method for even codeword lengths that conforms to the Hamming distance constraint, the reverse-complement constraint, and the GC-content constraint. The input of the algorithm is a generator matrix of an RSD code C with minimum distance d, and the output is a DNA code that satisfies these three constraints.
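A minimal sketch of checking the three constraints on a toy DNA code (the 50% GC requirement and the inclusion of u = v in the reverse-complement check are common conventions assumed here, not necessarily the paper's exact ones):

```python
# Minimal sketch: verify Hamming distance, reverse-complement and GC-content
# constraints for a small DNA code.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def hamming(u: str, v: str) -> int:
    return sum(a != b for a, b in zip(u, v))

def reverse_complement(u: str) -> str:
    return "".join(COMPLEMENT[b] for b in reversed(u))

def gc_content_ok(u: str) -> bool:
    return sum(b in "GC" for b in u) == len(u) // 2   # assumed 50% GC constraint

code = ["AACG", "CCAT", "GGTA"]
d_min = 2
ok = all(
    gc_content_ok(u)
    and all(hamming(u, v) >= d_min for v in code if v != u)
    and all(hamming(reverse_complement(u), v) >= d_min for v in code)
    for u in code
)
print("satisfies constraints:", ok)   # True for this toy code
```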
16

Knyazeva, O. V., К. А. Sych, and O. K. Khotkina. "Principles of criminal law in the field of ensuring environmental safety." E3S Web of Conferences 203 (2020): 05024. http://dx.doi.org/10.1051/e3sconf/202020305024.

Abstract:
The enormous scale of the issues involved in protecting the natural environment, like those of the development of human civilization, is becoming more and more obvious. Current criminal legislation (Art. 2 of the Criminal Code of the Russian Federation) provides for the protection of the environment by means of law. This is the subject of Chapter 26 of the Criminal Code of the Russian Federation, “Environmental Crimes”, where this group of crimes includes water pollution (Art. 250), air pollution (Art. 251), marine pollution (Art. 252), damage to the land (Art. 254), and destruction or damage of forest plantations (Art. 262). These types of crimes are especially relevant at present, after a number of emergency situations associated with river pollution in several regions, as well as cases of forest fires. In this regard, attention is drawn to the legislative list of principles of criminal law, in particular the principle of justice (Article 6 of the Criminal Code of the Russian Federation), which affects the interests of perpetrators of a crime. However, it should be recognized that the very category of “justice” presupposes consideration not only of the interests of the guilty person but also of the interests of the victim of a crime. Justice, as a principle of criminal law, should include, among other aspects, the restoration of the victim's rights violated by the crime and compensation for the harm caused to the victim.
17

Alebady, Wallaa Yaseen, and Ahmed Abdulkadhim Hamad. "Turbo polar code based on soft-cancelation algorithm." Indonesian Journal of Electrical Engineering and Computer Science 26, no. 1 (April 1, 2022): 521. http://dx.doi.org/10.11591/ijeecs.v26.i1.pp521-530.

Abstract:
Since the first polar code of Arikan, the research field of polar codes has been continuously active, and improving the performance of finite-length polar codes is its central point. In this paper, the parallel concatenated systematic turbo polar code (PCSTPC) model is proposed to improve polar code performance in the finite-length regime. On the encoder side, two systematic polar encoders are used as constituent encoders, while on the decoder side, two single-iteration soft-cancelation (SCAN) decoders are used as soft-in-soft-out (SISO) algorithms inside the iterative decoding algorithm of the PCSTPC. Compared to the optimized turbo polar code with SCAN and BP decoders, the proposed model has about 0.2 dB and 0.48 dB gains at BER = 10^(-4), respectively, in addition to 0.1 dB, 0.31 dB, and 0.72 dB gains over the TPC-SSCL32, TPC-SSCL16, and TPC-SSCL8 models, respectively. Moreover, the proposed model offers lower complexity than the other models, and therefore requires less memory and time.
18

VORONOVICH, A. "NON-PARABOLIC MARCHING ALGORITHM FOR SOUND FIELD CALCULATION IN THE OCEAN WAVEGUIDE." Journal of Computational Acoustics 04, no. 04 (December 1996): 399–423. http://dx.doi.org/10.1142/s0218396x96000155.

Abstract:
An algorithm is presented for calculating the sound field in an inhomogeneous ocean waveguide. It does not involve the parabolic approximation and can be considered exact in principle (at least for 2D inhomogeneities of the sound speed field). On the other hand, it is “marching” and can be easily implemented as a computer code (note that marching in this case proceeds in the “backward” direction, i.e., towards the source). These features of the code are similar to the coupled-mode algorithm (COUPLE) originally developed by R. Evans. The principal difference is that the suggested code does not assume a piecewise-constant approximation of the waveguide properties with respect to the horizontal coordinates. As a result, the horizontal marching steps can be increased significantly. An estimate of the efficiency of the approach compared to the stepwise coupled-mode method is given. Results of testing the code on a benchmark problem, as well as a calculation of sound propagation through a strong inhomogeneity formed by the sub-arctic front, are presented. The present version of the code can be used to calculate the entries of the scattering matrix (S-matrix) for the ocean waveguide, as well as the travel times of different modes (derivatives of the phases of the corresponding entries with respect to frequency). A priori restrictions on the S-matrix (reciprocity and energy conservation) are also given, and an objective quantitative criterion for the accuracy of numerical algorithms, formulated in terms of the S-matrix, is suggested.
19

WANG, J., D. KONDRASHOV, P. C. LIEWER, and S. R. KARMESIN. "Three-dimensional deformable-grid electromagnetic particle-in-cell for parallel computers." Journal of Plasma Physics 61, no. 3 (April 1999): 367–89. http://dx.doi.org/10.1017/s0022377899007552.

Abstract:
We describe a new parallel, non-orthogonal-grid, three-dimensional electromagnetic particle-in-cell (EMPIC) code based on a finite-volume formulation. This code uses a logically Cartesian grid of deformable hexahedral cells, a discrete surface integral (DSI) algorithm to calculate the electromagnetic field, and a hybrid logical–physical space algorithm to push particles. We investigate the numerical instability of the DSI algorithm for non-orthogonal grids, analyse the accuracy for EMPIC simulations on non-orthogonal grids, and present performance benchmarks of this code on a parallel supercomputer. While the hybrid particle push algorithm has a second-order accuracy in space, the accuracy of the DSI field solve algorithm is between first and second order for non-orthogonal grids. The parallel implementation of this code, which is almost identical to that of a Cartesian-grid EMPIC code using domain decomposition, achieved a high parallel efficiency of over 96% for large-scale simulations.
20

Gao, Xing, Min Li, Juan Juan Huang, and Bing Chang Liu. "A Distributed Storage Algorithm Based on Cauchy RS Code." Applied Mechanics and Materials 336-338 (July 2013): 2188–94. http://dx.doi.org/10.4028/www.scientific.net/amm.336-338.2188.

Abstract:
Data fault tolerance is a key technology in the field of distributed storage. In this paper, an algorithm is proposed to encode massive amounts of data and then distribute and store these data across the nodes of a data center, aiming to cope with the serious challenges in guaranteeing data fault tolerance. The method converts the multiplication operations in Cauchy RS coding into binary operations over bits, so that the entire RS encoding is reduced to operations containing only simple XOR operators. Experiments show that the method outperforms both replication and the original RS coding in data encoding efficiency. Furthermore, it saves storage space and strongly promotes the application of erasure codes in the distributed storage field.
21

Li, Xing Wei, Yu Fei Zhou, Xiao Chuang Li, Zhen Shi Wang, and Ze Peng Wu. "Early Forest Fire Identification Algorithm of Infrared Thermal Imager." Advanced Materials Research 610-613 (December 2012): 3323–27. http://dx.doi.org/10.4028/www.scientific.net/amr.610-613.3323.

Abstract:
Forest fire identification by infrared thermal imager has become a new research trend. However, most current forest fire identification algorithms are designed based on the designer's experience, and few studies have examined the efficiency of those algorithms when they are used to identify forest fires in the field. In this study, the response of early forest fire identification algorithms for infrared thermal imagers was investigated across different field testing times and scales. The results show that the compare-means method is the most stable of the three methods tested for fire detection: it eliminates most interference while still meeting the requirement of long-range detection.
22

Catal, Cagatay, Serkan Tugul, and Basar Akpinar. "Automatic Software Categorization Using Ensemble Methods and Bytecode Analysis." International Journal of Software Engineering and Knowledge Engineering 27, no. 07 (September 2017): 1129–44. http://dx.doi.org/10.1142/s0218194017500425.

Abstract:
Software repositories consist of thousands of applications and the manual categorization of these applications into domain categories is very expensive and time-consuming. In this study, we investigate the use of an ensemble of classifiers approach to solve the automatic software categorization problem when the source code is not available. Therefore, we used three data sets (package level/class level/method level) that belong to 745 closed-source Java applications from the Sharejar repository. We applied the Vote algorithm, AdaBoost, and Bagging ensemble methods and the base classifiers were Support Vector Machines, Naive Bayes, J48, IBk, and Random Forests. The best performance was achieved when the Vote algorithm was used. The base classifiers of the Vote algorithm were AdaBoost with J48, AdaBoost with Random Forest, and Random Forest algorithms. We showed that the Vote approach with method attributes provides the best performance for automatic software categorization; these results demonstrate that the proposed approach can effectively categorize applications into domain categories in the absence of source code.
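A minimal sketch of the winning Vote configuration rebuilt with scikit-learn stand-ins (the study uses Weka; DecisionTreeClassifier plays the role of J48, and the data are synthetic):

```python
# Minimal sketch: a Vote ensemble of AdaBoost-with-tree, AdaBoost-with-RF,
# and plain random forest. Requires scikit-learn >= 1.2 for the 'estimator'
# keyword of AdaBoostClassifier.
from sklearn.ensemble import (AdaBoostClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=745, n_features=30, n_classes=3,
                           n_informative=10, random_state=0)

vote = VotingClassifier([
    ("ada_tree", AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=3))),
    ("ada_rf", AdaBoostClassifier(estimator=RandomForestClassifier(n_estimators=50))),
    ("rf", RandomForestClassifier(n_estimators=100)),
])
print("CV accuracy:", cross_val_score(vote, X, y, cv=5).mean())
```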
23

R., Vasanthi, and Tamilselvi, J. "Heart Disease Prediction Using Random Forest Algorithm." CARDIOMETRY, no. 24 (November 30, 2022): 982–88. http://dx.doi.org/10.18137/cardiometry.2022.24.982988.

Abstract:
Heart disease is a complex disease from which many people worldwide suffer. Timely and efficient identification of cardiovascular disease plays a key role in healthcare, particularly in the field of cardiology. An efficient and accurate system to diagnose cardiovascular disease can be built on machine learning techniques. The system is developed with classification algorithms using Random Forest, Naïve Bayes, and Support Vector Machine, while standard feature selection techniques such as univariate selection, feature importance, and the correlation matrix are used for removing irrelevant and redundant features. Feature selection is applied to increase the classification accuracy and reduce the execution time of the system. The approach aims at finding significant features by applying machine learning techniques, leading to improved accuracy in the prediction of heart disease. In the resulting heart disease prediction, Random Forest achieved good accuracy compared to the other algorithms.
24

Rosita, Rizka, Dwika Ananda Agustina Pertiwi, and Oktaria Gina Khoirunnisa. "Prediction of Hospital Intesive Patients Using Neural Network Algorithm." Journal of Soft Computing Exploration 3, no. 1 (March 30, 2022): 8–11. http://dx.doi.org/10.52465/joscex.v3i1.61.

Abstract:
This study aims to predict whether a patient should be admitted as an inpatient or treated as an outpatient by comparing several machine learning techniques, namely logistic regression, decision tree, neural network, random forest, and gradient boosting. The research method uses three stages: data collection, data preprocessing, and data modeling. The program code was implemented in Google Colab using the Python programming language. The dataset used as the research sample is Electronic Health Record Predicting data. Based on the accuracy results generated in this study, the neural network proved to be the machine learning algorithm with the highest accuracy rate for predicting hospitalization decisions, reaching 74.47%, compared with the other machine learning algorithms: logistic regression, decision tree, random forest, and gradient boosting.
25

Aggarwal, Alankrita, Kanwalvir Singh Dhindsa, and P. K. Suri. "Performance-Aware Approach for Software Risk Management Using Random Forest Algorithm." International Journal of Software Innovation 9, no. 1 (January 2021): 12–19. http://dx.doi.org/10.4018/ijsi.2021010102.

Abstract:
Software quality assurance and related methodologies are quite prominent before the actual launch of an application, so that any type of issue can be resolved with prior notification. The process of software evaluation is one of the key tasks addressed by quality assurance teams, so that risks in the software suite can be identified and removed with prior notification. Different types of metrics can be used in a defect prediction model; the most widely used are source code and process metrics. The focus of this research manuscript is to develop a novel architecture and design for software risk management using soft computing, integrated with a random forest approach that is expected to produce effective results on multiple parameters through its ensemble of decision trees. The proposed approach integrates a meta-heuristic framework with random forest across its different components to produce a new method.
26

Yang, Xiao Fei, and Chang Yuan Han. "Novel Algorithm for Computer-Aided Alignment of Wide Field of View Complex Optical System." Key Engineering Materials 364-366 (December 2007): 1066–71. http://dx.doi.org/10.4028/www.scientific.net/kem.364-366.1066.

Abstract:
With the advent of new complex optical systems, alignment technology has become necessary. This paper presents an alignment algorithm whose main idea combines the damped least squares method with singular value decomposition, and explains how and why the damping factor enters the algorithm. Based on this algorithm, an alignment software package was written and compared with the alignment package of CODE V. Results show that this self-developed software is considerably more effective than the alignment package of CODE V: for an R-C system with a 2° field of view, the average MTF over the field of view was greater than 0.3 at 50 line pairs/mm.
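A minimal sketch of one damped-least-squares step solved through SVD, the combination the paper describes (the sensitivity matrix and residuals below are toy values, not CODE V data):

```python
# Minimal sketch of a damped-least-squares (Levenberg-Marquardt style)
# alignment step via SVD. J is the sensitivity matrix (aberration change per
# misalignment degree of freedom), r the measured aberration residuals.
import numpy as np

rng = np.random.default_rng(7)
J = rng.normal(size=(12, 5))          # 12 measured aberrations, 5 alignment dofs
x_true = np.array([0.2, -0.1, 0.05, 0.0, 0.15])
r = J @ x_true + 1e-3 * rng.normal(size=12)

def damped_step(J, r, damping):
    # Solve (J^T J + damping * I) dx = J^T r via the SVD of J:
    # dx = V diag(s / (s^2 + damping)) U^T r
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    filt = s / (s ** 2 + damping)     # damping suppresses ill-conditioned modes
    return Vt.T @ (filt * (U.T @ r))

dx = damped_step(J, r, damping=1e-2)
print("estimated misalignments:", np.round(dx, 3))
```

The damping factor trades bias for stability: without it, small singular values of J amplify measurement noise into huge, unusable alignment moves.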
27

Xie, Qi, Gengguo Cheng, Xiao Zhang, and Lei Peng. "Feature Selection Using Improved Forest Optimization Algorithm." Information Technology And Control 49, no. 2 (June 16, 2020): 289–301. http://dx.doi.org/10.5755/j01.itc.49.2.24858.

Abstract:
Feature selection is one of the hottest topics in the field of machine learning and data mining. In 2016, feature selection using the forest optimization algorithm (FSFOA) was proposed, offering good classification performance and dimensionality reduction ability. However, FSFOA has some shortcomings. Feature Selection using an Improved Forest Optimization Algorithm (FSIFOA) is proposed in this article, which aims at solving the problems of FSFOA in the stages of random initialization, forming the candidate population, and updating the best tree. FSIFOA uses the Pearson correlation coefficient and the L1 regularization method to replace the random initialization strategy in the initialization stage; it separates good and bad trees and fills the quantity gap between them to solve the problem of category imbalance in the candidate population generation stage; and it adds trees of the same precision but different dimension than the best tree to the forest in the update stage. In the experiments, the new algorithm uses the same data and parameters as the traditional algorithm on small-, medium-, and large-dimensional data. The results show that the new algorithm improves the classification accuracy of the classifiers and increases the dimensionality reduction ratio compared with the traditional algorithms on the medium- and large-dimensional datasets.
28

Miller, Eric A. "A Conceptual Interpretation of the Drought Code of the Canadian Forest Fire Weather Index System." Fire 3, no. 2 (June 22, 2020): 23. http://dx.doi.org/10.3390/fire3020023.

Abstract:
The Drought Code (DC) was developed as part of the Canadian Forest Fire Weather Index System in the early 1970s to represent a deep column of soil that dries relatively slowly. Unlike most other fire danger indices or codes that operate on gravimetric moisture content and use the logarithmic drying equation to represent diffusion, the DC is based on a model that balances daily precipitation and evaporation. This conceptually simple water balance model was ultimately implemented using a “shortcut” equation that facilitated ledgering by hand but also mixed the water balance model with the abstraction equation, obscuring the logic of the model and concealing two important variables. An alternative interpretation of the DC is presented that returns the algorithm to an equivalent but conceptual form that offers several advantages: The simplicity of the underlying water balance model is retained with fewer variables, constants, and equations. Two key variables, daily depth of water storage and actual evaporation, are exposed. The English system of units is eliminated and two terms associated with precipitation are no longer needed. The reduced model does not include or depend on any soil attributes, confirming that the nature of the “DC equivalent soil” cannot be precisely known. While the “Conceptual Algorithm” presented here makes it easier to interpret and understand the logic of the DC, users may continue to use the equivalent “Implemented Algorithm” operationally if they wish.
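A minimal conceptual sketch of such a water-balance update (the constants below are illustrative stand-ins, not the official DC equations): stored water Q is recharged by effective rain, depleted by temperature-driven evaporation that slows as the soil dries, and reported as a dryness index on a log scale.

```python
# Minimal conceptual sketch of a Drought-Code-like daily water balance.
# All constants here are illustrative assumptions.
import math

Q_MAX = 203.2   # assumed full storage depth in mm (a deep, slow-drying column)

def daily_dc_update(Q, rain_mm, temp_c, day_length_factor):
    Q = min(Q + 0.83 * rain_mm, Q_MAX)                       # effective rain recharge
    pot_evap = max(0.36 * temp_c + day_length_factor, 0.0)   # potential evaporation proxy
    actual_evap = pot_evap * (Q / Q_MAX)                     # evaporation slows as soil dries
    Q = max(Q - actual_evap, 1e-6)
    dc = 400.0 * math.log(Q_MAX / Q)                         # drier soil -> higher DC
    return Q, dc

Q = Q_MAX * 0.5
for rain, temp in [(0, 22), (0, 25), (12, 18), (0, 24)]:
    Q, dc = daily_dc_update(Q, rain, temp, day_length_factor=1.5)
    print(f"storage={Q:6.1f} mm  DC={dc:6.1f}")
```

This mirrors the article's point: a conceptual form exposes the daily depth of stored water and the actual evaporation, the two variables the historical "shortcut" implementation concealed.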
29

Xie, Hui, Feng Hua Wang, and Zhi Tao Huang. "Blind Recognition of Reed-Solomon Codes Based on Histogram Statistic of Galois Field Spectra." Advanced Materials Research 791-793 (September 2013): 2088–91. http://dx.doi.org/10.4028/www.scientific.net/amr.791-793.2088.

Abstract:
We focus on the recognition of RS codes in a reverse-engineering communication system, and an algorithm based on spectral statistics is proposed that makes use of the spectral characteristics of RS codes. When some of the observed codewords contain errors, the spectral histogram statistic is computed, which can improve the probability of detection in a noisy environment. Experimental results illustrate the performance of the method, which achieves a 90% probability of detection for the (255, 223) RS code in the presence of a 0.5% code error rate.
30

Huang, J. P., Y. Q. Xing, and L. Qin. "REVIEW OF NOISE FILTERING ALGORITHM FOR PHOTON DATA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W10 (February 7, 2020): 105–10. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w10-105-2020.

Abstract:
As a continuation of the Ice, Cloud, and Land Elevation Satellite-1 (ICESat-1)/Geoscience Laser Altimeter System (GLAS), the Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2), which is equipped with the Advanced Topographic Laser Altimeter System (ATLAS), was successfully launched in 2018. Since ICESat-1/GLAS facilitated scientific results in the field of forest structure parameter estimation, how to use ICESat-2/ATLAS photon cloud data to estimate forest structure parameters has become a hotspot in the field of spaceborne photon data applications. However, because of the weak-photon characteristics of the ICESat-2/ATLAS system, the data are extremely susceptible to noise, which poses a challenge for subsequent accurate estimation of forest structural parameters. Aiming to filter out the noise photons, this paper introduces the advantages of the spaceborne lidar system ICESat-2/ATLAS over ICESat-1/GLAS and summarizes research on noise filtering algorithms for simulated photon-counting lidar (PCL) data and for spaceborne photon data.
31

Zhang, Yanhong, Meng Wang, and Yingfu Yu. "Telecommunications package recommendation algorithm based on Deep forest." Journal of Physics: Conference Series 2078, no. 1 (November 1, 2021): 012014. http://dx.doi.org/10.1088/1742-6596/2078/1/012014.

Abstract:
In view of the wide variety of telecom packages and the difficulty of adapting them to the needs of users, this paper introduces a recommendation model for telecom packages based on deep forests. The paper first analyzes the telecom package data and then optimizes the deep forest according to characteristics of these data, such as interleaved discrete and continuous attributes and high coupling, including the use of decision trees to discretize continuous features and the design of a continuous window-sliding mechanism. These methods improve the deep forest's ability to combine highly coupled features. Finally, the model optimization measures were verified by detailed experiments. The experimental results show that the optimized deep forest can be applied to the telecom package recommendation field: compared with shallow models and the unoptimized deep forest model, the deep forest model increased the F1 score by 5%, and after tuning the deep forest hyperparameters, the F1 score could be increased by a further 2%.
32

Butko, G. P. "Economic problems of forest management at current development stage." FORESTRY BULLETIN 24, no. 5 (October 2020): 66–73. http://dx.doi.org/10.18698/2542-1468-2020-5-66-73.

Abstract:
The article examines problems of forest management and the formation and development of forest management planning. It clarifies the features of the system of management and planning in the field of the use, protection, and other functions of forests, according to the current Forest Code of the Russian Federation and the concept of forest legislation of the Russian Federation based on the principles of sustainable forest management and the conservation of the biological diversity and other useful functions of forests. From the point of view of the practice and system of strategic forest management, specific issues regarding forest management objects are highlighted. An analytical method is used to obtain information about the natural-historical and economic conditions of the area where a forest management object is located, supported by an analysis of economic activities and a study of past experience of forest management in the field of the use, protection, and reproduction of forests. On the basis of the Forest Code, successive stages of forest management are identified, such as the design of forest areas and forest parks; the designation of operational, protective, and reserve forests; and the design of measures for the protection and reproduction of forests. Based on the theoretical review and the analysis of the 'results-costs' relationship, the main directions for the development of forest management are determined. The scientific novelty consists in defining the concept of the competitiveness of forest capital: achieving competitive advantages is possible on the basis of sustainable development as a factor ensuring economic stability. The structure of the forest management process includes progressive elements based on a balance between the growth and depletion of natural resources.
33

Bouyuklieva, Stefka, and Iliya Bouyukliev. "An Extension of the Brouwer–Zimmermann Algorithm for Calculating the Minimum Weight of a Linear Code." Mathematics 9, no. 19 (September 22, 2021): 2354. http://dx.doi.org/10.3390/math9192354.

Abstract:
A modification of the Brouwer–Zimmermann algorithm for calculating the minimum weight of a linear code over a finite field is presented. The aim is to reduce the number of codewords that must be considered. The reduction is significant in cases where the length of a code is not divisible by its dimension. The proposed algorithm can also be used to find all codewords of weight less than a given constant. The algorithm is implemented in the software package QextNewEdition.
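For orientation, here is a brute-force baseline for the quantity being computed, the minimum weight of a binary linear code (the Brouwer–Zimmermann algorithm avoids this exponential enumeration; GF(2) and the toy matrix are our simplifications):

```python
# Minimal sketch: exhaustive minimum-weight computation for a [6, 3] binary
# code. Cost grows as 2^k, which is exactly what Brouwer-Zimmermann prunes.
import itertools
import numpy as np

G = np.array([[1, 0, 0, 1, 1, 0],      # toy generator matrix
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]], dtype=np.uint8)
k, n = G.shape

def minimum_weight(G):
    best = n
    for msg in itertools.product((0, 1), repeat=k):
        if any(msg):                   # skip the zero codeword
            w = int(((np.array(msg, dtype=np.uint8) @ G) % 2).sum())
            best = min(best, w)
    return best

print("minimum weight:", minimum_weight(G))   # 3 for this toy code
```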
34

Duarte, Efraín, Juan A. Barrera, Francis Dube, Fabio Casco, Alexander J. Hernández, and Erick Zagal. "Monitoring Approach for Tropical Coniferous Forest Degradation Using Remote Sensing and Field Data." Remote Sensing 12, no. 16 (August 6, 2020): 2531. http://dx.doi.org/10.3390/rs12162531.

Abstract:
Current estimates of CO2 emissions from forest degradation are generally based on insufficient information and are characterized by high uncertainty, while a global definition of ‘forest degradation’ is still being discussed in the scientific arena. This study proposes an automated approach to monitor degradation using a Landsat time series. The methodology was developed using Google Earth Engine (GEE) and applied in a pine forest area of the Dominican Republic. Land cover change mapping was conducted using the random forest (RF) algorithm and resulted in a cumulative overall accuracy of 92.8%. Forest degradation was mapped with 70.7% user accuracy and 91.3% producer accuracy, and estimates of the degraded area had a margin of error of 10.8%. A total of 344 Landsat collections, corresponding to the period from 1990 to 2018, were used in the analysis, together with 51 sample plots from a forest inventory. The carbon stocks and emissions from forest degradation were estimated using the RF algorithm with an R2 of 0.78. GEE proved to be an appropriate tool to monitor the degradation of tropical forests, and the methodology developed herein is a robust, reliable, and replicable tool that could be used to estimate forest degradation and improve monitoring, reporting, and verification (MRV) systems under the reducing emissions from deforestation and forest degradation (REDD+) mechanism.
35

Bechina, I. "DISTINCTION OF POWERS BETWEEN THE STATE AUTHORITIES OF THE RUSSIAN FEDERATION AND SUBJECTS OF THE RUSSIAN FEDERATION IN THE FIELD OF FOREST RELATIONS." National Association of Scientists 5, no. 74 (December 30, 2021): 48–51. http://dx.doi.org/10.31618/nas.2413-5291.2021.5.74.549.

Abstract:
The purpose of the research is to improve the legislation on the delimitation of powers in the field of forest relations between the state authorities of the Russian Federation and its subjects. In the course of the research, the method of systemic legal analysis of regulatory legal acts was applied. As a result, proposals have been developed to amend Articles 81–83 of the Forest Code of the Russian Federation. The introduction of a mechanism for delineating powers in the field of forest relations between the federal and regional levels of government will eliminate legislative gaps and contradictions.
36

Sikhakolli, Srinivasa, and Asha Sikhakolli. "A Bacterial Foraging Algorithm with Random Forest Classifier for Detecting the Design Patterns in Source Code." International Journal of Intelligent Engineering and Systems 14, no. 2 (April 30, 2021): 95–105. http://dx.doi.org/10.22266/ijies2021.0430.09.

37

Ding, Jiaman, Weikang Fu, and Lianyin Jia. "Deep Forest and Pruned Syntax Tree-Based Classification Method for Java Code Vulnerability." Mathematics 11, no. 2 (January 15, 2023): 461. http://dx.doi.org/10.3390/math11020461.

Abstract:
The rapid development of J2EE (Java 2 Platform, Enterprise Edition) has brought unprecedented challenges to vulnerability mining. Current abstract syntax tree-based source code vulnerability classification methods do not eliminate irrelevant nodes when processing the abstract syntax tree, resulting in long training times and overfitting problems. Another problem is that different code structures are translated to the same sequence of tree nodes when abstract syntax trees are processed by depth-first traversal, so the depth-first algorithm loses semantic structure information, which reduces the accuracy of the model. Aiming at these two problems, we propose a deep forest and pruned syntax tree-based classification method (PSTDF) for Java code vulnerability. First, breadth-first traversal of the abstract syntax tree obtains the sequence of statement trees; next, pruning the statement trees removes irrelevant nodes; then we use a depth-first-based encoder to obtain the vector; and finally, we use a deep forest as the classifier to obtain classification results. Experiments on publicly accessible vulnerability datasets show that PSTDF can reduce the loss of semantic structure information and effectively remove the impact of redundant information.
APA, Harvard, Vancouver, ISO, and other styles
38

Ali Khan, Md Abbas, Mohammad Hanif Ali, A. K. M. Fazlul Haque, Chandan Debnath, and Shohag Kumar Bhowmik. "An efficient and optimized tracking framework through optimizing algorithm in a deep forest using NFC." Indonesian Journal of Electrical Engineering and Computer Science 19, no. 2 (August 1, 2020): 884. http://dx.doi.org/10.11591/ijeecs.v19.i2.pp884-889.

Full text
Abstract:
NFC is applied in various fields of contemporary technology, largely because its tags are convenient to use in almost any location. One facility that can be added to a tracking system is Near Field Communication, used to guide tourists through a deep forest or any other location. In a deep forest, tracking and location-detection activities, such as finding a desired path, need to be performed efficiently. At present, tracking in deep forests relies on guides or local residents; in restricted areas such as the “Sundarban” forest, for example, outside visitors are not allowed to travel through the jungle without an authorized guide, which is not an efficient way to travel. The use of Near Field Communication can address the problems of losing one's way and of safety, and can help travelers reach their desired destination without human guides. NFC tags holding mapping information of the area are set up in sequence on trees along the route.
APA, Harvard, Vancouver, ISO, and other styles
39

McFadden, Joseph P., Neil W. MacDonald, John A. Witter, and Donald R. Zak. "Fine-textured soil bands and oak forest productivity in northwestern lower Michigan, U.S.A." Canadian Journal of Forest Research 24, no. 5 (May 1, 1994): 928–33. http://dx.doi.org/10.1139/x94-122.

Full text
Abstract:
The relationship between fine-textured soil bands and forest productivity was studied by comparing three mixed-oak (Quercus rubra L. and Quercus alba L.) stands that had little or no fine-textured banding with three stands that had bands. The degree to which soil factors could account for differences in productivity between banded and unbanded stands was examined using two methods, one based on field observations (banding codes) and the other based on laboratory textural analysis. Because stand ages were not significantly different, overstory biomass was used as an index of productivity. Mean overstory biomass in the banded stands was 312 Mg/ha, significantly greater than 170 Mg/ha measured in the unbanded stands. Mean percent clay + silt and mean banding code also were significantly higher in banded than in unbanded stands. Linear regression analysis indicated that mean percent clay + silt accounted for 57% of the variation in overstory biomass, whereas mean banding code accounted for 40% of the variation. In the oak stands we studied, variation in productivity can be explained largely by differences in soil texture associated with fine-textured bands. We also found a positive relationship between mean banding code and mean percent clay + silt (r2 = 0.90), which suggests that the field method of quantifying banding can produce values that are highly correlated with soil texture and, by extension, forest productivity.
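As a worked example of the regression analysis reported above, the sketch below fits biomass against mean percent clay + silt; the stand values are hypothetical, since the paper's raw data are not reproduced here.

```python
# Illustrative only: simple linear regression of overstory biomass on
# mean percent clay + silt, with hypothetical stand values (the paper
# reports that clay + silt explained 57% of the variation in biomass).
import numpy as np
from sklearn.linear_model import LinearRegression

clay_silt = np.array([[8.0], [10.5], [12.0], [21.0], [24.5], [27.0]])  # %
biomass = np.array([150.0, 185.0, 175.0, 330.0, 290.0, 315.0])         # Mg/ha

fit = LinearRegression().fit(clay_silt, biomass)
print("slope:", fit.coef_[0], "R^2:", fit.score(clay_silt, biomass))
```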
APA, Harvard, Vancouver, ISO, and other styles
40

Lv, Fei, and Min Han. "Hyperspectral Image Classification Based on Improved Rotation Forest Algorithm." Sensors 18, no. 11 (October 23, 2018): 3601. http://dx.doi.org/10.3390/s18113601.

Full text
Abstract:
Hyperspectral image classification is a hot issue in the field of remote sensing. High accuracy and strong generalization can be achieved with a good classification method for processing image data. In this paper, an efficient hyperspectral image classification method based on an improved Rotation Forest (ROF), named ROF-KELM, is proposed. Firstly, non-negative matrix factorization (NMF) is used for feature segmentation in order to obtain more effective data. Secondly, a kernel extreme learning machine (KELM) is chosen as the base classifier to improve classification efficiency. The proposed method inherits the advantages of KELM and has an analytic solution that directly implements multiclass classification. Then, the Q-statistic is used to select base classifiers. Finally, the results are obtained by voting. Three simulation examples, the classification of an AVIRIS image, a ROSIS image, and UCI public data sets respectively, are conducted to demonstrate the effectiveness of the proposed method.
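A minimal sketch of two of the building blocks named above, NMF feature reduction and a closed-form KELM, is given below; the rotation-forest ensemble and Q-statistic selection are omitted, and the data, kernel, and regularization settings are all assumptions.

```python
# Sketch of ROF-KELM building blocks under stated assumptions: NMF for
# feature segmentation and a minimal kernel extreme learning machine.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.metrics.pairwise import rbf_kernel

def kelm_fit(X, y, C=100.0, gamma=0.5):
    """Closed-form KELM: beta = (K + I/C)^-1 T, with T one-hot targets."""
    T = np.eye(y.max() + 1)[y]
    K = rbf_kernel(X, X, gamma=gamma)
    return np.linalg.solve(K + np.eye(len(X)) / C, T)

def kelm_predict(X_train, beta, X_test, gamma=0.5):
    return rbf_kernel(X_test, X_train, gamma=gamma).dot(beta).argmax(axis=1)

rng = np.random.default_rng(1)
X = rng.random((200, 50))        # hypothetical hyperspectral pixels
y = rng.integers(0, 3, 200)      # hypothetical class labels
Z = NMF(n_components=10, random_state=1).fit_transform(X)  # feature segmentation
beta = kelm_fit(Z[:150], y[:150])
print("accuracy:", (kelm_predict(Z[:150], beta, Z[150:]) == y[150:]).mean())
```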
APA, Harvard, Vancouver, ISO, and other styles
41

Diamantopoulou, Maria J. "Simulation of over-bark tree bole diameters, through the RFr (Random Forest Regression) algorithm." Folia Oecologica 49, no. 2 (July 1, 2022): 93–101. http://dx.doi.org/10.2478/foecol-2022-0010.

Full text
Abstract:
The difficulty of locating and measuring over-bark tree bole diameters at heights far from the ground is a serious problem in ground-truth data measurement in the field. This problem can be addressed through the application of intelligent systems methods. The paper explores the possibility of applying the Random Forest regression (RFr) method in order to assess, as accurately as possible, the size of the tree bole diameters at any height above the ground, using data that can be easily measured in the field. For this purpose, diameter measurements of pine trees (Pinus brutia Ten.) from the Seich–Sou urban forest of Thessaloniki, Greece, were used. The effectiveness of the Random Forest regression technique is compared with the results of non-linear regression models fitted to the available data and evaluated. This research has shown that the RFr method can be a reliable alternative methodology for obtaining accurate information from the model, saving time and effort in the field.
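The sketch below shows the general shape of such a random forest regression, assuming easily measured predictors (DBH, total height, target height along the bole) and a synthetic taper relation; it is not the paper's model or data.

```python
# Minimal sketch with assumed variables: random-forest regression of the
# over-bark bole diameter at height h from ground-measured quantities.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 500
dbh = rng.uniform(10, 50, n)            # diameter at breast height, cm
height = rng.uniform(8, 25, n)          # total tree height, m
h = rng.uniform(0.05, 0.9, n) * height  # measurement height along the bole, m
# Simple taper relation plus noise, purely for illustration:
d_h = dbh * (1 - h / height) ** 0.7 + rng.normal(0, 1, n)

X = np.column_stack([dbh, height, h])
rfr = RandomForestRegressor(n_estimators=300, random_state=2).fit(X, d_h)
print("R^2 on training data:", rfr.score(X, d_h))
```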
APA, Harvard, Vancouver, ISO, and other styles
42

Ito, Kengo, Noriyuki Kadoya, Yoshiyuki Katsuta, Shohei Tanaka, Suguru Dobashi, Ken Takeda, and Keiichi Jingu. "Evaluation of the electron transport algorithm in magnetic field in EGS5 Monte Carlo code." Physica Medica 93 (January 2022): 46–51. http://dx.doi.org/10.1016/j.ejmp.2021.12.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Gladkikh, Anatolii A., Anastasiia D. Bakurova, Artem V. Menovshchikov, Basem A. S. Said, and Sergei V. Shakhtanov. "FRACTAL CLUSTERING OF GROUP CODES IN THE SYSTEM OF GALOIS FIELD." Автоматизация Процессов Управления 62, no. 4 (2020): 85–92. http://dx.doi.org/10.35752/1991-2927-2020-4-62-85-92.

Full text
Abstract:
Permutation decoding (PD) of systematic group noise-immune codes has proved to be the most efficient method of exploiting the redundancy introduced into a code, compared with other methods of decoding digital data [1-5]. It opens up the possibility of solving the complex computational problem of finding an equivalent code (EC), which is used to search for the error vector. The essence of the solution is that the real-time search for an EC for each new combination of the redundant code is replaced by a preliminary process of training the decoder, in which each new permutation of symbols is matched to the parameters of the generating matrix of the EC, recorded in the decoder's memory card during training. Such a memory card is called a cognitive card (CC). The article estimates the memory size of the CC when the (15,7,5) block code is used and, on the basis of proven statements, shows the possibility of implementing a permutation decoder with existing integrated circuits. For the first time, the apparatus of fractal partitioning of extended binary Galois fields, based on clustering the common space of code vectors of a given code, is used to prove the main statements. An efficient algorithm is presented for finding the permutations whose matrices are not invertible and therefore do not yield an EC; for this reason, they should be detected first in the procedure of decoding the received code vectors.
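The computational core of this permutation search, testing whether a permuted generator matrix contains an invertible k x k block over GF(2), can be sketched as follows; the abstract's (15,7,5) code is replaced here by the small (7,4) Hamming code to keep the example short.

```python
# Conceptual sketch, not the paper's algorithm: decide whether a symbol
# permutation yields an equivalent code, i.e. whether the first k permuted
# columns of the generator matrix form an invertible block over GF(2).
import random

G = [  # systematic generator matrix of the (7,4) Hamming code
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def gf2_invertible(M):
    """Gauss-Jordan elimination over GF(2); True if the square matrix is invertible."""
    M = [row[:] for row in M]
    n = len(M)
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col]), None)
        if pivot is None:
            return False
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                M[r] = [a ^ b for a, b in zip(M[r], M[col])]
    return True

random.seed(3)
perm = random.sample(range(7), 7)                   # a candidate symbol permutation
block = [[row[j] for j in perm[:4]] for row in G]   # first k permuted columns
print(perm, "yields an EC:", gf2_invertible(block))
```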
APA, Harvard, Vancouver, ISO, and other styles
44

Boulze, Hugo, Anton Korosov, and Julien Brajard. "Classification of Sea Ice Types in Sentinel-1 SAR Data Using Convolutional Neural Networks." Remote Sensing 12, no. 13 (July 7, 2020): 2165. http://dx.doi.org/10.3390/rs12132165.

Full text
Abstract:
A new algorithm for classification of sea ice types on Sentinel-1 Synthetic Aperture Radar (SAR) data using a convolutional neural network (CNN) is presented. The CNN is trained on reference ice charts produced by human experts and compared with an existing machine learning algorithm based on texture features and random forest classifier. The CNN is trained on two datasets in 2018 and 2020 for retrieval of four classes: ice free, young ice, first-year ice and old ice. The accuracy of our classification is 90.5% for the 2018-dataset and 91.6% for the 2020-dataset. The uncertainty is a bit higher for young ice (85%/76% accuracy in 2018/2020) and first-year ice (86%/84% accuracy in 2018/2020). Our algorithm outperforms the existing random forest product for each ice type. It has also proved to be more efficient in computing time and less sensitive to the noise in SAR data. The code is publicly available.
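A toy PyTorch sketch of a patch classifier of this general kind is shown below; the two-channel (HH/HV) input, patch size, and layer sizes are assumptions for illustration, not the authors' architecture.

```python
# Illustrative sketch: a small CNN mapping dual-polarisation SAR patches
# (2 channels assumed) to four ice classes.
import torch
import torch.nn as nn

class IceCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):
        return self.head(self.features(x))

model = IceCNN()
patch = torch.randn(8, 2, 50, 50)  # batch of hypothetical 50x50 SAR patches
print(model(patch).shape)          # torch.Size([8, 4]) class logits
```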
APA, Harvard, Vancouver, ISO, and other styles
45

Xu, Qingxiang, and Jiesen Yin. "Application of Random Forest Algorithm in Physical Education." Scientific Programming 2021 (September 9, 2021): 1–10. http://dx.doi.org/10.1155/2021/1996904.

Full text
Abstract:
Learning has been a significant field for several decades, since it is a great determinant of the world's civilization and evolution, with a significant impact on both individuals and communities. In general, improving existing learning activities has a great influence on global literacy rates. The assessment technique is one of the most important activities in education, since it is the major method for evaluating students during their studies. In the new era of higher education, it is clearly stipulated that the administration of higher education should develop an intelligent, diversified teaching evaluation model that can assess the performance of students' physical education activities and grades and pay attention to the development of students' personalities and potential. Given the importance of an intelligent model for physical education, this paper uses factor analysis and an improved random forest algorithm to reduce the dimensions of students' multidisciplinary achievements in physical education to a few typical factors, which helps to improve the performance of the students. According to the scores of students at each factor level, the proposed system can evaluate students' achievements more comprehensively. In the empirical teaching research on students' grade evaluation, the improved iterative random forest algorithm is used for the first time. The automatic evaluation of students' grades is achieved based on the students' grades in various disciplines and the number of factors indicating the students' performance. In a series of experiments, the performance of the proposed improved random forest algorithm was compared with other machine learning models. The experimental results show that the proposed model performed better than the other machine learning models, attaining an accuracy of 88.55%, a precision of 88.21%, a recall of 95.86%, and an f1-score of 0.9187. The implementation of the proposed system is anticipated to be very helpful for the physical education system.
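The factor-analysis-then-random-forest pipeline can be sketched with scikit-learn as below; the subject scores and labels are synthetic, and the paper's improved iterative variant is not reproduced.

```python
# Sketch under stated assumptions: factor analysis compresses multi-
# discipline scores into a few factors, then a random forest classifies
# on the factor scores.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
scores = rng.uniform(40, 100, (400, 12))        # hypothetical marks in 12 subjects
grade = (scores.mean(axis=1) > 70).astype(int)  # hypothetical pass/fail label

model = make_pipeline(
    FactorAnalysis(n_components=3, random_state=4),
    RandomForestClassifier(n_estimators=200, random_state=4),
)
print("CV accuracy:", cross_val_score(model, scores, grade, cv=5).mean())
```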
APA, Harvard, Vancouver, ISO, and other styles
46

Petrov, George M., and Jack Davis. "Parallelization of an Implicit Algorithm for Multi-Dimensional Particle-in-Cell Simulations." Communications in Computational Physics 16, no. 3 (September 2014): 599–611. http://dx.doi.org/10.4208/cicp.070813.280214a.

Full text
Abstract:
The implicit 2D3V particle-in-cell (PIC) code developed to study the interaction of ultrashort pulse lasers with matter [G. M. Petrov and J. Davis, Computer Phys. Comm. 179, 868 (2008); Phys. Plasmas 18, 073102 (2011)] has been parallelized using MPI (Message Passing Interface). The parallelization strategy is optimized for a small number of computer cores, up to about 64. Details on the algorithm implementation are given with emphasis on code optimization by overlapping computations with communications. Performance evaluation for 1D domain decomposition has been made on a small Linux cluster with 64 computer cores for two typical regimes of PIC operation: “particle dominated”, for which the bulk of the computation time is spent on pushing particles, and “field dominated”, for which computing the fields is prevalent. For a small number of computer cores, less than 32, the MPI implementation offers a significant numerical speed-up. In the “particle dominated” regime it is close to the maximum theoretical one, while in the “field dominated” regime it is about 75-80 % of the maximum speed-up. For a number of cores exceeding 32, performance degradation takes place as a result of the adopted 1D domain decomposition. The code parallelization will allow future implementation of atomic physics and extension to three dimensions.
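A conceptual mpi4py sketch of the 1D domain decomposition described above follows; the grid size is arbitrary, the field solve and particle push are left as stubs, and the blocking exchange shown here stands in for the overlapped communication the paper describes.

```python
# Conceptual sketch, not the authors' code: 1D domain decomposition with
# one ghost cell exchanged per side each step.
# Run with e.g.: mpiexec -n 4 python pic_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nx_local = 100                  # interior grid cells per rank (arbitrary)
field = np.zeros(nx_local + 2)  # one ghost cell on each side

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(10):
    # ... particle push and charge deposition on the local slab go here ...
    # ghost-cell exchange with the two neighbours (blocking form for brevity):
    comm.Sendrecv(field[1:2], dest=left, recvbuf=field[-1:], source=right)
    comm.Sendrecv(field[-2:-1], dest=right, recvbuf=field[0:1], source=left)
    # ... local field solve goes here ...
```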
APA, Harvard, Vancouver, ISO, and other styles
47

Noorbasha, Fazal, and K. Suresh. "FPGA implementation of RGB image encryption and decryption using DNA cryptography." International Journal of Engineering & Technology 7, no. 2.8 (March 19, 2018): 397. http://dx.doi.org/10.14419/ijet.v7i2.8.10469.

Full text
Abstract:
The rapid growth of digitization has increased the transmission of information in the form of RGB images. During transmission of an image through a channel, some data may be corrupted by noise, so errors in the data have to be detected and corrected at the receiver side. The Hamming code is one of the popular techniques for error detection and correction. In this paper, a new algorithm is proposed for the encryption and decryption of RGB images using DNA cryptography and Hamming codes for secure transmission and error correction. The algorithm first encodes the data with a Hamming code and then encrypts it as a DNA code. Two-bit error detection and correction can be performed for each pixel of the image: the DNA code improves security, while the Hamming code provides error detection and correction. For a 256*256-pixel image, it corrects up to 2*256*256 bits in the RGB image. The RGB image encryption and decryption design is written in Verilog and implemented on an FPGA (Field Programmable Gate Array).
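The two encoding stages, Hamming(7,4) on each nibble followed by a 2-bit-to-base DNA mapping, can be sketched in a few lines; the base-mapping table below is an assumption, and the paper's Verilog/FPGA realization is not reproduced.

```python
# Hedged sketch of the two encoding stages the abstract describes; the
# 2-bit -> DNA base table is assumed, not taken from the paper.
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],   # systematic Hamming(7,4) generator
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
BASE = {(0, 0): 'A', (0, 1): 'C', (1, 0): 'G', (1, 1): 'T'}

def encode_channel(value):
    """One 8-bit channel value -> two Hamming(7,4) codewords -> DNA string."""
    bits = [int(b) for b in f"{value:08b}"]
    codeword = []
    for nibble in (bits[:4], bits[4:]):
        codeword.extend(np.array(nibble).dot(G) % 2)  # 7 code bits per nibble
    pairs = zip(codeword[::2], codeword[1::2])        # 14 bits -> 7 base pairs
    return ''.join(BASE[(a, b)] for a, b in pairs)

print(encode_channel(173))  # e.g. the red channel of one pixel
```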
APA, Harvard, Vancouver, ISO, and other styles
48

Memon, Farida, Aamir Hussain Memon, Shahnawaz Talpur, Fayaz Ahmed Memon, and Rafia Naz Memon. "Design and Co-Simulation of Depth Estimation Using Simulink HDL Coder and Modelsim." July 2016 35, no. 3 (July 1, 2016): 473–82. http://dx.doi.org/10.22581/muet1982.1603.17.

Full text
Abstract:
In this paper, a novel VHDL design procedure for a depth estimation algorithm using HDL (Hardware Description Language) Coder is presented. A framework is developed that takes a depth estimation algorithm described in MATLAB as input and generates VHDL code, which dramatically decreases the time required to implement an application on FPGAs (Field Programmable Gate Arrays). In the first phase, the design is carried out in MATLAB. Using HDL Coder, the MATLAB floating-point design is converted to an efficient fixed-point design, and VHDL code and a test-bench are generated from the fixed-point MATLAB code. Further, the generated VHDL code is verified by co-simulation using Mentor Graphics ModelSim 10.3d software. Simulation results are presented which indicate that the VHDL simulations match the MATLAB simulations and confirm the efficiency of the presented methodology.
APA, Harvard, Vancouver, ISO, and other styles
49

M.M Jibril, Aliyu Bello, Ismail I Aminu, Awaisu Shafiu Ibrahim, Abba Bashir, Salim Idris Malami, Habibu M.A, and Mohammed Mukhtar Magaji. "An overview of streamflow prediction using random forest algorithm." GSC Advanced Research and Reviews 13, no. 1 (October 30, 2022): 050–57. http://dx.doi.org/10.30574/gscarr.2022.13.1.0112.

Full text
Abstract:
Since the first applications of Artificial Intelligence in the field of hydrology, there has been a great deal of interest in its future enhancements to the discipline, as evidenced by the increasing number of relevant publications. Random forests (RF) are supervised machine learning algorithms that have lately gained popularity in water resource applications and have been used in a variety of water resource research domains, including discharge simulation. Random forest could be an alternative to physical and conceptual hydrological models for large-scale hazard assessment in various catchments, owing to its low setup and operating costs. Existing applications, however, are usually limited to implementations of Breiman's original algorithm for regression and classification problems, even though several later developments could be useful in handling a variety of practical challenges in the water sector. Here we introduce RF and its variants for working water scientists, and examine related concepts and techniques that have received less attention from the water science and hydrologic communities. In doing so, we review RF applications in water resources, including streamflow prediction, emphasize the capability of the original algorithm and its extensions, and identify the level of RF exploitation in a variety of applications.
APA, Harvard, Vancouver, ISO, and other styles
50

Wang, Baozhong, Jyotsna Sharma, Jianhua Chen, and Patricia Persaud. "Ensemble Machine Learning Assisted Reservoir Characterization Using Field Production Data–An Offshore Field Case Study." Energies 14, no. 4 (February 17, 2021): 1052. http://dx.doi.org/10.3390/en14041052.

Full text
Abstract:
Estimation of fluid saturation is an important step in dynamic reservoir characterization. Machine learning techniques have been increasingly used in recent years in reservoir saturation prediction workflows. However, most of these studies require input parameters derived from cores, petrophysical logs, or seismic data, which may not always be readily available. Additionally, very few studies incorporate production data, which is an important reflection of dynamic reservoir properties and also typically the most frequently and reliably measured quantity throughout the life of a field. In this research, the random forest ensemble machine learning algorithm is implemented, using the field-wide production and injection data (both measured at the surface) as the only input parameters, to predict time-lapse oil saturation profiles at well locations. The algorithm is optimized using feature selection based on the feature importance score and the Pearson correlation coefficient, in combination with geophysical domain knowledge. The workflow is demonstrated using actual field data from a structurally complex, heterogeneous, and heavily faulted offshore reservoir. The random forest model captures the trends from three and a half years of historical field production, injection, and simulated saturation data to predict future time-lapse oil saturation profiles at four deviated well locations with over 90% R-square, less than 6% Root Mean Square Error, and less than 7% Mean Absolute Percentage Error in each case.
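The feature-selection idea, ranking inputs by random forest importance and by Pearson correlation with the target, can be sketched as follows on synthetic stand-in data; this is not the authors' workflow or dataset.

```python
# Sketch with assumed synthetic production/injection features: rank inputs
# by random-forest importance and by Pearson correlation, then keep the
# strongest ones.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
X = rng.random((600, 8))                             # e.g. well rates, pressures (hypothetical)
y = X[:, 0] * 2 + X[:, 3] + rng.normal(0, 0.1, 600)  # stand-in for oil saturation

rf = RandomForestRegressor(n_estimators=300, random_state=5).fit(X, y)
importance = rf.feature_importances_
pearson = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])

keep = np.argsort(importance)[::-1][:4]  # top features by importance
print("kept features:", keep, "|r| of kept:", pearson[keep].round(2))
```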
APA, Harvard, Vancouver, ISO, and other styles