Academic literature on the topic 'Field Code Forest Algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Field Code Forest Algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Field Code Forest Algorithm"

1

Halladin-Dąbrowska, Anna, Adam Kania, and Dominik Kopeć. "The t-SNE Algorithm as a Tool to Improve the Quality of Reference Data Used in Accurate Mapping of Heterogeneous Non-Forest Vegetation." Remote Sensing 12, no. 1 (December 20, 2019): 39. http://dx.doi.org/10.3390/rs12010039.

Full text
Abstract:
Supervised classification methods, used for many applications, including vegetation mapping require accurate “ground truth” to be effective. Nevertheless, it is common for the quality of this data to be poorly verified prior to it being used for the training and validation of classification models. The fact that noisy or erroneous parts of the reference dataset are not removed is usually explained by the relatively high resistance of some algorithms to errors. The objective of this study was to demonstrate the rationale for cleaning the reference dataset used for the classification of heterogeneous non-forest vegetation, and to present a workflow based on the t-distributed stochastic neighbor embedding (t-SNE) algorithm for the better integration of reference data with remote sensing data in order to improve outcomes. The proposed analysis is a new application of the t-SNE algorithm. The effectiveness of this workflow was tested by classifying three heterogeneous non-forest Natura 2000 habitats: Molinia meadows (Molinion caeruleae; code 6410), species-rich Nardus grassland (code 6230) and dry heaths (code 4030), employing two commonly used algorithms: random forest (RF) and AdaBoost (AB), which, according to the literature, differ in their resistance to errors in reference datasets. Polygons collected in the field (on-ground reference data) in 2016 and 2017, containing no intentional errors, were used as the on-ground reference dataset. The remote sensing data used in the classification were obtained in 2017 during the peak growing season by a HySpex sensor consisting of two imaging spectrometers covering spectral ranges of 0.4–0.9 μm (VNIR-1800) and 0.9–2.5 μm (SWIR-384). The on-ground reference dataset was gradually cleaned by verifying candidate polygons selected by visual interpretation of t-SNE plots. Around 40–50% of candidate polygons were ultimately found to contain errors. Altogether, 15% of reference polygons were removed. As a result, the quality of the final map, as assessed by the Kappa and F1 accuracy measures as well as by visual evaluation, was significantly improved. The global map accuracy increased by about 6% (in Kappa coefficient), relative to the baseline classification obtained using random removal of the same number of reference polygons.
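For readers who want to experiment with the idea, the sketch below illustrates the general workflow the abstract describes: embed per-polygon mean spectra with t-SNE and flag polygons that sit far from their class in the embedding. It is not the authors' code; the band count, class labels, and the 90th-percentile flagging rule are illustrative assumptions (the paper relies on visual interpretation of the t-SNE plots).

```python
# Illustrative sketch (not the authors' code): embed per-polygon mean spectra
# with t-SNE and flag polygons that drift far from their class centroid.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
n_polygons, n_bands = 300, 430               # hypothetical HySpex-like band count
X = rng.normal(size=(n_polygons, n_bands))   # stand-in for mean polygon spectra
y = rng.integers(0, 3, size=n_polygons)      # habitat codes, e.g. 6410/6230/4030

emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

# Flag polygons whose embedding lies unusually far from their class centroid;
# in the paper this step was done by visual interpretation of the t-SNE plots.
flags = np.zeros(n_polygons, dtype=bool)
for c in np.unique(y):
    pts = emb[y == c]
    d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    flags[np.where(y == c)[0][d > np.percentile(d, 90)]] = True
print(f"{flags.sum()} candidate polygons flagged for field re-checking")
```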
APA, Harvard, Vancouver, ISO, and other styles
2

Zhang, Wenyu, Yaqun Zhao, and Sijie Fan. "Cryptosystem Identification Scheme Based on ASCII Code Statistics." Security and Communication Networks 2020 (December 15, 2020): 1–10. http://dx.doi.org/10.1155/2020/8875864.

Full text
Abstract:
In the field of information security, block cipher is widely used in the protection of messages, and its safety naturally attracts people’s attention. The identification of the cryptosystem is the premise of encrypted data analysis. It belongs to the category of attack analysis in cryptanalysis and has important theoretical significance and application value. This paper focuses on the extraction of ciphertext features and the construction of cryptosystem identification classifiers. The main contents and innovations of this paper are as follows. Firstly, inspired by language processing, we propose the feature extraction scheme based on ASCII statistics of ciphertexts which decrease the dimension of data preprocessing. Secondly, on the basis of previous work, we increase the types of block ciphers to eight, encrypt plaintext of the same sizes as experimental objects, and recognize the cryptosystem. Thirdly, we use two machine learning classifiers to perform classification experiments including random forest and SVM. The experimental results show that our scheme can not only improve the identification accuracy of 8 typical block cipher algorithms but also shorten the experimental time and reduce the computation load by greatly minimizing the dimension of the feature vector. And the various evaluation indicators obtained by the scheme have been greatly improved compared with the existing published literature.
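A minimal sketch of the described feature-extraction and classification pipeline follows; the "ciphertexts" are random placeholders rather than real block-cipher output, and the sample sizes and model settings are assumptions.

```python
# Illustrative sketch of the described pipeline: byte-value (ASCII) frequency
# statistics as ciphertext features, classified with random forest and SVM.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_samples, ct_len, n_ciphers = 800, 4096, 8
ciphertexts = rng.integers(0, 256, size=(n_samples, ct_len), dtype=np.uint8)
labels = rng.integers(0, n_ciphers, size=n_samples)   # which cipher produced it

# Feature vector: normalised 256-bin histogram of byte values per ciphertext.
features = np.stack([np.bincount(c, minlength=256) / ct_len for c in ciphertexts])

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
for model in (RandomForestClassifier(n_estimators=200, random_state=0),
              SVC(kernel="rbf", C=1.0)):
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(type(model).__name__, f"accuracy = {acc:.2f}")  # ~chance on random data
```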
APA, Harvard, Vancouver, ISO, and other styles
3

Müller, Hendrik, Christoph Behrens, and David J. E. Marsh. "An optimized Ly α forest inversion tool based on a quantitative comparison of existing reconstruction methods." Monthly Notices of the Royal Astronomical Society 497, no. 4 (August 6, 2020): 4937–55. http://dx.doi.org/10.1093/mnras/staa2225.

Full text
Abstract:
ABSTRACT We present a same-level comparison of the most prominent inversion methods for the reconstruction of the matter density field in the quasi-linear regime from the Ly α forest flux. Moreover, we present a pathway for refining the reconstruction in the framework of numerical optimization. We apply this approach to construct a novel hybrid method. The methods which are used so far for matter reconstructions are the Richardson–Lucy algorithm, an iterative Gauss–Newton method and a statistical approach assuming a one-to-one correspondence between matter and flux. We study these methods for high spectral resolutions such that thermal broadening becomes relevant. The inversion methods are compared on synthetic data (generated with the lognormal approach) with respect to their performance, accuracy, their stability against noise, and their robustness against systematic uncertainties. We conclude that the iterative Gauss–Newton method offers the most accurate reconstruction, in particular at small S/N, but has also the largest numerical complexity and requires the strongest assumptions. The other two algorithms are faster, comparably precise at small noise-levels, and, in the case of the statistical approach, more robust against inaccurate assumptions on the thermal history of the intergalactic medium (IGM). We use these results to refine the statistical approach using regularization. Our new approach has low numerical complexity and makes few assumptions about the history of the IGM, and is shown to be the most accurate reconstruction at small S/N, even if the thermal history of the IGM is not known. Our code will be made publicly available.
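The "statistical approach assuming a one-to-one correspondence between matter and flux" can be illustrated in a few lines; the parameter values below are assumptions, not the paper's calibration, and the clipping thresholds are arbitrary.

```python
# Minimal sketch of the one-to-one flux-to-density inversion mentioned above,
# using the fluctuating Gunn-Peterson form F = exp(-A * Delta**beta).
import numpy as np

A, beta = 0.3, 1.6          # assumed effective optical depth / slope parameters

def density_from_flux(flux, a=A, b=beta, f_min=1e-3, f_max=0.999):
    """Invert transmitted flux to matter overdensity Delta, clipping the flux
    away from 0 and 1 where the one-to-one relation becomes ill-conditioned."""
    f = np.clip(flux, f_min, f_max)
    return (-np.log(f) / a) ** (1.0 / b)

flux = np.array([0.2, 0.5, 0.8, 0.95])
print(density_from_flux(flux))
```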
APA, Harvard, Vancouver, ISO, and other styles
4

Taghlabi, Faycal, Laila Sour, and Ali Agoumi. "Prelocalization and leak detection in drinking water distribution networks using modeling-based algorithms: a case study for the city of Casablanca (Morocco)." Drinking Water Engineering and Science 13, no. 2 (September 21, 2020): 29–41. http://dx.doi.org/10.5194/dwes-13-29-2020.

Full text
Abstract:
Abstract. The role of a drinking water distribution network (DWDN) is to supply high-quality water at the necessary pressure at various times of the day for several consumption scenarios. Locating and identifying water leakage areas has become a major concern for managers of the water supply, to optimize and improve constancy of supply. In this paper, we present the results of field research conducted to detect and to locate leaks in the DWDN focusing on the resolution of the Fixed And Variable Area Discharge (FAVAD) equation by use of the prediction algorithms in conjunction with hydraulic modeling and the Geographical Information System (GIS). The leak localization method is applied in the oldest part of Casablanca. We have used, in this research, two methodologies in different leak episodes: (i) the first episode is based on a simulation of artificial leaks on the MATLAB platform using the EPANET code to establish a database of pressures that describes the network's behavior in the presence of leaks. The data thus established have been fed into a machine learning algorithm called random forest, which will forecast the leakage rate and its location in the network; (ii) the second was field-testing a real simulation of artificial leaks by opening and closing of hydrants, on different locations with a leak size of 6 and 17 L s−1. The two methods converged to comparable results. The leak position is spotted within a 100 m radius of the actual leaks.
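The machine-learning half of methodology (i) can be sketched as follows; real training pressures would come from EPANET simulations of artificial leaks, whereas this snippet substitutes a crude synthetic pressure-drop signature, so node count, head values, and leak rates are assumptions.

```python
# Illustrative reconstruction of step (i): learn leak rate and leak location
# from nodal pressure vectors with random forests.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier

rng = np.random.default_rng(2)
n_runs, n_nodes = 2000, 50
pressures = rng.normal(45.0, 3.0, size=(n_runs, n_nodes))  # m of head, synthetic
leak_node = rng.integers(0, n_nodes, size=n_runs)          # where the leak was put
leak_rate = rng.uniform(1.0, 20.0, size=n_runs)            # L/s, synthetic

# Pressure drop at the leaking node (a crude stand-in for the EPANET runs).
for i in range(n_runs):
    pressures[i, leak_node[i]] -= 0.4 * leak_rate[i]

rate_model = RandomForestRegressor(n_estimators=100, random_state=0).fit(pressures, leak_rate)
node_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(pressures, leak_node)

test = pressures[:5]
print("predicted rates:", rate_model.predict(test).round(1))
print("predicted nodes:", node_model.predict(test))
```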
APA, Harvard, Vancouver, ISO, and other styles
5

Zhou, Shuni, Guangxing Wu, Yehong Dong, Yuanxiang Ni, Yuheng Hao, Yunhe Jiang, Chuang Zhou, and Zhiyu Tao. "Evaluations on supervised learning methods in the calibration of seven-hole pressure probes." PLOS ONE 18, no. 1 (January 23, 2023): e0277672. http://dx.doi.org/10.1371/journal.pone.0277672.

Full text
Abstract:
Machine learning method has become a popular, convenient and efficient computing tool applied to many industries at present. Multi-hole pressure probe is an important technique widely used in flow vector measurement. It is a new attempt to integrate machine learning method into multi-hole probe measurement. In this work, six typical supervised learning methods in scikit-learn library are selected for parameter adjustment at first. Based on the optimal parameters, a comprehensive evaluation is conducted from four aspects: prediction accuracy, prediction efficiency, feature sensitivity and robustness on the failure of some hole port. As results, random forests and K-nearest neighbors’ algorithms have the better comprehensive prediction performance. Compared with the in-house traditional algorithm, the machine learning algorithms have the great advantages in the computational efficiency and the convenience of writing code. Multi-layer perceptron and support vector machines are the most time-consuming algorithms among the six algorithms. The prediction accuracy of all the algorithms is very sensitive to the features. Using the features based on the physical knowledge can obtain a high accuracy predicted results. Finally, KNN algorithm is successfully applied to field measurements on the angle of attack of a wind turbine blades. These findings provided a new reference for the application of machine learning method in multi-hole probe calibration and measurement.
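A compact sketch of the calibration idea, using the two best-performing scikit-learn models named in the abstract; the synthetic probe response below is an assumption standing in for a real seven-hole calibration map.

```python
# Minimal sketch: map seven hole-pressure readings to flow angles with the two
# best-performing scikit-learn models named in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 3000
angles = rng.uniform(-30, 30, size=(n, 2))                 # pitch, yaw in degrees
# Fake "hole pressures": smooth functions of the angles plus noise.
basis = np.deg2rad(angles) @ rng.normal(size=(2, 7))
pressures = np.cos(basis) + 0.01 * rng.normal(size=(n, 7))

X_tr, X_te, y_tr, y_te = train_test_split(pressures, angles, random_state=0)
for model in (KNeighborsRegressor(n_neighbors=5),
              RandomForestRegressor(n_estimators=200, random_state=0)):
    r2 = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(type(model).__name__, "R^2 =", round(r2, 3))
```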
APA, Harvard, Vancouver, ISO, and other styles
6

Mohan, Midhun, Rodrigo Vieira Leite, Eben North Broadbent, Wan Shafrina Wan Mohd Jaafar, Shruthi Srinivasan, Shaurya Bajaj, Ana Paula Dalla Corte, et al. "Individual tree detection using UAV-lidar and UAV-SfM data: A tutorial for beginners." Open Geosciences 13, no. 1 (January 1, 2021): 1028–39. http://dx.doi.org/10.1515/geo-2020-0290.

Full text
Abstract:
Abstract Applications of unmanned aerial vehicles (UAVs) have proliferated in the last decade due to the technological advancements on various fronts such as structure-from-motion (SfM), machine learning, and robotics. An important preliminary step with regard to forest inventory and management is individual tree detection (ITD), which is required to calculate forest attributes such as stem volume, forest uniformity, and biomass estimation. However, users may find adopting the UAVs and algorithms for their specific projects challenging due to the plethora of information available. Herein, we provide a step-by-step tutorial for performing ITD using (i) low-cost UAV-derived imagery and (ii) UAV-based high-density lidar (light detection and ranging). Functions from open-source R packages were implemented to develop a canopy height model (CHM) and perform ITD utilizing the local maxima (LM) algorithm. ITD accuracy assessment statistics and validation were derived through manual visual interpretation from high-resolution imagery and field-data-based accuracy assessment. As the intended audience are beginners in remote sensing, we have adopted a very simple methodology and chosen study plots that have relatively open canopies to demonstrate our proposed approach; the respective R codes and sample plot data are available as supplementary materials.
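The local-maxima (LM) step of the tutorial can be mimicked in Python as below (the tutorial itself uses open-source R packages); the window size and minimum-height threshold are illustrative assumptions, and the canopy height model is synthetic.

```python
# Python analogue of local-maxima (LM) tree-top detection on a CHM raster.
import numpy as np
from scipy.ndimage import maximum_filter

def detect_treetops(chm, window=5, min_height=2.0):
    """Return (row, col) indices of CHM cells that are local maxima within a
    `window` x `window` neighbourhood and taller than `min_height` metres."""
    local_max = maximum_filter(chm, size=window) == chm
    return np.argwhere(local_max & (chm > min_height))

chm = np.random.default_rng(4).gamma(2.0, 3.0, size=(200, 200))  # synthetic CHM
tops = detect_treetops(chm)
print(f"{len(tops)} candidate tree tops detected")
```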
APA, Harvard, Vancouver, ISO, and other styles
7

Yi, Weilin, and Hongliang Cheng. "Investigation on the Optimal Design and Flow Mechanism of High Pressure Ratio Impeller with Machine Learning Method." International Journal of Aerospace Engineering 2020 (November 29, 2020): 1–11. http://dx.doi.org/10.1155/2020/8855314.

Full text
Abstract:
The optimization of high-pressure ratio impeller with splitter blades is difficult because of large-scale design parameters, high time cost, and complex flow field. So few relative works are published. In this paper, an engineering-applied centrifugal impeller with ultrahigh pressure ratio 9 was selected as datum geometry. One kind of advanced optimization strategy including the parameterization of impeller with 41 parameters, high-quality CFD simulation, deep machine learning model based on SVR (Support Vector Machine), random forest, and multipoint genetic algorithm (MPGA) were set up based on the combination of commercial software and in-house python code. The optimization objective is to maximize the peak efficiency with the constraints of pressure-ratio at near stall point and choked mass flow. Results show that the peak efficiency increases by 1.24% and the overall performance is improved simultaneously. By comparing the details of the flow field, it is found that the weakening of the strength of shock wave, reduction of tip leakage flow rate near the leading edge, separation region near the root of leading edge, and more homogenous outlet flow distributions are the main reasons for performance improvement. It verified the reliability of the SVR-MPGA model for multiparameter optimization of high aerodynamic loading impeller and revealed the probable performance improvement pattern.
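The surrogate-plus-evolutionary loop described above can be sketched as follows; the quadratic stand-in for the CFD response, the parameter bounds, and the GA settings are assumptions, and the study's constraints on stall pressure ratio and choked mass flow are omitted.

```python
# Sketch: an SVR surrogate emulates efficiency over 41 design parameters and a
# deliberately minimal genetic algorithm searches the surrogate.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(5)
n_params, n_samples = 41, 400

def fake_cfd_efficiency(x):            # placeholder for the CFD simulation
    return 0.85 - 0.05 * np.sum((x - 0.3) ** 2, axis=-1) / n_params

X = rng.uniform(0, 1, size=(n_samples, n_params))
surrogate = SVR(kernel="rbf", C=10.0).fit(X, fake_cfd_efficiency(X))

# Minimal genetic algorithm on the surrogate: selection + crossover + mutation.
pop = rng.uniform(0, 1, size=(60, n_params))
for _ in range(50):
    fitness = surrogate.predict(pop)
    parents = pop[np.argsort(fitness)[-30:]]            # keep the better half
    mates = parents[rng.integers(0, 30, size=(60, 2))]
    mask = rng.random((60, n_params)) < 0.5             # uniform crossover
    pop = np.where(mask, mates[:, 0], mates[:, 1])
    pop += rng.normal(0, 0.02, size=pop.shape)          # mutation
    pop = np.clip(pop, 0, 1)
best = pop[np.argmax(surrogate.predict(pop))]
print("surrogate-predicted peak efficiency:",
      round(float(surrogate.predict(best[None])[0]), 4))
```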
APA, Harvard, Vancouver, ISO, and other styles
8

Gupta, Surya, Tomislav Hengl, Peter Lehmann, Sara Bonetti, and Dani Or. "SoilKsatDB: global database of soil saturated hydraulic conductivity measurements for geoscience applications." Earth System Science Data 13, no. 4 (April 15, 2021): 1593–612. http://dx.doi.org/10.5194/essd-13-1593-2021.

Full text
Abstract:
Abstract. The saturated soil hydraulic conductivity (Ksat) is a key parameter in many hydrological and climate models. Ksat values are primarily determined from basic soil properties and may vary over several orders of magnitude. Despite the availability of Ksat datasets in the literature, significant efforts are required to combine the data before they can be used for specific applications. In this work, a total of 13 258 Ksat measurements from 1908 sites were assembled from the published literature and other sources, standardized (i.e., units made identical), and quality checked in order to obtain a global database of soil saturated hydraulic conductivity (SoilKsatDB). The SoilKsatDB covers most regions across the globe, with the highest number of Ksat measurements from North America, followed by Europe, Asia, South America, Africa, and Australia. In addition to Ksat, other soil variables such as soil texture (11 584 measurements), bulk density (11 262 measurements), soil organic carbon (9787 measurements), moisture content at field capacity (7382), and wilting point (7411) are also included in the dataset. To show an application of SoilKsatDB, we derived Ksat pedotransfer functions (PTFs) for temperate regions and laboratory-based soil properties (sand and clay content, bulk density). Accurate models can be fitted using a random forest machine learning algorithm (best concordance correlation coefficient (CCC) equal to 0.74 and 0.72 for temperate area and laboratory measurements, respectively). However, when these Ksat PTFs are applied to soil samples obtained from tropical climates and field measurements, respectively, the model performance is significantly lower (CCC = 0.49 for tropical and CCC = 0.10 for field measurements). These results indicate that there are significant differences between Ksat data collected in temperate and tropical regions and Ksat measured in the laboratory or field. The SoilKsatDB dataset is available at https://doi.org/10.5281/zenodo.3752721 (Gupta et al., 2020) and the code used to extract the data from the literature and the applied random forest machine learning approach are publicly available under an open data license.
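A minimal sketch of the pedotransfer-function modelling and of the concordance correlation coefficient (CCC) used to score it; the soil data below are synthetic and the fitted relationship is an assumption, not the SoilKsatDB result.

```python
# Sketch: predict log10(Ksat) from sand, clay and bulk density with a random
# forest, and score with Lin's concordance correlation coefficient (CCC).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def ccc(y_true, y_pred):
    """Lin's concordance correlation coefficient."""
    mx, my = y_true.mean(), y_pred.mean()
    vx, vy = y_true.var(), y_pred.var()
    cov = np.mean((y_true - mx) * (y_pred - my))
    return 2 * cov / (vx + vy + (mx - my) ** 2)

rng = np.random.default_rng(6)
n = 2000
sand = rng.uniform(5, 95, n); clay = rng.uniform(2, 60, n); bd = rng.uniform(0.9, 1.8, n)
log_ksat = 0.03 * sand - 0.02 * clay - 1.5 * bd + rng.normal(0, 0.3, n)  # synthetic PTF

X = np.column_stack([sand, clay, bd])
X_tr, X_te, y_tr, y_te = train_test_split(X, log_ksat, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("CCC =", round(ccc(y_te, rf.predict(X_te)), 2))
```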
APA, Harvard, Vancouver, ISO, and other styles
9

Jamali, A., M. Mahdianpari, and İ. R. Karaş. "A COMPARISON OF TREE-BASED ALGORITHMS FOR COMPLEX WETLAND CLASSIFICATION USING THE GOOGLE EARTH ENGINE." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-4/W5-2021 (December 23, 2021): 313–19. http://dx.doi.org/10.5194/isprs-archives-xlvi-4-w5-2021-313-2021.

Full text
Abstract:
Abstract. Wetlands are endangered ecosystems that are required to be systematically monitored. Wetlands have significant contributions to the well-being of human-being, fauna, and fungi. They provide vital services, including water storage, carbon sequestration, food security, and protecting the shorelines from floods. Remote sensing is preferred over the other conventional earth observation methods such as field surveying. It provides the necessary tools for the systematic and standardized method of large-scale wetland mapping. On the other hand, new cloud computing technologies for the storage and processing of large-scale remote sensing big data such as the Google Earth Engine (GEE) have emerged. As such, for the complex wetland classification in the pilot site of the Avalon, Newfoundland, Canada, we compare the results of three tree-based classifiers of the Decision Tree (DT), Random Forest (RF), and Extreme Gradient Boosting (XGB) available in the GEE code editor using Sentinel-2 images. Based on the results, the XGB classifier with an overall accuracy of 82.58% outperformed the RF (82.52%) and DT (77.62%) classifiers.
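A scikit-learn analogue of the tree-based comparison is sketched below (the study itself runs the classifiers in the Google Earth Engine code editor on Sentinel-2 imagery); the band values and wetland labels are simulated, and gradient boosting stands in for XGBoost.

```python
# Compare a decision tree, random forest and gradient boosting on synthetic
# "pixel" samples, analogous to the DT/RF/XGB comparison described above.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_pixels, n_bands, n_classes = 5000, 10, 5
X = rng.normal(size=(n_pixels, n_bands))
y = X[:, :n_classes].argmax(axis=1)            # synthetic, learnable class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for clf in (DecisionTreeClassifier(random_state=0),
            RandomForestClassifier(n_estimators=200, random_state=0),
            GradientBoostingClassifier(random_state=0)):
    oa = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(type(clf).__name__, "overall accuracy =", round(oa, 3))
```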
APA, Harvard, Vancouver, ISO, and other styles
10

Rusyn, B. P., O. A. Lutsyk, R. Ya Kosarevych, and Yu V. Obukh. "RECOGNITION OF DAMAGED FOREST WITH THE HELP OF CONVOLUTIONAL MODELS IN REMOTE SENSING." Ukrainian Journal of Information Technology 3, no. 1 (2021): 1–7. http://dx.doi.org/10.23939/ujit2021.03.001.

Full text
Abstract:
The article provides a detailed review of the problem of deforestation, which in recent years has become uncontrolled. The main causes of forest damage are analyzed, among which the most well-known are climate change, diseases and pests. The losses of forestry as a result of tree diseases, which are large-scale and widespread in other countries, are given. The solution of these problems is possible under the condition of high-quality monitoring with the involvement of automated remote sensing tools and modern methods of image analysis, including artificial intelligence approaches such as neural networks and deep learning. The article proposes an approach to automatic localization and recognition of trees affected by drought, which is of great practical importance for environmental monitoring and forestry. A fully connected convolutional model of deep learning using the tensorflow and keras libraries has been developed for localization and recognition. This model consists of a detector network and a separate classifier network. To train and test the proposed network based on images obtained by remote sensing, a training database containing 8500 images was created. A comparison of the proposed model with the existing methods is based on such characteristics as accuracy and speed. The accuracy and speed of the proposed recognition system were evaluated on a validation sample of images, consisting of 1700 images. The model has been optimized for practical use with CPU and GPU due to pseudo quantization during training. This helps to distribute the values of the weights in the learning process and bring their appearance closer to a uniform distribution law, which in turn allows more efficient application of quantization to the original model. The average operating time of the algorithm is also determined. In the Visual C++ environment, based on the proposed model, an expert program has been created that allows to perform the ecological monitoring and analysis of dry forests in the field in real time. Libraries such as OpenCV and Direct were used in software development, and the code supports object-oriented programming standards. The results of the work and the developed software can be used in remote monitoring and classification systems for environmental monitoring and in applied tasks of forestry.
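A minimal Keras sketch of the classifier half of such a detector-plus-classifier pipeline is given below; the patch size, layer widths, and two-class output are assumptions rather than the authors' network, and the pseudo-quantization step is not shown.

```python
# Small CNN labelling image patches as healthy vs. drought-damaged trees.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),   # healthy vs. damaged
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```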
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Field Code Forest Algorithm"

1

Boskovitz, Agnes. "Data Editing and Logic: The covering set method from the perspective of logic." The Australian National University. Research School of Information Sciences and Engineering, 2008. http://thesis.anu.edu.au./public/adt-ANU20080314.163155.

Full text
Abstract:
Errors in collections of data can cause significant problems when those data are used. Therefore the owners of data find themselves spending much time on data cleaning. This thesis is a theoretical work about one part of the broad subject of data cleaning - to be called the covering set method. More specifically, the covering set method deals with data records that have been assessed by the use of edits, which are rules that the data records are supposed to obey. The problem solved by the covering set method is the error localisation problem, which is the problem of determining the erroneous fields within data records that fail the edits. In this thesis I analyse the covering set method from the perspective of propositional logic. I demonstrate that the covering set method has strong parallels with well-known parts of propositional logic. The first aspect of the covering set method that I analyse is the edit generation function, which is the main function used in the covering set method. I demonstrate that the edit generation function can be formalised as a logical deduction function in propositional logic. I also demonstrate that the best-known edit generation function, written here as FH (standing for Fellegi-Holt), is essentially the same as propositional resolution deduction. Since there are many automated implementations of propositional resolution, the equivalence of FH with propositional resolution gives some hope that the covering set method might be implementable with automated logic tools. However, before any implementation, the other main aspect of the covering set method must also be formalised in terms of logic. This other aspect, to be called covering set correctibility, is the property that must be obeyed by the edit generation function if the covering set method is to successfully solve the error localisation problem. In this thesis I demonstrate that covering set correctibility is a strengthening of the well-known logical properties of soundness and refutation completeness. What is more, the proofs of the covering set correctibility of FH and of the soundness / completeness of resolution deduction have strong parallels: while the proof of soundness / completeness depends on the reduction property for counter-examples, the proof of covering set correctibility depends on the related lifting property. In this thesis I also use the lifting property to prove the covering set correctibility of the function defined by the Field Code Forest Algorithm. In so doing, I prove that the Field Code Forest Algorithm, whose correctness has been questioned, is indeed correct. The results about edit generation functions and covering set correctibility apply to both categorical edits (edits about discrete data) and arithmetic edits (edits expressible as linear inequalities). Thus this thesis gives the beginnings of a theoretical logical framework for error localisation, which might give new insights to the problem. In addition, the new insights will help develop new tools using automated logic tools. What is more, the strong parallels between the covering set method and aspects of logic are of aesthetic appeal.
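The error-localisation idea at the heart of the covering set method can be illustrated with a toy set-cover search: given the fields involved in each failed edit, find a smallest set of fields that touches every failure. The edits and field names below are hypothetical, and a real Fellegi-Holt implementation would first generate the implied edits before this covering step.

```python
# Toy illustration of error localisation: a smallest set of fields that
# intersects every failed edit (brute-force minimum cover).
from itertools import combinations

failed_edits = [
    {"age", "marital_status"},         # e.g. age < 15 and married
    {"age", "employment"},             # e.g. age < 15 and employed full-time
    {"marital_status", "spouse_age"},  # e.g. married but no spouse age recorded
]
fields = sorted(set().union(*failed_edits))

def minimum_covering_set(edits, fields):
    """Smallest set of fields intersecting every failed edit (brute force)."""
    for size in range(1, len(fields) + 1):
        for cand in combinations(fields, size):
            if all(set(cand) & e for e in edits):
                return set(cand)
    return set(fields)

print("fields to change:", minimum_covering_set(failed_edits, fields))
```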
APA, Harvard, Vancouver, ISO, and other styles
2

Bedi, Abhishek. "A generic platform for the evolution of hardware." Click here to access this resource online, 2009. http://hdl.handle.net/10292/651.

Full text
Abstract:
Evolvable Hardware is a technique derived from evolutionary computation applied to a hardware design. The term evolutionary computation involves similar steps as involved in the human evolution. It has been given names in accordance with the electronic technology like, Genetic Algorithm (GA), Evolutionary Strategy (ES) and Genetic Programming (GP). In evolutionary computing, a configured bit is considered as a human chromosome for a genetic algorithm, which has to be downloaded into hardware. Early evolvable hardware experiments were conducted in simulation and the only elite chromosome was downloaded to the hardware, which was labelled as Extrinsic Hardware. With the invent of Field Programmable Gate Arrays (FPGAs) and Reconfigurable Processing Units (RPUs), it is now possible for the implementation solutions to be fast enough to evaluate a real hardware circuit within an evolutionary computation framework; this is called an Intrinsic Evolvable Hardware. This research has been taken in continuation with project 'Evolvable Hardware' done at Manukau Institute of Technology (MIT). The project was able to manually evolve two simple electronic circuits of NAND and NOR gates in simulation. In relation to the project done at MIT this research focuses on the following: To automate the simulation by using In Circuit Debugging Emulators (IDEs), and to develop a strategy of configuring hardware like an FPGA without the use of their company supplied in circuit debugging emulators, so that the evolution of an intrinsic evolvable hardware could be controlled, and is hardware independent. As mentioned, the research conducted here was able to develop an evolvable hardware friendly Generic Structure which could be used for the development of evolvable hardware. The structure developed was hardware independent and was able to run on various FPGA hardware’s for the purpose of intrinsic evolution. The structure developed used few configuration bits as compared to current evolvable hardware designs.
APA, Harvard, Vancouver, ISO, and other styles
3

Dahlin, Mathilda. "Avkodning av cykliska koder - baserad på Euklides algoritm" [Decoding of cyclic codes - based on the Euclidean algorithm]. Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-48248.

Full text
Abstract:
Today’s society requires that transformation of information is done effectively and correctly. In other words, the received message must correspond to the message being sent. There are a lot of decoding methods to locate and correct errors. The main purpose in this degree project is to study one of these methods based on the Euclidean algorithm. Thereafter an example will be illustrated showing how the method is used when decoding a three - error correcting BCH code. To begin with, fundamental concepts about coding theory are introduced. Secondly, linear codes, cyclic codes and BCH codes - in that specific order - are explained before advancing to the decoding process. The results show that correcting one or two errors is relatively simple, but when three or more errors occur it becomes much more complicated. In that case, a specific method is required.
Today's society requires that information transfer takes place effectively and correctly, that is, the information reaching the receiver must correspond to what was originally sent. There are many decoding methods for locating and correcting errors. The purpose of this thesis is to study one of these, one based on the Euclidean algorithm, and then to illustrate with an example how the method is used when decoding a three-error-correcting BCH code. First, the fundamentals of coding theory are presented. Then linear codes, cyclic codes and BCH codes are introduced, in that order, before the decoding process is presented. It turns out that correcting one or two errors is relatively simple, but when three or more errors occur it becomes considerably more complicated. In that case, a special method is required.
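The decoder's central subroutine, the extended Euclidean algorithm on polynomials, can be sketched over GF(2) as below; a full BCH decoder would run the same recursion over GF(2^m) on x^(2t) and the syndrome polynomial (Sugiyama's method), which is not shown here. The stopping degree and the example polynomials are arbitrary.

```python
# Extended Euclidean algorithm on binary polynomials (bits of an int encode
# coefficients mod 2), the core recursion behind Sugiyama-style BCH decoding.

def deg(p: int) -> int:
    return p.bit_length() - 1

def poly_divmod(a: int, b: int):
    """Quotient and remainder of binary-polynomial division."""
    q = 0
    while a and deg(a) >= deg(b) >= 0:
        shift = deg(a) - deg(b)
        q ^= 1 << shift
        a ^= b << shift
    return q, a

def poly_mul(a: int, b: int) -> int:
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def extended_euclid(a: int, b: int, stop_deg: int):
    """Run the extended Euclidean recursion until deg(remainder) < stop_deg.
    Returns (r, u, v) with r = u*a + v*b over GF(2)."""
    r0, r1, u0, u1, v0, v1 = a, b, 1, 0, 0, 1
    while r1 and deg(r1) >= stop_deg:
        q, r = poly_divmod(r0, r1)
        r0, r1 = r1, r
        u0, u1 = u1, u0 ^ poly_mul(q, u1)
        v0, v1 = v1, v0 ^ poly_mul(q, v1)
    return r1, u1, v1

# Example: run on x^6 and x^4 + x + 1, stopping once the remainder degree < 2.
print(extended_euclid(0b1000000, 0b10011, stop_deg=2))
```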
APA, Harvard, Vancouver, ISO, and other styles
4

Fujdiak, Radek. "Analýza a optimalizace datové komunikace pro telemetrické systémy v energetice" [Analysis and optimisation of data communication for telemetry systems in the energy sector]. Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-358408.

Full text
Abstract:
Telemetry system, Optimisation, Sensoric networks, Smart Grid, Internet of Things, Sensors, Information security, Cryptography, Cryptography algorithms, Cryptosystem, Confidentiality, Integrity, Authentication, Data freshness, Non-Repudiation.
APA, Harvard, Vancouver, ISO, and other styles
5

Boskovitz, Agnes. "Data Editing and Logic: The covering set method from the perspective of logic." Phd thesis, 2008. http://hdl.handle.net/1885/49318.

Full text
Abstract:
Errors in collections of data can cause significant problems when those data are used. Therefore the owners of data find themselves spending much time on data cleaning. This thesis is a theoretical work about one part of the broad subject of data cleaning - to be called the covering set method. More specifically, the covering set method deals with data records that have been assessed by the use of edits, which are rules that the data records are supposed to obey. The problem solved by the covering set method is the error localisation problem, which is the problem of determining the erroneous fields within data records that fail the edits. In this thesis I analyse the covering set method from the perspective of propositional logic. ...
APA, Harvard, Vancouver, ISO, and other styles
6

Huang, Tzu-Yen, and 黃子彥. "A study on Random Forest Algorithm in UAV Images for Cultivated Field Classification." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/7274kp.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Civil Engineering
106
The effectiveness of conventional aerial photography is affected by time and weather, whereas optical satellite imagery can be obstructed by obstacles such as clouds. Image collection using unmanned aerial vehicles (UAVs) has become crucial in recent years. It provides instant results, has high mobility and resolution, and is less affected by weather than conventional aerial photography and satellite imagery. UAVs have thus become a new type of photographic instrument. In this study, a high-resolution UAV camera was employed to capture images of Tuku Township in Yunlin County, Taiwan. Minimum Distance Classifier and the Random forest were used to classify the visible light band respectively. The experimental results show that the classification accuracy of Random forest is obviously better than Minimum Distance Classifier. The subsequent image will be added to the texture image in the visible light band and classified by Random forest. Texture information was added to high resolution UAV orthoimages to enhance the differences in spatial characters among the areas of various agricultural crops, thereby enhancing the accuracy of the high-resolution UAV image classification. Preliminary results suggested that this addition of texture information was indeed discovered to improve the accuracy of agricultural crop classification. Texture analysis was conducted using the grey-level co-occurrence matrix, and the six texture factors (homogeneity, contrast, angular second moment, dissimilarity, entropy, and correlation) were calculated. Various moving window sizes and texture factors were added to the raw images, and training sample areas were selected from the images. The areas were then classified through the use of Random Forest algorithm, which ensured high classification accuracy. According to the results, original bands with a 21×21 moving window achieved the optimal classification accuracy. The overall accuracy and Kappa value of image classification were 88.56% and 0.82, respectively, when only the raw RGB image was employed. After the texture information with a 21×21 moving window size was applied to the image, the accuracy and Kappa value increased to 94.22% and 0.91, respectively. Therefore, implementing the texture information in the image classification process did enhance the classification accuracy.
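The texture workflow can be approximated in Python as below; the scikit-image GLCM properties cover five of the six factors listed (entropy is computed by hand), and the tiny single-band image, coarse window grid, and crop labels are placeholders rather than the thesis data.

```python
# GLCM texture features in 21x21 windows fed to a random forest, analogous to
# the workflow described above (greycomatrix/greycoprops in older scikit-image).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(8)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # one band of a UAV image

def glcm_features(window):
    glcm = graycomatrix(window, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    props = ("homogeneity", "contrast", "ASM", "dissimilarity", "correlation")
    feats = [graycoprops(glcm, p)[0, 0] for p in props]
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # entropy computed by hand
    return np.array(feats + [entropy])

# 21x21 windows on a coarse grid (full sliding-window extraction omitted).
samples, labels = [], []
for r in range(0, 64 - 21, 21):
    for c in range(0, 64 - 21, 21):
        samples.append(glcm_features(img[r:r + 21, c:c + 21]))
        labels.append(rng.integers(0, 3))              # placeholder crop classes

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(samples, labels)
print("training accuracy:", clf.score(samples, labels))
```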
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Field Code Forest Algorithm"

1

Lawson, B. D. Ground-truthing the drought code: Field verification of overwinter recharge of forest floor moisture. Victoria, B.C.: Canadian Forest Service, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bucher, Taina. If...Then. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190493028.001.0001.

Full text
Abstract:
IF … THEN provides an account of power and politics in the algorithmic media landscape that pays attention to the multiple realities of algorithms, and how these relate and coexist. The argument is made that algorithms do not merely have power and politics; they help to produce certain forms of acting and knowing in the world. In processing, classifying, sorting, and ranking data, algorithms are political in that they help to make the world appear in certain ways rather than others. Analyzing Facebook’s news feed, social media user’s everyday encounters with algorithmic systems, and the discourses and work practices of news professionals, the book makes a case for going beyond the narrow, technical definition of algorithms as step-by-step procedures for solving a problem in a finite number of steps. Drawing on a process-relational theoretical framework and empirical data from field observations and fifty-five interviews, the author demonstrates how algorithms exist in multiple ways beyond code. The analysis is concerned with the world-making capacities of algorithms, questioning how algorithmic systems shape encounters and orientations of different kinds, and how these systems are endowed with diffused personhood and relational agency. IF … THEN argues that algorithmic power and politics is neither about algorithms determining how the social world is fabricated nor about what algorithms do per se. Rather it is about how and when different aspects of algorithms and the algorithmic become available to specific actors, under what circumstance, and who or what gets to be part of how algorithms are defined.
APA, Harvard, Vancouver, ISO, and other styles
3

Parkin, Jack. Money Code Space. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780197515075.001.0001.

Full text
Abstract:
Newly emerging cryptocurrencies and blockchain technology present a challenging research problem in the field of digital politics and economics. Bitcoin—the first widely implemented cryptocurrency and blockchain architecture—seemingly separates itself from the existing territorial boundedness of nation-state money via a process of algorithmic decentralisation. Proponents declare that the utilisation of cryptography to advance financial transactions will disrupt the modern centralised structures by which capitalist economies are currently organised: corporations, governments, commercial banks, and central banks. Allegedly, software can create a more stable and democratic global economy; a world free from hierarchy and control. In Money Code Space, Jack Parkin debunks these utopian claims by approaching distributed ledger technologies as a spatial and social problem where power forms unevenly across their networks. First-hand accounts of online communities, open-source software governance, infrastructural hardware operations, and Silicon Valley start-up culture are used to ground understandings of cryptocurrencies in the “real world.” Consequently, Parkin demonstrates how Bitcoin and other blockchains are produced across a multitude of tessellated spaces from which certain stakeholders exercise considerable amounts of power over their networks. While money, code, and space are certainly transformed by distributed ledgers, algorithmic decentralisation is rendered inherently paradoxical because it is predicated upon centralised actors, practices, and forces.
APA, Harvard, Vancouver, ISO, and other styles
4

British Columbia. Ministry of Forests, ed. British Columbia Forest Practices Code: Standards with revised rules and field guide references. [Victoria?, B.C.]: Ministry of Forests, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Field Code Forest Algorithm"

1

Navarro, Adrian, María Jose Checa, Francisco Lario, Laura Luquero, Asunción Roldán, and Jesús Estrada. "Monitoring Forest Health: Big Data Applied to Diseases and Plagues Control." In Big Data in Bioeconomy, 335–49. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-71069-9_25.

Full text
Abstract:
In this chapter, we present the technological background needed for understanding the problem addressed by this DataBio pilot. Spain has to face plagues and diseases affecting forest species, like Quercus ilex, Quercus suber or Eucaliptus sp. Consequently, Spanish Public Administrations need updated information about the health status of forests. This chapter explains the methodology created based on remote sensing images (satellite + aerial + Remotely Piloted Aircraft Systems (RPAS)) and field data for monitoring the mentioned forest status. The work focused on acquiring data for establishing the relationships between RPAS generated data and field data, and on the creation of a correlation model to obtain a prospection and prediction algorithm based on spectral data for early detection and monitoring of decaying trees. Those data were used to establish the links between EO image-derived indexes and biophysical parameters from field data allowing a health status monitoring for big areas based on EO information. This solution is providing Public Administrations with valuable information to help decision making.
APA, Harvard, Vancouver, ISO, and other styles
2

Mohan, Anshuman, Wei Xiang Leow, and Aquinas Hobor. "Functional Correctness of C Implementations of Dijkstra’s, Kruskal’s, and Prim’s Algorithms." In Computer Aided Verification, 801–26. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81688-9_37.

Full text
Abstract:
We develop machine-checked verifications of the full functional correctness of C implementations of the eponymous graph algorithms of Dijkstra, Kruskal, and Prim. We extend Wang et al.’s CertiGraph platform to reason about labels on edges, undirected graphs, and common spatial representations of edge-labeled graphs such as adjacency matrices and edge lists. We certify binary heaps, including Floyd’s bottom-up heap construction, heapsort, and increase/decrease priority. Our verifications uncover subtle overflows implicit in standard textbook code, including a nontrivial bound on edge weights necessary to execute Dijkstra’s algorithm; we show that the intuitive guess fails and provide a workable refinement. We observe that the common notion that Prim’s algorithm requires a connected graph is wrong: we verify that a standard textbook implementation of Prim’s algorithm can compute minimum spanning forests without finding components first. Our verification of Kruskal’s algorithm reasons about two graphs simultaneously: the undirected graph undergoing MSF construction, and the directed graph representing the forest inside union-find. Our binary heap verification exposes precise bounds for the heap to operate correctly, avoids a subtle overflow error, and shows how to recycle keys to avoid overflow.
APA, Harvard, Vancouver, ISO, and other styles
3

Li, Ruiguang, Ming Liu, Dawei Xu, Jiaqi Gao, Fudong Wu, and Liehuang Zhu. "A Review of Machine Learning Algorithms for Text Classification." In Communications in Computer and Information Science, 226–34. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9229-1_14.

Full text
Abstract:
Text classification is a basic task in the field of natural language processing, and it is a basic technology for information retrieval, questioning and answering system, emotion analysis and other advanced tasks. It is one of the earliest application of machine learning algorithm, and has achieved good results. In this paper, we made a review of the traditional and state-of-the-art machine learning algorithms for text classification, such as Naive Bayes, Supporting Vector Machine, Decision Tree, K Nearest Neighbor, Random Forest and neural networks. Then, we discussed the advantages and disadvantages of all kinds of machine learning algorithms in depth. Finally, we made a summary that neural networks and deep learning will become the main research topic in the future.
APA, Harvard, Vancouver, ISO, and other styles
4

Slowinski, Peter R. "Rethinking Software Protection." In Artificial Intelligence and Intellectual Property, 341–62. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198870944.003.0016.

Full text
Abstract:
The core of artificial intelligence (AI) applications is software of one sort or another. But while available data and computing power are important for the recent quantum leap in AI, there would not be any AI without computer programs or software. Therefore, the rise in importance of AI forces us to take—once again—a closer look at software protection through intellectual property (IP) rights, but it also offers us a chance to rethink this protection, and while perhaps not undoing the mistakes of the past, at least to adapt the protection so as not to increase the dysfunctionality that we have come to see in this area of law in recent decades. To be able to establish the best possible way to protect—or not to protect—the software in AI applications, this chapter starts with a short technical description of what AI is, with readers referred to other chapters in this book for a deeper analysis. It continues by identifying those parts of AI applications that constitute software to which legal software protection regimes may be applicable, before outlining those protection regimes, namely copyright and patents. The core part of the chapter analyses potential issues regarding software protection with respect to AI using specific examples from the fields of evolutionary algorithms and of machine learning. Finally, the chapter draws some conclusions regarding the future development of IP regimes with respect to AI.
APA, Harvard, Vancouver, ISO, and other styles
5

Mallikarjuna, Basetty, Supriya Addanke, and Anusha D. J. "An Improved Deep Learning Algorithm for Diabetes Prediction." In Advances in Wireless Technologies and Telecommunication, 103–19. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-7685-4.ch007.

Full text
Abstract:
This chapter introduces the novel approach in deep learning for diabetes prediction. The related work described the various ML algorithms in the field of diabetic prediction that has been used for early detection and post examination of the diabetic prediction. It proposed the Jaya-Tree algorithm, which is updated as per the existing random forest algorithm, and it is used to classify the two parameters named as the ‘Jaya' and ‘Apajaya'. The results described that Pima Indian diabetes dataset 2020 (PIS) predicts diabetes and obtained 97% accuracy.
APA, Harvard, Vancouver, ISO, and other styles
6

Balusamy, Balamurugan, Priya Jha, Tamizh Arasi, and Malathi Velu. "Predictive Analysis for Digital Marketing Using Big Data." In Advances in Business Information Systems and Analytics, 259–83. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-2031-3.ch016.

Full text
Abstract:
Big data analytics in recent years had developed lightning fast applications that deal with predictive analysis of huge volumes of data in domains of finance, health, weather, travel, marketing and more. Business analysts take their decisions using the statistical analysis of the available data pulled in from social media, user surveys, blogs and internet resources. Customer sentiment has to be taken into account for designing, launching and pricing a product to be inducted into the market and the emotions of the consumers changes and is influenced by several tangible and intangible factors. The possibility of using Big data analytics to present data in a quickly viewable format giving different perspectives of the same data is appreciated in the field of finance and health, where the advent of decision support system is possible in all aspects of their working. Cognitive computing and artificial intelligence are making big data analytical algorithms to think more on their own, leading to come out with Big data agents with their own functionalities.
APA, Harvard, Vancouver, ISO, and other styles
7

Balusamy, Balamurugan, Priya Jha, Tamizh Arasi, and Malathi Velu. "Predictive Analysis for Digital Marketing Using Big Data." In Web Services, 745–68. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-7501-6.ch041.

Full text
Abstract:
Big data analytics in recent years had developed lightning fast applications that deal with predictive analysis of huge volumes of data in domains of finance, health, weather, travel, marketing and more. Business analysts take their decisions using the statistical analysis of the available data pulled in from social media, user surveys, blogs and internet resources. Customer sentiment has to be taken into account for designing, launching and pricing a product to be inducted into the market and the emotions of the consumers changes and is influenced by several tangible and intangible factors. The possibility of using Big data analytics to present data in a quickly viewable format giving different perspectives of the same data is appreciated in the field of finance and health, where the advent of decision support system is possible in all aspects of their working. Cognitive computing and artificial intelligence are making big data analytical algorithms to think more on their own, leading to come out with Big data agents with their own functionalities.
APA, Harvard, Vancouver, ISO, and other styles
8

Graziani, Anthony, Karina Meerpoel-Petri, Virginie Tihay-Felicelli, Paul-Antoine Santoni, Frédéric Morandini, Yolanda Perez-Ramirez, Antoine Pieri, and William Mell. "Numerical prediction of the thermal stress induced by the burning of an ornamental vegetation at WUI." In Advances in Forest Fire Research 2022, 733–38. Imprensa da Universidade de Coimbra, 2022. http://dx.doi.org/10.14195/978-989-26-2298-9_112.

Full text
Abstract:
Over the last decades, urban expansion and global warming have increased the occurrence of wildland fires propagating at the vicinity of buildings at WUI. In this scenario, ornamental vegetation has been identified as a vector of fire propagation close to habitations, which can significantly increase the risk of damage [1]. In such context, it is necessary to quantify the thermal stress generated by an ornamental plant over a building to predict the vulnerability of construction materials. To this end, numerical simulation is a good candidate to easily multiply burning cases at field scale and explore the effects. The present study focuses on the numerical prediction of the thermal stress induced by the burning of an ornamental vegetation over targets facing the fire. The study involves a numerical modelling of the burning of rockrose hedges at field scale using the physics based code WFDS. The solver is based on a large eddy simulation approach for fluid dynamics and energy transfer through the fluid phase. A three steps thermal degradation model (dehydration, pyrolysis, char oxidation) with Arrhenius laws [2] is used for the fuel. The raised vegetation is represented with a Fuel Element approach which models the solid fuel as a set of static Lagrangian particles of different sizes and distributed within the volume to reproduce the arrangement of the shrub. The accuracy of WFDS to reproduce the combustion of plants has already been demonstrated at laboratory scales [2-6] but studies at field scale involving raised vegetation are few. Numerical results are compared to experimental measurements recorded during a set of experiments conducted at field scale, which involves the burning of reconstructed rockrose hedges of 6m length, 1m width and two heights (1m and 2m). The geometry mimics the typical shape of ornamental hedges that can be found to separate buildings in south of France. Visible cameras are distributed around the setup to capture the geometry of the flame front. Four couples of heat flux meters are positioned at 3m in front of the centreline and side of the hedge, which represents the theoretical position of the wall of a building according to the current fire safety regulation in France. Comparison between numerical model and experimental results shows good agreement for the local measurement of the heat stress at the location of the targets. Total and radiant heat fluxes fit with experimental data during the fire growth and the fully developed phases, which represent the period where the thermal stress is the highest. Peaks of total and radiant heat flux are the same order value but can be overestimated depending of the location of the sensors due to the wind dynamics that is not fully implementable in WFDS. Results show that the accuracy of the numerical model is enough to predict the thermal stress received by targets during the fully developed fire at field scale and could be used to numerically determine the vulnerability of material buildings in different scenarios.
APA, Harvard, Vancouver, ISO, and other styles
9

Fattah, Abdel. "New Semi-Inversion Method of Bouguer Gravity Anomalies Separation." In Gravitational Field [Working Title]. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.101593.

Full text
Abstract:
The workers and researchers in the field of gravity exploration methods, always dream that it is possible one day, to be able to separate completely the Bouguer gravity anomalies and trace rock’ formations, and their densities distribution from a prior known control points (borehole) to any extended distance in the direction of the profile lines-it seems that day become will soon a tangible true! and it becomes possible for gravity interpretation methods to mimic to some extent the 2D seismic interpretation methods. Where, the present chapter is dealing a newly 2D semi-inversion, fast, and easily applicable gravitational technique, based on Bouguer gravity anomaly data. It now becomes possible through, Excel software, Matlab’s code, and a simple algorithm; separating the Bouguer anomaly into its corresponding rock’ formations causative sources, as well as, estimating and tracing its thicknesses (or depths) of sedimentary formations relative to the underlying basement’s structure rocks for any sedimentary basin, through using of profile(s) line(s) and previously known control points. The newly proposed method has been assessed, examine, and applied for two field cases, Abu Roash Dome Area, southwest Cairo, Egypt, and Humble Salt Dome, USA. The method has demonstrated to some extent comparable results with prior known information, for drilled boreholes.
APA, Harvard, Vancouver, ISO, and other styles
10

Hofmann, Martin, and David Zenz. "Autonomous fire containment tool." In Advances in Forest Fire Research 2022, 1408–10. Imprensa da Universidade de Coimbra, 2022. http://dx.doi.org/10.14195/978-989-26-2298-9_213.

Full text
Abstract:
The destructive effect of wildfires on biodiversity and infrastructure is compounded by an increasing frequency and intensity and the scarcity of water as an extinguishing agent. We therefore propose a methodology to assess the efficiency of water-based extinguishing strategies and propose a novel autonomous tool designed to contain an advancing fire front. To that end, the distribution of water is analysed and correlated to the successful containment of firelines of increasing intensities. A scaling law for the critical amount of water is proposed. We make use of laboratory-scale equipment to reproduce realistic behavior in presence of wind, before scaling up fire intensities up to 2500 kW/m in field tests. The novel apparatus, dubbed Papageno module, could replace aerial drops, characterized by low containment efficiency, high running costs and considerable risk for personnel. Destined to communities in all geographical areas threatened by wildfires, the Papageno module reduces the destructive impact of these phenomena on infrastructures in the wildland urban interface and can equally serve as a protective equipment for areas with valuable biodiversity. The Papageno module is the missing tool for safe and efficient containment of wildfires on a ZIP-code level. Thanks to this simple-to-use apparatus, public entities, such as communities and civil protection agencies, will be capable of shielding critical infrastructure from an advancing fire front with a minimal amount of water. Thanks to its autonomous operating mode, practitioners are not exposed to the risk and can attend other duties meanwhile.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Field Code Forest Algorithm"

1

Kumar, Munish, Kanna Swaminathan, Aizat Rusli, and Abel Thomas-Hy. "Applying Data Analytics & Machine Learning Methods for Recovery Factor Prediction and Uncertainty Modelling." In SPE Asia Pacific Oil & Gas Conference and Exhibition. SPE, 2022. http://dx.doi.org/10.2118/210769-ms.

Full text
Abstract:
Abstract The estimation of recoverable hydrocarbons, or field recovery factor (RF), is a critical process for Oil and Gas (O&G) companies to plan and optimise field development, manage ongoing production and identify profitable investments amongst other technical and commercial decisions. However, RF remains one of the greatest uncertainties in O&G projects. The difficulty in RF prediction arises due to the number of variables affecting the recovery from a reservoir. These includes variables that are both uncertain and beyond the control of O&G operators, such as fluid flow in microscopic pores, as a function of fluid and rock properties, and variables which are engineering design based, such as completion methods, secondary and tertiary recovery mechanisms. In early field life, insufficient production data coupled with subsurface uncertainty makes RF prediction uncertain, and it is often the experience of the operator combined with analogue studies that is used to determine RF. However, there may be instances where operators may have insufficient data from analogue fields to properly capture the uncertainty in the RF range. Utilising techniques of big data manipulation and machine learning (ML), two open-source, United States based data sets are (a) deconstructed to identify the key variables impacting the ultimate recovery of a field, and (b) used to create a ML model to predict the RF based on these key variables. These two datasets (the onshore Tertiary Oil Recovery System (TORIS), and the offshore Gulf of Mexico (GOM)) consist of over 1,000,000 real world data points. Employing a low code environment, we test the predictive ability of 20 different ML algorithms by comparing predictive error. Decision tree type models (Random Forest and Category Boosting) show the best results. The paper shows comparison to a distance based (K Neighbour) model as well. The work aims to show that not all variables influence RF equally and that any ML model should therefore be built with variables that have the greatest influence on RF yet have the lowest pairwise correlation. The influence of these input variables differs, depending on the implemented ML model. The paper demonstrates the predictive ability of ML models is strongly dependent on the input dataset. Predicting the recovery factor of fields within the TORIS and GOM databases, the R2 values are 0.81 and 0.88 respectively. Testing the algorithm on three additional fields outside of the two datasets, and in different geological provinces showed errors of up to 10-15%.
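The modelling step can be sketched with tree-based regressors as below; the feature names and the synthetic recovery-factor response are assumptions standing in for the TORIS/GOM records, and gradient boosting from scikit-learn is used in place of CatBoost.

```python
# Compact analogue of the described modelling step: tree-based regressors
# predicting recovery factor (RF) from reservoir/fluid variables.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(9)
n = 5000
df = pd.DataFrame({
    "porosity": rng.uniform(0.05, 0.35, n),
    "permeability_md": 10 ** rng.uniform(-1, 3, n),
    "oil_viscosity_cp": 10 ** rng.uniform(-1, 2, n),
    "net_pay_m": rng.uniform(2, 100, n),
    "initial_pressure_mpa": rng.uniform(10, 60, n),
})
recovery_factor = (0.15 + 0.6 * df["porosity"]
                   + 0.04 * np.log10(df["permeability_md"])
                   - 0.05 * np.log10(df["oil_viscosity_cp"])
                   + rng.normal(0, 0.03, n)).clip(0, 0.7)   # synthetic response

X_tr, X_te, y_tr, y_te = train_test_split(df, recovery_factor, random_state=0)
for model in (RandomForestRegressor(n_estimators=300, random_state=0),
              GradientBoostingRegressor(random_state=0)):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(type(model).__name__, "R2 =", round(r2_score(y_te, pred), 2))
```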
APA, Harvard, Vancouver, ISO, and other styles
2

Patterson, Evan, Ioana Baldini, Aleksandra Mojsilović, and Kush R. Varshney. "Semantic Representation of Data Science Programs." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/858.

Full text
Abstract:
Your computer is continuously executing programs, but does it really understand them? Not in any meaningful sense. That burden falls upon human knowledge workers, who are increasingly asked to write and understand code. They would benefit greatly from intelligent tools that reveal the connections between their code and its subject matter. Towards this prospect, we present an AI system that forms semantic representations of computer programs, using techniques from knowledge representation and program analysis. These representations are created through a novel algorithm for the semantic enrichment of dataflow graphs. We illustrate its workings with examples from the field of data science. The algorithm is undergirded by a new ontology language for modeling computer programs and a new ontology about data science, written in this language.
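To make the idea of a dataflow representation of a program concrete, the sketch below extracts a crude dataflow graph from a small Python script using the standard ast module and networkx. This is an assumption-laden illustration of the raw extraction step only, not the semantic enrichment system described in the paper.

```python
# Illustrative only: build a crude variable-level dataflow graph from Python source.
# The IJCAI-18 system performs a far richer, ontology-backed enrichment on top of
# graphs like this one.
import ast
import networkx as nx

source = """
import pandas as pd
df = pd.read_csv("data.csv")
clean = df.dropna()
summary = clean.describe()
"""

graph = nx.DiGraph()
for node in ast.walk(ast.parse(source)):
    # For every assignment, add an edge from each name read on the right-hand
    # side to each name bound on the left-hand side.
    if isinstance(node, ast.Assign):
        targets = [t.id for t in node.targets if isinstance(t, ast.Name)]
        reads = [n.id for n in ast.walk(node.value) if isinstance(n, ast.Name)]
        for src in reads:
            for dst in targets:
                graph.add_edge(src, dst)

print(list(graph.edges()))  # e.g. [('pd', 'df'), ('df', 'clean'), ('clean', 'summary')]
```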
APA, Harvard, Vancouver, ISO, and other styles
3

Guo, Liancheng, Koji Morita, Hirotaka Tagami, and Yoshiharu Tobita. "Numerical Simulation of Self-Leveling Behavior in Debris Bed by a Hybrid Method." In 2013 21st International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/icone21-15483.

Full text
Abstract:
Postulated core disruptive accidents (CDAs) are regarded as a particular difficulty in the safety analysis of liquid-metal fast reactors (LMFRs). In CDAs, the self-leveling behavior of the debris bed is a crucial issue for the relocation of the molten core and the heat-removal capability of the debris bed. The fast reactor safety analysis code SIMMER-III, a 2D, multi-velocity-field, multiphase, multicomponent, Eulerian fluid dynamics code coupled with a fuel-pin model and a space- and energy-dependent neutron kinetics model, was successfully applied to a series of CDA assessments. However, strong interactions among solid particles in particle-rich flows, as well as particle characteristics in multiphase flows, were not taken into consideration in the fluid-dynamics models of SIMMER-III. In this article, a hybrid method developed by coupling the discrete element method (DEM) with the multi-fluid model of SIMMER-III is applied to the numerical simulation of self-leveling behavior in a debris bed. In the coupling algorithm, the motions of the gas and liquid phases are solved by a time-factorization (time-splitting) method. For particles, contact forces among particles and interactions between particles and the fluid phases are considered through DEM. The applicability of the method to such complicated three-phase flows is validated by simulating a simplified self-leveling experiment from the literature. Reasonable agreement between the simulation results and the corresponding experimental data shows that the present method could provide a promising means for the analysis of self-leveling behavior of debris beds in CDAs.
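The general structure of such a fluid–DEM coupling can be sketched as below: the multi-fluid solver would advance the gas and liquid phases, while DEM resolves particle contacts and a drag term couples particles to the fluid. This is a self-contained toy sketch under stated assumptions, not SIMMER-III code; every constant and function is an illustrative placeholder.

```python
# Toy sketch of a time-split fluid/DEM coupling loop (NOT SIMMER-III).
import numpy as np

N, dt, steps = 50, 1e-4, 200
radius, k_n, c_n, mass = 0.005, 1e4, 0.5, 1e-3   # contact stiffness/damping (illustrative)
drag = 0.1                                        # particle-fluid drag coefficient (illustrative)
pos = np.random.rand(N, 2) * 0.1                  # particle positions in a 0.1 m box
vel = np.zeros((N, 2))

def fluid_velocity(x):
    """Placeholder for the multi-fluid solution evaluated at particle locations."""
    return np.zeros_like(x)

def dem_contact_forces(pos, vel):
    """Pairwise spring-dashpot normal contact forces between overlapping particles."""
    f = np.zeros_like(pos)
    for i in range(N):
        for j in range(i + 1, N):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            overlap = 2 * radius - dist
            if overlap > 0 and dist > 0:
                n = d / dist
                rel_vn = np.dot(vel[j] - vel[i], n)
                fn = (k_n * overlap - c_n * rel_vn) * n
                f[i] -= fn
                f[j] += fn
    return f

for _ in range(steps):
    # Sub-step 1: the time-split fluid solver would update the gas/liquid fields here (omitted).
    # Sub-step 2: DEM update of particles under contact, drag and gravity forces.
    force = dem_contact_forces(pos, vel) + drag * (fluid_velocity(pos) - vel)
    force[:, 1] -= mass * 9.81                    # gravity drives the self-leveling
    vel += dt * force / mass
    pos += dt * vel
```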
APA, Harvard, Vancouver, ISO, and other styles
4

Ahmed, Nawzat Sadiq, and Mohammed Hikmat Sadiq. "Clarify of the Random Forest Algorithm in an Educational Field." In 2018 International Conference on Advanced Science and Engineering (ICOASE). IEEE, 2018. http://dx.doi.org/10.1109/icoase.2018.8548804.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Sarafim, Diego S., Karina V. Delgado, and Daniel Cordeiro. "Random Forest for Code Smell Detection in JavaScript." In Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2022. http://dx.doi.org/10.5753/eniac.2022.227328.

Full text
Abstract:
JavaScript has become one of the most widely used programming languages. JavaScript is a dynamic, interpreted, and weakly-typed scripting language especially suited for the development of web applications. While these characteristics allow the language to offer high levels of flexibility, they also can make JavaScript code more challenging to write, maintain and evolve. One of the risks that JavaScript and other programming languages are prone to is the presence of code smells. Code smells result from poor programming choices during source code development that negatively influence source code comprehension and maintainability in the long term. This work reports the result of an approach that uses the Random Forest algorithm to detect a set of 11 code smells based on software metrics extracted from JavaScript source code. It also reports the construction of two datasets, one for code smells that affect functions/methods, and another for code smells related to classes, both containing at least 200 labeled positive instances of each code smell and both extracted from a set of 25 open-source JavaScript projects.
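A minimal sketch of the detection approach this abstract describes is given below: a Random Forest classifier trained on per-function software metrics to flag one code smell. The metric names, label column, and CSV file are hypothetical, not the paper's datasets.

```python
# Hypothetical sketch: Random Forest code-smell detector trained on software metrics.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("function_metrics.csv")          # hypothetical labeled dataset
feature_cols = ["loc", "cyclomatic_complexity", "parameter_count", "nesting_depth"]
X, y = df[feature_cols], df["is_long_method"]     # 1 if the function exhibits the smell

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)
clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)

# Precision/recall per class matter here, since smelly instances are usually rare.
print(classification_report(y_test, clf.predict(X_test)))
```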
APA, Harvard, Vancouver, ISO, and other styles
6

Joulaian, Meysam, Sorush Khajepor, Ahmadreza Pishevar, and Yaser Afshar. "Dissipative Particle Dynamics Simulation of Nano Taylor Cone." In ASME 2010 8th International Conference on Nanochannels, Microchannels, and Minichannels collocated with 3rd Joint US-European Fluids Engineering Summer Meeting. ASMEDC, 2010. http://dx.doi.org/10.1115/fedsm-icnmm2010-31089.

Full text
Abstract:
Dissipative particle dynamics (DPD) is an emerging method for simulating problems at mesoscopic time and length scales. In this paper, we present a new algorithm to describe the hydrodynamics of a perfect conductive fluid in the presence of an electric field. The model is based on solving the electrostatic equations in each DPD time step for determining the charge distribution at the fluid interface and, therefore, corresponding electrical forces exerted by the electric field to the particles near the interface. The method is applied to a perfect conductive pendant drop which is immersed in a perfect dielectric and hydrodynamically inactive ambient. We have shown that when the applied voltage is sufficiently high, the drop shape is changed to a cone with an apex angle which is near to the Taylor analytical estimation of 98.6°. Our results reveal that the presented algorithm gives new capabilities to the conventional DPD method for simulating nanoscale problems in the presence of an electric field.
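The algorithmic structure described in this abstract, re-solving the electrostatic problem at every DPD step and adding the resulting interface forces to the particle update, can be sketched schematically as follows. This is not the authors' code; the electrostatics and DPD force routines are placeholders standing in for the real solvers.

```python
# Schematic loop only: electrostatics re-solved each DPD step, forces added to particles.
import numpy as np

N, dt, steps = 200, 0.01, 100
pos = np.random.rand(N, 3)
vel = np.zeros((N, 3))

def dpd_forces(pos, vel):
    """Placeholder for conservative + dissipative + random DPD pair forces."""
    return np.zeros_like(pos)

def interface_particles(pos):
    """Placeholder: indices of particles on the conductor (liquid) interface."""
    return np.arange(N)[pos[:, 2] > 0.9]

def electrostatic_forces(pos, surface_idx, stress=1.0):
    """Placeholder: normal electric stress on interface particles from the solved charge distribution."""
    f = np.zeros_like(pos)
    f[surface_idx, 2] += stress        # stand-in for the Maxwell stress pulling along the field
    return f

for _ in range(steps):
    surf = interface_particles(pos)                              # 1. locate the interface
    f = dpd_forces(pos, vel) + electrostatic_forces(pos, surf)   # 2. solve electrostatics, add forces
    vel += dt * f                                                # 3. advance particles (unit mass)
    pos += dt * vel
```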
APA, Harvard, Vancouver, ISO, and other styles
7

Petillo, John J., Dimitrios N. Panagos, and Kevin L. Jensen. "Modeling field emission array tips using the MICHELLE gun code algorithm." In 2014 IEEE 41st International Conference on Plasma Sciences (ICOPS) held with 2014 IEEE International Conference on High-Power Particle Beams (BEAMS). IEEE, 2014. http://dx.doi.org/10.1109/plasma.2014.7012535.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kwon, Min-Su, and Dong-Kuk Lim. "Combined Random Forest and Genetic Algorithm for Optimal Design of PMa-SynRM for Electric Vehicles." In 2022 IEEE 20th Biennial Conference on Electromagnetic Field Computation (CEFC). IEEE, 2022. http://dx.doi.org/10.1109/cefc55061.2022.9940843.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kasetkasem, T., P. Aonpong, P. Rakwatin, T. Chanwimaluang, and I. Kumazawa. "A novel land cover mapping algorithm based on random forest and Markov random field models." In IGARSS 2016 - 2016 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2016. http://dx.doi.org/10.1109/igarss.2016.7730334.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Mironenko, Aleksey, Sergey Matveev, Vasiliy Slavskiy, and A. Revin. "FOREST ASSESSMENT AND ACCOUNTING SOFTWARE." In Modern machines, equipment and IT solutions for industrial complex: theory and practice. FSBE Institution of Higher Education Voronezh State University of Forestry and Technologies named after G.F. Morozov, 2021. http://dx.doi.org/10.34220/mmeitsic2021_250-255.

Full text
Abstract:
Forestry in Russia is experiencing a great need for digital technologies that can form and generalize existing databases. All participants in forest management, from end users of forest resources to public authorities in the field of forest relations, are interested in the development of such digital technologies. At the same time, current forestry software requires modernization to solve specific problems. The team of the Department of Forestry and Forest Inventory of VGFTU has developed a number of automated systems that make it possible to quickly solve scientific and production problems in the fields of forestry, ecology and nature management. The importance and relevance of this work is reflected in the “Strategy for the development of the forestry complex of the Russian Federation for the period up to 2030”. The modularity and scalability of these systems allow the authors to quickly adjust the source code, keeping the software up to date and in line with the current legal framework of the forestry sector.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Field Code Forest Algorithm"

1

Payment Systems Report - June of 2021. Banco de la República, February 2022. http://dx.doi.org/10.32468/rept-sist-pag.eng.2021.

Full text
Abstract:
Banco de la República provides a comprehensive overview of Colombia’s financial infrastructure in its Payment Systems Report, which is an important product of the work it does to oversee that infrastructure. The figures published in this edition of the report are for the year 2020, a pandemic period in which the containment measures designed and adopted to alleviate the strain on the health system led to a sharp reduction in economic activity and consumption in Colombia, as was the case in most countries. At the start of the pandemic, the Board of Directors of Banco de la República adopted decisions that were necessary to supply the market with ample liquidity in pesos and US dollars to guarantee market stability, protect the payment system and preserve the supply of credit. The pronounced growth in monetary aggregates reflected an increased preference for liquidity, which Banco de la República addressed at the right time. These decisions were implemented through operations that were cleared and settled via the financial infrastructure. The second section of this report, following the introduction, offers an analysis of how the various financial infrastructures in Colombia have evolved and performed. One of the highlights is the large-value payment system (CUD), which registered more momentum in 2020 than during the previous year, mainly because of an increase in average daily remunerated deposits made with Banco de la República by the General Directorate of Public Credit and the National Treasury (DGCPTN), as well as more activity in the sell/buy-back market with sovereign debt. Consequently, with more activity in the CUD, the Central Securities Depository (DCV) experienced an added impetus sparked by an increase in the money market for bonds and securities placed on the primary market by the national government. The value of operations cleared and settled through the Colombian Central Counterparty (CRCC) continues to grow, propelled largely by peso/dollar non-deliverable forward (NDF) contracts. With respect to the CRCC, it is important to note this clearing house has been in charge of managing risks and clearing and settling operations in the peso/dollar spot market since the end of last year, following its merger with the Foreign Exchange Clearing House of Colombia (CCDC). Since the final quarter of 2020, the CRCC has also been responsible for clearing and settlement in the equities market, which was formerly done by the Colombian Stock Exchange (BVC). The third section of this report provides an all-inclusive view of payments in the market for goods and services; namely, transactions carried out by members of the public and non-financial institutions. During the pandemic, inter- and intra-bank electronic funds transfers, which originate mostly with companies, increased in both the number and value of transactions with respect to 2019. However, debit and credit card payments, which are made largely by private citizens, declined compared to 2019. The incidence of payment by check continues to drop, exhibiting quite a pronounced downward trend during the past year. To supplement the information on electronic funds transfers, section three includes a segment (Box 4) characterizing the population with savings and checking accounts, based on data from a survey by Banco de la República concerning the perception of the use of payment instruments in 2019.
There is also a segment (Box 2) on the growth in transactions with a mobile wallet provided by a company specialized in electronic deposits and payments (Sedpe). It shows the number of users and the value of their transactions have increased since the wallet was introduced in late 2017, particularly during the pandemic. In addition, there is a diagnosis of the effects of the pandemic on the payment patterns of the population, based on data related to the use of cash in circulation, payments with electronic instruments, and consumption and consumer confidence. The conclusion is that the collapse in the consumer confidence index and the drop in private consumption led to changes in the public’s payment patterns. Credit and debit card purchases were down, while payments for goods and services through electronic funds transfers increased. These findings, coupled with the considerable increase in cash in circulation, might indicate a possible precautionary cash hoarding by individuals and more use of cash as a payment instrument. There is also a segment (in Focus 3) on the major changes introduced in regulations on the retail-value payment system in Colombia, as provided for in Decree 1692 of December 2020. The fourth section of this report refers to the important innovations and technological changes that have occurred in the retail-value payment system. Four themes are highlighted in this respect. The first is a key point in building the financial infrastructure for instant payments. It involves the design and implementation of overlay schemes, a technological development that allows the various participants in the payment chain to communicate openly. The result is a high degree of interoperability among the different payment service providers. The second topic explores developments in the international debate on central bank digital currency (CBDC). The purpose is to understand how it could impact the retail-value payment system and the use of cash if it were to be issued. The third topic is related to new forms of payment initiation, such as QR codes, biometrics or near field communication (NFC) technology. These seemingly small changes can have a major impact on the user’s experience with the retail-value payment system. The fourth theme is the growth in payments via mobile telephone and the internet. The report ends in section five with a review of two papers on applied research done at Banco de la República in 2020. The first analyzes the extent of the CRCC’s capital, acknowledging the relevant role this infrastructure has acquired in providing clearing and settlement services for various financial markets in Colombia. The capital requirements defined for central counterparties in some jurisdictions are explored, and the risks to be hedged are identified from the standpoint of the service these types of institutions offer to the market and those associated with their corporate activity. The CRCC’s capital levels are analyzed in light of what has been observed in the European Union’s regulations, and the conclusion is that the CRCC has a scheme of security rings very similar to those applied internationally and that the extent of its capital exceeds what is stipulated in Colombian regulations, being sufficient to hedge other risks. The second study presents an algorithm used to identify and quantify the liquidity sources that CUD’s participants use under normal conditions to meet their daily obligations in the local financial market.
This algorithm can be used as a tool to monitor intraday liquidity. Leonardo Villar Gómez, Governor
APA, Harvard, Vancouver, ISO, and other styles
