To view the other types of publications on this topic, follow this link: A priori data.

Journal articles on the topic "A priori data"

Consult the top 50 journal articles for your research on the topic "A priori data".

Next to every work in the bibliography, the option "Add to bibliography" is available. If you use it, the bibliographic reference for the chosen work is generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read an online abstract of the work, provided the relevant parameters are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Filippidis, A. "Data fusion using sensor data and a priori information". Control Engineering Practice 4, no. 1 (January 1996): 43–53. http://dx.doi.org/10.1016/0967-0661(95)00205-x.

2

Pankov, A. R., and A. M. Skuridin. "Data Processing Under A Priori Statistical Uncertainty". IFAC Proceedings Volumes 19, no. 5 (May 1986): 213–17. http://dx.doi.org/10.1016/s1474-6670(17)59796-7.

3

Roberts, R. A. "51458 Limited data tomography using support minimization with a priori data". NDT & E International 27, no. 2 (April 1994): 105–6. http://dx.doi.org/10.1016/0963-8695(94)90364-6.

4

McKee, B. T. A. "Deconvolution of noisy data using a priori constraints". Canadian Journal of Physics 67, no. 8 (1 August 1989): 821–26. http://dx.doi.org/10.1139/p89-142.

Annotation:
The deconvolution of experimental data with noise present is discussed with emphasis on the ill-conditioned nature of the problem and on the need for a priori constraints to select a solution. An analytical Fourier space deconvolution that selects the minimum-norm solution subject to a least-squares constraint is described. The cutoff parameter used in the constraint is evaluated from the statistical fluctuations of the data. This method is illustrated by its application to 3-d image reconstruction for positron emission tomography.
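
As a rough illustration of the idea described above (a Fourier-space deconvolution truncated at a noise-derived cutoff), here is a minimal NumPy sketch. It is not the authors' exact procedure: the blur kernel, cutoff rule, and toy signal are invented for demonstration only.

```python
import numpy as np

def deconvolve_with_cutoff(data, kernel, noise_sigma):
    """Fourier-space deconvolution: invert the kernel only at frequencies
    whose kernel amplitude exceeds a noise-derived cutoff; the remaining
    components are set to zero (a minimum-norm choice)."""
    n = len(data)
    D = np.fft.rfft(data)
    K = np.fft.rfft(kernel, n)
    cutoff = noise_sigma / max(np.abs(D).max(), 1e-12)  # heuristic noise-based cutoff
    X = np.zeros_like(D)
    keep = np.abs(K) > cutoff
    X[keep] = D[keep] / K[keep]
    return np.fft.irfft(X, n)

# Toy usage: blur a spike train with a Gaussian kernel, add noise, deconvolve.
rng = np.random.default_rng(0)
x = np.zeros(256)
x[[60, 130, 200]] = [1.0, 0.5, 0.8]
t = np.arange(256)
psf = np.exp(-0.5 * ((t - 128) / 4.0) ** 2)
psf /= psf.sum()
kernel = np.fft.ifftshift(psf)            # centre the point-spread function at index 0
blurred = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(kernel), 256)
noisy = blurred + 0.01 * rng.standard_normal(256)
estimate = deconvolve_with_cutoff(noisy, kernel, 0.01)
```
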
5

Lestari, Putri Anggraini, Marnis Nasution, and Syaiful Zuhri Harahap. "Analisa Data Penjualan Pada Apotek Ritonga Farma Menggunakan Data Mining Apriori". INFORMATIKA 12, no. 2 (14 December 2024): 180–89. https://doi.org/10.36987/informatika.v12i2.5651.

Annotation:
A pharmacy is a place or business that is specifically dedicated to providing medicines and other health products to the public. This place is also known as a drugstore or drug store in some countries. Pharmacies provide medicines, both prescribed by doctors and over-the-counter, to help patients cope with health problems they are experiencing. Some pharmacies also offer additional services such as blood pressure checks, vaccinations, simple health checks, and health counseling to the public. In applying a priori methods to pharmacy, a deep understanding of data structure and proper product classification is needed to overcome this problem. By knowing the pattern of frequent purchases, pharmacies can place items that are often purchased together close together on shelves or in strategic locations. This can increase the convenience of buyers and speed up the purchase process. A priori methods are techniques in data mining that are used to find hidden patterns or associations in large datasets. A priori methods look for relationships between items in a dataset that often appear together. The main principle of the a priori method is that if an item set appears frequently together, then it is likely that the item set will also appear frequently together in other transactions.
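
The Apriori workflow referred to here (and in several other entries in this list) can be summarized in a few lines: count the support of candidate item sets level by level, keep only those above a minimum support, and derive association rules above a minimum confidence. The sketch below is generic; the transactions, item names, and thresholds are invented examples, not data from the paper.

```python
from itertools import combinations

transactions = [
    {"paracetamol", "vitamin c", "mask"},
    {"paracetamol", "vitamin c"},
    {"cough syrup", "mask"},
    {"paracetamol", "mask"},
]
min_support, min_confidence = 0.5, 0.7

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

# Level-wise search: a (k+1)-itemset can only be frequent if every one of its
# k-item subsets is frequent (the Apriori property).
items = sorted({i for t in transactions for i in t})
frequent = [frozenset([i]) for i in items if support({i}) >= min_support]
all_frequent = list(frequent)
k = 1
while frequent:
    candidates = {a | b for a in frequent for b in frequent if len(a | b) == k + 1}
    frequent = [c for c in candidates
                if all(frozenset(s) in set(all_frequent) for s in combinations(c, k))
                and support(c) >= min_support]
    all_frequent += frequent
    k += 1

# Association rules: confidence(A -> B) = support(A union B) / support(A).
for itemset in all_frequent:
    if len(itemset) < 2:
        continue
    for r in range(1, len(itemset)):
        for antecedent in map(frozenset, combinations(itemset, r)):
            conf = support(itemset) / support(antecedent)
            if conf >= min_confidence:
                print(set(antecedent), "->", set(itemset - antecedent),
                      f"support={support(itemset):.2f} confidence={conf:.2f}")
```
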
6

Yi, Guodong, Chuanyuan Zhou, Yanpeng Cao, and Hangjian Hu. "Hybrid Assembly Path Planning for Complex Products by Reusing a Priori Data". Mathematics 9, no. 4 (17 February 2021): 395. http://dx.doi.org/10.3390/math9040395.

Annotation:
Assembly path planning (APP) for complex products is challenging due to the large number of parts and intricate coupling requirements. A hybrid assembly path planning method is proposed herein that reuses a priori paths to improve the efficiency and success ratio. The assembly path is initially segmented to improve its reusability. Subsequently, the planned assembly paths are employed as a priori paths to establish an a priori tree, which is expanded according to the bounding sphere of the part to create the a priori space for path searching. Three rapidly exploring random tree (RRT)-based algorithms are studied for path planning based on a priori path reuse. The RRT* algorithm establishes the new path exploration tree in the early planning stage when there is no a priori path to reuse. The static RRT* (S-RRT*) and dynamic RRT* (D-RRT*) algorithms form the connection between the exploration tree and the a priori tree with a pair of connection points after the extension of the exploration tree to a priori space. The difference between the two algorithms is that the S-RRT* algorithm directly reuses an a priori path and obtains a new path through static backtracking from the endpoint to the starting point. However, the D-RRT* algorithm further extends the exploration tree via the dynamic window approach to avoid collision between an a priori path and obstacles. The algorithm subsequently obtains a new path through dynamic and non-continuous backtracking from the endpoint to the starting point. A hybrid process combining the RRT*, S-RRT*, and D-RRT* algorithms is designed to plan the assembly path for complex products in several cases. The performances of these algorithms are compared, and simulations indicate that the S-RRT* and D-RRT* algorithms are significantly superior to the RRT* algorithm in terms of the efficiency and success ratio of APP. Therefore, hybrid path planning combining the three algorithms is helpful to improving the assembly path planning of complex products.
7

Peysson, Flavien, Abderrahmane Boubezoul, Mustapha Ouladsine, and Rachid Outbib. "A Data Driven Prognostic Methodology without a Priori Knowledge". IFAC Proceedings Volumes 42, no. 8 (2009): 1462–67. http://dx.doi.org/10.3182/20090630-4-es-2003.00238.

8

Menon, Nanda K., and John A. Hunt. "Optimizing EELS data sets using 'a priori' spectrum simulation". Microscopy and Microanalysis 8, S02 (August 2002): 620–21. http://dx.doi.org/10.1017/s1431927602106040.

9

Suhadi, Suhadi, Carsten Last, and Tim Fingscheidt. "A Data-Driven Approach to A Priori SNR Estimation". IEEE Transactions on Audio, Speech, and Language Processing 19, no. 1 (January 2011): 186–95. http://dx.doi.org/10.1109/tasl.2010.2045799.

10

Nasari, Fina Nasari. "Algoritma A Priori Dalam Pengelompokkan Data Pendaftaran Mahasiswa Baru". Jurnal Sains dan Ilmu Terapan 4, no. 1 (1 July 2021): 40–45. http://dx.doi.org/10.59061/jsit.v4i1.102.

Annotation:
New student registration is the process of admitting new students. This admission process, carried out every year, produces ever larger amounts of data. Large volumes of data that are not put to good use merely fill up storage memory. Data mining is one solution for obtaining new information from the processing of old data. Data mining has several functions, including association, classification, clustering, prediction, and pattern recognition. The Apriori algorithm is a data mining algorithm that examines the relationship between one item set and another (the association between item sets). The relationship examined in this study is that between the applicants' secondary school of origin and the study program they choose. The results show that prospective students who choose the information systems study program come from vocational high schools (SMK), with a confidence value of 83%; likewise, for the informatics engineering study program, prospective students come from vocational high schools (SMK), with a confidence value of 83%.
11

Kogan, M. M., and A. V. Stepanov. "Design of Suboptimal Robust Controllers Based on a Priori and Experimental Data". Avtomatika i telemehanika, no. 8 (15 December 2023): 24–42. http://dx.doi.org/10.31857/s0005231023080020.

Annotation:
This paper develops a novel unified approach to designing suboptimal robust control laws for uncertain objects with different criteria based on a priori information and experimental data. The guaranteed estimates of the γ0, generalized H2, and H∞ norms of a closed loop system and the corresponding suboptimal robust control laws are expressed in terms of solutions of linear matrix inequalities considering a priori knowledge and object modeling data. A numerical example demonstrates the improved quality of control systems when a priori and experimental data are used together.
12

Ferreira, Thales Rangel, Luiz Alberto Beijo, Gilberto Rodrigues Liska, and Giulia Eduarda Bento. "MODELAGEM BAYESIANA DA TEMPERATURA MÁXIMA DO AR EM DIVINÓPOLIS-MG". Nativa 12, no. 3 (19 August 2024): 449–56. http://dx.doi.org/10.31413/nat.v12i3.17665.

Annotation:
This research aimed to model the behavior of the quarterly maximum air temperature in the city of Divinópolis-MG by fitting the Generalized Extreme Value (GEV) distribution to historical series of maximum temperatures using two distinct methods: Maximum Likelihood Estimation (MLE) and Bayesian inference. For each return period, the maximum temperature return levels for this locality were also calculated, assessing the accuracy and the mean prediction error (MPE). For the calculation of return levels, the MLE method and Bayesian approaches with different prior structures were used, considering informative priors (based on data from Belo Horizonte and Lavras) and non-informative priors. Analysis of the MPE and accuracy results revealed that, for all quarters, Bayesian inference provided better estimates of the maximum temperature than the MLE method. The informative prior distribution based on data from Lavras-MG gave the most precise predictions of maximum temperature for the second and third quarters, while the non-informative prior distribution was most precise for the first and fourth quarters in Divinópolis-MG. Keywords: generalized extreme value distribution; maximum likelihood; priors.
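
As general background for return-level estimates of this kind (not the Bayesian procedure used in the paper), a GEV distribution can be fitted to block maxima by maximum likelihood and the T-period return level read off as the 1 − 1/T quantile. The sketch below uses SciPy with invented temperature values.

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical quarterly maximum temperatures (degrees C); replace with real data.
maxima = np.array([33.1, 34.5, 32.8, 35.2, 33.9, 34.0, 36.1, 33.5,
                   34.8, 35.5, 32.9, 34.2, 35.0, 33.7, 34.9, 36.4])

# Maximum-likelihood fit; note SciPy's shape parameter c corresponds to -xi
# in the usual GEV parameterization.
c, loc, scale = genextreme.fit(maxima)

# T-period return level: the value exceeded on average once every T periods.
for T in (10, 50, 100):
    level = genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
    print(f"{T}-period return level: {level:.1f} C")
```
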
13

Student. "DATA-DREDGING IS DANGEROUS". Pediatrics 84, no. 5 (1 November 1989): A34. http://dx.doi.org/10.1542/peds.84.5.a34.

14

Hidayat, Tomi, Ibnu Rasyid Munthe, and Angga Putra Juledi. "Analisis Data Penjualan Menggunakan Algoritma Apriori pada Analisis Kopi". INFORMATIKA 12, no. 3 (14 December 2024): 443–52. https://doi.org/10.36987/informatika.v12i3.6064.

Annotation:
Data mining is a technique for finding, searching, or extracting new information or knowledge from a very large set of data by integrating it with other disciplines such as statistics, artificial intelligence, and machine learning, which makes data mining one of the tools to analyze data and produce useful information. Association rule mining is a process in data mining that determines all associative rules meeting the minimum requirements for support (minsup) and confidence (minconf) in a database. Two methods can be used for association rules, namely the a priori method and the FP-Growth method; FP-Growth was developed from the a priori method, which still has some shortcomings, such as the many patterns of data combinations that appear frequently (many frequent patterns), the many types of items with low minimum support fulfillment, and the fairly long run time caused by scanning the database repeatedly to obtain the ideal frequent patterns. In this study, the method used is the a priori algorithm, an alternative way to find the most frequently appearing item sets (frequent itemsets) without candidate generation, which is suitable for analyzing transaction data. Analisis Kopi is a coffee shop engaged in the sale of food and beverages with many sales transactions. Opened on 7 November 2021, Analisis Kopi has reached 245 sales transactions, and this transaction data continues to grow every day.
15

Huhtala, A., and S. Bossuyt. "Damage localization from vibration data using hierarchical a priori assumptions". Journal of Physics: Conference Series 181 (1 August 2009): 012088. http://dx.doi.org/10.1088/1742-6596/181/1/012088.

16

Movchan, I. B., and A. A. Yakovleva. "REFINED ASSESSMENT OF SEISMIC MICROZONATION WITH A PRIORI DATA OPTIMISATION". Journal of Mining Institute 236, no. 2 (25 April 2019): 133–41. http://dx.doi.org/10.31897/pmi.2019.2.133.

17

Soikkeli, Fanni, Mahmoud Hashim, Mario Ouwens, Maarten Postma, and Bart Heeg. "Extrapolating Survival Data Using Historical Trial–Based a Priori Distributions". Value in Health 22, no. 9 (September 2019): 1012–17. http://dx.doi.org/10.1016/j.jval.2019.03.017.

18

Leeb, H. "Profile reconstruction from neutron reflectivity data and a priori knowledge". Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 586, no. 1 (February 2008): 105–9. http://dx.doi.org/10.1016/j.nima.2007.11.040.

19

Dale, M. B. "Continuum or community: a priori assumption or data-dependent choice?" Community Ecology 4, no. 2 (December 2003): 129–39. http://dx.doi.org/10.1556/comec.4.2003.2.2.

20

Wei Tang, Zhenwei Shi, Ying Wu, and Changshui Zhang. "Sparse Unmixing of Hyperspectral Data Using Spectral A Priori Information". IEEE Transactions on Geoscience and Remote Sensing 53, no. 2 (February 2015): 770–83. http://dx.doi.org/10.1109/tgrs.2014.2328336.

21

van Winden, Wouter A., Joseph J. Heijnen, Peter J. T. Verheijen, and Johan Grievink. "A priori analysis of metabolic flux identifiability from 13C-labeling data". Biotechnology and Bioengineering 74, no. 6 (2001): 505–16. http://dx.doi.org/10.1002/bit.1142.

22

von Clarmann, T., and U. Grabowski. "Elimination of hidden a priori information from remotely sensed profile data". Atmospheric Chemistry and Physics Discussions 6, no. 4 (18 July 2006): 6723–51. http://dx.doi.org/10.5194/acpd-6-6723-2006.

Annotation:
Abstract. Profiles of atmospheric state parameters retrieved from remote measurements often contain a priori information which causes complication in the use of data for validation, comparison with models, or data assimilation. For such applications it often is desirable to remove the a priori information from the data product. If the retrieval involves an ill-posed inversion problem, formal removal of the a priori information requires resampling of the data on a coarser grid, which, however, is a prior constraint in itself. The fact that the trace of the averaging kernel matrix of a retrieval is equivalent to the number of degrees of freedom of the retrieval is used to define an appropriate information-centered representation of the data where each data point represents one degree of freedom. Since regridding implies further degradation of the data and thus causes additional loss of information, a re-regularization scheme has been developed which allows resampling without additional loss of information. For a typical ClONO2 profile retrieved from spectra as measured by the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS), the constrained retrieval has 9.7 degrees of freedom. After application of the proposed transformation to a coarser information-centered altitude grid, there are exactly 9 degrees of freedom left, and the averaging kernel on the coarse grid is unity. Pure resampling on the information-centered grid without re-regularization would reduce the degrees of freedom to 7.1.
23

von Clarmann, T., and U. Grabowski. "Elimination of hidden a priori information from remotely sensed profile data". Atmospheric Chemistry and Physics 7, no. 2 (23 January 2007): 397–408. http://dx.doi.org/10.5194/acp-7-397-2007.

Annotation:
Abstract. Profiles of atmospheric state variables retrieved from remote measurements often contain a priori information which causes complication in the statistical use of data and in the comparison with other measured or modeled data. For such applications it often is desirable to remove the a priori information from the data product. If the retrieval involves an ill-posed inversion problem, formal removal of the a priori information requires resampling of the data on a coarser grid, which in some sense, however, is a prior constraint in itself. The fact that the trace of the averaging kernel matrix of a retrieval is equivalent to the number of degrees of freedom of the retrieval is used to define an appropriate information-centered representation of the data where each data point represents one degree of freedom. Since regridding implies further degradation of the data and thus causes additional loss of information, a re-regularization scheme has been developed which allows resampling without additional loss of information. For a typical ClONO2 profile retrieved from spectra as measured by the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS), the constrained retrieval has 9.7 degrees of freedom. After application of the proposed transformation to a coarser information-centered altitude grid, there are exactly 9 degrees of freedom left, and the averaging kernel on the coarse grid is unity. Pure resampling on the information-centered grid without re-regularization would reduce the degrees of freedom to 7.1 (6.7) for a staircase (triangular) representation scheme.
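
The quantity at the heart of both versions of this paper is the number of degrees of freedom of a retrieval, which equals the trace of the averaging kernel matrix. Below is a generic optimal-estimation illustration of that relation (random toy Jacobian and covariances, not the MIPAS processing chain).

```python
import numpy as np

rng = np.random.default_rng(1)
n_state, n_meas = 20, 12
K = rng.standard_normal((n_meas, n_state))   # Jacobian of the forward model
Se_inv = np.eye(n_meas)                      # inverse measurement covariance
Sa_inv = 4.0 * np.eye(n_state)               # inverse a priori covariance (regularization)

# Averaging kernel of a regularized least-squares retrieval:
# A = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 K
G = np.linalg.solve(K.T @ Se_inv @ K + Sa_inv, K.T @ Se_inv)
A = G @ K

dof = np.trace(A)   # degrees of freedom for signal
print(f"degrees of freedom: {dof:.2f} (out of {n_state} state elements)")
```
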
24

LANDER, LESLIE C., and YU LO CYRUS CHANG. "GENERATION AND RETRIEVAL OF PROBABILISTIC DIAGNOSTIC DATA FOR REAL-TIME SYSTEM FAILURES". International Journal of Reliability, Quality and Safety Engineering 02, no. 04 (December 1995): 361–81. http://dx.doi.org/10.1142/s0218539395000265.

Annotation:
The generation and retrieval of comparison-based probabilistic diagnostic data for fault location in homogeneous systems is presented. A distance-based approach avoids the exponential size of a priori test information. The operation-time computation complexity is O(m), where m is the number of links and O( log m) when some prior reference data is stored. Optimizations for hard real-time systems are provided.
25

Tselishchev, Vitaly. "Thomas Ryckman. Relativized A Priori: Evaluation and Criticism. (Translated by V. V. Tselishchev)". Respublica Literaria 5, no. 2 (3 July 2024): 150–66. http://dx.doi.org/10.47850/rl.2024.5.2.150-166.

Annotation:
The paper considers M. Friedman's modification of Reichenbach's idea of ‘relativization a priori’. Reichenbach's explication a priori in the form of ‘coordination’ conditions for the correlation of empirical data and mathematical physics is considered by Friedman in the perspective of establishing the nature of the transcendental constitution of scientific objects. In turn, T. Ryckman puts forward an alternative version of relativization a priori, based on the intentionality of a scientific object, in the spirit of Husserl's phenomenology. In general, the paper shows that transcendental constitution is a specifically philosophical, moreover, phenomenological, and not a physical problem.
26

Najib, Bella Audi, and Nining Suryani. "Penerapan Data Mining Terhadap Data Penjualan Lapis Bogor Sangkuriang Dengan Metode Algoritma Apriori". Jurnal Teknik Komputer 6, no. 1 (10 January 2020): 61–70. http://dx.doi.org/10.31294/jtk.v6i1.6765.

Annotation:
Determination of the pattern of purchasing goods and the layout of goods based on the tendency of consumers to buy goods can be one solution for the Lapis Bogor Sangkuriang shop in developing marketing strategies so as to increase sales of Lapis Bogor Sangkuriang products. The algorithm that can be used to determine the pattern of purchasing goods and this layout is the Apriori algorithm, one of the data mining algorithms for forming association rules. Starting from the frequent items found by the a priori algorithm, which produces a small set of frequent items by minimizing the completion stages beginning at k-1 items (the first stage of the a priori algorithm), the FP-Growth method is then applied without candidate generation; compared with the a priori algorithm, this method is efficient in terms of time, its completion stages are faster, it produces fewer frequent items, and it describes the frequent item results in more detail, because frequent results with a value &lt;1 are still shown rather than deleted. This research found that the best-selling layer cake products are Original Cheese, Cheese Brownies, and Full Talas. Based on the final association rules, it is known that if you buy the Lapis Bogor Sangkuriang Original Cheese layer cake, you will also buy the Lapis Bogor Sangkuriang Cheese Brownies and Full Talas Cheese, with a support value of 30% and a confidence value of 70%. Based on these results, the company can decide which strategy to develop next.
27

Mattia, F., G. Satalino, V. R. N. Pauwels, and A. Loew. "Soil moisture retrieval through a merging of multi-temporal L-band SAR data and hydrologic modelling". Hydrology and Earth System Sciences Discussions 5, no. 6 (3 December 2008): 3479–515. http://dx.doi.org/10.5194/hessd-5-3479-2008.

Annotation:
Abstract. The objective of the study is to investigate the potential of retrieving superficial soil moisture content (mv) from multi-temporal L-band synthetic aperture radar (SAR) data and hydrologic modelling. The study focuses on assessing the performances of an L-band SAR retrieval algorithm intended for agricultural areas and for watershed spatial scales (e.g. from 100 to 10 000 km2). The algorithm transforms temporal series of L-band SAR data into soil moisture contents by using a constrained minimization technique integrating a priori information on soil parameters. The rationale of the approach consists of exploiting soil moisture predictions, obtained at coarse spatial resolution (e.g. 15–30 km2) by point scale hydrologic models (or by simplified estimators), as a priori information for the SAR retrieval algorithm that provides soil moisture maps at high spatial resolution (e.g. 0.01 km2). In the present form, the retrieval algorithm applies to cereal fields and has been assessed on simulated and experimental data. The latter were acquired by the airborne E-SAR system during the AgriSAR campaign carried out over the Demmin site (Northern Germany) in 2006. Results indicate that the retrieval algorithm always improves the a priori information on soil moisture content though the improvement may be marginal when the accuracy of prior mv estimates is better than 5%.
28

Mattia, F., G. Satalino, V. R. N. Pauwels, and A. Loew. "Soil moisture retrieval through a merging of multi-temporal L-band SAR data and hydrologic modelling". Hydrology and Earth System Sciences 13, no. 3 (13 March 2009): 343–56. http://dx.doi.org/10.5194/hess-13-343-2009.

Annotation:
Abstract. The objective of the study is to investigate the potential of retrieving superficial soil moisture content (mv) from multi-temporal L-band synthetic aperture radar (SAR) data and hydrologic modelling. The study focuses on assessing the performances of an L-band SAR retrieval algorithm intended for agricultural areas and for watershed spatial scales (e.g. from 100 to 10 000 km2). The algorithm transforms temporal series of L-band SAR data into soil moisture contents by using a constrained minimization technique integrating a priori information on soil parameters. The rationale of the approach consists of exploiting soil moisture predictions, obtained at coarse spatial resolution (e.g. 15–30 km2) by point scale hydrologic models (or by simplified estimators), as a priori information for the SAR retrieval algorithm that provides soil moisture maps at high spatial resolution (e.g. 0.01 km2). In the present form, the retrieval algorithm applies to cereal fields and has been assessed on simulated and experimental data. The latter were acquired by the airborne E-SAR system during the AgriSAR campaign carried out over the Demmin site (Northern Germany) in 2006. Results indicate that the retrieval algorithm always improves the a priori information on soil moisture content though the improvement may be marginal when the accuracy of prior mv estimates is better than 5%.
29

van Zon, Tim, and Kabir Roy-Chowdhury. "Structural inversion of gravity data using linear programming". GEOPHYSICS 71, no. 3 (May 2006): J41–J50. http://dx.doi.org/10.1190/1.2197491.

Annotation:
Structural inversion of gravity data — deriving robust images of the subsurface by delineating lithotype boundaries using density anomalies — is an important goal in a range of exploration settings (e.g., ore bodies, salt flanks). Application of conventional inversion techniques in such cases, using L2-norms and regularization, produces smooth results and is thus suboptimal. We investigate an L1-norm-based approach which yields structural images without the need for explicit regularization. The density distribution of the subsurface is modeled with a uniform grid of cells. The density of each cell is inverted by minimizing the L1-norm of the data misfit using linear programming (LP) while satisfying a priori density constraints. The estimate of the noise level in a given data set is used to qualitatively determine an appropriate parameterization. The 2.5D and 3D synthetic tests adequately reconstruct the structure of the test models. The quality of the inversion depends upon a good prior estimation of the minimum depth of the anomalous body. A comparison of our results with one using truncated singular value decomposition (TSVD) on a noisy synthetic data set favors the LP-based method. There are two advantages in using LP for structural inversion of gravity data. First, it offers a natural way to incorporate a priori information regarding the model parameters. Second, it produces subsurface images with sharp boundaries (structure).
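
The LP formulation sketched in this abstract (minimize the L1 data misfit subject to a priori density bounds) can be written by introducing one slack variable per datum. Below is a generic SciPy sketch with an invented sensitivity matrix and bounds, not the authors' gravity kernel.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_data, n_cells = 30, 15
G = rng.standard_normal((n_data, n_cells))           # toy sensitivity matrix
m_true = np.zeros(n_cells)
m_true[5:8] = 0.4                                    # blocky density anomaly
d = G @ m_true + 0.01 * rng.standard_normal(n_data)  # noisy data

# Minimize ||G m - d||_1 subject to 0 <= m <= 0.5 (a priori density bounds).
# Introduce slacks t >= |G m - d|: variables x = [m, t], objective sum(t).
I = np.eye(n_data)
A_ub = np.block([[G, -I], [-G, -I]])
b_ub = np.concatenate([d, -d])
c = np.concatenate([np.zeros(n_cells), np.ones(n_data)])
bounds = [(0.0, 0.5)] * n_cells + [(0.0, None)] * n_data

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
m_est = res.x[:n_cells]
print("recovered densities:", np.round(m_est, 2))
```
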
30

Syahara, Zahra, Rika Nur Adiha, and Agus Perdana Windarto. "Implementasi Data Mining Algoritma Apriori Pada Sistem Persediaan Bahan Bangunan Di Karang Sari". Brahmana: Jurnal Penerapan Kecerdasan Buatan 2, no. 2 (30 June 2021): 107–15. http://dx.doi.org/10.30645/brahmana.v2i2.72.

Annotation:
The development of construction in Indonesia is increasing rapidly, therefore building materials are needed to be used in building a construction. Every building materials store must have a transaction system and an inventory system. Whether it's an efficient or less efficient inventory system. For this reason, this research was conducted to help the owner or manager of a building shop to more easily determine the combination pattern of supplies and purchases of building goods. The a priori algorithm is used in this study because the a priori algorithm is suitable in connecting the itemset combination patterns. Therefore the a priori algorithm is suitable in determining the purchase combination pattern in order to get a good inventory system. And from the process carried out, a confidence value of 75% was obtained, not only that this research was also assisted by a priori algorithm supporting application, namely the Tanagra application.
31

KOSAKA, Manabu, Fumitaka KIMURA, and Hiroshi SHIBATA. "Adaptive Law with Variable Dead Zone Width Using a Priori Data". Transactions of the Society of Instrument and Control Engineers 34, no. 8 (1998): 1025–32. http://dx.doi.org/10.9746/sicetr1965.34.1025.

32

Borgatti, Stephen P. "A Statistical Method for Comparing Aggregate Data Across A Priori Groups". Field Methods 14, no. 1 (February 2002): 88–107. http://dx.doi.org/10.1177/1525822x02014001006.

33

AVRUNIN, Oleg. "Using a priori data for segmentation anatomical structures of the brain". PRZEGLĄD ELEKTROTECHNICZNY 1, no. 5 (5 May 2017): 104–7. http://dx.doi.org/10.15199/48.2017.05.20.

34

Anderson, Richard M., Victor I. Koren, and Seann M. Reed. "Using SSURGO data to improve Sacramento Model a priori parameter estimates". Journal of Hydrology 320, no. 1–2 (March 2006): 103–16. http://dx.doi.org/10.1016/j.jhydrol.2005.07.020.

35

Mateer, C. L., H. U. Dütsch, J. Staehelin, and J. J. DeLuisi. "Influence of a priori profiles on trend calculations from Umkehr data". Journal of Geophysical Research: Atmospheres 101, no. D11 (1 July 1996): 16779–87. http://dx.doi.org/10.1029/95jd03794.

36

Ranade, Rishikesh, and Tarek Echekki. "A framework for data-based turbulent combustion closure: A priori validation". Combustion and Flame 206 (August 2019): 490–505. http://dx.doi.org/10.1016/j.combustflame.2019.05.028.

37

Grebennikov, A. I. "Algorithms for experimental data analysis allowing for additional a priori information". Computational Mathematics and Modeling 2, no. 2 (1991): 148–52. http://dx.doi.org/10.1007/bf01128925.

38

Wu, Jin-Long, Jian-Xun Wang, Heng Xiao, and Julia Ling. "A Priori Assessment of Prediction Confidence for Data-Driven Turbulence Modeling". Flow, Turbulence and Combustion 99, no. 1 (25 March 2017): 25–46. http://dx.doi.org/10.1007/s10494-017-9807-0.

39

Gupta, J. Sen. "A Priori Error Bounds for Parabolic Interface Problems with Measure Data". Numerical Analysis and Applications 16, no. 3 (September 2023): 259–75. http://dx.doi.org/10.1134/s1995423923030072.

40

Høyer, Anne‐Sophie, Flemming Jørgensen, Holger Lykke‐Andersen, and Anders Vest Christiansen. "Iterative modelling of AEM data based on a priori information from seismic and borehole data". Near Surface Geophysics 12, no. 5 (January 2014): 635–50. http://dx.doi.org/10.3997/1873-0604.2014024.

41

Zhang, Chi, Li Tong, Ying Zeng, Jingfang Jiang, Haibing Bu, Bin Yan, and Jianxin Li. "Automatic Artifact Removal from Electroencephalogram Data Based on A Priori Artifact Information". BioMed Research International 2015 (2015): 1–8. http://dx.doi.org/10.1155/2015/720450.

Annotation:
Electroencephalogram (EEG) is susceptible to various nonneural physiological artifacts. Automatic artifact removal from EEG data remains a key challenge for extracting relevant information from brain activities. To adapt to variable subjects and EEG acquisition environments, this paper presents an automatic online artifact removal method based on a priori artifact information. The combination of discrete wavelet transform and independent component analysis (ICA), wavelet-ICA, was utilized to separate artifact components. The artifact components were then automatically identified using a priori artifact information, which was acquired in advance. Subsequently, signal reconstruction without artifact components was performed to obtain artifact-free signals. The results showed that, using this automatic online artifact removal method, there were statistically significant improvements in the classification accuracies in both experiments, namely motor imagery and emotion recognition.
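
A highly simplified sketch of the general approach follows (ICA decomposition, automatic flagging of artifact components by similarity to a previously recorded artifact waveform, reconstruction without them). The wavelet stage is omitted and the correlation-based selection rule is an assumption for illustration, not the paper's exact criterion; scikit-learn's FastICA stands in for the ICA implementation.

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_artifacts(eeg, artifact_template, corr_threshold=0.7, n_components=None):
    """eeg: (n_samples, n_channels); artifact_template: (n_samples,) a priori
    artifact time course (e.g. an EOG recording). Returns cleaned EEG."""
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(eeg)                 # (n_samples, n_components)

    cleaned_sources = sources.copy()
    for k in range(sources.shape[1]):
        r = np.corrcoef(sources[:, k], artifact_template)[0, 1]
        if abs(r) >= corr_threshold:                 # flagged as an artifact component
            cleaned_sources[:, k] = 0.0

    return ica.inverse_transform(cleaned_sources)    # back to channel space

# Toy usage: two oscillatory sources plus one blink-like artifact mixed into 4 channels.
t = np.linspace(0, 10, 2000)
blink = (np.sin(2 * np.pi * 0.25 * t) > 0.98).astype(float)
sources = np.c_[np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 6 * t + 1.0), 5.0 * blink]
mixing = np.random.default_rng(0).standard_normal((3, 4))
eeg = sources @ mixing
cleaned = remove_artifacts(eeg, blink, n_components=3)
```
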
42

Mardiaha, Ainul, and Yulia Yulia. "Implementasi Data Mining Menggunakan Algoritma Apriori Pada Penjualan Suku Cadang Motor". Jurnal Ilmu Komputer 14, no. 2 (30 September 2021): 125. http://dx.doi.org/10.24843/jik.2021.v14.i02.p07.

Annotation:
This research was carried out to simplify or assist the Candra Motor workshop owners in managing data and archives of motorcycle spare part sales by applying the a priori algorithm, a data mining method. Data mining is an operation that uses a particular technique or method to look for different patterns or shapes in selected data. Sales data for one year, comprising 15 selected items, were analyzed with the a priori algorithm. The a priori algorithm mines data using association rules to determine the associative relationships within an item combination. In the a priori algorithm, frequent itemset-1, frequent itemset-2, and frequent itemset-3 are determined so that association rules can be obtained from the previously selected data. To obtain the frequent itemsets, each selected data set must meet the minimum support and minimum confidence requirements. This study used a minimum support of at least 7 (0.583) and a minimum confidence of 90%, and several association rules were obtained; the association rules calculated manually and those obtained with the WEKA software gave the same results. By fulfilling the minimum support and minimum confidence requirements, the best-selling spare parts are inner tubes, Yamaha oil, and MPX oil.
43

Li, Dewang, Meilan Qiu, and Zhongyi Ke. "Bayesian Estimation Analysis of Bernoulli Measurement Error Model for Longitudinal Data". International Journal of Applied Physics and Mathematics 10, no. 4 (2020): 160–66. http://dx.doi.org/10.17706/ijapm.2020.10.4.160-166.

Annotation:
The Bayesian method is used to study the inference of the semi-parametric measurement error model (MEs) with longitudinal data. A semi-parametric Bayesian method combining a fracture prior with Gibbs sampling and the Metropolis-Hastings (MH) algorithm is applied to simulate observations from the posterior distribution, and the combined Bayesian statistics of unknown parameters and measurement errors are obtained. We obtained Bayesian estimates of the parameters and covariates of the measurement error model. Under three different prior assumptions, four simulation studies illustrate the effectiveness and utility of the proposed method.
44

Gildea, Richard J., David G. Waterman, James M. Parkhurst, Danny Axford, Geoff Sutton, David I. Stuart, Nicholas K. Sauter, Gwyndaf Evans, and Graeme Winter. "New methods for indexing multi-lattice diffraction data". Acta Crystallographica Section D Biological Crystallography 70, no. 10 (27 September 2014): 2652–66. http://dx.doi.org/10.1107/s1399004714017039.

Annotation:
A new indexing method is presented which is capable of indexing multiple crystal lattices from narrow wedges of diffraction data. The method takes advantage of a simplification of Fourier transform-based methods that is applicable when the unit-cell dimensions are known a priori. The efficacy of this method is demonstrated with both semi-synthetic multi-lattice data and real multi-lattice data recorded from crystals of ∼1 µm in size, where it is shown that up to six lattices can be successfully indexed and subsequently integrated from a 1° wedge of data. Analysis is presented which shows that improvements in data-quality indicators can be obtained through accurate identification and rejection of overlapping reflections prior to scaling.
45

Heckel, A., S. W. Kim, G. J. Frost, A. Richter, M. Trainer, and J. P. Burrows. "Influence of low spatial resolution a priori data on tropospheric NO2 satellite retrievals". Atmospheric Measurement Techniques 4, no. 9 (9 September 2011): 1805–20. http://dx.doi.org/10.5194/amt-4-1805-2011.

Annotation:
Abstract. The retrieval of tropospheric columns of NO2 and other trace gases from satellite observations of backscattered solar radiation relies on the use of accurate a priori information. The spatial resolution of current space sensors is often significantly higher than that of the a priori datasets used, introducing uncertainties from spatial misrepresentation. In this study, the effect of spatial under-sampling of a priori data on the retrieval of NO2 columns was studied for a typical coastal area (around San Francisco). High-resolution (15 × 15 km2) NO2 a priori data from the WRF-Chem model in combination with high-resolution MODIS surface reflectance and aerosol data were used to investigate the uncertainty introduced by applying a priori data at typical global chemical transport model resolution. The results show that the relative uncertainties can be large (more than a factor of 2 if all a priori data used is at the coarsest resolution) for individual measurements, mainly due to spatial variations in NO2 profile and surface albedo, with smaller contributions from aerosols and surface height changes. Similar sensitivities are expected for other coastal regions and localised sources such as power plants, highlighting the need for high-resolution a priori data in quantitative analysis of the spatial patterns retrieved from satellite observations of tropospheric pollution.
46

Boucher, Jean-Philippe, and Rofick Inoussa. "A POSTERIORI RATEMAKING WITH PANEL DATA". ASTIN Bulletin 44, no. 3 (23 April 2014): 587–612. http://dx.doi.org/10.1017/asb.2014.11.

Annotation:
Ratemaking is one of the most important tasks of non-life actuaries. Usually, the ratemaking process is done in two steps. In the first step, a priori ratemaking, an a priori premium is computed based on the characteristics of the insureds. In the second step, called the a posteriori ratemaking, the past claims experience of each insured is combined with the a priori premium to set the final net premium. In practice, for automobile insurance, this correction is usually done with bonus-malus systems, or variations on them, which offer many advantages. In recent years, insurers have accumulated longitudinal information on their policyholders, and actuaries can now use many years of information for a single insured. For this kind of data, called panel or longitudinal data, we propose an alternative to the two-step ratemaking approach and argue this old approach should no longer be used. As opposed to a posteriori models of cross-section data, the models proposed in this paper generate premiums based on empirical results rather than inductive probability. We propose a new way to deal with bonus-malus systems when panel data are available. Using car insurance data, a numerical illustration using at-fault and non-at-fault claims of a Canadian insurance company is included to support this discussion. Even though we apply the model to car insurance, as long as another line of business uses past claim experience to set the premiums, we maintain that a similar approach to the model proposed should be used.
47

Besson, Rémi, Erwan Le Pennec, and Stéphanie Allassonnière. "Learning from Both Experts and Data". Entropy 21, no. 12 (10 December 2019): 1208. http://dx.doi.org/10.3390/e21121208.

Annotation:
In this work, we study the problem of inferring a discrete probability distribution using both expert knowledge and empirical data. This is an important issue for many applications where the scarcity of data prevents a purely empirical approach. In this context, it is common to rely first on an a priori from initial domain knowledge before proceeding to an online data acquisition. We are particularly interested in the intermediate regime, where we do not have enough data to do without the initial a priori of the experts, but enough to correct it if necessary. We present here a novel way to tackle this issue, with a method providing an objective way to choose the weight to be given to experts compared to data. We show, both empirically and theoretically, that our proposed estimator is always more efficient than the best of the two models (expert or data) within a constant.
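
The regime described here, where an expert a priori distribution is blended with scarce empirical data, is often illustrated with a convex combination whose weight decays as observations accumulate. The sketch below uses a simple pseudo-count heuristic for the weight; it is not the estimator proposed in the paper, and the prior and counts are invented.

```python
import numpy as np

def combined_estimate(counts, expert_prior, prior_strength=20.0):
    """Estimate a discrete distribution from observed counts and an expert prior.
    prior_strength plays the role of a pseudo-sample size for the expert."""
    counts = np.asarray(counts, dtype=float)
    expert_prior = np.asarray(expert_prior, dtype=float)
    n = counts.sum()
    weight = prior_strength / (prior_strength + n)   # expert weight decays with data
    if n > 0:
        empirical = counts / n
    else:
        empirical = np.full_like(expert_prior, 1.0 / len(expert_prior))
    return weight * expert_prior + (1.0 - weight) * empirical

expert = np.array([0.5, 0.3, 0.2])   # a priori distribution from domain experts
observed = np.array([2, 7, 1])       # scarce empirical data
print(combined_estimate(observed, expert))
```
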
48

Cordua, Knud Skou, Thomas Mejer Hansen, and Klaus Mosegaard. "Monte Carlo full-waveform inversion of crosshole GPR data using multiple-point geostatistical a priori information". GEOPHYSICS 77, no. 2 (March 2012): H19–H31. http://dx.doi.org/10.1190/geo2011-0170.1.

Annotation:
We present a general Monte Carlo full-waveform inversion strategy that integrates a priori information described by geostatistical algorithms with Bayesian inverse problem theory. The extended Metropolis algorithm can be used to sample the a posteriori probability density of highly nonlinear inverse problems, such as full-waveform inversion. Sequential Gibbs sampling is a method that allows efficient sampling of a priori probability densities described by geostatistical algorithms based on either two-point (e.g., Gaussian) or multiple-point statistics. We outline the theoretical framework for a full-waveform inversion strategy that integrates the extended Metropolis algorithm with sequential Gibbs sampling such that arbitrarily complex geostatistically defined a priori information can be included. At the same time we show how temporally and/or spatially correlated data uncertainties can be taken into account during the inversion. The suggested inversion strategy is tested on synthetic tomographic crosshole ground-penetrating radar full-waveform data using multiple-point-based a priori information. This is, to our knowledge, the first example of obtaining a posteriori realizations of a full-waveform inverse problem. Benefits of the proposed methodology compared with deterministic inversion approaches include: (1) The a posteriori model variability reflects the states of information provided by the data uncertainties and a priori information, which provides a means of obtaining resolution analysis. (2) Based on a posteriori realizations, complicated statistical questions can be answered, such as the probability of connectivity across a layer. (3) Complex a priori information can be included through geostatistical algorithms. These benefits, however, require more computing resources than traditional methods do. Moreover, an adequate knowledge of data uncertainties and a priori information is required to obtain meaningful uncertainty estimates. The latter may be a key challenge when considering field experiments, which will not be addressed here.
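
For orientation, the extended Metropolis algorithm mentioned in this abstract follows the standard accept/reject logic of Metropolis sampling, with the prior explored through a geostatistical algorithm rather than an analytic density. Below is a generic (non-geostatistical) Metropolis sketch with a toy forward model and Gaussian prior; it illustrates the sampling logic only, not the authors' full-waveform machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_prior(m):                 # stand-in for the geostatistical prior
    return -0.5 * np.sum(m ** 2)

def log_likelihood(m, d, sigma=0.1):
    predicted = np.cumsum(m)      # toy forward model
    return -0.5 * np.sum((predicted - d) ** 2) / sigma ** 2

def metropolis(d, n_iter=5000, step=0.05, n_params=10):
    m = np.zeros(n_params)
    log_post = log_prior(m) + log_likelihood(m, d)
    samples = []
    for _ in range(n_iter):
        proposal = m + step * rng.standard_normal(n_params)
        log_post_new = log_prior(proposal) + log_likelihood(proposal, d)
        if np.log(rng.uniform()) < log_post_new - log_post:   # accept/reject
            m, log_post = proposal, log_post_new
        samples.append(m.copy())
    return np.array(samples)

d_obs = np.cumsum(0.3 * np.ones(10)) + 0.1 * rng.standard_normal(10)
posterior_samples = metropolis(d_obs)
print("posterior mean:", posterior_samples[2500:].mean(axis=0).round(2))
```
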
49

Pace, Francesca, Alessandro Santilano, and Alberto Godio. "Particle swarm optimization of 2D magnetotelluric data". GEOPHYSICS 84, no. 3 (1 May 2019): E125–E141. http://dx.doi.org/10.1190/geo2018-0166.1.

Annotation:
We implement the particle swarm optimization (PSO) algorithm for the two-dimensional (2D) magnetotelluric (MT) inverse problem. We first validate PSO on two synthetic models of different complexity and then apply it to an MT benchmark for real-field data, the COPROD2 data set (Canada). We pay particular attention to the selection of the PSO input parameters to properly address the complexity of the 2D MT inverse problem. We enhance the stability and convergence of the solution of the geophysical problem by applying the hierarchical PSO with time-varying acceleration coefficients (HPSO-TVAC). Moreover, we parallelize the code to reduce the computation time because PSO is a computationally demanding global search algorithm. The inverse problem was solved for the synthetic data both by giving a priori information at the beginning and by using a random initialization. The a priori information was given to a small number of particles as the initial position within the search space of solutions, so that the swarming behavior was only slightly influenced. We have demonstrated that there is no need for the a priori initialization to obtain robust 2D models because the results are largely comparable with the results from randomly initialized PSO. The optimization of the COPROD2 data set provides a resistivity model of the earth in line with results from previous interpretations. Our results suggest that the 2D MT inverse problem can be successfully addressed by means of computational swarm intelligence.
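
As background for the optimizer used in this paper, here is a bare-bones global-best particle swarm optimization loop with fixed inertia and acceleration coefficients; the HPSO-TVAC variant used by the authors additionally varies the acceleration coefficients over time. The objective here is the generic Rosenbrock test function, not an MT misfit.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))          # particle positions
    v = np.zeros_like(x)                                      # particle velocities
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy usage: minimize the Rosenbrock function in 2-D.
rosenbrock = lambda p: (1 - p[0]) ** 2 + 100 * (p[1] - p[0] ** 2) ** 2
best, best_val = pso(rosenbrock, (np.array([-2.0, -2.0]), np.array([2.0, 2.0])))
print(best, best_val)
```
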
50

Wijaya, Moh Ichsan Pradana, Nur Lailatul Aqromi, and Siti Nurul Afiyah. "Implementasi Data Mining Pola Pembelian Pada Toko Santoso Tiga Sumenep Dengan Menerapkan Algoritma Apriori". Jurnal Ilmiah Teknologi Informasi Asia 17, no. 2 (20 February 2023): 97. http://dx.doi.org/10.32815/jitika.v17i2.909.

Annotation:
Competition in the business world demands many innovations and new breakthroughs and requires business people to constantly develop their businesses in order to survive. In this study, the author observes the sales activities at the Santoso Tiga Sumenep tool and building materials store, whose operations continue and whose data keeps growing over time. Accumulating sales data becomes less and less useful if it is simply left unused, so the company's sales data is analyzed with a data mining algorithm. Among the available data mining methods, the a priori algorithm is used in this system; it searches for association rules and aims to score the frequency of item sets in a data set. The apriori algorithm is defined as a process for finding association rules that fulfill the minimum requirements for support and for confidence, and it is the best-known algorithm for finding high-frequency patterns. The goal of the system built in this final project is to determine whether the a priori algorithm can be used to develop sales and set up marketing strategies, and to identify the relationships between goods in order to determine their placement.