Scientific literature on the topic "Probability-based method"

Create an accurate citation in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Probability-based method."

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and view its abstract online when this information is included in the metadata.

Journal articles on the topic "Probability-based method"

1

Tang, Li, Jie-zhong Zou, and Wen-sheng Yang. "A numerical method based on probability theory." Journal of Central South University of Technology 10, no. 2 (June 2003): 159–61. http://dx.doi.org/10.1007/s11771-003-0060-4.

2

Grigoriu, Mircea, and Katerina D. Papoulia. "Effective conductivity by a probability-based local method." Journal of Applied Physics 98, no. 3 (August 2005): 033706. http://dx.doi.org/10.1063/1.1993775.

3

Kruger, D., and W. T. Penzhorn. "Adaptive probability estimation based on IIR filtering method." Electronics Letters 38, no. 25 (2002): 1659. http://dx.doi.org/10.1049/el:20021111.

4

Li, Zhi-Gang, Jun-Gang Zhou, and Bo-Ying Liu. "System Reliability Analysis Method Based on Fuzzy Probability." International Journal of Fuzzy Systems 19, no. 6 (August 8, 2017): 1759–67. http://dx.doi.org/10.1007/s40815-017-0363-5.

5

Huang, Yingping, Ross McMurran, Gunwant Dhadyalla, and R. Peter Jones. "Probability based vehicle fault diagnosis: Bayesian network method." Journal of Intelligent Manufacturing 19, no. 3 (January 19, 2008): 301–11. http://dx.doi.org/10.1007/s10845-008-0083-7.

6

He, Liangli, Zhenzhou Lu, and Kaixuan Feng. "A novel estimation method for failure-probability-based-sensitivity by conditional probability theorem." Structural and Multidisciplinary Optimization 61, no. 4 (December 21, 2019): 1589–602. http://dx.doi.org/10.1007/s00158-019-02437-x.

7

Zheqi, Zhu, Ren Bo, Zhang Xiaofeng, Zeng Hang, Xue Tao, and Chen Qingge. "Neural network-based probability forecasting method of aviation safety." IOP Conference Series: Materials Science and Engineering 1043, no. 3 (January 1, 2021): 032063. http://dx.doi.org/10.1088/1757-899x/1043/3/032063.

8

Mauriello, Paolo, and Domenico Patella. "A data-adaptive probability-based fast ERT inversion method." Progress In Electromagnetics Research 97 (2009): 275–90. http://dx.doi.org/10.2528/pier09092307.

9

Zhao, Yongxiang. "Probability-based cyclic stress-strain curves and estimation method." Chinese Journal of Mechanical Engineering 36, no. 8 (2000): 102. http://dx.doi.org/10.3901/jme.2000.08.102.

10

Yang, Xing, Xiaodong Hu, and Zhiqing Li. "The conditional risk probability-based seawall height design method." International Journal of Naval Architecture and Ocean Engineering 7, no. 6 (November 2015): 1007–19. http://dx.doi.org/10.1515/ijnaoe-2015-0070.


Theses on the topic "Probability-based method"

1

PINHO, Luis Gustavo Bastos. "Building new probability distributions: the composition method and a computer based method." Universidade Federal de Pernambuco, 2017. https://repositorio.ufpe.br/handle/123456789/24966.

Abstract:
We discuss the creation of new probability distributions for continuous data in two distinct approaches. The first one is, to our knowledge, novel and consists of using Estimation of Distribution Algorithms (EDAs) to obtain new cumulative distribution functions. This class of algorithms works as follows. A population of solutions for a given problem is randomly selected from a space of candidates, which may contain candidates that are not feasible solutions to the problem. The selection occurs by following a set of probability rules that, initially, assign a uniform distribution to the space of candidates. Each individual is ranked by a fitness criterion. A fraction of the most fit individuals is selected and the probability rules are then adjusted to increase the likelihood of obtaining solutions similar to the most fit in the current population. The algorithm iterates until the set of probability rules is able to provide good solutions to the problem. In our proposal, the algorithm is used to generate cumulative distribution functions to model a given continuous data set. We tried to keep the mathematical expressions of the new functions as simple as possible. The results were satisfactory. We compared the models provided by the algorithm to the ones in already published papers. In every situation, the models proposed by the algorithm had advantages over the ones already published. The main advantage is the relative simplicity of the mathematical expressions obtained. Still in the context of computational tools and algorithms, we show the performance of simple neural networks as a method for parameter estimation in probability distributions. The motivation for this was the need to solve a large number of nonlinear equations in the statistical treatment of SAR (synthetic aperture radar) images. The estimation process requires solving, iteratively, a nonlinear equation; this is repeated for every pixel, and an image usually consists of a large number of pixels. We trained a neural network to approximate an estimator for the parameter of interest. Once trained, the network can be fed the data and it will return an estimate of the parameter of interest without the need for iterative methods. The training of the network can take place even before collecting the data from the radar. The method was tested on simulated and real data sets with satisfactory results. The same method can be applied to different distributions. The second part of this thesis shows two new probability distribution classes obtained from the composition of already existing ones. In each situation, we present the new class and general results such as power series expansions for the probability density functions, expressions for the moments, entropy, and the like. The first class is obtained from the composition of the beta-G and Lehmann-type II classes. The second class, from the transmuted-G and Marshall-Olkin-G classes. Distributions in these classes are compared to already existing ones as a way to illustrate their performance in applications to real data sets.
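
To make the estimation-of-distribution idea above concrete, here is a minimal Python sketch of such a loop. Everything in it is an illustrative assumption: the Weibull toy data, the two-parameter candidate CDF family, and the Kolmogorov-Smirnov-style fitness criterion stand in for the thesis's actual search space and criteria.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy continuous data set to be modelled (stand-in for a real data set).
data = rng.weibull(1.5, size=200)

def candidate_cdf(x, a, b):
    # Illustrative two-parameter family: F(x) = 1 - exp(-(x / b) ** a).
    return 1.0 - np.exp(-np.clip(x / b, 0.0, None) ** a)

def fitness(params):
    # Negative Kolmogorov-Smirnov distance to the empirical CDF:
    # higher is better, a perfect fit approaches zero.
    a, b = params
    if a <= 0 or b <= 0:
        return -np.inf
    xs = np.sort(data)
    empirical = np.arange(1, xs.size + 1) / xs.size
    return -np.max(np.abs(candidate_cdf(xs, a, b) - empirical))

# EDA loop: sample candidates from a Gaussian search distribution,
# rank by fitness, and refit the distribution to the elite fraction.
mean = np.array([1.0, 1.0])
std = np.array([1.0, 1.0])
for generation in range(50):
    population = rng.normal(mean, std, size=(100, 2))
    scores = np.array([fitness(p) for p in population])
    elite = population[np.argsort(scores)[-20:]]
    mean = elite.mean(axis=0)
    std = elite.std(axis=0) + 1e-3   # floor keeps exploration alive

print("fitted (a, b):", np.round(mean, 3))
```

Each generation tightens the search distribution around the best-fitting parameter vectors, which is the same adapt-and-resample mechanism the abstract describes for whole functional forms.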
2

Hoang, Tam Minh Thi. "A joint probability model for rainfall-based design flood estimation." Monash University, Dept. of Civil Engineering, 2001. http://arrow.monash.edu.au/hdl/1959.1/8892.

3

Alkhairy, Ibrahim H. "Designing and Encoding Scenario-based Expert Elicitation for Large Conditional Probability Tables." Thesis, Griffith University, 2020. http://hdl.handle.net/10072/390794.

Abstract:
This thesis focuses on the general problem of asking experts to assess the likelihood of many scenarios when there is insufficient time to ask about all possible scenarios. The challenge addressed here is one of experimental design: how to choose which scenarios are assessed, and how to use that limited data to extrapolate information about the scenarios that remain unasked. In a mathematical sense, this problem can be framed as one of expert elicitation, where experts are asked to quantify conditional probability tables (CPTs). Experts may be relied on, for example, when empirical data is unavailable or limited. CPTs are used widely in statistical modelling to describe probabilistic relationships between an outcome and several factors. I consider two broad situations where CPTs are important components of quantitative models. First, experts are often asked to quantify CPTs that form the building blocks of Bayesian Networks (BNs). In one case study, CPTs describe how habitat suitability of feral pigs is related to various environmental factors, such as water quality and food availability. Second, CPTs may also support a sensitivity analysis for large computer experiments, by examining how some outcome changes as various factors are changed. Another case study uses CPTs to examine sensitivity to settings, for algorithms available through virtual laboratories, to map the geographic distribution of species such as the koala. An often-encountered problem is the sheer amount of information asked of the expert: the number of scenarios. Each scenario corresponds to a row of the CPT and concerns a particular combination of factors and the likely outcome. Currently most researchers arrange elicitation of CPTs by keeping the number of rows and columns in the CPT to a minimum, so that they need ask experts about no more than twenty or so scenarios. In some practical problems, however, CPTs may need to involve more rows and columns, for example involving more than two factors, or factors that can take on more than two or three possible values. Here we propose a new way of choosing the scenarios that underpin the elicitation strategy, taking advantage of experimental design to ensure adequate coverage of all scenarios and to make the best use of scarce resources like the valuable time of the experts. I show that this can essentially be framed as a problem of how to better design the choice of scenarios to elicit from a CPT. The main advantage of these designs is that they explore more of the design space compared to usual choices like the one-factor-at-a-time (OFAT) design that underpins the popular encoding approach embedded in "CPT Calculator". In addition, this work tailors an under-utilized scenario-based elicitation method to ensure that the expert's uncertainty is captured, together with their assessments of the likelihood of each possible outcome. I adopt the more intuitive Outside-In Elicitation method to elicit the expert's plausible range of assessed values, rather than the more common, reverse-order approach of eliciting their uncertainty around their best guess. Importantly, this plausible range of values is more suitable for input into the approach proposed here for encoding scenario-based elicitation: a Bayesian (rather than frequentist) interpretation. While eliciting some scenarios from large CPTs, another challenge arises from the remaining CPT entries that are not elicited.
This thesis shows how to adopt a statistical model to interpolate not only the missing CPT entries but also quantify the uncertainty for each scenario, which is new for these two situations: BNs and sensitivity analyses. For this purpose, I introduce the use of Bayesian generalized linear models (GLMs). The Bayesian updating framework also enables us to update the results of elicitation by incorporating empirical data. The idea is to utilise scenarios elicited from experts to construct an informative Bayesian "prior" model. The prior information (e.g., about scenarios) is then combined with the empirical data (e.g., from computer model runs) to update the posterior estimates of plausible outcomes (affecting all scenarios). The main findings showed that Bayesian inference suits the small-data problem of encoding the expert's mental model underlying their assessments, allowing uncertainty to vary about each scenario. In addition, Bayesian inference provides rich feedback to the modeller and experts on the plausible influence of factors on the response, and on whether any information was gained on their interactions. That information could be pivotal to designing the next phase of elicitation about habitat requirements or another phase of computer models. In this way, the Bayesian paradigm naturally supports a sequential approach to gradually accruing information about the issue at hand. As summarised above, the novel statistical methodology presented in this thesis also contributes to computer science: computation for Bayesian Networks and sensitivity analyses of large computer experiments can be redesigned to be more efficient. Here the expert knowledge usefully complements the empirical data to inform a more comprehensive analysis.
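
A minimal sketch of the interpolation step described above: treat each CPT row as a combination of factor levels, fit a logistic-link GLM to the rows that were elicited, and predict the rows that were never asked. The three binary factors, the elicited probabilities, and the choice of asked scenarios are invented, and the plain iteratively reweighted least-squares fit below is a frequentist stand-in for the Bayesian GLM the thesis proposes.

```python
import itertools

import numpy as np

# Hypothetical CPT: three binary factors -> probability of a "suitable
# habitat" outcome. Eight scenarios in total.
factors = list(itertools.product([0, 1], repeat=3))
X_all = np.array([(1,) + f for f in factors], dtype=float)  # intercept + factors

# Suppose the experts were only asked about five of the eight scenarios.
asked = [0, 1, 2, 4, 7]
p_elicited = np.array([0.05, 0.20, 0.30, 0.25, 0.90])

# Fit a logistic-link GLM to the elicited rows by Newton-Raphson (IRLS).
X, y = X_all[asked], p_elicited
beta = np.zeros(X.shape[1])
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
    grad = X.T @ (y - mu)
    hess = X.T @ (X * (mu * (1.0 - mu))[:, None]) + 1e-6 * np.eye(X.shape[1])
    beta += np.linalg.solve(hess, grad)

# Interpolate the scenarios that were never asked.
p_hat = 1.0 / (1.0 + np.exp(-(X_all @ beta)))
for f, p in zip(factors, p_hat):
    print(f, round(float(p), 3))
```

In the Bayesian version the coefficient vector would carry a prior and a posterior, so each interpolated row would come with its own uncertainty rather than a point estimate.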
4

Mansour, Rami. "Reliability Assessment and Probabilistic Optimization in Structural Design." Doctoral thesis, KTH, Hållfasthetslära (Avd.), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-183572.

Abstract:
Research in the field of reliability-based design is mainly focused on two sub-areas: the computation of the probability of failure and its integration in the reliability-based design optimization (RBDO) loop. Four papers are presented in this work, representing a contribution to both sub-areas. In the first paper, a new Second Order Reliability Method (SORM) is presented. As opposed to the most commonly used SORMs, the presented approach is not limited to a hyper-parabolic approximation of the performance function at the Most Probable Point (MPP) of failure. Instead, a full quadratic fit is used, leading to a better approximation of the real performance function and therefore more accurate values of the probability of failure. The second paper focuses on the integration of the expression for the probability of failure for general quadratic functions, presented in the first paper, in RBDO. One important feature of the proposed approach is that it does not involve locating the MPP. In the third paper, the expressions for the probability of failure based on general quadratic limit-state functions presented in the first paper are applied to the special case of a hyper-parabola. The expression is reformulated and simplified so that the probability of failure is only a function of three statistical measures: the Cornell reliability index, the skewness, and the kurtosis of the hyper-parabola. These statistical measures are functions of the First-Order Reliability Index and the curvatures at the MPP. In the last paper, an approximate and efficient reliability method is proposed. The focus is on computational efficiency as well as intuitiveness for practicing engineers, especially regarding probabilistic fatigue problems where volume methods are used. In the proposed method, the number of function evaluations needed to compute the probability of failure of a design under different types of uncertainties is known a priori to be 3n + 2, where n is the number of stochastic design variables.


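The difference between a first-order approximation and the exact failure probability for a curved limit state, the gap that SORM-type corrections address, can be seen in a toy example. The quadratic limit-state function and its coefficients below are invented for illustration and are not the method developed in the appended papers.

```python
from math import erf, sqrt

import numpy as np

rng = np.random.default_rng(0)

# Invented quadratic limit state in standard normal space; failure when
# g(u) <= 0. The design point sits at u = (3, 0), so beta = 3.
def g(u):
    return 3.0 - u[..., 0] - 0.3 * u[..., 1] ** 2

phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
pf_form = phi(-3.0)                       # first-order estimate, Phi(-beta)

# Crude Monte Carlo reference that sees the full curved surface.
u = rng.standard_normal((2_000_000, 2))
pf_mc = np.mean(g(u) <= 0.0)

print(f"FORM estimate: {pf_form:.2e}")    # ~1.3e-03
print(f"Monte Carlo:   {pf_mc:.2e}")      # larger, due to the curvature
```

Because the failure surface bends toward the origin, the linear approximation underestimates the failure probability, which is exactly the kind of error a quadratic (SORM) fit is meant to reduce.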
5

Chapman, Gary. "Computer-based musical composition using a probabilistic algorithmic method." Thesis, University of Southampton, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.341603.

6

Liu, Xiang. "Identification of indoor airborne contaminant sources with probability-based inverse modeling methods." Connect to online resource, 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3337124.

7

Dedović, Ines [Verfasser], Jens-Rainer [Akademischer Betreuer] Ohm, and Dorit [Akademischer Betreuer] Merhof. "Efficient probability distribution function estimation for energy based image segmentation methods / Ines Dedović ; Jens-Rainer Ohm, Dorit Merhof." Aachen: Universitätsbibliothek der RWTH Aachen, 2016. http://d-nb.info/1130871541/34.
8

Kraum, Martin. "Fischer-Tropsch synthesis on supported cobalt based catalysts: influence of various preparation methods and supports on catalyst activity and chain growth probability." [S.l.: s.n.], 1999. http://deposit.ddb.de/cgi-bin/dokserv?idn=959085181.

9

Good, Norman Markus. "Methods for estimating the component biomass of a single tree and a stand of trees using variable probability sampling techniques." Thesis, Queensland University of Technology, 2001. https://eprints.qut.edu.au/37097/1/37097_Good_2001.pdf.

Abstract:
This thesis developed multistage sampling methods for estimating the aggregate biomass of selected tree components, such as leaves, branches, trunk, and total, in woodlands in central and western Queensland. To estimate the component biomass of a single tree, randomised branch sampling (RBS) and importance sampling (IS) were trialed. RBS and IS were found to reduce the amount of time and effort needed to sample tree components in comparison with other standard destructive sampling methods such as ratio sampling, especially when sampling small components such as leaves and small twigs. However, RBS did not estimate leaf and small twig biomass to an acceptable degree of precision using current methods for creating path selection probabilities. In addition to providing an unbiased estimate of tree component biomass, individual estimates were used for developing allometric regression equations. Equations based on large components such as total biomass produced narrower confidence intervals than equations developed using ratio sampling. However, because RBS does not estimate small component biomass such as leaves and small wood components with an acceptable degree of precision, it should mainly be used in conjunction with IS for estimating larger component biomass. A whole tree was completely enumerated to set up a sampling space with which RBS could be evaluated under a number of scenarios. To achieve a desired precision, the RBS sample size and branch diameter exponents were varied, and the RBS method was simulated using both analytical and re-sampling methods. It was found that there is a significant amount of natural variation present when relating the biomass of small components to branch diameter, for example. This finding validates earlier decisions to question the efficacy of RBS for estimating small component biomass in eucalypt species. In addition, significant improvements can be made to increase the precision of RBS by increasing the number of samples taken, but more importantly by varying the exponent used for constructing selection probabilities. To further evaluate RBS on trees with growth forms differing from that of the enumerated tree, virtual trees were generated. These virtual trees were created using L-systems algebra. Decision rules for creating trees were based on easily measurable characteristics that influence a tree's growth and form. These characteristics included child-to-child and children-to-parent branch diameter relationships, branch length, and branch taper. They were modelled using probability distributions of best fit. By varying the size of a tree and/or the variation in the model describing tree characteristics, it was possible to simulate the natural variation between trees of similar size and form. By creating visualisations of these trees, it is possible to determine by visual means whether RBS could be effectively applied to particular trees or tree species. Simulation also aided in identifying which characteristics most influenced the precision of RBS, namely branch length and branch taper. After the evaluation of RBS/IS for estimating the component biomass of a single tree, methods for estimating the component biomass of a stand of trees (or plot) were developed and evaluated. A sampling scheme was developed which incorporated both model-based and design-based biomass estimation methods. This scheme clearly illustrated the strong and weak points associated with both approaches for estimating plot biomass.
Using ratio sampling was more efficient than using RBS/IS in the field, especially for larger tree components. Probability proportional to size (PPS) sampling, with size being the trunk diameter at breast height, generated estimates of component plot biomass that were comparable to those generated using model-based approaches. The research did, however, indicate that PPS is more precise than the use of regression prediction (allometric) equations for estimating larger components such as trunk or total biomass, and the precision increases in areas of greater biomass. Using more reliable auxiliary information for identifying suitable strata would reduce the amount of within-plot variation, thereby increasing precision. PPS had the added advantage of being unbiased and unhindered by the numerous assumptions applicable to the population of interest, as is the case with a model-based approach. The application of allometric equations in predicting the component biomass of tree species other than that for which the allometric was developed is problematic. Differences in wood density need to be taken into account, as well as differences in growth form and within-species variability, as outlined in the virtual tree simulations. However, the development and application of allometric prediction equations in local species-specific contexts is more desirable than PPS.
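
The PPS estimator compared above can be sketched in a few lines. The simulated stand of trees, the DBH size measure, and the biomass values are invented; the Hansen-Hurwitz estimator shown is the textbook PPS-with-replacement form, not necessarily the exact design used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stand of 50 trees: DBH (cm) as the size measure and a "true"
# trunk biomass (kg) loosely increasing with DBH.
dbh = rng.uniform(10.0, 60.0, size=50)
biomass = 0.12 * dbh ** 2.2 * rng.lognormal(0.0, 0.1, size=50)

# PPS with replacement: select trees proportional to DBH, then apply the
# Hansen-Hurwitz estimator of the total stand biomass.
p = dbh / dbh.sum()
n = 8                                  # trees actually felled and weighed
sample = rng.choice(dbh.size, size=n, p=p)
total_hat = np.mean(biomass[sample] / p[sample])

print(f"true total biomass:      {biomass.sum():10.1f} kg")
print(f"Hansen-Hurwitz estimate: {total_hat:10.1f} kg")
```

Because biomass grows with DBH, sampling proportional to DBH keeps the ratios biomass[i] / p[i] fairly stable, which is what makes the estimator precise for large components.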

Books on the topic "Probability-based method"

1

Lewis Research Center, ed. EUPDF, an Eulerian-based Monte Carlo probability density function (PDF) solver: User's manual. [Cleveland, Ohio]: National Aeronautics and Space Administration, Lewis Research Center, 1998.

2

Structural performance: Probability-based assessment. London: ISTE, 2011.

3

Bogaert, Patrick, and Marc L. Serre, eds. Temporal GIS: Advanced functions for field-based applications. Berlin: Springer, 2001.

4

Markov chain Monte Carlo simulations and their statistical analysis: With web-based Fortran code. Hackensack, NJ: World Scientific, 2004.

5

Thompson, Simon G., ed. Mendelian randomization: Methods for using genetic variants in causal estimation. Boca Raton: CRC Press, Taylor & Francis Group, 2015.

6

Statistical methods in psychiatry research and SPSS. Toronto: Apple Academic Press, 2015.

7

Nichols, Eve K. Expanding access to investigational therapies for HIV infection and AIDS: March 12-13, 1990, conference summary. Washington, D.C.: National Academy Press, 1991.

8

Beyond second opinions: Making choices about fertility treatment. Berkeley: University of California Press, 1998.


Book chapters on the topic "Probability-based method"

1

Song, Xu, Guoqiang Li, Ying Li, and Yanning Zhang. "A Probability-Based Object Tracking Method." In Lecture Notes in Computer Science, 595–602. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-42057-3_75.

2

Kim, Phil Young, Ji Won Kim, and Yunsick Sung. "Bayesian Probability-Based Hand Property Control Method." In Lecture Notes in Electrical Engineering, 251–56. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-17314-6_33.

3

Ding, Yuxin, Longfei Wang, Rui Wu, and Fuxing Xue. "Source Detection Method Based on Propagation Probability." In Lecture Notes in Computer Science, 179–86. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-94307-7_15.

4

Zhang, Di, Huifang Ma, Junjie Jia, and Li Yu. "A Tag Probability Correlation Based Microblog Recommendation Method." In Neural Information Processing, 491–99. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46672-9_55.

5

Zhao, Chengdong, Xuhui Wang, and Jie Shao. "Method of Image Fusion Based on Improved Probability Theory." In Advances in Intelligent and Soft Computing, 241–46. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-30223-7_39.

6

Liu, Ying, and Yuefeng Zheng. "A Network Attack Recognition Method Based on Probability Target Graph." In Emerging Trends in Intelligent and Interactive Systems and Applications, 778–85. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63784-2_96.

7

Son, Junhyuck, and Yunsick Sung. "Bayesian Probability and User Experience-Based Smart UI Design Method." In Lecture Notes in Electrical Engineering, 245–50. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-17314-6_32.

8

Kim, Phil Young, Yunsick Sung, and Jonghyuk Park. "Bayesian Probability-Based Motion Estimation Method in Ubiquitous Computing Environments." In Advances in Computer Science and Ubiquitous Computing, 593–98. Singapore: Springer Singapore, 2015. http://dx.doi.org/10.1007/978-981-10-0281-6_84.

9

Yang, Li, Di He, Peilin Liu, and Wenxian Yu. "Fingerprint Positioning Method of Satellite Signal Based on Probability Distribution." In China Satellite Navigation Conference (CSNC) 2016 Proceedings: Volume II, 211–20. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-0937-2_18.

10

Xie, Lixia, Siyu Liu, Hongyu Yang, and Liang Zhang. "A Defect Level Assessment Method Based on Weighted Probability Ensemble." In Cyberspace Safety and Security, 293–300. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-18067-5_21.


Conference papers on the topic "Probability-based method"

1

Li Zhi-gang, Zhou Jun-gang, and Liu Bo-ying. "System reliability analysis method based fuzzy probability." In 2016 International Conference on Fuzzy Theory and Its Applications (iFuzzy). IEEE, 2016. http://dx.doi.org/10.1109/ifuzzy.2016.8004956.

2

Yang, Yonghui, Fei Deng, Yunqiang Yan, and Feng Gao. "A Fault Localization Method Based on Conditional Probability." In 2019 IEEE 19th International Conference on Software Quality, Reliability and Security Companion (QRS-C). IEEE, 2019. http://dx.doi.org/10.1109/qrs-c.2019.00050.

3

Xiong, Min, and Zhangjun Liu. "Fuzzy Probability Method-Based Assessment of Green Energy." In 2011 Asia-Pacific Power and Energy Engineering Conference (APPEEC). IEEE, 2011. http://dx.doi.org/10.1109/appeec.2011.5748833.

4

Wang, Ji-He, Jin-Xiu Zhang, and Xi-Bin Cao. "Probability based collision monitoring method within formation flying." In 2008 2nd International Symposium on Systems and Control in Aerospace and Astronautics (ISSCAA). IEEE, 2008. http://dx.doi.org/10.1109/isscaa.2008.4776292.

5

Lee, Jaeyeon, and Wooram Park. "A probability-based path planning method using fuzzy logic." In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014). IEEE, 2014. http://dx.doi.org/10.1109/iros.2014.6942973.

6

Chao, Wang, Qiu Jing, Liu Guan-jun, Zhang Yong, and Zhao Chen-xu. "Testability verification based on sequential probability ratio test method." In 2013 IEEE AUTOTESTCON. IEEE, 2013. http://dx.doi.org/10.1109/autest.2013.6645066.

7

Eftekhari-Moghadam, Amir-Masud, and Marjan Abdechiri. "An unsupervised evaluation method based on probability density function." In 2010 IEEE International Symposium on Industrial Electronics (ISIE 2010). IEEE, 2010. http://dx.doi.org/10.1109/isie.2010.5636328.

8

Bi, Qian, Shuang Wu, Yong Huang, Yalong Zhu, and Zhuofei Hu. "A Target Location Method Based on Swarm Probability Fusion." In 2021 IEEE 4th International Conference on Electronics Technology (ICET). IEEE, 2021. http://dx.doi.org/10.1109/icet51757.2021.9451063.

9

Lu, Jiang, Wen Wu, Zhenyong Zhang, and Jinyuan Zhang. "Probability Calculation of Equipment Impact Based on Reliability Method." In 2014 10th International Pipeline Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/ipc2014-33147.

Abstract:
In order to apply the Reliability Based Design and Assessment (RBDA) methodology to evaluate equipment impact on onshore natural gas transmission pipelines in China, a research project was undertaken by China Petroleum Pipeline Engineering Corporation (CPPE) based on the framework developed by C-FER Technologies (C-FER) in "Guidelines for Reliability Based Design and Assessment of Onshore Natural Gas Pipelines" (sponsored by PRCI). The objective of the project was to collect native data and calibrate the probability models[1] in order to make them suitable for the situation in China, where there is dense population and many newly built high-pressure, large-diameter pipelines. The equipment impact model consists of two components: (a) the impact probability model, which calculates the frequency of mechanical interference by excavation equipment; and (b) the failure model, which calculates the probability of failure in a given impact. A detailed survey was undertaken in 2012 in order to collect the data required to calculate the impact frequency and the load applied by an excavator to a pipeline. The survey data for the impact frequency calculation were gathered from 19,300 km of transmission pipelines run by 4 operating companies in China. They reflect current prevention practices and their effectiveness. The frequencies of basic events summarized in this survey, used to calculate the probabilities of the fault tree, are generally in agreement with the data summarized in PRCI's report. The impact frequencies calculated by the fault tree under typical prevention measures are 400%, 200%, 20%, and 0% higher than those in the PR-244-9910 report for class 1, class 2, class 3, and class 4 areas respectively, which is due to dense population and more construction activities. Bucket digging forces of 321 types of excavators from 20 manufacturers were gathered. The surveyed forces are slightly higher than those in the PR-244-9729 report as a whole, due to the increase in the mechanical efficiency of excavators in recent years. The excavator maximum quasi-static load model was calibrated correspondingly. Equipment impact probability calculations and model sensitivity analysis results are described to present several characteristics of onshore natural gas transmission pipelines in China.
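
The impact-frequency component described in the abstract combines basic-event probabilities through fault-tree gates. The sketch below shows only that gate arithmetic, on invented numbers rather than the survey-calibrated frequencies from the paper.

```python
# Illustrative basic-event probabilities (invented, not the survey data).
p_activity = 0.30    # excavation activity near the pipeline in a given year
p_no_onecall = 0.20  # excavator crew does not use the one-call system
p_no_signs = 0.40    # warning signs absent or ignored
p_no_patrol = 0.50   # surveillance patrol fails to intervene in time

# AND gate: an impact requires the activity plus failure of every barrier.
p_impact = p_activity * p_no_onecall * p_no_signs * p_no_patrol
print(f"annual impact probability: {p_impact:.4f}")   # 0.0120
```

Calibrating such a tree, as the project did, amounts to replacing each invented number with a frequency estimated from field data and then propagating the change through the gates.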
10

Liu, Zhangjun, Min Xiong, and Hongrui Ding. "Fuzzy Probability Method-Based Risk Assessment of Engineering Project." In 2010 International Conference on Internet Technology and Applications (iTAP). IEEE, 2010. http://dx.doi.org/10.1109/itapp.2010.5566233.


Reports on the topic "Probability-based method"

1

Wright, T. A simple method for probability proportional to size (πps) sampling without replacement based on ranks. Office of Scientific and Technical Information (OSTI), June 1987. http://dx.doi.org/10.2172/6504729.

2

Lister, C. J., H. M. King, E. A. Atkinson, L. E. Kung, and R. Nairn. A probability-based method to generate qualitative petroleum potential maps: adapted for and illustrated using ArcGIS. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 2018. http://dx.doi.org/10.4095/311225.

3

Kott, Phillip S. The Degrees of Freedom of a Variance Estimator in a Probability Sample. RTI Press, August 2020. http://dx.doi.org/10.3768/rtipress.2020.mr.0043.2008.

Abstract:
Inferences from probability-sampling theory (more commonly called "design-based sampling theory") often rely on the asymptotic normality of nearly unbiased estimators. When constructing a two-sided confidence interval for a mean, the ad hoc practice of determining the degrees of freedom of a probability-sampling variance estimator by subtracting the number of its variance strata from the number of variance primary sampling units (PSUs) can be justified by making usually untenable assumptions about the PSUs. We investigate the effectiveness of this conventional method and of an alternative for determining the effective degrees of freedom of a probability-sampling variance estimator under a stratified cluster sample.
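
Both rules discussed in the abstract reduce to one-line computations. The stratum counts and variance contributions below are invented, and the Satterthwaite-style formula is shown as one common alternative rather than as the specific method the report derives.

```python
import numpy as np

# Hypothetical stratified cluster design: variance PSUs per stratum and
# each stratum's contribution g_h to the overall variance estimator.
n = np.array([2, 2, 3, 2, 4])
g = np.array([1.4, 0.6, 2.3, 0.9, 1.1])

# Conventional ad hoc rule: total PSUs minus number of strata.
df_rule = n.sum() - n.size                       # 13 - 5 = 8

# Satterthwaite-style effective degrees of freedom, weighting strata
# by their variance contributions (one common alternative).
df_eff = g.sum() ** 2 / np.sum(g ** 2 / (n - 1))

print(f"conventional df: {df_rule}")
print(f"effective df:    {df_eff:.1f}")
```

When a few strata dominate the variance, the effective value drops below the conventional count, widening the resulting confidence interval accordingly.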
4

Yaroshchuk, Svitlana O., Nonna N. Shapovalova, Andrii M. Striuk, Olena H. Rybalchenko, Iryna O. Dotsenko, and Svitlana V. Bilashenko. Credit scoring model for microfinance organizations. [б. в.], February 2020. http://dx.doi.org/10.31812/123456789/3683.

Abstract:
The purpose of the work is the development and application of models for the scoring assessment of microfinance institution borrowers. Such a model makes it possible to increase the efficiency of lending operations. The object of the research is lending. The subject of the study is a scoring model for improving the quality of lending using machine learning methods. The objectives of the study are to determine the criteria for choosing a solvent borrower, to develop a model for early assessment, and to create software based on neural networks to determine the probability of loan default risk. The research methods used include analysis of the literature on banking scoring; artificial intelligence methods for scoring; modelling of a scoring estimation algorithm using neural networks; an empirical method for determining the optimal parameters of the training model; and object-oriented design and programming. The result of the work is a neural network scoring model with high calculation accuracy and an implemented system for automatic customer lending.
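
As a sketch of the pipeline the abstract describes, a single sigmoid neuron trained by gradient descent already maps borrower features to a default probability. The features, synthetic labels, and hyperparameters are invented; the report's actual network and data are richer.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic borrower features in [0, 1]: income, debt ratio, late payments.
X = rng.uniform(0.0, 1.0, size=(500, 3))
# Synthetic "default" labels driven mostly by debt and late payments.
y = (0.9 * X[:, 1] + 0.9 * X[:, 2] - 0.5 * X[:, 0]
     + rng.normal(0.0, 0.2, 500) > 0.6).astype(float)

# A single sigmoid neuron trained by gradient descent: the smallest
# possible "neural network" that outputs a default probability.
w, b, lr = np.zeros(3), 0.0, 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= lr * X.T @ (p - y) / y.size
    b -= lr * np.mean(p - y)

applicant = np.array([0.4, 0.7, 0.9])    # hypothetical loan applicant
score = 1.0 / (1.0 + np.exp(-(applicant @ w + b)))
print(f"estimated default probability: {score:.2f}")
```

A production model would add hidden layers, regularization, and calibration, but the decision output, a probability compared against a lending threshold, has the same shape.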
5

Pilkevych, Ihor, Oleg Boychenko, Nadiia Lobanchykova, Tetiana Vakaliuk, and Serhiy Semerikov. Method of Assessing the Influence of Personnel Competence on Institutional Information Security. CEUR Workshop Proceedings, April 2021. http://dx.doi.org/10.31812/123456789/4374.

Abstract:
Modern types of internal threats and methods of counteracting these threats are analyzed. It is established that increasing the competence of the staff of the institution through training (education) is the most effective method of counteracting internal threats to information. A method for assessing the influence of personnel competence on institutional information security is proposed. This method takes into account violator models and information threat models that are designed for a specific institution. The method assesses the competence of the staff of the institution by three components: the level of knowledge, skills, and character traits (personal qualities). The level of knowledge is assessed based on the results of test tasks of different levels of complexity; not only the number of correct answers is taken into account, but also the complexity of the test tasks. The level of skills is assessed as the ratio of the number of correctly performed practical tasks to the total number of practical tasks. It is assumed that the number of practical tasks and their complexity are determined for each institution according to its field of activity. To assess the character traits (personal qualities) that a person must have in order to perform the assigned tasks effectively, a list of character traits for each position is used; this list should be developed in each institution. A quantitative assessment of the state of information security is then established by relating the probability of a threat occurring from a given employee to the combined threat from all employees of the institution. An experiment was conducted, the results of which, for a particular institution, show different values of the institution's information security level for different values of staff competence. It is shown that as the level of competence of the institution's staff increases, the state of information security in the institution improves.
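
The three-component competence score can be written down directly. The component values below are invented, and the weights are an assumption for illustration; the paper specifies how each component is measured but not this particular weighting.

```python
# One employee's competence on the three components named in the abstract,
# each scaled to [0, 1]. All numbers are invented for illustration.
knowledge = 0.75   # share of test answers correct, weighted by difficulty
skills = 0.60      # correctly performed practical tasks / total tasks
traits = 0.80      # match against the position's character-trait checklist

weights = (0.4, 0.4, 0.2)   # assumed weighting, not from the paper
competence = sum(w * c for w, c in zip(weights, (knowledge, skills, traits)))
print(f"competence score: {competence:.2f}")   # 0.70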
6

Clausen, Jay, Vuong Truong, Sophia Bragdon, Susan Frankenstein, Anna Wagner, Rosa Affleck, and Christopher Williams. Buried-object-detection improvements incorporating environmental phenomenology into signature physics. Engineer Research and Development Center (U.S.), September 2022. http://dx.doi.org/10.21079/11681/45625.

Abstract:
The ability to detect buried objects is critical for the Army. This report therefore summarizes the fourth year of an ongoing study to assess environmental phenomenological conditions affecting the probability of detection and false alarm rates for buried-object detection using thermal infrared sensors. The study used several different approaches to identify the predominant environmental variables affecting object detection: (1) multilevel statistical modeling, (2) direct image analysis, (3) physics-based thermal modeling, and (4) application of machine learning (ML) techniques. In addition, the study developed an approach using a Canny edge methodology to identify regions of interest potentially harboring a target object. Finally, an ML method was developed to improve automatic target detection and recognition performance by accounting for environmental phenomenological conditions, improving performance by 50% over standard automatic target detection and recognition software.
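
A Canny-based region-of-interest pass of the kind the report describes can be sketched with OpenCV. The placeholder file name, blur kernel, thresholds, and area cutoff are illustrative assumptions, not the report's tuned values.

```python
import cv2

# Hypothetical thermal frame; "thermal.png" is a placeholder path for a
# calibrated infrared image.
img = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE)
if img is None:
    raise FileNotFoundError("expected a grayscale thermal image at thermal.png")

# Smooth, detect Canny edges, then keep contours large enough to be
# candidate regions that might harbor a buried object.
blurred = cv2.GaussianBlur(img, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
rois = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 25.0]
print(f"{len(rois)} candidate regions of interest")
```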
7

Tarko, Andrew P., Qiming Guo, and Raul Pineda-Mendez. Using Emerging and Extraordinary Data Sources to Improve Traffic Safety. Purdue University, 2021. http://dx.doi.org/10.5703/1288284317283.

Abstract:
The current safety management program in Indiana uses a method based on aggregate crash data for conditions averaged over several-year periods, with consideration of only major roadway features. This approach does not analyze the risk of crashes potentially affected by time-dependent conditions such as traffic control, operations, weather, and their interaction with road geometry. With the rapid development of data collection techniques, time-dependent data have emerged, some of which have become available for safety management. This project investigated the feasibility of using emerging and existing data sources to supplement the current safety management practices in Indiana and performed a comprehensive evaluation of the quality of the new data sources and their relevance to traffic safety analysis. In two case studies, time-dependent data were acquired and integrated to estimate their effects on the hourly probability of a crash and its severity on two selected types of roads: (1) rural freeways and (2) signalized intersections. The results indicate a considerable effect of hourly traffic volume, average speeds, and weather conditions on the hourly probability of a crash and its severity. Although some roadway geometric features were found to affect safety, the lack of turning volume data at intersections led to some counterintuitive results. Improvements have been identified to be implemented in the next phase of the project to eliminate these undesirable results.
8

Weller, Joel I., Ignacy Misztal, and Micha Ron. Optimization of methodology for genomic selection of moderate and large dairy cattle populations. United States Department of Agriculture, March 2015. http://dx.doi.org/10.32747/2015.7594404.bard.

Abstract:
The main objectives of this research were to detect the specific polymorphisms responsible for observed quantitative trait loci and to develop optimal strategies for genomic evaluations and selection for moderate (Israel) and large (US) dairy cattle populations. A joint evaluation using all phenotypic, pedigree, and genomic data is the optimal strategy. The specific objectives were: (1) to apply strategies for determination of the causative polymorphisms based on the "a posteriori granddaughter design" (APGD); (2) to develop methods to derive unbiased estimates of gene effects from SNP chip analyses; (3) to derive optimal single-stage methods to estimate breeding values of animals based on marker, phenotypic, and pedigree data; (4) to extend these methods to multi-trait genetic evaluations; and (5) to evaluate the results of long-term genomic selection, as compared to traditional selection. Nearly all of these objectives were met. The major achievements were as follows. The APGD and the modified granddaughter designs were applied to the US Holstein population, and regions harboring segregating quantitative trait loci (QTL) were identified for all economic traits of interest. The APGD was able to find segregating QTL for all the economic traits analyzed, and confidence intervals for QTL location ranged from ~5 to 35 million base pairs. Genomic estimated breeding values (GEBV) for milk production traits in the Israeli Holstein population were computed by the single-step method and compared to results for the two-step method. The single-step method was extended to derive GEBV for multi-parity evaluation. Long-term analysis of genomic selection demonstrated that inclusion of pedigree data from previous generations may result in less accurate GEBV. The major conclusions are as follows. Predictions using single-step genomic best linear unbiased prediction (GBLUP) were the least biased, and that method appears to be the best tool for genomic evaluation of a small population, as it automatically accounts for the parental index and allows for inclusion of female genomic information without additional steps. None of the methods applied to the Israeli Holstein population were able to derive GEBV for young bulls that were significantly better than parent averages. We thus confirm previous studies that the main limiting factor for the accuracy of GEBV is the number of bulls with genotypes and progeny tests. Although 36 of the grandsires included in the APGD were genotyped for the BovineHD BeadChip, which includes 777,000 SNPs, we were not able to determine the causative polymorphism for any of the detected QTL. The number of valid unique markers on the BovineHD BeadChip is not sufficient for a reasonable probability of finding the causative polymorphisms. Complete resequencing of the genomes of approximately 50 bulls would be required, but this could not be accomplished within the framework of the current project due to funding constraints. Inclusion of pedigree data from older generations in the derivation of GEBV may result in less accurate evaluations.
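
As a rough illustration of the GBLUP machinery named in the conclusions, the sketch below builds a VanRaden-style genomic relationship matrix from simulated genotypes and solves the mixed-model equations for genotyped animals only. Sizes and variance components are invented, and the single-step extension, which also blends pedigree relationships into the relationship matrix, is not shown.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy single-trait GBLUP: 30 genotyped animals, 200 SNPs.
n, m = 30, 200
M = rng.integers(0, 3, size=(n, m)).astype(float)   # SNP counts 0/1/2
p = M.mean(axis=0) / 2.0
Z = M - 2.0 * p                                      # centred genotypes
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))          # VanRaden G matrix
G += 0.01 * np.eye(n)                                # stabilise inversion

true_bv = Z @ rng.normal(0.0, 0.05, size=m)          # simulated breeding values
y = true_bv + rng.normal(0.0, 1.0, size=n)           # phenotypes

# Mixed-model equations for y = 1*mu + u + e with u ~ N(0, G * sigma_u^2).
lam = 1.0 / 0.3                                      # sigma_e^2 / sigma_u^2
X = np.ones((n, 1))
lhs = np.block([[X.T @ X, X.T],
                [X, np.eye(n) + lam * np.linalg.inv(G)]])
rhs = np.concatenate([X.T @ y, y])
gebv = np.linalg.solve(lhs, rhs)[1:]

print("corr(GEBV, true BV):", np.corrcoef(gebv, true_bv)[0, 1].round(2))
```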
9

Ahmad, Noshin S., Raul Pineda-Mendez, Fahad Alqahtani, Mario Romero, Jose Thomaz, and Andrew P. Tarko. Effective Design and Operation of Pedestrian Crossings. Purdue University, 2022. http://dx.doi.org/10.5703/1288284317438.

Abstract:
Pedestrians are vulnerable road users since they are prone to more severe injuries in any vehicular collision. While innovative solutions promise improved pedestrian safety, a careful analysis of local conditions is required before selecting proper corrective measures. This research study had two focuses: (1) a methodology to identify roads and areas in Indiana where the frequency and severity of pedestrian collisions are heightened above the acceptable level, and (2) the selection of effective countermeasures to mitigate or eliminate safety-critical conditions. Two general methods of identifying specific pedestrian safety concerns were proposed: (1) area-wide analysis and (2) road-focused analysis. A suitable tool, the Safety Needs Analysis Program (SNAP), is currently under development by the research team and is likely the future method to implement the area-wide type of analysis. The following models have been developed to facilitate the road-focused analysis: (1) a pedestrian crossing activity model to fill the gap in pedestrian traffic data, and (2) crash probability and severity models to estimate the risk of pedestrian crashes around urban intersections in Indiana. The pedestrian safety model was effectively utilized in screening and identifying high-risk urban intersection segments for safety audits and improvements. In addition, detailed guidance was provided for many potential pedestrian safety countermeasures, with the specific behavioral and road conditions that justify each countermeasure. Furthermore, a procedure was presented to predict the economic feasibility of the countermeasures based on crash reduction factors. The findings of this study should help expand the existing RoadHAT tool used by the Indiana Department of Transportation (INDOT) to emphasize and strengthen pedestrian safety considerations in the current tool.
10

Daudelin, Francois, Lina Taing, Lucy Chen, Claudia Abreu Lopes, Adeniyi Francis Fagbamigbe, and Hamid Mehmood. Mapping WASH-related disease risk: A review of risk concepts and methods. United Nations University Institute for Water, Environment and Health, December 2021. http://dx.doi.org/10.53328/uxuo4751.

Abstract:
The report provides a review of how risk is conceived of, modelled, and mapped in studies of infectious water, sanitation, and hygiene (WASH) related diseases. It focuses on the spatial epidemiology of cholera, malaria, and dengue to offer recommendations for the field of WASH-related disease risk mapping. The report notes a lack of consensus on the definition of disease risk in the literature, which limits the interpretability of the resulting analyses and could affect the quality of the design and direction of public health interventions. In addition, existing risk frameworks that consider disease incidence separately from community vulnerability have conceptual overlap in their components and conflate the probability and severity of disease risk into a single component. The report identifies four methods used to develop risk maps: (i) observational, (ii) index-based, (iii) associative modelling, and (iv) mechanistic modelling. Observational methods are limited by a lack of historical data sets and their assumption that historical outcomes are representative of current and future risks. The more general index-based methods offer a highly flexible approach based on observed and modelled risks and can be used for partially qualitative or difficult-to-measure indicators, such as socioeconomic vulnerability. For multidimensional risk measures, indices representing different dimensions can be aggregated to form a composite index or be considered jointly without aggregation. The latter approach can distinguish between different types of disease risk, such as outbreaks of high frequency/low intensity and low frequency/high intensity. Associative models, including machine learning and artificial intelligence (AI), are commonly used to measure current risk, future risk (short-term for early warning systems), or risk in areas with low data availability, but concerns about bias, privacy, trust, and accountability in algorithms can limit their application. In addition, they typically do not account for gender and demographic variables that allow risk analyses for different vulnerable groups. As an alternative, mechanistic models can be used for similar purposes as well as to create spatial measures of disease transmission efficiency or to model risk outcomes from hypothetical scenarios. Mechanistic models, however, are limited by their inability to capture locally specific transmission dynamics. The report recommends that future WASH-related disease risk mapping research:
- Conceptualise risk as a function of the probability and severity of a disease risk event. Probability and severity can be disaggregated into sub-components. For outbreak-prone diseases, probability can be represented by a likelihood component, while severity can be disaggregated into transmission and sensitivity sub-components, where sensitivity represents factors affecting the health and socioeconomic outcomes of infection.
- Employ jointly considered unaggregated indices to map multidimensional risk. Individual indices representing multiple dimensions of risk should be developed using a range of methods to take advantage of their relative strengths.
- Develop and apply collaborative approaches with public health officials, development organizations, and relevant stakeholders to identify appropriate interventions and priority levels for different types of risk, while ensuring the needs and values of users are met in an ethical and socially responsible manner.
- Enhance identification of vulnerable populations by further disaggregating risk estimates, accounting for demographic and behavioural variables, and using novel data sources such as big data and citizen science.
This review is the first to focus solely on WASH-related disease risk mapping and modelling. The recommendations can be used as a guide for developing spatial epidemiology models in tandem with public health officials and to help detect and develop tailored responses to WASH-related disease outbreaks that meet the needs of vulnerable populations. The report's main target audience is modellers, public health authorities, and partners responsible for co-designing and implementing multi-sectoral health interventions, with a particular emphasis on facilitating the integration of health and WASH services delivery contributing to Sustainable Development Goals (SDG) 3 (good health and well-being) and 6 (clean water and sanitation).
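
The report's central recommendation, keeping probability and severity as jointly considered unaggregated indices, can be made concrete with two hypothetical districts. The numbers are invented; the point is that collapsing the pair into one expected-impact figure hides exactly the distinction between frequent-mild and rare-severe risk profiles.

```python
# Two hypothetical districts described by unaggregated risk indices.
districts = {
    "District A": {"probability": 0.60, "severity": 0.20},  # frequent, mild
    "District B": {"probability": 0.05, "severity": 0.90},  # rare, severe
}

for name, risk in districts.items():
    # One possible aggregation; it collapses each profile to a single number.
    expected_impact = risk["probability"] * risk["severity"]
    print(f'{name}: {risk} -> expected impact {expected_impact:.3f}')
```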