Academic literature on the topic 'Shower Monte Carlo'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Shower Monte Carlo.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Shower Monte Carlo"

1

Webber, Bryan. "Parton shower Monte Carlo event generators." Scholarpedia 6, no. 12 (2011): 10662. http://dx.doi.org/10.4249/scholarpedia.10662.

2

Kusina, A., O. Gituliar, S. Jadach, and M. Skrzypek. "Evolution Kernels for Parton Shower Monte Carlo." Acta Physica Polonica B 46, no. 7 (2015): 1343. http://dx.doi.org/10.5506/aphyspolb.46.1343.

3

Lazzarin, Marco, Simone Alioli, and Stefano Carrazza. "MCNNTUNES: Tuning Shower Monte Carlo generators with machine learning." Computer Physics Communications 263 (June 2021): 107908. http://dx.doi.org/10.1016/j.cpc.2021.107908.

4

Gottschalk, Thomas D. "HARD SCATTERING QCD CORRECTIONS IN MONTE CARLO SHOWER MODELS." International Journal of Modern Physics A 02, no. 04 (August 1987): 1393–411. http://dx.doi.org/10.1142/s0217751x87000764.

Abstract:
The problem of incorporating exact 2→3 QCD cross sections into Monte Carlo models for hadron-hadron scattering is examined, and a simple, branching weight formalism which retains all the information of the 2→3 processes is presented. The connection of this prescription with the angle-ordered soft gluon results of Ellis, Marchesini and Webber is explored in detail. Possible unresolved questions for the procedure are also noted.
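The branching-weight formalism described above is implemented in practice through Sudakov-factor sampling, for which the standard tool in shower Monte Carlos is the veto algorithm. Below is a minimal sketch with a toy splitting kernel; it illustrates the generic technique, not Gottschalk's actual prescription.

```python
import math
import random

def next_emission(t_start, t_cut, f, g, G_inv, G):
    """Sample the next branching scale from a Sudakov form factor with
    the veto algorithm.  f(t) is the true splitting kernel, g(t) >= f(t)
    an overestimate with known primitive G and inverse G_inv."""
    t = t_start
    while t > t_cut:
        # Solve exp(-(G(t) - G(t'))) = r for the trial scale t'
        r = random.random()
        t = G_inv(G(t) + math.log(r))
        if t <= t_cut:
            return None          # no resolvable emission above the cutoff
        if random.random() < f(t) / g(t):
            return t             # accept: this is the next branching scale
    return None

# Toy example: f(t) = 1/(2t) with overestimate g(t) = 1/t,
# so G(t) = log(t) and G_inv(x) = exp(x).
random.seed(1)
t = next_emission(t_start=100.0, t_cut=1.0,
                  f=lambda t: 0.5 / t,
                  g=lambda t: 1.0 / t,
                  G_inv=math.exp, G=math.log)
```

The overestimate g only needs an invertible primitive; the accept/reject step corrects the generated distribution to the true kernel f.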
5

Sapeta, Sebastian. "Matching NLO with parton shower in Monte Carlo scheme." Nuclear and Particle Physics Proceedings 273-275 (April 2016): 2078–83. http://dx.doi.org/10.1016/j.nuclphysbps.2015.09.336.

6

Dey, Rajat K., and Animesh Basak. "Behaviour of the lateral shower age of cosmic ray extensive air showers." Journal of Physics: Conference Series 2156, no. 1 (December 1, 2021): 012174. http://dx.doi.org/10.1088/1742-6596/2156/1/012174.

Abstract:
Some simple arguments are introduced as a possible explanation of the behaviour of the lateral shower age of proton-initiated showers. The corresponding analytical treatment based on the proposed argument is then illustrated. Using the Monte Carlo (MC) simulation code CORSIKA, we have validated how the different characteristics associated with the lateral shower age predicted in the present analytical parametrization can be understood. The lateral shower age of a proton-initiated shower and its correlations with the lateral shower ages of electron- and neutral-pion-initiated showers support the idea that the superposition of several electromagnetic sub-showers initiated by neutral pions might produce the lateral density distribution of electrons of a proton-initiated shower. It is also noticed with the simulated data that the stated feature persists even in the local shower age representation.
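For context, the lateral shower age s discussed above is conventionally the slope parameter of the Nishimura-Kamata-Greisen (NKG) lateral distribution. A small sketch of the standard NKG form follows; it is illustrative only and is not the authors' own parametrization.

```python
import math

def nkg_density(r, s, n_e=1.0, r_m=79.0):
    """Nishimura-Kamata-Greisen lateral electron density rho(r)
    (per m^2) for shower age s, total electron number n_e and
    Moliere radius r_m in metres; normalized so that the integral
    of rho over the shower plane equals n_e."""
    c = (math.gamma(4.5 - s)
         / (2.0 * math.pi * math.gamma(s) * math.gamma(4.5 - 2.0 * s)))
    x = r / r_m
    return n_e / r_m**2 * c * x**(s - 2.0) * (1.0 + x)**(s - 4.5)

# A younger shower (smaller s) has a steeper lateral fall-off:
steep = nkg_density(100.0, s=0.8) / nkg_density(10.0, s=0.8)
flat  = nkg_density(100.0, s=1.4) / nkg_density(10.0, s=1.4)
```

Fitting s to a measured electron lateral density with this form is how the "lateral shower age" of an event is usually extracted.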
7

Jadach, S., A. Kusina, W. Płaczek, and M. Skrzypek. "NLO Corrections in the Initial-state Parton Shower Monte Carlo." Acta Physica Polonica B 44, no. 11 (2013): 2179. http://dx.doi.org/10.5506/aphyspolb.44.2179.

8

RENK, THORSTEN. "YaJEM — A MONTE CARLO CODE FOR IN-MEDIUM SHOWER EVOLUTION." International Journal of Modern Physics E 20, no. 07 (July 2011): 1594–99. http://dx.doi.org/10.1142/s0218301311019933.

Abstract:
High transverse momentum (PT) QCD scattering processes are regarded as a valuable tool to study the medium produced in heavy-ion collisions, as due to uncertainty arguments their cross section should be calculable independent of medium properties whereas the medium then modifies only the final state partons emerging from a hard vertex. With the heavy-ion physics program at the CERN LHC imminent, the attention of high PT physics in heavy ion collisions is shifting from the observation of hard single hadrons to fully reconstructed jets. However, the presence of a background medium at low PT complicates jet-finding as compared to p - p collisions. Monte-Carlo (MC) codes designed to simulate the evolution of parton showers evolving into hadron jets are valuable tools to understand the complicated interplay between the medium modification of the jet and the bias introduced by a specific jet-finding scheme. However, such codes also use a set of approximations which needs to be tested against the better understood single high PT hadron observables. In this paper, I review the ideas underlying the MC code YaJEM (Yet another Jet Energy-loss Model) and present some of the results obtained with the code.
9

HUEGE, T., and H. FALCKE. "MONTE CARLO SIMULATIONS OF RADIO EMISSION FROM COSMIC RAY AIR SHOWERS." International Journal of Modern Physics A 21, supp01 (July 2006): 60–64. http://dx.doi.org/10.1142/s0217751x06033374.

Abstract:
As a basis for the interpretation of data gathered by LOPES and other experiments, we have carried out Monte Carlo simulations of geosynchrotron radio emission from cosmic ray air showers. The simulations, having been verified carefully with analytical calculations, reveal a wealth of information on the characteristics of the radio signal and their dependence on specific air shower parameters. In this article, we review the spatial characteristics of the radio emission, its predicted frequency spectrum and its dependence on important air shower parameters such as the shower zenith angle, the primary particle energy and the depth of the shower maximum, which can in turn be related to the nature of the primary particle.
10

Jones, S. P. "Higgs Boson Pair Production: Monte Carlo Generator Interface and Parton Shower." Acta Physica Polonica B Proceedings Supplement 11, no. 2 (2018): 295. http://dx.doi.org/10.5506/aphyspolbsupp.11.295.


Dissertations / Theses on the topic "Shower Monte Carlo"

1

Nail, Graeme. "Quantum chromodynamics : simulation in Monte Carlo event generators." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/quantum-chromodynamics-simulation-in-monte-carlo-event-generators(46dc6f2e-1552-4dfa-b435-9608932a3261).html.

Abstract:
This thesis contains the work of two recent developments in the Herwig general-purpose event generator. Firstly, the results from a new implementation of the KrkNLO method in the Herwig event generator are presented. This method enables the generation of matched next-to-leading-order plus parton-shower events through the application of simple positive weights to showered leading-order events. This simplicity is achieved by the construction of Monte Carlo-scheme parton distribution functions. The implementation contains the components necessary to simulate Drell-Yan production as well as Higgs production via gluon fusion, and is used to generate the first differential Higgs results obtained with this method. The results are shown to be comparable with predictions from the well-established POWHEG and MC@NLO approaches; the KrkNLO predictions closely resemble those of the original POWHEG configuration. Secondly, a benchmark study of the sources of perturbative uncertainty in parton showers is presented. The study employs leading-order plus parton-shower simulations as a starting point in order to establish a baseline set of controllable uncertainties, with the aim of building an understanding of the uncertainties associated with a full simulation that includes higher-order corrections and interplay with non-perturbative models. Uncertainty estimates for a number of benchmark processes are presented. The requirement that these estimates be consistent across the two distinct parton-shower implementations in Herwig provides an important measure of the quality of the uncertainty estimates. The profile-scale choice is seen to be an important consideration, with the power and hfact choices displaying inconsistencies between the showers, while the resummation profile scale delivers consistent predictions for both the central value and the uncertainty bands.
2

Ferrario Ravasio, Silvia. "Top-mass observables: all-orders behaviour, renormalons and NLO + Parton Shower effects." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2019. http://hdl.handle.net/10281/241087.

Abstract:
In this thesis we focus on the theoretical subtleties of the top-quark mass ($m_t$) determination, an issue that remains highly controversial. Typically, in order to infer the top mass, theoretical predictions dependent on $m_t$ are employed. The parameter $m_t$ is the physical mass, which is connected with the bare mass through a renormalization procedure. Several renormalization schemes are possible, and the most natural seems to be the pole-mass scheme. However, the pole mass is not well defined for a coloured object like the top quark: it is affected by infrared renormalons, which manifest as factorially growing coefficients that spoil the convergence of the perturbative series, leading to ambiguities of order $\Lambda_{\rm QCD}$. On the other hand, short-distance mass schemes, like the $\overline{\rm MS}$ one, are known to be free from such renormalons. Luckily, the renormalon ambiguity seems to be safely below the quoted systematic errors on the pole-mass determinations, so these measurements are still valuable. In the first part of the thesis, we investigate the presence of linear renormalons in observables that can be employed to determine the top mass. We consider a simplified toy model to describe $W^* \to t \bar{b} \to Wb \bar{b}$. The computation is carried out in the limit of a large number of flavours ($n_f$), using a new method that allows any infrared-safe observable to be evaluated easily at order $\alpha_s(\alpha_s n_f)^n$ for any $n$. The observables we consider are, in general, affected by two sources of renormalons: the pole-mass definition and the jet requirements. We compare and discuss the predictions obtained in the usual pole scheme with those computed in the $\overline{\rm MS}$ one.
We find that the total cross section without cuts, when expressed in terms of the $\overline{\rm MS}$ mass, does not exhibit linear renormalons, but, as soon as selection cuts are introduced, jet-related linear renormalons arise in any mass scheme. In addition, we show that the reconstructed mass is affected by linear renormalons in any scheme. The average energy of the $W$ boson (which we consider as a simplified example of a leptonic observable) has a renormalon in the narrow-width limit in any mass scheme; this renormalon is, however, screened at large orders for finite top widths, provided the top mass is in the $\overline{\rm MS}$ scheme. The most precise determinations of the top mass are the direct ones, i.e. those that rely upon the reconstruction of the kinematics of the top-decay products. Direct determinations are heavily based on the use of Monte Carlo event generators, which must be as accurate as possible in order not to introduce biases in the measurements. To this purpose, the second part of the thesis is devoted to the comparison of several NLO generators, implemented in the {\tt POWHEG BOX} framework, that differ in the level of accuracy employed to describe the top decay. The impact of the shower Monte Carlo programs used to complete the NLO events generated by {\tt POWHEG BOX} is also studied. In particular, we discuss the two most widely used shower Monte Carlo programs, i.e. {\tt Pythia 8.2} and {\tt Herwig 7.1}, and we present a method to interface them with processes that contain decayed emitting resonances. The comparison of several Monte Carlo programs that formally have the same level of accuracy is, indeed, a mandatory step towards a sound estimate of the uncertainty associated with $m_t$.
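The factorial growth that defines a renormalon makes the perturbative series asymptotic: partial sums improve only up to a minimal term, leaving an irreducible ambiguity. A generic numerical illustration with a toy series (not one of the thesis's observables):

```python
import math

def partial_sums(a, n_terms):
    """Partial sums of the toy asymptotic series sum_n n! * a^n, whose
    terms shrink until n ~ 1/a and then grow factorially; the best
    achievable truncation error is set by the minimal term."""
    s, sums = 0.0, []
    for n in range(n_terms):
        s += math.factorial(n) * a**n
        sums.append(s)
    return sums

a = 0.1
sums = partial_sums(a, 20)
terms = [math.factorial(n) * a**n for n in range(20)]
n_min = terms.index(min(terms))   # minimal term sits near n ~ 1/a = 10
```

Truncating at the minimal term is the best one can do; the size of that term plays the role of the $\Lambda_{\rm QCD}$-sized ambiguity discussed in the abstract.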
3

Idrissi, Ibnsalih Walid. "Selection of showering events and background suppression in ANTARES: comparison between the effects using two different Monte Carlo versions." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18132/.

Abstract:
ANTARES is currently the largest underwater neutrino telescope; it is located in the Mediterranean Sea, about 40 km off Toulon, France, at a depth of 2450 m on the sea floor. The main goal of ANTARES is to observe high-energy neutrinos that can be traced back to astrophysical sources. An irreducible background for the detector is represented by atmospheric muons, produced by the interactions of cosmic rays with nuclei in the atmosphere. The ANTARES collaboration makes extensive use of Monte Carlo simulations, and a new version of the simulation has recently been released that accounts for ageing effects of the detector, in order to improve the agreement between simulation and data. In this thesis, using shower-like neutrino events, the old and the new Monte Carlo simulations are compared, focusing in particular on the rejection of the background due to atmospheric muons.
4

Hakmana, Witharana Sampath S. "Development of Cosmic Ray Simulation Program -- Earth Cosmic Ray Shower (ECRS)." Digital Archive @ GSU, 2007. http://digitalarchive.gsu.edu/phy_astr_diss/12.

Abstract:
ECRS is a program for the detailed simulation of extensive air showers initiated by high-energy cosmic ray particles. In this dissertation work, a Geant4-based ECRS simulation was designed and developed to study secondary cosmic ray particle showers in the full range of Earth's atmosphere. A proper atmospheric air density and geomagnetic field are implemented in order to correctly simulate the charged-particle interactions in the Earth's atmosphere. The initial simulation was done for the Atlanta (33.46° N, 84.25° W) region. Four different primary proton energies (10^9, 10^10, 10^11 and 10^12 eV) were considered to determine the secondary particle distribution at the Earth's surface. The geomagnetic field and atmospheric air density have considerable effects on the muon distribution at the Earth's surface. The muon charge ratio at the Earth's surface was studied with the ECRS simulation for two different geomagnetic locations: Atlanta, Georgia, USA and Lynn Lake, Manitoba, Canada. The simulation results are shown to be in excellent agreement with the data from the NMSU-WIZARD/CAPRICE and BESS experiments at Lynn Lake. At low momentum, ground-level muon charge ratios show latitude-dependent geomagnetic effects for both Atlanta and Lynn Lake in the simulation. The simulated charge ratio is 1.20 ± 0.05 (without geomagnetic field) and 1.12 ± 0.05 (with geomagnetic field) for Atlanta, and 1.22 ± 0.04 (with geomagnetic field) for Lynn Lake. These types of studies are very important for analyzing the secondary cosmic ray muon flux distribution at the Earth's surface and can be used to study atmospheric neutrino oscillations.
5

Re, Emanuele. "Next-to-leading order QCD corrections to shower Monte Carlo event generators: single vector-boson and single-top hadroproduction." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2009. http://hdl.handle.net/10281/7455.

6

Alioli, Simone. "Matching next-to-leading-order QCD calculations with shower Monte Carlo simulations: single vector boson and Higgs boson production in POWHEG." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2009. http://hdl.handle.net/10281/7381.

Abstract:
In recent years, next-to-leading-order (NLO) QCD computations have become standard tools for phenomenological studies at lepton and hadron colliders. On the experimental side, general-purpose Shower Monte Carlo (SMC) programs have become the main tools used in analyses. These programs perform a resummation of leading-logarithmic contributions to all orders in the soft and collinear approximation. The whole process is thus represented as a parton shower, in which subsequent emissions are strongly ordered. Being fully exclusive, they are easy to interface with phenomenological hadronization models, enabling comparison with experimental data. However, they do not enforce NLO accuracy. In view of the increasing precision required to disentangle signals from backgrounds at present and future colliders, it has become clear that SMC programs should be improved, when possible, with NLO results. In this way a large amount of the acquired knowledge on QCD corrections would be made directly available to experimentalists, in a flexible form that they could easily use for simulations. The problem of merging NLO calculations with parton-shower simulations is basically that of avoiding overcounting, since the SMC programs already implement approximate NLO corrections. Several proposals have appeared in the literature in past years to overcome this problem; the first general solution to the overcounting was the MC@NLO proposal. The basic idea of MC@NLO is to avoid the overcounting by subtracting from the exact NLO cross section its approximation, as implemented in the SMC program to which the NLO computation is then matched. This approximated cross section is computed analytically and is SMC dependent. On the other hand, the MC subtraction terms are process-independent and thus, for a given SMC, can be computed once and for all. In the current version of the MC@NLO code, the MC subtraction terms have been computed for the HERWIG SMC.
It turns out, however, that in general the exact NLO cross section minus the MC subtraction terms need not be positive. Therefore MC@NLO can generate events with negative weights. For the processes implemented so far, negative-weighted events may reach about 10-15% of the total. More recently, a method named POWHEG (Positive Weight Hardest Emission Generator) was proposed that overcomes the problem of negative-weighted events and is not SMC specific. In the POWHEG method the hardest radiation is generated first, with a technique that yields only positive-weighted events using the exact NLO matrix elements. The POWHEG output can then be interfaced to any SMC program that is either $p_T$-ordered or allows the implementation of a $p_T$ veto. The POWHEG method has been successfully tested in several production processes, at both leptonic and hadronic colliders. Among these we list: $ZZ$ and $Q\bar{Q}$ hadroproduction, $q\bar{q}$ and top-pair production and decay in $e^+e^-$ annihilation, Drell-Yan vector boson production, $W'$ production, Higgs boson production via gluon fusion, Higgs boson production in association with a vector boson (Higgs-strahlung), and single top, in both the $s$- and $t$-channel production mechanisms. Detailed comparisons have been carried out between the POWHEG and MC@NLO results, and reasonable agreement has been found, which nicely confirms the validity of both approaches.
In the present work we give a detailed description of the POWHEG method and an overview of two specific applications: single vector boson production and Higgs boson production via gluon fusion. We first present the features of a general subtraction scheme. Then we illustrate in detail the two such schemes adopted in the calculations appearing in this thesis: the Catani-Seymour (CS) one and the Frixione, Kunszt and Signer (FKS) one. Next we concentrate on the application of the POWHEG method to single vector boson production, where, within the POWHEG framework, the Catani-Seymour subtraction approach was employed for the first time. We also introduce a generalization of the method to deal with vanishing Born cross sections, as in the case of $W^\pm$ production. Matrix elements were evaluated from scratch using helicity-amplitude methods, including finite-width effects, $Z/\gamma$ interference and angular correlations of the decay products. Our program has been interfaced with both HERWIG and PYTHIA, two of the most popular Shower Monte Carlo programs used in simulations. Results were found to be in remarkable agreement both with Tevatron data and with the MC@NLO program; we also discuss results for the LHC collider. Higgs boson production via gluon fusion is then presented, with applications to both the Tevatron and the LHC. Gluon fusion is the predominant Higgs boson production channel over a wide range of masses. Matrix elements were evaluated analytically and regularized according to the FKS subtraction formalism. In this case, results agree with MC@NLO distributions up to next-to-next-to-leading-order (NNLO) contributions. We fully understand the origin of the remaining discrepancies and show that the POWHEG framework allows enough flexibility to remove them if needed. Our results were also checked against available NNLO and $q_T$-resummed calculations, giving the expected results.
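The statistical cost of the negative weights mentioned above can be quantified with a standard back-of-the-envelope estimate (not taken from the thesis): a fraction f of negative-weight events dilutes the effective sample size by (1 - 2f)².

```python
def effective_sample_fraction(f_neg):
    """Statistical dilution from negative weights: a sample in which a
    fraction f_neg of the events carry (unit-magnitude) negative weight
    has the same statistical power as a purely positive-weight sample
    smaller by a factor (1 - 2*f_neg)**2."""
    return (1.0 - 2.0 * f_neg) ** 2

# At the 10-15% negative-weight fractions quoted for MC@NLO:
penalty_10 = effective_sample_fraction(0.10)
penalty_15 = effective_sample_fraction(0.15)
```

At f = 0.10 the effective sample shrinks to 64% of its nominal size, and at f = 0.15 to 49%, which is why an all-positive-weight method such as POWHEG is computationally attractive.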
7

Zager, Eric Louis. "The impact of TeV nucleus-nucleus simulations on JACEE results /." Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/9757.

8

Brunet, Florian. "Reconstruction et analyse des gerbes électromagnétiques dans l'expérience OPERA et étude des oscillations neutrino avec détection d'électrons." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00947334.

Abstract:
A broad international programme is under way to determine the parameters of the neutrino oscillation phenomenon and to deepen our knowledge of the neutrino mixing (MNSP) matrix. The OPERA detector, installed in the Gran Sasso underground laboratory in Italy, has as its main goal the observation of the appearance of tau neutrinos in an initially muon-neutrino beam produced at CERN (CNGS) 730 km upstream. It can also detect the oscillation of muon neutrinos into electron neutrinos, giving access to the mixing parameter sin²(2θ13), where θ13 is the last angle of the MNSP matrix, finally determined in 2012 jointly by Daya Bay, RENO and Double Chooz. To establish the presence of ντ in the beam, the OPERA detector is composed of calorimetric targets made of alternating lead plates and emulsion films, which allow the tracks of the charged particles produced in neutrino interactions to be reconstructed with unmatched precision (of the order of a micron). The search for νµ → νe oscillation signal events is based on the ability to identify electrons, to reject background events in which a π0 is produced, and to subtract the dominant intrinsic background coming from the beam. The goal of this thesis work is the development of analysis methods to improve the performance of the OPERA detector in the search for νµ → νe oscillations.
9

Champion, Theresa Janet. "Quality assurance of CsI(Tl) crystals for the BaBar electromagnetic calorimeter, and a Monte Carlo study of the CP-violating channel B⁰ → π⁺π⁻π⁰ for BaBar." Thesis, Brunel University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.311547.

10

Norén, Magnus. "Measuring the vertical muon intensity with the ALTO prototype at Linnaeus University." Thesis, Linnéuniversitetet, Institutionen för fysik och elektroteknik (IFE), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-107133.

Abstract:
ALTO is a project, currently in the research and development phase, with the goal of constructing a Very High Energy (VHE) gamma-ray observatory in the southern hemisphere. It will detect the particle content reaching the ground from the interactions of either VHE gamma rays or cosmic rays in the atmosphere, known as extensive air showers. In this thesis, we use an ALTO prototype built at Linnaeus University to estimate the vertical muon intensity in Växjö. The atmospheric muons we detect at ground level come from hadronic showers caused by a cosmic ray entering the atmosphere. Such showers are considered background noise in the context of VHE gamma-ray astronomy, and the presence of muons is an important indicator of the nature of the shower, and thus of the primary particle. The measurement is done by isolating events that produce signals in two small scintillation detectors that are part of the ALTO prototype and are placed almost directly above each other. This gives us a data set that we assume represents muons travelling along a narrow set of trajectories, and by measuring the rate of such events, we estimate the muon intensity. We estimate the corresponding momentum threshold using two different methods: Monte Carlo simulation and calculation of the mean energy loss. The vertical muon intensity found through this method is about 21% higher than commonly accepted values. We discuss some possible explanations for this discrepancy, and conclude that the most likely explanation is that the isolated data set contains a significant number of "false positives", i.e., events that do not represent a single muon following the desired trajectory.
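The rate-to-intensity conversion underlying such a two-detector coincidence measurement can be sketched as follows; the geometry and rate numbers below are hypothetical and are not those of the ALTO prototype.

```python
def vertical_muon_intensity(rate_hz, area_m2, sep_m):
    """Convert a coincidence rate between two small horizontal detectors
    of area A separated vertically by d into a vertical muon intensity
    I = R / (A * Omega), using the small solid angle Omega ~ A / d**2
    subtended by one detector as seen from the other (valid for A << d**2)."""
    omega = area_m2 / sep_m**2          # small-angle approximation, steradians
    return rate_hz / (area_m2 * omega)  # muons per m^2 per s per sr

# Hypothetical numbers: 0.02 m^2 scintillators, 1 m apart, 0.014 Hz coincidences
i_v = vertical_muon_intensity(rate_hz=0.014, area_m2=0.02, sep_m=1.0)
```

A real analysis would also correct for detection efficiency and for the finite angular acceptance, which is one place where the "false positives" discussed in the abstract enter.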

Books on the topic "Shower Monte Carlo"

1

Radio, les auditeurs en représentation: Les coulisses de Bourdin and Co et du Téléphone sonne. [Latresne]: Éd. le Bord de l'eau, 2009.

2

Wigmans, Richard. The Physics of Shower Development. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198786351.003.0002.

Abstract:
The processes that play a role in the absorption of different types of particles in dense matter are described, with emphasis on the aspects that are important for calorimetry. A distinction is made between particles that develop electromagnetic showers (electrons, photons) and particles that are subject to the strong nuclear interaction, such as pions and protons. A separate section is dedicated to muons, which are typically not fully absorbed in practical calorimeters. The energy dependence of the various processes, and the consequences for the size requirements of detectors, are discussed in detail. The practical importance and limitations of Monte Carlo simulations of the shower development process are reviewed. The chapter ends with a summary of facts deriving from the physics of shower development that are important for calorimetry.
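The energy dependence of shower size and depth discussed in this chapter is often introduced through the Heitler toy model, in which the particle count doubles every radiation length until the energy per particle falls below the critical energy. A minimal sketch (illustrative only, not from the book):

```python
def heitler_shower(e0, e_c=0.01, x0=1.0):
    """Heitler toy model of an electromagnetic shower: starting from one
    particle of energy e0, the particle count doubles every radiation
    length x0 (with the energy shared equally) until the energy per
    particle would drop below the critical energy e_c.  Energies in GeV,
    depth in radiation lengths."""
    n, depth, energy = 1, 0.0, e0
    while energy / 2.0 >= e_c:
        n *= 2
        energy /= 2.0
        depth += x0
    return n, depth   # particle count and depth at shower maximum

n_max, t_max = heitler_shower(e0=10.0)   # 10 GeV primary, E_c = 10 MeV
```

The model reproduces the two scaling laws the chapter derives more carefully: the shower maximum grows logarithmically with the primary energy, while the particle count at maximum grows linearly.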
3

Raydugin, Yuri G. Modern Risk Quantification in Complex Projects. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198844334.001.0001.

Abstract:
There are multiple complaints that existing project risk quantification methods—both parametric and Monte Carlo—fail to produce accurate project duration and cost-risk contingencies in a majority of cases. It is shown that major components of project risk exposure—non-linear risk interactions—pertaining to complex projects are not taken into account. It is argued that a project system consists of two interacting subsystems: a project structure subsystem (PSS) and a project delivery subsystem (PDS). Any misalignments or imbalances between these two subsystems (PSS–PDS mismatches) are associated with the non-linear risk interactions. Principles of risk quantification are developed to take into account three types of non-linear risk interactions in complex projects: internal risk amplifications due to existing ‘chronic’ project system issues, knock-on interactions, and risk compounding. Modified bowtie diagrams for the three types of risk interactions are developed to identify and address interacting risks. A framework to visualize dynamic risk patterns in affinities of interacting risks is proposed. Required mathematical expressions and templates to factor relevant risk interactions to Monte Carlo models are developed. Business cases are discussed to demonstrate the power of the newly-developed non-linear Monte Carlo methodology (non-linear integrated schedule and cost risk analysis (N-SCRA)). A project system dynamics methodology based on rework cycles is adopted as a supporting risk quantification tool. Comparison of results yielded by the non-linear Monte Carlo and system dynamics models demonstrates a good alignment of the two methodologies. All developed Monte Carlo and system dynamics models are available on the book’s companion website.
APA, Harvard, Vancouver, ISO, and other styles
4

Boudreau, Joseph F., and Eric S. Swanson. Classical spin systems. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198708636.003.0020.

Full text
Abstract:
The thermodynamic properties of spin systems are evaluated with Monte Carlo methods. A review of classical thermodynamics is followed by a discussion of critical exponents. The Monte Carlo method is then applied to the two-dimensional Ising model with the goal of determining the phase diagram for magnetization. Boundary conditions, the reweighting method, autocorrelation, and critical slowing down are all explored. Cluster algorithms for overcoming critical slowing down are developed next and shown to dramatically reduce autocorrelation. A variety of spin systems that illustrate first, second, and infinite order (topological) phase transitions are explored. Finally, applications to random systems called spin glasses and to neural networks are briefly reviewed.
APA, Harvard, Vancouver, ISO, and other styles
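The chapter's central example, the two-dimensional Ising model, can be sampled with a few lines of single-spin-flip Metropolis code. The sketch below is an illustrative minimal version, not the book's implementation; the lattice size, temperatures, and sweep counts are arbitrary choices made here for demonstration.

```python
import math
import random

def ising_metropolis(L=8, beta=1.0, sweeps=200, seed=1):
    """Single-spin-flip Metropolis sampling of a 2D Ising model (J = 1)
    with periodic boundaries; returns mean |magnetization| per spin."""
    rng = random.Random(seed)
    # Ordered start: at low temperature it stays magnetized, at high it melts.
    spins = [[1 for _ in range(L)] for _ in range(L)]
    mags = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            # Sum of the four nearest neighbours (periodic wrap-around).
            nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2 * spins[i][j] * nn  # energy change if this spin flips
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spins[i][j] = -spins[i][j]
        if sweep >= sweeps // 2:  # discard the first half as burn-in
            m = sum(sum(row) for row in spins) / (L * L)
            mags.append(abs(m))
    return sum(mags) / len(mags)

# Well below the critical temperature (beta_c ~ 0.44) the magnetization
# stays near 1; in the hot phase it is small.
m_cold = ising_metropolis(beta=1.0)
m_hot = ising_metropolis(beta=0.1)
```

Near the critical point this local algorithm suffers exactly the critical slowing down the chapter discusses, which is what the cluster algorithms it develops are designed to overcome.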
5

Martin, Andrew D. Bayesian Analysis. Edited by Janet M. Box-Steffensmeier, Henry E. Brady, and David Collier. Oxford University Press, 2009. http://dx.doi.org/10.1093/oxfordhb/9780199286546.003.0021.

Full text
Abstract:
This article surveys modern Bayesian methods of estimating statistical models. It first provides an introduction to the Bayesian approach for statistical inference, contrasting it with more conventional approaches. It then explains the Monte Carlo principle and reviews commonly used Markov Chain Monte Carlo (MCMC) methods. This is followed by a practical justification for the use of Bayesian methods in the social sciences, and a number of examples from the literature where Bayesian methods have proven useful are shown. The article finally provides a review of modern software for Bayesian inference, and a discussion of the future of Bayesian methods in political science. One area ripe for research is the use of prior information in statistical analyses. Mixture models and those with discrete parameters (such as change point models in the time-series context) are completely underutilized in political science.
APA, Harvard, Vancouver, ISO, and other styles
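The Monte Carlo principle and the MCMC methods the article surveys can be illustrated with a minimal random-walk Metropolis sampler for a toy Bayesian model. This is a generic sketch, not code from the article; the data, prior, and step size are invented here for illustration.

```python
import math
import random

def mh_posterior_mean(data, prior_mu=0.0, prior_sd=10.0, sigma=1.0,
                      n_iter=20000, step=0.5, seed=0):
    """Random-walk Metropolis sampling of p(mu | data) for a normal
    likelihood with known sigma and a normal prior on mu."""
    rng = random.Random(seed)

    def log_post(mu):
        lp = -0.5 * ((mu - prior_mu) / prior_sd) ** 2            # log prior
        lp += sum(-0.5 * ((x - mu) / sigma) ** 2 for x in data)  # log likelihood
        return lp

    mu, samples = 0.0, []
    lp = log_post(mu)
    for it in range(n_iter):
        prop = mu + rng.gauss(0.0, step)       # symmetric proposal
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject
            mu, lp = prop, lp_prop
        if it >= n_iter // 2:                  # discard burn-in
            samples.append(mu)
    return sum(samples) / len(samples)

data = [2.1, 1.9, 2.3, 2.2, 1.8, 2.0]
est = mh_posterior_mean(data)
# With a weak prior, conjugate-normal algebra puts the posterior mean
# very close to the sample mean (2.05 here).
```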
6

Coolen, A. C. C., A. Annibale, and E. S. Roberts. Graphs with hard constraints: further applications and extensions. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198709893.003.0007.

Full text
Abstract:
This chapter looks at further topics pertaining to the effective use of Markov Chain Monte Carlo to sample from hard- and soft-constrained exponential random graph models. The chapter considers the question of how moves can be sampled efficiently without introducing unintended bias. It is shown mathematically and numerically that apparently very similar methods of picking out moves can give rise to significant differences in the average topology of the networks generated by the MCMC process. The general discussion in complemented with pseudocode in the relevant section of the Algorithms chapter, which explicitly sets out some accurate and practical move sampling approaches. The chapter also describes how the MCMC equilibrium probabilities can be purposely deformed to, for example, target desired correlations between degrees of connected nodes. The mathematical exposition is complemented with graphs showing the results of numerical simulations.
This chapter looks at further topics pertaining to the effective use of Markov Chain Monte Carlo to sample from hard- and soft-constrained exponential random graph models. The chapter considers the question of how moves can be sampled efficiently without introducing unintended bias. It is shown mathematically and numerically that apparently very similar methods of picking out moves can give rise to significant differences in the average topology of the networks generated by the MCMC process. The general discussion is complemented with pseudocode in the relevant section of the Algorithms chapter, which explicitly sets out some accurate and practical move sampling approaches. The chapter also describes how the MCMC equilibrium probabilities can be purposely deformed to, for example, target desired correlations between degrees of connected nodes. The mathematical exposition is complemented with graphs showing the results of numerical simulations.
APA, Harvard, Vancouver, ISO, and other styles
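One standard family of degree-preserving moves to which this kind of analysis applies is the double-edge swap; how invalid proposals (self-loops, multi-edges) are handled is precisely the sort of move-sampling choice that can bias the sampled ensemble. The sketch below is a generic illustration, not the book's pseudocode: rejected proposals leave the graph unchanged rather than being resampled.

```python
import random

def edge_swap_mcmc(edges, n_moves=1000, seed=0):
    """Hard-constrained graph MCMC sketch: double-edge swaps
    (a,b),(c,d) -> (a,d),(c,b) preserve every node's degree exactly.
    Proposals that would create a self-loop or multi-edge are rejected
    and the current graph is kept as the next sample."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]              # work on a copy
    edge_set = set(frozenset(e) for e in edges)
    for _ in range(n_moves):
        i, j = rng.randrange(len(edges)), rng.randrange(len(edges))
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:
            continue  # degenerate move or would create a self-loop
        if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
            continue  # would create a multi-edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
    return edges

cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
rewired = edge_swap_mcmc(cycle, n_moves=500)

def degrees(es):
    d = {}
    for a, b in es:
        d[a] = d.get(a, 0) + 1
        d[b] = d.get(b, 0) + 1
    return d
```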
7

Lopes, Hedibert, and Nicholas Polson. Analysis of economic data with multiscale spatio-temporal models. Edited by Anthony O'Hagan and Mike West. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198703174.013.12.

Full text
Abstract:
This article discusses the use of Bayesian multiscale spatio-temporal models for the analysis of economic data. It demonstrates the utility of a general modelling approach for multiscale analysis of spatio-temporal processes with areal data observations in an economic study of agricultural production in the Brazilian state of Espírito Santo during the period 1990–2005. The article first describes multiscale factorizations for spatial processes before presenting an exploratory multiscale data analysis and explaining the motivation for multiscale spatio-temporal models. It then examines the temporal evolution of the underlying latent multiscale coefficients and goes on to introduce a Bayesian analysis based on the multiscale decomposition of the likelihood function along with Markov chain Monte Carlo (MCMC) methods. The results from agricultural production analysis show that the spatio-temporal framework can effectively analyse massive economics data sets.
APA, Harvard, Vancouver, ISO, and other styles
8

Wright, A. G. Timing with PMTs. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780199565092.003.0008.

Full text
Abstract:
The timing capability of photomultipliers (PMTs) can be inferred from the basic laws of electron motion. The relationships between time dispersion and field strength, initial electron energy, angle of emission, and electrode spacing follow from these laws. For conventional PMTs, the major contribution to dispersion arises from the cathode-to-first-dynode region. The field gradient at the cathode primarily determines the timing. This is verified by examining the electron motion in non-uniform electric fields. The contribution from interdynode transitions is small for linear focussed PMTs. Monte Carlo simulations of output waveforms from scintillators agree with measurements. The performance of threshold, zero crossing, and constant fraction (CF) discriminators is examined, revealing the superiority of the CF types. Two organizations have made detailed timing measurements, some of which show sub-nanosecond jitter. Proximity focussed PMTs from Hamamatsu confirm time dispersion measured in picoseconds.
APA, Harvard, Vancouver, ISO, and other styles
9

Cheng, Russell. Finite Mixture Models. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198505044.003.0017.

Full text
Abstract:
Fitting a finite mixture model when the number of components, k, is unknown can be carried out using the maximum likelihood (ML) method though it is non-standard. Two well-known Bayesian Markov chain Monte Carlo (MCMC) methods are reviewed and compared with ML: the reversible jump method and one using an approximating Dirichlet process. Another Bayesian method, to be called MAPIS, is examined that first obtains point estimates for the component parameters by the maximum a posteriori method for different k and then estimates posterior distributions, including that for k, using importance sampling. MAPIS is compared with ML and the MCMC methods. The MCMC methods produce multimodal posterior parameter distributions in overfitted models. This results in the posterior distribution of k being biased towards high k. It is shown that MAPIS does not suffer from this problem. A simple numerical example is discussed.
APA, Harvard, Vancouver, ISO, and other styles
10

Cheng, Russell. Finite Mixture Examples; MAPIS Details. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198505044.003.0018.

Full text
Abstract:
Two detailed numerical examples are given in this chapter illustrating and comparing mainly the reversible jump Markov chain Monte Carlo (RJMCMC) and the maximum a posteriori/importance sampling (MAPIS) methods. The numerical examples are the well-known galaxy data set with sample size 82, and the Hidalgo stamp issues thickness data with sample size 485. A comparison is made of the estimates obtained by the RJMCMC and MAPIS methods for (i) the posterior k-distribution of the number of components, k, (ii) the predictive finite mixture distribution itself, and (iii) the posterior distributions of the component parameters and weights. The estimates obtained by MAPIS are shown to be more satisfactory and meaningful. Details are given of the practical implementation of MAPIS for five non-normal mixture models, namely: the extreme value, gamma, inverse Gaussian, lognormal, and Weibull. Mathematical details are also given of the acceptance-rejection importance sampling used in MAPIS.
APA, Harvard, Vancouver, ISO, and other styles
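The importance-sampling step that MAPIS relies on can be illustrated generically: draw from a tractable proposal and reweight each draw by the target-to-proposal density ratio. The target, proposal, and test function below are invented for illustration and are not the MAPIS implementation.

```python
import math
import random

def importance_estimate(n=50000, seed=0):
    """Self-normalized importance sampling: estimate E_p[f(X)] for a
    standard normal target p by drawing from a wider normal proposal q
    and reweighting by w = p(x)/q(x). Here f(x) = x^2, so the exact
    answer is E[X^2] = 1."""
    rng = random.Random(seed)
    q_sd = 2.0                      # proposal: N(0, q_sd^2), heavier than p
    num = den = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, q_sd)
        # log(p/q) for two zero-mean normals with sd 1 and q_sd
        log_w = -0.5 * x * x + 0.5 * (x / q_sd) ** 2 + math.log(q_sd)
        w = math.exp(log_w)
        num += w * x * x
        den += w
    return num / den

est = importance_estimate()
```

A wider-than-target proposal keeps the weights bounded; the reverse choice (proposal narrower than target) makes the weight variance blow up, which is the classic failure mode of importance sampling.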

Book chapters on the topic "Shower Monte Carlo"

1

Peneliau, Y. "Electron Photon Shower Simulation in TRIPOLI-4 Monte Carlo Code." In Advanced Monte Carlo for Radiation Physics, Particle Transport Simulation and Applications, 129–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/978-3-642-18211-2_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Quicke, Donald, Buntika A. Butcher, and Rachel Kruft Welton. "Monte Carlo tests and randomization." In Practical R for biologists: an introduction, 187–93. Wallingford: CABI, 2021. http://dx.doi.org/10.1079/9781789245349.0016.

Full text
Abstract:
This chapter focuses on Monte Carlo tests and randomization. The approach involves randomizing the observed numbers many times and comparing the randomized results with the original observed data. It is shown how randomization can be used in experimental design and sampling.
APA, Harvard, Vancouver, ISO, and other styles
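The randomization procedure described here, shuffling the observed values many times and comparing each randomized statistic with the observed one, can be sketched as a two-sample permutation test. The book works in R; the Python sketch below is an illustrative stand-in, not the book's code, and the data are invented.

```python
import random

def permutation_test(a, b, n_perm=10000, seed=0):
    """Two-sample Monte Carlo randomization test: shuffle the pooled
    observations many times and count how often the randomized
    difference in means is at least as extreme as the observed one
    (two-sided p-value)."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        ra, rb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(ra) / len(ra) - sum(rb) / len(rb)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction avoids p = 0

control = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8]
treated = [6.0, 6.3, 5.9, 6.1, 6.4, 5.8]
p = permutation_test(control, treated)
# A clear group separation like this should give a very small p-value.
```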
3

Quicke, Donald, Buntika A. Butcher, and Rachel Kruft Welton. "Monte Carlo tests and randomization." In Practical R for biologists: an introduction, 187–93. Wallingford: CABI, 2021. http://dx.doi.org/10.1079/9781789245349.0187.

Full text
Abstract:
This chapter focuses on Monte Carlo tests and randomization. The approach involves randomizing the observed numbers many times and comparing the randomized results with the original observed data. It is shown how randomization can be used in experimental design and sampling.
APA, Harvard, Vancouver, ISO, and other styles
4

Aliverti, Emanuele, Daniele Durante, and Bruno Scarpa. "Projecting Proportionate Age–Specific Fertility Rates via Bayesian Skewed Processes." In Developments in Demographic Forecasting, 89–103. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-42472-5_5.

Full text
Abstract:
Fertility rates show dynamically varying shapes when modeled as a function of the age at delivery. We incorporate this behavior under a novel Bayesian approach for dynamic modeling of proportionate age-specific fertility rates via skewed processes. The model assumes a skew-normal distribution for the age at the moment of childbirth, while allowing the location and the skewness parameters to evolve in time via Gaussian process priors. Posterior inference is performed via Monte Carlo methods, leveraging results on unified skew-normal distributions. The proposed approach is illustrated on Italian age-specific fertility rates from 1991 to 2014, providing forecasts until 2030.
APA, Harvard, Vancouver, ISO, and other styles
5

Budde, Carlos E., and Arnd Hartmanns. "Replicating $$\textsc {Restart}$$ with Prolonged Retrials: An Experimental Report." In Tools and Algorithms for the Construction and Analysis of Systems, 373–80. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72013-1_21.

Full text
Abstract:
Statistical model checking uses Monte Carlo simulation to analyse stochastic formal models. It avoids state space explosion, but requires rare event simulation techniques to efficiently estimate very low probabilities. One such technique is RESTART. Villén-Altamirano recently showed, by way of a theoretical study and ad-hoc implementation, that a generalisation of RESTART to prolonged retrials offers improved performance. In this paper, we demonstrate our independent replication of the original experimental results. We implemented RESTART with prolonged retrials in the and tools, and apply them to the models used originally. To do so, we had to resolve ambiguities in the original work, and refine our setup multiple times. We ultimately confirm the previous results, but our experience also highlights the need for precise documentation of experiments to enable replicability in computer science.
APA, Harvard, Vancouver, ISO, and other styles
6

Akkar, S. "Earthquake Physical Risk/Loss Assessment Models and Applications: A Case Study on Content Loss Modeling Conditioned on Building Damage." In Springer Tracts in Civil Engineering, 223–37. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68813-4_10.

Full text
Abstract:
This paper presents a novel approach to develop content fragility conditioned on building damage for contents used in residential buildings in Turkey. The approach combines the building damage state probabilities with the content damage probabilities conditioned on building damage states to develop the content fragilities. The paper first presents the procedure and then addresses the epistemic uncertainty in building and content fragilities to show their effects on the content vulnerability. The approach also accounts for the expert opinion differences in the content replacement cost ratios (consequence functions) as part of the epistemic uncertainty. Monte Carlo sampling is used to consider the epistemic uncertainty in each model component contributing to the content vulnerability. A sample case study is presented at the end of the paper to show the implementation of the developed content fragilities by calculating the average annual loss ratio (AALR) distribution of residential content loss over mainland Turkey.
APA, Harvard, Vancouver, ISO, and other styles
7

Li, Jianbin, Junguang Huang, Huawei Tong, and Shankai Zhang. "Study on Surface Deformation Model Induced by Shield Tunneling Based on Random Field Theory." In Lecture Notes in Civil Engineering, 440–54. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-1260-3_40.

Full text
Abstract:
Based on a shield tunnel project in weathered granite strata in Xiamen, stochastic calculations combining random field theory and finite difference analysis with Monte Carlo simulation are used to determine how the characteristics of the surface deformation curve and the surface deformation model vary. Results show that as the vertical scale of fluctuation increases, the transverse scale of fluctuation decreases, or the coefficient of variation increases, the low-peak distribution of the location of the maximum surface settlement induced by shield tunneling becomes more pronounced, and the randomness and irregularity of the shape of the surface deformation curve gradually increase. The diversity of the surface deformation model is affected by parameter correlation and randomness. Under conditions of small transverse and large vertical scales of fluctuation, the sensitivity of the coefficient of variation to the surface deformation mode is limited.
APA, Harvard, Vancouver, ISO, and other styles
8

Zhang, Zhenya, Deyun Lyu, Paolo Arcaini, Lei Ma, Ichiro Hasuo, and Jianjun Zhao. "Effective Hybrid System Falsification Using Monte Carlo Tree Search Guided by QB-Robustness." In Computer Aided Verification, 595–618. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81685-8_29.

Full text
Abstract:
Hybrid system falsification is an important quality assurance method for cyber-physical systems, offering better scalability and practical feasibility than exhaustive verification. Given a desired temporal specification, falsification searches for a violating input rather than a proof guarantee. State-of-the-art falsification approaches often employ stochastic hill-climbing optimization that minimizes the degree of satisfaction of the temporal specification, given by its quantitative robust semantics. However, it has been shown that the performance of falsification can be severely affected by the so-called scale problem, related to the different scales of the signals used in the specification (e.g., rpm and speed): in the robustness computation, the contribution of one signal can be masked by another. In this paper, we propose a novel approach to tackle this problem. We first introduce a new robustness definition, called QB-Robustness, which combines classical Boolean satisfaction and quantitative robustness. We prove that QB-Robustness can be used to judge the satisfaction of the specification and avoids the scale problem in its computation. QB-Robustness is exploited by a falsification approach based on Monte Carlo Tree Search over the structure of the formal specification. First, tree traversal identifies the sub-formulas for which the quantitative robustness needs to be computed. Then, on the leaves, numerical hill-climbing optimization is performed, aiming to falsify those sub-formulas. Our in-depth evaluation on multiple benchmarks demonstrates that our approach achieves better falsification results than state-of-the-art falsification approaches guided by classical quantitative robustness, and it is largely unaffected by the scale problem.
APA, Harvard, Vancouver, ISO, and other styles
9

Chinazzo, André, Christian De Schryver, Katharina Zweig, and Norbert Wehn. "Increasing the Sampling Efficiency for the Link Assessment Problem." In Lecture Notes in Computer Science, 39–56. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-21534-6_3.

Full text
Abstract:
Complex graphs are at the heart of today's big data challenges like recommendation systems, customer behavior modeling, or incident detection systems. One recurring task in these fields is the extraction of network motifs: subgraphs that recur and are statistically significant. To assess the statistical significance of their occurrence, the observed values in the real network need to be compared to their expected values in a random graph model. In this chapter, we focus on the so-called Link Assessment (LA) problem, in particular for bipartite networks. Lacking closed-form solutions, we require stochastic Monte Carlo approaches, which raise the challenge of finding appropriate metrics for quantifying the quality of results (QoR) together with suitable heuristics that stop the computation process when no further increase in quality is expected. We provide investigation results for three quality metrics and show that observing the right metrics reveals so-called phase transitions that can be used as a reliable basis for such heuristics. Finally, we propose a heuristic that has been evaluated with real-world datasets, providing a speedup of 15.4× over previous approaches.
APA, Harvard, Vancouver, ISO, and other styles
10

Belzner, Fabian, Carsten Thorenz, and Mario Oertel. "A Modernized Safety Concept for Ship Force Evaluations During Lock Filling Processes." In Lecture Notes in Civil Engineering, 271–80. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-6138-0_24.

Full text
Abstract:
A ship in a lock chamber is exposed to forces acting on the hull during the filling and emptying processes. These forces accelerate the ship and lead to a displacement. To avoid a collision of the ship with the lock structure, it is moored with mooring lines, which can be strained up to a certain breaking load. The force acting in the mooring lines is called the mooring line force and must be distinguished from the ship force. If the mooring line force exceeds the breaking load, the mooring line fails and the tension energy is abruptly transformed into kinetic energy. A snap-back of the mooring line ends can produce great forces, putting the mooring staff at risk of major injuries. Furthermore, the ship will start to move and could damage the structure and itself. Thus, the mooring line force must be limited during the locking process. The mooring line force depends on the ship force and, furthermore, on the properties of the ship and mooring lines. Due to the number of possible parameter combinations, the given ship force alone may not be sufficient to judge the mooring line safety. In this paper, a statistical approach based on Monte Carlo simulations is shown to determine the relations between mooring line configuration, mooring line forces, and ship force.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Shower Monte Carlo"

1

Waters, Laurie S., Gregg W. McKinney, Joe W. Durkee, Michael L. Fensin, John S. Hendricks, Michael R. James, Russell C. Johns, and Denise B. Pelowitz. "The MCNPX Monte Carlo Radiation Transport Code." In HADRONIC SHOWER SIMULATION WORKSHOP. AIP, 2007. http://dx.doi.org/10.1063/1.2720459.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hagmann, Chris, David Lange, and Douglas Wright. "Cosmic-ray shower generator (CRY) for Monte Carlo transport codes." In 2007 IEEE Nuclear Science Symposium Conference Record. IEEE, 2007. http://dx.doi.org/10.1109/nssmic.2007.4437209.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Jadach, Stanislaw, Wiesław Płaczek, Sebastian Sapeta, Andrzej Konrad Siodmok, and Maciej Skrzypek. "New simpler methods of matching NLO corrections with parton shower Monte Carlo." In Loops and Legs in Quantum Field Theory. Trieste, Italy: Sissa Medialab, 2016. http://dx.doi.org/10.22323/1.260.0020.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ranft, J. "High energy hadron production Monte Carlos." In HADRONIC SHOWER SIMULATION WORKSHOP. AIP, 2007. http://dx.doi.org/10.1063/1.2720461.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Alioli, Simone. "Matching NLO QCD to Monte Carlo showers and POWHEG." In “Loops and Legs in Quantum Field Theory ” 11th DESY Workshop on Elementary Particle Physics. Trieste, Italy: Sissa Medialab, 2013. http://dx.doi.org/10.22323/1.151.0056.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Strachan, Steven, John Williamson, and Roderick Murray-Smith. "Show me the way to Monte Carlo." In the SIGCHI Conference. New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1240624.1240812.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

HUEGE, T., and H. FALCKE. "MONTE CARLO SIMULATIONS OF RADIO EMISSION FROM COSMIC RAY AIR SHOWERS." In Proceedings of the International Workshop (ARENA 2005). WORLD SCIENTIFIC, 2006. http://dx.doi.org/10.1142/9789812773791_0011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Dasgupta, Mrinal. "Parton Shower Monte Carlos vs Resummed Calculations for Interjet Energy Flow Observables." In 15th International Workshop on Deep-Inelastic Scattering and Related Subjects. Amsterdam: Science Wise Publishing, 2007. http://dx.doi.org/10.3360/dis.2007.194.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Vilchez Torres, Mylena Karen, Jimy Frank Oblitas Cruz, Ronmel Leoncio Valcárcel Bornas, Sharon Lizeth Castillo Bazán, and Dennis Renato Pérez Villena. "Model for Hydraulic Shovel Maintenance Planning Using Monte Carlo Simulation." In 20th LACCEI International Multi-Conference for Engineering, Education and Technology: “Education, Research and Leadership in Post-pandemic Engineering: Resilient, Inclusive and Sustainable Actions”. Latin American and Caribbean Consortium of Engineering Institutions, 2022. http://dx.doi.org/10.18687/laccei2022.1.1.172.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ye, Nanyang, and Zhanxing Zhu. "Stochastic Fractional Hamiltonian Monte Carlo." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/419.

Full text
Abstract:
In this paper, we propose a novel stochastic fractional Hamiltonian Monte Carlo approach which generalizes the Hamiltonian Monte Carlo method within the framework of fractional calculus and Lévy diffusion. Due to the large "jumps" introduced by Lévy noise and the momentum term, the proposed dynamics is capable of exploring the parameter space more efficiently and effectively. We have shown that the fractional Hamiltonian Monte Carlo can sample multi-modal and high-dimensional target distributions more efficiently than existing methods driven by Brownian diffusion. We further extend our method to optimizing deep neural networks. The experimental results show that the proposed stochastic fractional Hamiltonian Monte Carlo for training deep neural networks converges faster than other popular optimization schemes and generalizes better.
APA, Harvard, Vancouver, ISO, and other styles
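For contrast with the paper's fractional dynamics, a minimal sketch of the classical Brownian-momentum Hamiltonian Monte Carlo baseline it generalizes is shown below. The target distribution, step size, and trajectory length are illustrative choices made here, not the paper's setup.

```python
import math
import random

def hmc_sample(log_grad, log_prob, n_samples=2000, eps=0.2, n_leap=10, seed=0):
    """Classical Hamiltonian Monte Carlo: Gaussian momentum resampling,
    leapfrog integration of the Hamiltonian dynamics, and a Metropolis
    accept/reject correction for the discretization error."""
    rng = random.Random(seed)
    q, samples = 0.0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                  # resample momentum
        q_new, p_new = q, p
        p_new += 0.5 * eps * log_grad(q_new)     # half step for momentum
        for step in range(n_leap):
            q_new += eps * p_new                 # full step for position
            if step < n_leap - 1:
                p_new += eps * log_grad(q_new)
        p_new += 0.5 * eps * log_grad(q_new)     # final half step
        # Metropolis correction on H = -log p(q) + p^2 / 2
        h_old = -log_prob(q) + 0.5 * p * p
        h_new = -log_prob(q_new) + 0.5 * p_new * p_new
        if math.log(rng.random()) < h_old - h_new:
            q = q_new
        samples.append(q)
    return samples

# Target: standard normal, log p(q) = -q^2/2 up to a constant.
samples = hmc_sample(lambda q: -q, lambda q: -0.5 * q * q)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The fractional variant replaces the Gaussian momentum refreshment and Brownian diffusion with heavy-tailed Lévy noise, which is what produces the large jumps the abstract describes.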

Reports on the topic "Shower Monte Carlo"

1

Ayoul-Guilmard, Q., F. Nobile, S. Ganesh, M. Nuñez, R. Tosi, C. Soriano, and R. Rosi. D5.5 Report on the application of multi-level Monte Carlo to wind engineering. Scipedia, 2022. http://dx.doi.org/10.23967/exaqute.2022.3.03.

Full text
Abstract:
We study the use of multi-level Monte Carlo methods for wind engineering. This report brings together methodological research on uncertainty quantification and work on target applications of the ExaQUte project in wind and civil engineering. First, a multi-level Monte Carlo method for the estimation of the conditional value at risk and an adaptive algorithm are presented. Their reliability and performance are shown on the time-average of a non-linear oscillator and on the lift coefficient of an airfoil, with both preset and adaptively refined meshes. Then, we propose an adaptive multi-fidelity Monte Carlo algorithm for turbulent fluid flows, where multilevel Monte Carlo methods were found to be inefficient. Its efficiency is studied and demonstrated on the benchmark problem of quantifying the uncertainty on the drag force of a tall building under random turbulent wind conditions. All numerical experiments showcase the open-source software stack of the ExaQUte project for large-scale computing in a distributed environment.
APA, Harvard, Vancouver, ISO, and other styles
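The multilevel idea, correcting a cheap coarse estimator with coupled fine-minus-coarse differences computed on shared random inputs, can be sketched on a scalar SDE. This toy example (geometric Brownian motion, a fixed number of samples per level) is illustrative only and is not the ExaQUte implementation.

```python
import math
import random

def mlmc_gbm_mean(s0=1.0, r=0.05, sig=0.2, T=1.0, levels=4,
                  n_paths=20000, seed=0):
    """Multilevel Monte Carlo estimate of E[S_T] for geometric Brownian
    motion: a cheap one-step Euler estimate at level 0, plus coupled
    fine-minus-coarse corrections that reuse the same Brownian path."""
    rng = random.Random(seed)
    total = 0.0
    for lvl in range(levels + 1):
        nf = 2 ** lvl                       # fine time steps at this level
        n = n_paths // (2 ** lvl) or 1      # fewer samples at finer levels
        acc = 0.0
        for _ in range(n):
            dz = [rng.gauss(0.0, math.sqrt(T / nf)) for _ in range(nf)]
            s_f = s0                        # fine Euler path
            for z in dz:
                s_f += r * s_f * (T / nf) + sig * s_f * z
            if lvl == 0:
                acc += s_f
            else:
                nc = nf // 2                # coarse path, SAME Brownian increments
                s_c = s0
                for k in range(nc):
                    z = dz[2 * k] + dz[2 * k + 1]
                    s_c += r * s_c * (T / nc) + sig * s_c * z
                acc += s_f - s_c
        total += acc / n
    return total

est = mlmc_gbm_mean()
exact = math.exp(0.05)  # E[S_T] = s0 * exp(r * T)
```

Because the coupled difference has small variance, most samples can be spent on the cheap coarse level; the adaptive algorithms in the report choose the per-level sample counts instead of fixing them as done here.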
2

Ford, Richard L., and W. Ralph Nelson. The EGS Code System: Computer Programs for the Monte Carlo Simulation of Electromagnetic Cascade Showers (Version 3). Office of Scientific and Technical Information (OSTI), August 2006. http://dx.doi.org/10.2172/1104725.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Rojas-Bernal, Alejandro, and Mauricio Villamizar-Villegas. Pricing the exotic: Path-dependent American options with stochastic barriers. Banco de la República de Colombia, March 2021. http://dx.doi.org/10.32468/be.1156.

Full text
Abstract:
We develop a novel pricing strategy that approximates the value of an American option with exotic features through a portfolio of European options with different maturities. Among our findings, we show that: (i) our model is numerically robust in pricing plain vanilla American options; (ii) the model matches observed bids and premiums of multidimensional options that integrate Ratchet, Asian, and Barrier characteristics; and (iii) our closed-form approximation allows for an analytical solution of the option’s greeks, which characterize the sensitivity to various risk factors. Finally, we highlight that our estimation requires less than 1% of the computational time compared to other standard methods, such as Monte Carlo simulations.
APA, Harvard, Vancouver, ISO, and other styles
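The "standard method" baseline that the authors compare against, plain Monte Carlo pricing, can be sketched for a vanilla European call. This generic example is not the paper's model: it omits the Ratchet, Asian, and Barrier features and simply checks the simulated price against the closed-form Black-Scholes value.

```python
import math
import random

def mc_european_call(s0=100.0, k=100.0, r=0.05, sig=0.2, T=1.0,
                     n_paths=200000, seed=0):
    """Plain Monte Carlo price of a European call under Black-Scholes
    dynamics: simulate terminal prices, average the discounted payoff."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sig * sig) * T
    vol = sig * math.sqrt(T)
    payoff_sum = 0.0
    for _ in range(n_paths):
        st = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        payoff_sum += max(st - k, 0.0)
    return math.exp(-r * T) * payoff_sum / n_paths

def bs_call(s0=100.0, k=100.0, r=0.05, sig=0.2, T=1.0):
    """Closed-form Black-Scholes reference price."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sig * sig) * T) / (sig * math.sqrt(T))
    d2 = d1 - sig * math.sqrt(T)
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return s0 * cdf(d1) - k * math.exp(-r * T) * cdf(d2)

mc = mc_european_call()
exact = bs_call()
```

The slow convergence of this estimator (error shrinking as one over the square root of the path count) is the computational cost the paper's closed-form approximation avoids.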
4

Bouezmarni, Taoufik, Mohamed Doukali, and Abderrahim Taamouti. Copula-based estimation of health concentration curves with an application to COVID-19. CIRANO, 2022. http://dx.doi.org/10.54932/mtkj3339.

Full text
Abstract:
COVID-19 has created an unprecedented global health crisis that caused millions of infections and deaths worldwide. Many, however, argue that pre-existing social inequalities have led to inequalities in infection and death rates across social classes, with the most-deprived classes worst hit. In this paper, we derive semi/non-parametric estimators of the Health Concentration Curve (HC) that can quantify inequalities in COVID-19 infections and deaths and help identify the social classes that are most at risk of infection and death from the virus. We express HC in terms of a copula function that we use to build our estimators of HC. For the semi-parametric estimator, a parametric copula is used to model the dependence between health and socio-economic variables. The copula function is estimated using a maximum pseudo-likelihood estimator after replacing the cumulative distribution of the health variable by its empirical analogue. For the non-parametric estimator, we replace the copula function by a Bernstein copula estimator. Furthermore, we use the above estimators of HC to derive copula-based estimators of the health Gini coefficient. We establish the consistency and asymptotic normality of HC's estimators. Using different data-generating processes and sample sizes, a Monte Carlo simulation exercise shows that the semi-parametric estimator outperforms the smoothed non-parametric estimator, and that the latter does better than the empirical estimator in terms of Integrated Mean Squared Error. Finally, we run an extensive empirical study to illustrate the importance of HC's estimators for investigating inequality in COVID-19 infections and deaths in the U.S. The empirical results show that inequalities in states' socio-economic variables like poverty, race/ethnicity, and economic prosperity are behind the observed inequalities in the U.S.'s COVID-19 infections and deaths.
APA, Harvard, Vancouver, ISO, and other styles