
Doctoral dissertations on the topic "CMS Experimental"


Check out the 50 best doctoral dissertations on the topic "CMS Experimental".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever the relevant data are provided in the metadata.

Browse doctoral dissertations from a wide variety of disciplines and compile an accurate bibliography.

1

Thomaz, Cássia Roberta da Cunha. "Efeito da submissão ao chronic mild stress (CMS) sobre o valor reforçador do estímulo". Pontifícia Universidade Católica de São Paulo, 2001. https://tede2.pucsp.br/handle/handle/16811.

Abstract:
Funding: Fundação de Amparo à Pesquisa do Estado de São Paulo
Chronic Mild Stress (CMS) is an experimental model of depression: rats are submitted to a set of stressful conditions and, as a result, their consumption of water and of sucrose solution drops, as does their previous preference for the sucrose solution. It is argued that the stress changes the organism and, consequently, the reinforcing properties of water and of water with sucrose, so that the stress would make the subjects insensitive to reward. The present study investigated whether what has been called insensitivity to reward could be described as a drop in the reinforcing value of a reinforcer; in other words, one of its goals was to evaluate whether a mild-stress protocol diminished the reinforcing value of a known reinforcer, here water with sucrose. Four male rats were submitted to a six-week stress protocol, as described by Willner, Towell, Sampson, Sophokleous and Muscat (1987). Two of them were also submitted, before and after the stress protocol, to operant sessions under a concurrent FR15-FR15 schedule of reinforcement, with responses followed either by water or by water with sucrose. Results showed that all four subjects reduced their overall liquid consumption while the stress protocol was in effect. The two subjects exposed to the operant condition preferred water with sucrose under the FR15-FR15 schedule (with higher response rates on the lever associated with water with sucrose), and they recovered their overall consumption and their preference for water with sucrose faster than the two subjects not submitted to the operant condition.
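For readers unfamiliar with the CMS paradigm: the weekly liquid consumption and preference tests mentioned in this and several of the following abstracts typically quantify anhedonia through the sucrose preference ratio. The expression below is the conventional definition found in the CMS literature, given here as an assumed illustration rather than a formula taken from this dissertation:
\[
  \text{Preference (\%)} \;=\; \frac{V_{\mathrm{suc}}}{V_{\mathrm{suc}} + V_{\mathrm{w}}} \times 100,
  \qquad
  \text{total intake} \;=\; V_{\mathrm{suc}} + V_{\mathrm{w}},
\]
where $V_{\mathrm{suc}}$ is the volume of sucrose solution consumed and $V_{\mathrm{w}}$ the volume of plain water. A sustained drop in this ratio, or in total intake, during the stressor protocol is what these studies operationalize as anhedonia, i.e. reduced sensitivity to reward.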
2

Almeida, Najara Karine Salomão Pereira. "Chronic Mild Stress (CMS) e os efeitos da exposição de sujeitos a um esquema de reforçamento de tempo variável". Pontifícia Universidade Católica de São Paulo, 2013. https://tede2.pucsp.br/handle/handle/16710.

Abstract:
Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Chronic Mild Stress (CMS) is an experimental animal model of anhedonia, induced by the chronic exposure of rats to a protocol of mild stressors and measured through sucrose intake, intracranial stimulation and/or conditioned place preference. Besides anhedonia, this model is also known to produce body-weight loss, independently of any specific feeding regime, and other characteristics analogous to the conditions that compose a diagnosis of depression. The objective of the present study was to investigate whether exposing subjects, before and after the stressor protocol, to a concurrent VT water / VT 8% sucrose-water schedule of equal values would produce changes: (1) in the subjects' body weight; (2) in daily food and water consumption; (3) in liquid consumption and preference; (4) in the time the subjects spent close to the water stimulus and to the sucrose-water stimulus in the operant conditioning box; and (5) in the time the subjects spent responding in the region of the water spout and of the 8% sucrose-water spout. The design comprised three experimental conditions: (1) exposure of subjects VTP3, VTP4, VTP7, VTP8, P5, P6, P13 and P15 to the stressor protocol; (2) submission of subjects VTP3, VTP4, VTP7 and VTP8 to the concurrent VT 20s sessions; and (3) application of the liquid consumption and preference tests to all experimental subjects, including subject C10. The main changes observed were: (a) smaller weight losses during exposure to the protocol and smaller weight variation over the whole experiment; (b) daily water consumption similar to that of subjects submitted to FR and VI sessions; (c) an increase in daily food consumption, mainly during exposure to the protocol; (d) constant liquid consumption and preference for sucrose; (e) preference for sucrose in the VT 20s sessions after exposure to the protocol; and (f) an increase in the subjects' general activity over the course of the VT 20s sessions. The subjects exposed only to the protocol and to the liquid consumption tests did not show the decrease in sensitivity to the reinforcing stimulus (water with sucrose) that is commonly observed in other studies.
3

Lacerda, Larissa Gomes. "Chronic Mild Stress (CMS): um estudo sobre a interação entre manipulação neonatal e submissão ao protocolo de estressores na vida adulta". Pontifícia Universidade Católica de São Paulo, 2013. https://tede2.pucsp.br/handle/handle/16705.

Abstract:
Funding: Conselho Nacional de Desenvolvimento Científico e Tecnológico
Chronic Mild Stress (CMS) is an experimental animal model of anhedonia induced by exposing rats to a protocol of chronic stressors over a long period of time. At the Laboratório do Programa de Pós-Graduação em Psicologia Experimental: Análise do Comportamento, a research line on CMS and its relation to operant behavior has been developed since 2001, as in Pereira (2009). Unlike other studies conducted in this laboratory, Pereira (2009) did not observe a reduction in sucrose intake and preference (anhedonia) during the stressors. One hypothesis raised was that neonatal handling was a variable responsible for that. The goal of this study was to investigate whether neonatal handling alters fluid intake and preference in rats submitted to the stressor protocol (CMS) during adulthood. Three groups were divided among the experimental conditions: 1) neonatal handling, 2) protocol, and 3) neonatal handling and protocol. The measures used were weight, food and water intake, and weekly fluid intake and preference tests. The results show that the stressor protocol reduced the weight of the subjects both in the handling-plus-protocol group (MP) and in the protocol-only group (P). Food and water intake also presented similar results for Groups P and MP: all subjects showed increased water intake and reduced food intake during the protocol. In the fluid intake and preference tests, both Group P and Group MP showed preference oscillations in the period without stressors and increased sucrose intake during the weeks of submission to the protocol; that is, anhedonia was not produced. The results allow us to state that the group submitted to neonatal handling, with 15-min separations from the mother, did not differ from the non-handled group when submitted to CMS, especially with regard to the intake and preference tests.
4

Ogul, Hasan. "Measurements of the differential cross section and charge asymmetry for inclusive pp→W(μν) production with 8 TeV CMS data and CMS single muon trigger efficiency study". Diss., University of Iowa, 2016. https://ir.uiowa.edu/etd/3152.

Abstract:
This dissertation presents muon charge asymmetry, fiducial differential cross section, and CMS single-muon trigger efficiency measurements as a function of muon pseudorapidity for inclusive W→μν events produced in proton-proton collisions at the LHC. The data were recorded by the CMS detector at a center-of-mass energy of 8 TeV and correspond to an integrated luminosity of 18.8 fb⁻¹. Several comparisons are performed to cross-check the experimental results. Muon efficiency measurements are compared to values estimated from Monte Carlo simulations and to reference values recommended by the CMS physics object groups. The differential cross section and charge asymmetry measurements are compared to theoretical predictions based on next-to-leading-order and next-to-next-to-leading-order QCD calculations with different PDF models. Inputs from the charge asymmetry and the differential cross section measurements for the determination of the next generation of PDF sets are expected to bring different predictions closer together and to aid in reducing PDF uncertainties. The impact of the charge asymmetry on PDFs has been investigated by including the asymmetry results in a QCD analysis at next-to-leading order and next-to-next-to-leading order together with inclusive deep-inelastic scattering data from HERA. A significant improvement in the accuracy of the valence-quark distributions is observed. This measurement is recommended for more accurate constraints in future PDF determinations. More precise measurements of PDFs will improve LHC predictions.
5

VAZZOLER, FEDERICO. "Misura della sezione d'urto di produzione e ricerca di nuova fisica in eventi Z gamma gamma prodotti da collisioni protone protone a sqrt(s) = 13 TeV misurate dal detector CMS presso l'acceleratore LHC". Doctoral thesis, Università degli Studi di Trieste, 2021. http://hdl.handle.net/11368/2981625.

Abstract:
A measurement of the production of a Z boson in association with two or more photons is presented. The analysis is based on proton-proton collision data collected by the CMS experiment at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of about 137 inverse femtobarns. Events are reconstructed in a fiducial phase space by selecting pairs of same-flavour, opposite-sign leptons originating from the Z boson decay, together with at least two isolated photons. The background contributions to this process are divided into two classes, "prompt" and "non-prompt": the former includes events with genuine photons, while the latter represents events in which one or more reconstructed photons originate from hadron decays inside jets. The non-prompt background is estimated with a data-driven technique, while the prompt background is estimated using simulated events. The measured fiducial cross section is in agreement with the Standard Model prediction, and the observed significance corresponds to 4.8 standard deviations. Events selected in the same phase space used for the fiducial cross-section measurement are also exploited to set limits on several anomalous quartic gauge couplings, in terms of dimension-8 effective field theory operators. The constraints obtained are compatible with, and complementary to, other studies that use alternative production and decay channels. This work represents the first study of the associated production of a Z boson and at least two photons at a centre-of-mass energy of 13 TeV.
6

Ogul, Hasan. "Studies of muon efficiences for measurement of W charge asymmetry in inclusive pp→W (μυ) production at √s=7 TeV". Thesis, University of Iowa, 2013. https://ir.uiowa.edu/etd/4887.

Abstract:
The main motivation of the Compact Muon Solenoid (CMS) experiment is to explore and discover the physics underlying electroweak symmetry breaking. Beyond this, the CMS detector provides an opportunity to carry out a variety of searches for new-physics signatures beyond the Standard Model (SM). The investigation of these signatures requires the identification and precise energy and momentum measurement of electrons, muons, photons, and jets. The objective of this thesis is the calculation of the efficiencies needed for the measurement of the W charge asymmetry in inclusive pp→W(μν) production. The charge asymmetry is defined as the difference between the W⁺ and W⁻ boson yields, normalized to their sum. This asymmetry is sensitive to the u-quark and d-quark ratios in the proton, and a precise measurement of the W charge asymmetry can provide new insights into the proton structure functions. Therefore, to improve the understanding of SM backgrounds in searches for new physics, the muon trigger, isolation, reconstruction, and identification efficiencies have been studied using part of the data collected by the CMS detector during pp collisions at the LHC in 2011. The dataset corresponds to an integrated luminosity of 2.31 fb⁻¹. The efficiencies are measured as functions of the decay-muon pseudorapidity and transverse momentum using the "tag and probe" method. The efficiency measurements are compared to the values estimated from Monte Carlo simulations so as to provide scale factors that correct for the residual mismodeling of the CMS muon performance. This comparison with MC simulations also allows the validation of the detector simulation and the optimization of selection strategies.
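For reference, the verbal definition of the charge asymmetry given above corresponds, in its standard differential form, to the expression below; this is the conventional formula used in W charge asymmetry analyses, quoted from the general literature rather than from this thesis:
\[
  A(\eta) \;=\; \frac{\dfrac{d\sigma}{d\eta}\!\left(W^{+}\to\mu^{+}\nu\right) \;-\; \dfrac{d\sigma}{d\eta}\!\left(W^{-}\to\mu^{-}\bar{\nu}\right)}
               {\dfrac{d\sigma}{d\eta}\!\left(W^{+}\to\mu^{+}\nu\right) \;+\; \dfrac{d\sigma}{d\eta}\!\left(W^{-}\to\mu^{-}\bar{\nu}\right)},
\]
where η is the pseudorapidity of the decay muon. The sensitivity to the u/d valence-quark content of the proton arises because W⁺ production (from u d̄) and W⁻ production (from d ū) sample the two distributions with different weights.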
7

Rocha, Laura Muniz. "Os efeitos da submissão ao Chronic Mild Stress (CMS) no estabelecimento de uma discriminação". Pontifícia Universidade Católica de São Paulo, 2013. https://tede2.pucsp.br/handle/handle/16712.

Abstract:
Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Chronic Mild Stress (CMS) is an experimental animal model of anhedonia induced by exposing rats to a protocol of mild stressors over a long period of time. This model is an attempt to reproduce, in a controlled environment, conditions analogous to those of the real environment that are seen as important for producing behavioral changes. To identify further effects of exposure to CMS, this study aimed to verify whether exposure to chronic mild stress produces changes in the establishment of a simple discrimination after the protocol. To that end, the subjects had their weight and their food and water intake measured daily; liquid consumption and preference tests were run weekly; the subjects were exposed to the stressor protocol over six weeks; and, after this period, the discrimination training was initiated. The experiment comprised four experimental conditions: (1) one subject was exposed, like all other subjects, to the liquid consumption and preference tests; (2) four subjects were exposed to the protocol; (3) four subjects were exposed to the protocol and to a discrimination procedure; and (4) four subjects were exposed only to the discrimination procedure, without exposure to the chronic mild stress. The results indicate that: (a) subjects exposed to the protocol showed greater loss and variation of body weight; (b) during the protocol, average water consumption increased and average food consumption decreased for the subjects exposed to the protocol; (c) the liquid consumption and preference tests did not show a reduction in preference for sucrose, but rather an increase in total fluid intake for the subjects exposed to stress; (d) exposure to the stressor protocol interfered with the acquisition of a simple discrimination, with the subjects exposed to the protocol requiring, on average, twice as many sessions to reach the criterion of two consecutive sessions with discrimination indices above 80%; and (e) the subjects exposed to the protocol showed differences in the generalization tests when compared with the subjects exposed only to the discrimination. Thus, exposure to a condition of chronic mild stress affects the acquisition of a simple discrimination established afterwards. This result may indicate that exposure to the stress condition altered the reinforcing value of the stimulus for the subjects exposed to it, and this change may have been responsible for the differences in discrimination between the subjects exposed to this condition and the subjects not exposed to the stressors.
8

Souza, Sandro Fonseca de. "Procura de assinaturas experimentais do modelo 331 com o detector CMS do CERN". Universidade do Estado do Rio de Janeiro, 2010. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=8788.

Abstract:
Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
In this work, we study the production and detection of new resonances predicted by an extension of the Standard Model, the 331 model, with the CMS detector. This model predicts that the number of fermion families is a multiple of the number of quark colors, and it has as characteristic signatures new bosons: the Z' and the doubly charged bileptons. In the Z' boson analysis, we search for modifications of the mass spectrum of high-mass dimuon pairs, while the doubly charged bileptons are searched for in the 2-muon plus 2-electron decay channel. The discovery potential for the Z' boson was estimated using the likelihood method, with the Drell-Yan process as the main irreducible background. We also study the production of bileptons, which are a unique signature of this model and can be used to discriminate it.
9

Ruiz, Vargas José Cupertino [UNESP]. "Search for new resonances in the merged jet plus dilepton final state in CMS". Universidade Estadual Paulista (UNESP), 2017. http://hdl.handle.net/11449/150892.

Abstract:
Funding: Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
At the European Organization for Nuclear Research (CERN), the Large Hadron Collider (LHC) smashes bunches of protons together 40 million times per second at an energy of 13 TeV. Operating at the LHC, the Compact Muon Solenoid (CMS) is a multipurpose detector conceived to identify a large variety of particles produced in such collisions. The produced particles are observed in the sub-detectors in search of clues about Nature at its most fundamental level. In spite of the impressive agreement of the standard model with all the experimental results obtained so far, one of the main aims of the LHC is the search for new physics beyond this theoretical model. In this work, we analyze proton-proton collisions delivered by the LHC operating at a centre-of-mass energy of 13 TeV and collected by CMS during 2015. The channel under study involves the search for an unknown resonance X decaying into a pair of vector bosons. The results are interpreted in the context of the Randall-Sundrum warped extra-dimensional model, distinguishing between the background-only hypothesis (standard model) and the background-plus-signal hypothesis (standard model + graviton). No evidence for the existence of a graviton-like particle was found.
FAPESP: 2012/24593-8
10

GIANNINI, Leonardo. "Deep Learning techniques for the observation of the Higgs boson decay to bottom quarks with the CMS experiment". Doctoral thesis, Scuola Normale Superiore, 2020. http://hdl.handle.net/11384/91153.

11

Vanel, Jean-Charles. "Étude et caractérisation de photodiodes à avalanche en silicium pour le calorimètre électromagnétique de l'expérience CMS". Grenoble 1, 1997. http://www.theses.fr/1997GRE10236.

Abstract:
In the future electromagnetic calorimeter of the CMS experiment (LHC at CERN), the scintillation light of the lead tungstate crystals will be read out by avalanche photodiodes (APDs). This thesis presents the characterization measurements obtained with a first generation of large-area APDs (> 20 mm²) supplied by two manufacturers, Hamamatsu and EG&G. The measured parameters are the gain, the excess noise factor, the quantum efficiency, the capacitance, the leakage currents, and the response to ionizing particles. The gain and the excess noise factor were measured using three methods: calibrated sources, Webb-McIntyre, and spectral analysis. The results showed that no photodetector of this first generation possessed all the required characteristics. The analysis of these results allowed the manufacturers to modify their fabrication processes and thereby improve the subsequent generations. This work also presents a complementary noise study performed on the APD + preamplifier assembly, placed in various configurations inside a calorimeter prototype. The results obtained lead to a proposed solution that simultaneously optimizes noise, cooling, and cost.
12

Rogerson, Samuel. "A search for supersymmetry using the alphaT variable with the CMS detector and the impact of experimental searches for supersymmetry on supersymmetric parameter space". Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/18398.

Abstract:
A search for supersymmetry in final states with jets and missing transverse energy is performed in pp collisions at a centre-of-mass energy of sqrt(s) = 7 TeV. The data sample corresponds to 4.98 fb⁻¹ collected by the CMS experiment at the LHC in 2011. A dimensionless kinematic variable, alphaT, is used as the main discriminant between genuine and misreconstructed signal events. The search is performed in a signal region binned according to the scalar sum of the transverse energy of the jets and the number of jets identified as originating from a bottom quark. Limits are presented in the parameter space of the cMSSM as well as in simplified models, with particular attention paid to compressed spectra and third-generation models. Global frequentist fits to the cMSSM and a non-universal Higgs model are also performed using the Mastercode framework, incorporating recent experimental constraints similar to those presented here. Global likelihood contours are presented in the parameter planes of both the cMSSM and the NUHM1, as well as a selection of one-dimensional likelihood functions for observables.
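For context, the dimensionless kinematic discriminant referred to here is the αT variable named in the title. Its standard two-jet definition, as commonly used in CMS αT searches (quoted from the general literature, not from this thesis), is:
\[
  \alpha_T \;=\; \frac{E_T^{\,j_2}}{M_T},
  \qquad
  M_T \;=\; \sqrt{\Bigl(\textstyle\sum_{i=1,2} E_T^{\,j_i}\Bigr)^{2} - \Bigl(\textstyle\sum_{i=1,2} p_x^{\,j_i}\Bigr)^{2} - \Bigl(\textstyle\sum_{i=1,2} p_y^{\,j_i}\Bigr)^{2}},
\]
where j₂ is the softer of the two jets (events with more jets are first clustered into two pseudo-jets). A perfectly measured, back-to-back dijet event gives αT = 0.5; multijet events with mismeasured jets fall below 0.5, while events with genuine missing transverse energy can exceed it.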
13

Djupfeldt, Petter. "Dr. Polopoly - IntelligentSystem Monitoring : An Experimental and Comparative Study ofMultilayer Perceptrons and Random Forests ForError Diagnosis In A Network of Servers". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-191557.

Abstract:
This thesis explores the potential of using machine learning to supervise and diagnose a computer system by comparing how Multilayer Perceptron (MLP) and Random Forest (RF) classifiers perform at this task in a controlled environment. The basis of comparison is primarily how accurate they are in their predictions, but some thought is given to how cost-effective they are with regard to time. The specific system used is a content management system (CMS) called Polopoly. The thesis details how training samples were collected by inserting Java proxies into the Polopoly system in order to time the inter-server method calls. Errors in the system were simulated by limiting individual servers' bandwidth, and a normal use case was simulated with a tool called Grinder. The thesis then describes the setup of the two algorithms and how their parameters were decided upon, before comparing the final implementations based on their accuracy. The accuracy is noted to be poor, with both being correct roughly 20% of the time, but the thesis discusses whether there could still be a use case for the algorithms at this level of accuracy. Finally, the thesis concludes that there is no significant difference (p > 0.05) between the MLP and RF accuracies, and suggests that future work should focus either on comparing the algorithms further or on trying to improve the diagnosis of errors in Polopoly.
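As an illustration of the comparison methodology described above, the sketch below pits an MLP against a Random Forest using cross-validated accuracy and a paired significance test. It is a minimal, hypothetical example built on scikit-learn with synthetic placeholder data; the thesis's own features (inter-server call timings from the Polopoly proxies), labels, and exact statistical test are not specified in this abstract and are assumed here.

```python
# Illustrative sketch only: MLP vs. Random Forest comparison in the spirit of the study above.
# X and y are synthetic stand-ins for the thesis data (call timings -> faulty-server label).
import numpy as np
from scipy.stats import wilcoxon
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))        # placeholder for inter-server method-call timing features
y = rng.integers(0, 5, size=600)      # placeholder for "which server is degraded" labels

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0)

# Per-fold accuracies on the same folds, so the comparison is paired.
acc_mlp = cross_val_score(mlp, X, y, cv=cv, scoring="accuracy")
acc_rf = cross_val_score(rf, X, y, cv=cv, scoring="accuracy")

# Paired non-parametric test; the thesis reports no significant difference (p > 0.05).
stat, p_value = wilcoxon(acc_mlp, acc_rf)
print(f"MLP accuracy: {acc_mlp.mean():.3f} +/- {acc_mlp.std():.3f}")
print(f"RF  accuracy: {acc_rf.mean():.3f} +/- {acc_rf.std():.3f}")
print(f"Wilcoxon signed-rank p-value: {p_value:.3f}")
```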
14

Benato, Lisa. "Search for heavy resonances decaying into a $Z$ boson and a vector boson in the $\nu \bar{\nu}$ $q\bar{q}$ final state at CMS". Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3426686.

Abstract:
This thesis presents a search for potential signals of new heavy resonances decaying into a pair of vector bosons, with masses between 1 TeV and 4 TeV, predicted by theories beyond the standard model. The signals probed are a spin-1 W', predicted by the Heavy Vector Triplet model, and spin-2 bulk gravitons, predicted by warped extra-dimension models. The scrutinized data were produced by LHC proton-proton collisions at a center-of-mass energy $\sqrt{s}=13$ TeV during the 2016 operations and collected by the CMS experiment, corresponding to an integrated luminosity of 35.9 fb$^{-1}$. One of the bosons is required to be a Z, identified through its invisible decay into neutrinos, while the other electroweak boson, either a W or a Z, is required to decay hadronically into a pair of quarks. The decay products of heavy resonances are produced with large Lorentz boosts; as a consequence, the decay products of the bosons (quarks and neutrinos) are expected to be highly energetic and collimated. The neutrino pair, escaping undetected, is reconstructed as missing momentum in the transverse plane of the CMS detector. The quark pair is reconstructed as a single large-cone jet with high transverse momentum, recoiling against the neutrino pair. Grooming algorithms are adopted to improve the jet mass resolution by removing soft-radiation components and spectator contributions from the particles clustered into the large-cone jet. The groomed jet mass is used to tag the hadronically decaying vector boson, to define the signal region of the search (close to the nominal W and Z boson masses, between 65 and 105 GeV) and a signal-depleted control region used for the background estimation. A hybrid data-simulation approach predicts the normalization and the shape of the main background, a vector boson produced in association with jets, by taking advantage of the distribution of data in the signal-depleted control regions. Secondary backgrounds are predicted from simulation. Jet substructure techniques are exploited to classify events into two exclusive purity categories, by resolving the quark pair inside the large-cone jet; this approach improves the background rejection and the discovery reach. The search is performed by scanning the distribution of the reconstructed resonance mass, looking for a local excess in data with respect to the prediction. Depending on the mass, upper limits on the cross section of heavy spin-1 and spin-2 narrow resonances, multiplied by the branching fraction of the resonance decaying into a Z and a W boson for the spin-1 signal, and into a pair of Z bosons for spin-2, are set in the ranges $0.9$ -- $63$ fb and $0.5$ -- $40$ fb, respectively. A W' hypothesis is excluded up to 3.11 TeV in the Heavy Vector Triplet benchmark A scenario, and up to 3.41 TeV in the benchmark B scenario. A bulk graviton hypothesis, for a curvature parameter of the extra dimension $\tilde{k}=1.0$, is excluded up to 1.14 TeV.
15

Moreno, Thiago Victor 1988. "Comparação entre produção de múons nos chuveiros atmosféricos extensos observados no Observatório Pierre Auger e nos detetores do experimento CMS do CERN, a partir de colisões próton-próton". [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/277005.

Advisor: José Augusto Chinellato
Master's dissertation (mestrado), Universidade Estadual de Campinas, Instituto de Física Gleb Wataghin
Abstract:
In this work the CORSIKA program was used to generate proton-proton collision events and extensive air showers with the primary particle being a proton or an iron nucleus. The hadronic interaction models used were EPOS LHC, QGSJET 01c, QGSJET II-4 and SIBYLL 2.1. The p-p collisions were simulated at an energy of 7 TeV in the centre-of-momentum frame, and the charged-hadron multiplicity distribution and the pseudorapidity density were studied. By comparing these observables with data collected by the CMS detector at the LHC, the models that best reproduced the data were chosen to generate the air showers. The extensive air showers were generated with a primary-particle energy of 10^19 eV in the laboratory frame. The muon density was then examined at the altitude of the Surface Detector of the Pierre Auger cosmic-ray Observatory. The objective is to study the possibility of using this density to probe hadronic interaction models and to identify the primary particle of the events detected by the Pierre Auger Observatory.
Master's degree in Physics (Mestre em Física)
16

Medina, Jaime Miguel 1983. "Produção exclusiva de bósons Z em colisões pp no experimento CMS/LHC". [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/321487.

Advisor: José Augusto Chinellato
Doctoral thesis (doutorado), Universidade Estadual de Campinas, Instituto de Física Gleb Wataghin
Abstract:
Exclusive Z boson production is a diffractive process, theoretically represented by the exchange of an object with the quantum numbers of the vacuum (the pomeron, IP), which can be described within the Standard Model in the dipole-mechanism approach, via photoproduction. The main feature of this type of process is that the two protons emerge intact from the collision with small transverse momentum; they are scattered at very forward angles and escape detection. The criterion adopted to identify exclusive Z boson events decaying into a muon pair is therefore the number of additional tracks at the vertex, but at least one proton may still dissociate, in which case the measured cross section is contaminated. Three different methods for the pre-selection of events containing exclusive Z bosons are presented. The proposed selection strategy is based on the decay of Z bosons into μ⁺μ⁻ pairs with large transverse momentum, exploring muons in a kinematic region with transverse momentum pT(μ) > 20 GeV, invariant mass M(μ⁺μ⁻) > 40 GeV/c², and pseudorapidity |η| < 2.5. The additional cuts require zero extra tracks at the vertex and, finally, that the muon pair be back-to-back with balanced transverse momentum. This thesis presents a search for exclusive Z boson production in proton-proton collisions at a centre-of-mass energy of √s = 7 TeV in the CMS/LHC experiment. The analysis uses simulations of Drell-Yan processes and of muon-pair photoproduction, together with data collected by the CMS detector from proton-proton collisions at √s = 7 TeV in 2011, corresponding to an integrated luminosity of 5.09 fb⁻¹. No exclusive Z→μ⁺μ⁻ candidates are observed, owing to the low statistics and the large background contribution from Drell-Yan processes. Even so, the cross section for exclusive Z boson production is found to be σ_excl(Z→μ⁺μ⁻) < 13.3 fb at 95% C.L. This is a first, very preliminary result for this process at 7 TeV, and the limit improves on the results published by the CDF collaboration at the Tevatron.
Doctorate in Physics (Doutor em Ciências). CNPq grant 147301/2011-04.
17

Almeida, Miquéias Melo de. "Obtenção de um limite para a largura do bóson de Higgs no experimento CMS via H→ ZZ → (4e, 4μ, 2e2μ)". Universidade do Estado do Rio de Janeiro, 2015. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=8860.

Abstract:
Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
We present in this work a study of the Higgs boson total width using the channel H → ZZ → (4e, 4μ, 2e2μ). According to the Standard Model of elementary particle physics, a Higgs boson with mass around 126 GeV should have a total decay width of ΓH = 4.15 MeV, well below the resolution of the experiments installed at the LHC. This fact prevents a direct measurement on the events of the Higgs resonance. Recently it was proposed to constrain ΓH from the relationship between the rate of events observed in the resonance region and in the off-shell region. Using the analysis package developed by the CMS collaboration, a limit of ΓH < 31.46 (12.82) MeV at 95 (68.3)% CL was obtained by combining the data collected by the LHC in pp collisions at √s = 7 TeV (5.1 fb⁻¹) and at √s = 8 TeV (19.7 fb⁻¹).
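The on-shell/off-shell argument summarized in this abstract can be written compactly. The relation below is the standard narrow-width reasoning behind such limits, given here as a schematic sketch rather than an equation quoted from the thesis:

    \[
    \sigma_{\text{on-shell}} \propto \frac{g_{\text{prod}}^{2}\, g_{\text{dec}}^{2}}{\Gamma_H},
    \qquad
    \sigma_{\text{off-shell}} \propto g_{\text{prod}}^{2}\, g_{\text{dec}}^{2}
    \qquad\Longrightarrow\qquad
    \frac{\mu_{\text{off-shell}}}{\mu_{\text{on-shell}}} = \frac{\Gamma_H}{\Gamma_H^{\mathrm{SM}}},
    \]

so a measurement of the off-shell to on-shell signal-strength ratio translates directly into a bound on the total width ΓH.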
Style APA, Harvard, Vancouver, ISO itp.
18

Martins, Jordan. "Procura do bóson de Higgs a partir da fusão de bósons vetoriais no Experimento CMS". Universidade do Estado do Rio de Janeiro, 2010. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=7399.

Pełny tekst źródła
Streszczenie:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
O presente trabalho tem por finalidade determinar a luminosidade necessária para se impor um limite de exclusão e a significância para evidência e descoberta de um bóson de Higgs de mH = 200 GeV no canal qqH → ZZ → 4ℓ. Apresenta-se também uma crítica à qualidade da relação sinal/fundo em eventos de fusão de bósons vetoriais conhecida da literatura atual.
This analysis aims to determine the luminosity required to set an exclusion limit, and the significance for evidence and discovery, for a Higgs boson of mH = 200 GeV in the channel qqH → ZZ → 4ℓ. It also presents a critique of the quality of the signal-to-background ratio in vector boson fusion events as known from the current literature.
Style APA, Harvard, Vancouver, ISO itp.
19

MANDORLI, GIULIO. "Search for the Higgs boson decaying into two muons with the CMS experiment". Doctoral thesis, Scuola Normale Superiore, 2021. http://hdl.handle.net/11384/105966.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
20

Benaglia, Andrea Davide. "ECAL calibration studies for H > gg searches and Higgs boson search in the H > WW > lvqq final state with the CMS detector at the LHC". Phd thesis, Ecole Polytechnique X, 2012. http://pastel.archives-ouvertes.fr/pastel-00722897.

Pełny tekst źródła
Streszczenie:
This thesis presents three years of work carried out within the CMS experiment, in the context of the first proton-proton collisions of the LHC. The study focuses in particular on the search for the Higgs boson. This particle, whose existence is predicted by the Standard Model, has never yet been observed, but it remains the object of active searches at very high-energy colliders. In the recent past, the LEP and Tevatron experiments excluded the existence of the Higgs in the mass ranges < 114 GeV/c² and [147, 179] GeV/c², at 95% confidence level. When the subject of this thesis was decided, the very low and very high mass regions were therefore the most interesting ones. With the 2011 collision data, I addressed the question of the stability and uniformity of the response of the electromagnetic calorimeter (ECAL). Isolated electrons from W boson decays in the electron-neutrino channel were used to characterize the ECAL response (local uniformity corrections, crystal transparency corrections, ageing of the readout cells). This serves as a reference for the analysis of the Higgs boson decaying into two photons, which is the favoured channel for low-mass hypotheses and which requires an optimal energy resolution in order to take full advantage of the very narrow Higgs boson resonance at these masses. The physics analysis carried out in this thesis concerns the search for the Higgs boson in the other mass region still allowed by the experimental constraints. In particular, I studied the decay channel H > WW > lνqq for Higgs masses mH > 2mW. This channel offers the largest production cross section times branching ratio (σ × B) for the Higgs boson, even though it suffers from large backgrounds from standard processes. A complete analysis strategy was defined: I studied the physics performance of the reconstruction and identification of isolated leptons, contributed strongly to the definition and characterization of the trigger for this channel, and carried out a detailed analysis of the sources of systematic uncertainty affecting the statistical interpretation of the results. In the absence of a deviation from the expected Standard Model backgrounds, upper limits on the Higgs boson were obtained in the mass range between approximately 320 and 400 GeV/c².
Style APA, Harvard, Vancouver, ISO itp.
21

Franzosi, Diogo Buarque. "Estudo da quebra da simetria eletrofraca através do espelhamento W W no experimento CMS do CERN". Universidade do Estado do Rio de Janeiro, 2007. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=513.

Pełny tekst źródła
Streszczenie:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Este trabalho apresenta um estudo sobre o espalhamento W+W+ e W−W− para os primeiros anos de tomada de dados do experimento CMS do LHC, no CERN. O processo de espalhamento de bósons vetoriais, dentre os quais se inclui o espalhamento WW, é um processo chave para a elucidação do mecanismo de quebra de simetria eletrofraca. Previsões teóricas mostram que as características cinemáticas do espalhamento de bósons vetoriais na escala de energia de TeV devem depender significativamente do mecanismo que quebra a simetria eletrofraca. Este processo será analisado com dois objetivos principais: estudar a viabilidade de medi-lo em altas energias no CMS, e mostrar a sua sensibilidade ao mecanismo da quebra de simetria eletrofraca. Esta sensibilidade será mostrada através da análise de duas amostras de eventos correspondentes a dois cenários distintos: o modelo padrão com a presença de um bóson de Higgs de massa 500 GeV e o modelo padrão sem a presença do bóson de Higgs. Estas amostras foram geradas com o gerador de eventos PHASE, cuja principal importância para o estudo do espalhamento de bósons vetoriais em altas energias é a sua capacidade de calcular o elemento de matriz completo em ordem dominante O(α⁶). Para se analisar a viabilidade de se medir o espalhamento WW no CMS, foi feita uma simulação do detector utilizando as amostras dos processos de sinal e dos principais processos de fundo através do pacote de simulação rápida do CMS, o FAMOS. Os processos de fundo, WW+N jatos, WZ+N jatos, ZZ+N jatos, W+N jatos e tt+N jatos, foram estudados e suprimidos através de seleções de regiões cinemáticas. A análise dos dados mostra que a observação do espalhamento WW na fase inicial do LHC será muito difícil, sendo necessária uma luminosidade maior, além de aprimoramentos da análise.
This work presents a study of W+W+ and W−W− scattering for the first years of CMS data taking at the LHC, CERN. Vector boson scattering, including WW scattering, is a key process for probing electroweak symmetry breaking. Theoretical predictions show that the kinematic characteristics of vector boson scattering at the TeV scale must strongly depend on the electroweak symmetry breaking mechanism. This process is analyzed with two main objectives: to study the viability of performing this measurement at CMS and to show its dependence on the electroweak symmetry breaking mechanism. This dependence is shown through an analysis of two event samples corresponding to two distinct scenarios: the Standard Model with a 500 GeV massive Higgs boson and the Standard Model without a Higgs boson. These samples were generated with the Monte Carlo generator PHASE, whose main importance for vector boson scattering at high energies is its ability to calculate the complete matrix elements at leading order O(α⁶). To analyze the viability of measuring WW scattering at CMS, the simulated signal and background events were passed through the CMS fast detector simulation, FAMOS. The background processes, WW+N jets, WZ+N jets, ZZ+N jets, W+N jets and tt+N jets, were studied and suppressed through kinematic region selections. The data analysis shows that the observation of WW scattering in the early stages of the LHC will be very difficult, requiring a larger luminosity as well as improvements to the analysis.
Style APA, Harvard, Vancouver, ISO itp.
22

Talotte, Corinne. "Adaptation de procédures expérimentales au cas des voiliers en gîte et dérive : comparaison des résultats expérimentaux et numériques". Nantes, 1994. http://www.theses.fr/1994NANT2102.

Pełny tekst źródła
Streszczenie:
Sailing yachts have specific features of shape and operation that make them ideal vessels for testing advances in the numerical modelling of flows around hulls and appendages. Whether to perform precise validations of the models or simply to obtain experimental results useful for performance prediction, it is necessary to develop instrumentation and procedures adapted to sailing yachts for towing-tank tests. In a first chapter, we present the different experimental methods that were adapted to sailing yachts and their validation on two types of hull. Knowledge of the complete force-and-moment torsor is essential because of the specific operating conditions of sailing yachts under heel and leeway. Two six-component measurement balances were therefore designed, the second sized for larger models so as to study lifting appendages at significant Reynolds numbers. Turbulence stimulation in the towing tank is obtained in the classical way by sand strips placed on the hull and the appendages. In a second chapter we study their influence on the forces through systematic tank tests, wind-tunnel tests on the appendages and laser velocimetry measurements. Tests on two model sizes were also performed in order to evaluate, for the yachts studied, scale effects and tank blockage effects. The experimental results obtained are then compared, in a third chapter, with the numerical results of the REVA code for wave resistance and the solution of lifting problems, and of the AQUAPLUS code for behaviour in waves.
Style APA, Harvard, Vancouver, ISO itp.
23

Ozturk, Fahri. "Modelling and experimental study of millimetre wave refractive systems". Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/modelling-and-experimentalstudy-of-millimetre-waverefractive-systems(0fd069b3-27bd-437b-ae09-0811972ff23c).html.

Pełny tekst źródła
Streszczenie:
Astronomical instruments dedicated to the study of Cosmic Microwave Background polarization are in need of optics with very low systematic effects such as beam shape and cross-polarization in an optical configuration. With the demand for millimetre wave larger focal planes comprising thousands of pixels, these systematic effects have to be minimal across the whole focal surface. In order to reach the instrument requirements such as resolution, cross-polarization and beam ellipticity, new optical configurations with well-understood components have to be studied. Refractive configurations are of great importance amongst the potential candidates. The aim is to bring the required technology to the same level of maturity that has been achieved with well-understood existing ones. This thesis is focused on the study of such optical components for the W-band spectral domain. Using optical modelling with various software packages, combined with the manufacture and accurate experimental characterization of some prototype components, a better understanding of their performance has been reached. To do so, several test set-ups have been developed. Thanks to these new results, full Radio-Frequency refractive systems can be more reliably conceived.
Style APA, Harvard, Vancouver, ISO itp.
24

Bianchini, Lorenzo. "Recherche du boson de Higgs du modèle standard dans la voie de désintégration en leptons taus avec l'expérience CMS au LHC". Phd thesis, Ecole Polytechnique X, 2012. http://pastel.archives-ouvertes.fr/pastel-00739907.

Pełny tekst źródła
Streszczenie:
In this thesis, I present my work within the CMS experiment, devoted to the search for the Standard Model Higgs boson in its decay channel into a pair of tau leptons. The original objective and purpose of this search is a contribution to the discovery of the Higgs boson at the LHC. After a summary of the theoretical and experimental framework of my thesis, I describe the particle-flow technique developed for the offline reconstruction of the events recorded by CMS. The reconstruction and identification of tau leptons in CMS are described in detail in the thesis. Particular attention is devoted to the discrimination between electrons and tau leptons decaying semi-leptonically (giving neutrinos and hadrons). A crucial element in the H -> tautau study is the reconstruction of the invariant mass. On this subject, I made original contributions to the development of a Dynamical Likelihood Model, called SVfit, to estimate the full mass of the tau pair, which is used in the analysis. Finally, a search for the SM Higgs boson in the H -> tautau channel, based on 4.9 fb⁻¹ of pp collisions at √s = 7 TeV, is presented. Events with a pair of taus are selected where one tau decays semi-leptonically (i.e. into hadrons and neutrinos) and the other leptonically (i.e. into a muon or an electron and neutrinos). No excess above the backgrounds predicted by the SM was observed. The statistical interpretation of the results was performed within the SM framework and is finally translated into 95% CL limits on the signal cross section.
Style APA, Harvard, Vancouver, ISO itp.
25

Broutin, Clémentine. "Mesure des électrons et recherche de bosons de Higgs en canaux multi-leptons avec l'expérience CMS au LHC". Phd thesis, Ecole Polytechnique X, 2011. http://pastel.archives-ouvertes.fr/pastel-00617514.

Pełny tekst źródła
Streszczenie:
This thesis presents three years of work carried out within the CMS experiment, in the context of the first LHC collisions. The study focuses in particular on electrons, essential objects for searches in multi-lepton channels, notably the H → ZZ(∗) → 4ℓ analysis. During the first months of collisions, we took part in the validation of the data recorded by the electromagnetic calorimeter. We measured the efficiency of the Level-1 electron and photon trigger throughout 2010. The plateau efficiency is 99.6% (resp. 98.5%) for electrons in the barrel (resp. in the endcaps) of the calorimeter. In order to optimize the discovery potential, we developed a new algorithm for measuring the electron charge. In CMS, this measurement is affected by the large amount of material in the tracker. The performance of this algorithm was measured on the data recorded in 2010, on electrons from Z boson decays passing a standard selection. The misidentification probability is 1.06% (0.19% after a dedicated selection), in agreement with the simulation. The physics analysis carried out in this thesis concerns the search for doubly charged Higgs bosons in their decays into lepton pairs. With the data recorded in 2010, one background event is expected at the end of the selection, for a number of signal events that depends on the mass hypothesis and on the model considered. One event is observed in the data, in agreement with the expected background, which allows this signal to be excluded over wider mass ranges than before: the limit varies from 122 GeV to 176 GeV depending on the model.
Style APA, Harvard, Vancouver, ISO itp.
26

Guativa, Lina Milena Huertas. "Medida da seção de choque difrativa simples dσ/d|t| a √s = 8 TeV no experimento CMS". Universidade do Estado do Rio de Janeiro, 2013. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=6247.

Pełny tekst źródła
Streszczenie:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
A análise descrita nesta dissertação tem como objetivo a medida da seção de choque difrativa dσ/d|t| à energia no sistema do centro de massa de √s = 8 TeV. Os eventos usados para obter a seção de choque difrativa foram selecionados para processos de difração simples com um próton espalhado na região frontal e na região cinemática de 0,03 < |t| < 1,0 GeV² e 0,03 < ξ < 0,1, usando os dados em comum obtidos em 2012 pelos detectores CMS e TOTEM, o que permite ter uma perspectiva mais detalhada do processo difrativo, devido à aceitação completa que a combinação dos detectores oferece. Os dados foram corrigidos pela aceitação e eficiência do detector. A partir de uma parametrização exponencial da forma A·e^{Bt}, o valor da inclinação para o processo em que o próton espalhado é detectado em ambas as direções, positiva e negativa, do CMS é de B = (−6,403 ± 1,241) GeV⁻².
The goal of the analysis presented in this dissertation is the measurement of the diffractive cross section dσ/d|t| at a centre-of-mass energy of √s = 8 TeV. The events were selected for single diffractive processes with one scattered proton in the forward region and in the kinematic region 0.03 < |t| < 1.0 GeV² and 0.03 < ξ < 0.1, using the data collected during 2012 by both the CMS and TOTEM detectors, whose joint acceptance allows a more detailed view of diffractive processes. The data were corrected for the effects of detector acceptance and efficiency. An exponential parametrization A·e^{Bt} was fitted to the data; the value obtained for the slope, for the process in which the scattered proton is detected in either the positive or the negative region of CMS, is B = (−6.403 ± 1.241) GeV⁻².
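As a toy illustration of the exponential fit quoted above, the sketch below fits A·e^{Bt} to a binned |t| spectrum with scipy. The bin centres, contents and uncertainties are placeholder inputs, not data from the dissertation.

    import numpy as np
    from scipy.optimize import curve_fit

    def dsigma_dt(t, A, B):
        """Exponential parametrization A*exp(B*t) of the diffractive |t| spectrum."""
        return A * np.exp(B * t)

    # Placeholder binned spectrum: |t| bin centres (GeV^2) and measured dsigma/d|t| values.
    t_centres = np.array([0.05, 0.15, 0.25, 0.40, 0.60, 0.85])
    values    = np.array([95.0, 50.0, 26.0, 10.0, 3.0, 0.8])
    errors    = np.array([5.0, 3.0, 2.0, 1.0, 0.5, 0.2])

    popt, pcov = curve_fit(dsigma_dt, t_centres, values, sigma=errors,
                           absolute_sigma=True, p0=(100.0, -6.0))
    A_fit, B_fit = popt
    B_err = np.sqrt(pcov[1, 1])
    print(f"slope B = {B_fit:.3f} +/- {B_err:.3f} GeV^-2")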
Style APA, Harvard, Vancouver, ISO itp.
27

Custódio, Analu Verçosa. "Procura do bóson de Higgs no canal de decaimento H → W⁺W⁻ → ℓ⁺νℓ⁻ν utilizando a técnica de elemento de matriz no experimento CMS do CERN". Universidade do Estado do Rio de Janeiro, 2011. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=7134.

Pełny tekst źródła
Streszczenie:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Nesta dissertação apresento as atividades que foram desenvolvidas durante o período de mestrado, que teve como objetivo o desenvolvimento da técnica de análise de dados através do Método de Elemento de Matriz (ME) para a procura do bóson de Higgs no experimento CMS. A proposta foi utilizar uma técnica de análise de dados relativamente nova, conhecida como Método do Elemento de Matriz (ME). Esta técnica foi desenvolvida e utilizada recentemente para aplicação na física do quark top nos experimentos D0 e CDF do Tevatron (FERMILAB). Entretanto, ainda não existem estudos envolvendo a aplicação da mesma na física do Higgs no LHC. O método de ME foi aplicado na procura do Higgs no canal de decaimento H → W⁺W⁻ → ℓ⁺νℓ⁻ν, o qual os estudos atuais apontam como sendo um dos canais com maior potencial de descoberta, principalmente nesta fase inicial em que a estatística ainda será muito limitada.
We present the activities that were developed during the Master's, which aimed at the development of a data analysis technique using the Matrix Element Method (ME) for the Higgs boson search in the CMS experiment. The proposal was to use a relatively new data analysis technique known as the Matrix Element Method (ME). This technique was developed and used recently for applications in top quark physics in the D0 and CDF experiments at the Tevatron (Fermilab). However, there are no studies involving its use in Higgs physics at the LHC. The ME method was applied to the Higgs boson search in the decay channel H → W⁺W⁻ → ℓ⁺νℓ⁻ν, which current studies indicate as one of the channels with the highest discovery potential, especially at this early stage in which the statistics are still very limited.
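For reference, the matrix-element method referred to in this abstract is usually written as a per-event probability built from the leading-order matrix element and a detector transfer function. The schematic form below is the textbook formulation, not an equation taken from the dissertation:

    \[
    P(x \mid H) = \frac{1}{\sigma_H}\int \mathrm{d}\Phi(y)\,
    f(x_1)\, f(x_2)\,\bigl|\mathcal{M}_H(y)\bigr|^{2}\, W(x \mid y),
    \qquad
    D(x) = \frac{P(x \mid \mathrm{sig})}{P(x \mid \mathrm{sig}) + P(x \mid \mathrm{bkg})},
    \]

where f(x_1), f(x_2) are the parton distribution functions, W(x|y) maps parton-level configurations y onto reconstructed quantities x, and D(x) is the discriminant used to separate signal from background.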
Style APA, Harvard, Vancouver, ISO itp.
28

NATALE, Umberto. "Distilling information from present and future CMB datasets: the cases of large-scale polarization and lensing". Doctoral thesis, Università degli studi di Ferrara, 2021. http://hdl.handle.net/11392/2488201.

Pełny tekst źródła
Streszczenie:
The level of precision expected by future cosmic microwave background experiments makes it necessary to refine the techniques used to analyse the data. It becomes essential to understand how to maximise their information content. This can be done either by developing techniques to reduce the noise level present in the data or by systematically studying the significance of different observations for a given model. In addition, it is possible to define new estimators to test their statistical properties. In this thesis, we initially show how it is possible to construct a pixel-based dataset that combines the WMAP and low-frequency Planck large-scale polarization maps. After demonstrating its robustness, we derive constraints on the optical depth, obtaining τ = 0.069 +0.012/−0.011 (68% CL). Adding small-scale data, BAO and lensing, we find τ = 0.0714 +0.0087/−0.0096 (68% CL). As a further topic, we show how it is possible to define new estimators to study the correlation between the orientation of the Galactic plane and the low-variance anomaly shown by large-angle CMB temperature data. Through the use of random rotations, we show the stability of this anomaly at high Galactic latitudes, finding a significance of ~3σ. Finally, we compare two main observables in CMB experiments: lensing and large-scale polarization. We show how the information carried by these two probes affects our ability to constrain the base ΛCDM parameters. We extend the analyses considering also some of its most debated extensions, quantifying which future probe will play a crucial role in their characterisation.
Il livello di precisione atteso dai futuri esperimenti sulla radiazione cosmica di fondo rende necessario il perfezionamento delle tecniche utilizzate per analizzare i dati. Diviene indispensabile capire come massimizzare il loro contenuto informativo. Questo può essere fatto sia sviluppando tecniche per ridurre il livello di rumore presente nei dati, sia tramite lo studio sistematico della significatività delle diverse osservazioni per un determinato modello. In aggiunta, è possibile definire nuovi estimatori per testarne le proprietà statistiche. In questo lavoro di tesi, inizialmente mostriamo come sia possibile costruire un dataset nello spazio dei pixel che combini le mappe di polarizzazione su larga scala ottenute dalle misure a bassa frequenza di WMAP e Planck. Dopo averne dimostrato la robustezza, deriviamo i vincoli sullo spessore ottico ottenendo un valore pari a τ = 0.069 +0.012/−0.011 (68% CL). Aggiungendo misure derivanti dalle piccole scale, BAO e lensing, troviamo un valore pari a τ = 0.0714 +0.0087/−0.0096 (68% CL). Come ulteriore argomento, facciamo vedere come sia possibile definire nuovi estimatori per studiare la correlazione tra l'orientazione del piano Galattico e l'abbassamento anomalo della varianza visto nei dati di temperatura della CMB su larga scala angolare. Tramite l'uso di rotazioni random mostriamo la stabilità di questa anomalia ad alte latitudini Galattiche, trovando una significatività di ~3σ. Infine, compariamo due osservabili principali negli esperimenti di CMB: quella del lensing e quella della polarizzazione su larga scala. Mostriamo come l'informazione contenuta in queste due sonde influenzi la nostra capacità di vincolare i parametri base del modello ΛCDM. Estendiamo l'analisi considerando anche alcune delle sue estensioni più dibattute, quantificando quale sarà la sonda che giocherà un ruolo cruciale nella loro caratterizzazione.
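A pixel-based dataset of the kind described in this abstract is typically analysed with a multivariate Gaussian likelihood of the map given a signal-plus-noise covariance. The sketch below shows that standard evaluation as a generic illustration with a placeholder covariance, not the pipeline used in the thesis.

    import numpy as np

    def minus_two_log_like(data_map, covariance):
        """-2 ln L for a Gaussian pixel-based likelihood: m^T C^-1 m + ln det C (+ const)."""
        sign, logdet = np.linalg.slogdet(covariance)
        chi2 = data_map @ np.linalg.solve(covariance, data_map)
        return chi2 + logdet

    # Placeholder example: 100 pixels with diagonal noise plus a smooth "signal" covariance.
    rng = np.random.default_rng(0)
    n_pix = 100
    noise_cov = np.diag(np.full(n_pix, 0.1))
    signal_cov = 0.05 * np.exp(-np.abs(np.subtract.outer(range(n_pix), range(n_pix))) / 10.0)
    cov = noise_cov + signal_cov
    m = rng.multivariate_normal(np.zeros(n_pix), cov)
    print(minus_two_log_like(m, cov))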
Style APA, Harvard, Vancouver, ISO itp.
29

Marquès, i. Bueno Maria del Mar. "Estudi de la implicació de la proteïna quinasa CK2 en via de senyalització de les auxines en arabidopsis thaliana". Doctoral thesis, Universitat Autònoma de Barcelona, 2012. http://hdl.handle.net/10803/362638.

Pełny tekst źródła
Streszczenie:
Aquesta tesi s’emmarca dins d’un projecte d’investigació més general que té per objectiu caracteritzar i estudiar la funció de la proteïna quinasa CK2 de sistemes vegetals. La CK2 és una Ser/Thr quinasa present a tots els eucariotes estudiats fins al moment. Estudis fets en diversos eucariotes han evidenciat la importància d’aquesta quinasa en el creixement i en el control del cicle cel·lular, no obstant, les vies de senyalització en que participa aquesta quinasa encara no estan caracteritzades per complet. Al primer capítol d’aquesta tesi es descriu per primera vegada la involucració de la proteïna quinasa CK2 d’Arabidopsis en la via de senyalització de les auxines, en particular, del transport d’auxina. En plantes, la fitohormona auxina és la principal reguladora del creixement cel·lular. Recentment s’ha demostrat que la distribució asimètrica de l’auxina en els diferents òrgans de la planta és essencial per al desenvolupament vegetal i, aquesta distribució depèn principalment del transport polar de l’auxina. Per dur a terme aquesta investigació hem utilitzat un mutant dominant negatiu de la proteïna quinasa CK2 d’Arabidopsis thaliana (la línia CK2mut); aquest mutant té uns trets fenotípics que estan lligats a alteracions en processos dependents d’auxina, com són la inhibició del creixement de l’arrel principal, l’absència d’arrels laterals i el fenotip wavy. No obstant, s’ha comprovat que la percepció de l’auxina no està afectada ja que les plantes CK2mut responen al tractament exogen amb aquesta hormona. Hem demostrat que aquest mutant no té defectes en la síntesi de l’auxina, però si en el transport d’aquesta hormona. Eliminant l’activitat CK2 utilitzant eines genètiques (línia CK2mut) i eines farmacològiques (tractament amb TBB, un inhibidor específic de la CK2) hem demostrat que hi ha anomalies en la formació dels gradients d’auxina i canvis en l’expressió de gens relacionats amb les auxines; en particular, en membres de la família de transportadors de sortida d’aquesta hormona cap a l’exterior cel·lular (PINs) i en la proteïna quinasa PINOID (PID). Aquestes proteïnes són reguladores dels fluxos d’auxina. Estudis fets amb plàntules PIN4::PIN4-GFP i PIN7::PIN7-GFP han permès detectar que l’eliminació de l’activitat CK2 provoca l’acumulació d’aquestes proteïnes en vesícules endosomals. La formació de vesícules endosomals va fer pensar que la CK2 tenia algun paper en la via de tràfic vesicular d’aquests transportadors, i aquest ha estat el tema d’estudi del segon capítol d’aquesta tesi. S’han emprat diversos inhibidors de diferents punts de la via de tràfic vesicular per tal de poder discriminar a quin punt d’aquesta via actua la CK2. Aquests experiments s’han realitzat utilitzant plantes PIN7::PIN7-GFP i utilitzant com a model les plantes PIN1::PIN1-GFP, ja que PIN1 és el transportador d’auxina més ben caracteritzat. Amb el tractament amb tyrphostin A23 hem observat que les vesícules endosomals que es formen per eliminació de l’activitat CK2 estan revestides de clatrina. Amb els tractaments amb brefeldin A (BFA) i wortmannin hem determinat que la CK2 actua en algun punt de la via d’endocitosi diferent al d’aquests inhibidors. S’ha detectat que aquests endosomes no es tenyeixen amb el colorant específic de lípids FM4-64 i que són més grans que els endosomes obtinguts pel tractament amb BFA. 
Per últim, el tractament amb cicloheximida ens ha permès observar dos efectes: el primer és que la formació d’endosomes no depèn de la síntesi de proteïnes de novo i el segon, és que la inhibició de la CK2 no compromet la degradació al vacúol d’aquestes proteïnes. Així doncs, amb els resultats obtinguts podria ser que l’eliminació de la CK2 provoqués la formació d’endosomes sorting nexin aberrants, indicant així, que aquesta quinasa estaria intervenint en la via del retròmer.
This thesis is part of a more general project that aims to characterise and study the protein kinase CK2 in plant systems. Protein kinase CK2 is a pleiotropic Ser/Thr kinase, evolutionarily conserved in eukaryotes. Studies performed in different organisms, from yeast to humans, have highlighted the importance of CK2 in cell growth and cell-cycle control. However, the signalling pathways in which CK2 is involved have not been fully identified. The first chapter of this thesis implicates, for the first time, the protein kinase CK2 in the auxin signalling pathway. In plants, the phytohormone auxin is a major regulator of cell growth. Recent discoveries have demonstrated that the differential distribution of auxin within plant tissues is essential for developmental processes, and that this distribution is dependent on polar auxin transport. We report here that a dominant-negative mutant of CK2 (CK2mut) in Arabidopsis thaliana shows phenotypic traits that are typically linked to alterations in auxin-dependent processes. However, CK2mut plants exhibit normal responses to exogenous indole-3-acetic acid (IAA), indicating that they are not affected in the perception of the hormone, but upstream in the pathway. We demonstrate that mutant plants are not deficient in IAA but are impaired in its transport. Using genetic and pharmacological tools, we show that CK2 activity depletion hinders the correct formation of auxin gradients and leads to widespread changes in the expression of auxin-related genes. In particular, members of the auxin efflux carrier family (PINs) and the protein kinase PINOID, both key regulators of auxin fluxes, were misexpressed. PIN4 and PIN7 were also found mislocalized, with accumulation in endosomal bodies. The formation of endosomal bodies suggested that CK2 plays a role in the vesicular trafficking of these carriers, and this was the topic of the second chapter of this thesis. We used several inhibitors in order to identify the point of the pathway at which CK2 acts. We performed these experiments using PIN7::PIN7-GFP and, as a model, PIN1::PIN1-GFP, because PIN1 is the best-characterized carrier. The tyrphostin A23 treatment reveals that the endosomal vesicles formed upon CK2 inhibition are clathrin-coated. The brefeldin A and wortmannin treatments show that CK2 acts at a different point of the pathway from these inhibitors. We have detected that the endosomal bodies obtained by CK2 inhibition are not stained by the FM4-64 dye and are enlarged compared with the endosomes obtained by the BFA treatment. Finally, the cycloheximide treatment allowed us to observe two effects. The first is that the formation of endosomes does not depend on de novo protein synthesis. The second one is that CK2 inhibition does not compromise the degradation of the PIN proteins in the vacuole. Therefore, our results seem to indicate that inhibition of CK2 entails the formation of aberrant sorting nexin endosomes, which suggests that this kinase may be involved in the retromer complex.
Style APA, Harvard, Vancouver, ISO itp.
30

Figueiredo, Diego Matos. "Análise da produção de dijatos de difração simples no experimento CMS/LHC". Universidade do Estado do Rio de Janeiro, 2011. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=4349.

Pełny tekst źródła
Streszczenie:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
O escopo desse trabalho é a observação de dijatos de difração simples em colisões pp com √s = 7 TeV, durante os primeiros períodos de aquisição de dados do experimento CMS/LHC. A técnica utilizada foi a medida da multiplicidade no calorímetro HF. Os dados foram analisados para diferentes períodos de aquisição de dados do ano de 2010, com ∫ L dt ≈ 3,2 pb⁻¹. Comparamos os dados observados com o Monte Carlo simulado com e sem efeito de empilhamento.
This work concerns the observation of diffractive dijets in pp collisions at √s = 7 TeV, during the first periods of data taking of the CMS/LHC experiment. The technique used was the measurement of the multiplicity in the forward calorimeter HF. The data were analyzed for different periods of data acquisition in 2010, with ∫ L dt ≈ 3.2 pb⁻¹. We compared the observed data with Monte Carlo simulations with and without pile-up.
Style APA, Harvard, Vancouver, ISO itp.
31

Le, grand Thomas. "Physique du quark top dans l'expérience CMS au démarrage du LHC". Thesis, Lyon 1, 2010. http://www.theses.fr/2010LYO10165/document.

Pełny tekst źródła
Streszczenie:
La première partie de cette thèse porte sur l'amélioration de l'algorithme de l'étape d'initiation de reconstruction des traces de hadrons et de muons au sein du trajectographe au silicium de l'expérience CMS. Les différentes étapes de mise au point et de tests, qui ont permis d'aboutir à la qualification de ce nouvel algorithme en tant que méthode standard d'initiation de la reconstruction des traces, sont présentées dans ce document.La deuxième partie concerne la mise en place d'une méthode alternative de mesure de la section efficace de production des paires top-antitop dans l'expérience CMS lors du démarrage du LHC. Cette analyse est effectuée à partir du canal de désintégration semi-muonique avec au moins un muon supplémentaire provenant d'un des quarks bottom et a été réalisée en simulation complète démontrant ainsi la possibilité d'une “redécouverte” possible du quark top avec 5 pb-1. Les 2.4 pb-1 de données réelles obtenues à la fin du mois d'Août m'ont permis d'observer les premières paires top-antitop et d'effectuer une première mesure de section efficace : 171±77(stat.) ±27(syst.) pb
The first part of this thesis is about the improvements made to the seeding algorithm of track reconstruction for hadrons and muons in the silicon tracker of the CMS experiment. The different stages, from design to testing, that allowed us to qualify this new algorithm as the standard seeding for track reconstruction are presented in this document. The second part is dedicated to the development of an alternative method to measure the cross section of top-antitop pair production in the CMS experiment at the LHC start-up. This analysis is performed in the semi-muonic decay channel with at least one additional muon coming from one of the bottom quarks, and was studied in full simulation, showing the feasibility of "re-discovering" the top quark with 5 pb⁻¹. The 2.4 pb⁻¹ of data collected by the end of August allowed me to observe the first top-antitop pairs and to make a first cross-section measurement: 171 ± 77 (stat.) ± 27 (syst.) pb.
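Schematically, a counting measurement of this kind extracts the cross section from the observed yield after background subtraction. The relation below is the generic textbook expression, not a formula quoted from the thesis:

    \[
    \sigma_{t\bar t} = \frac{N_{\mathrm{obs}} - N_{\mathrm{bkg}}}{\varepsilon\,\mathcal{B}\,\int L\,\mathrm{d}t},
    \]

where N_obs and N_bkg are the observed and expected background event counts, ε is the selection efficiency, B the branching fraction of the selected decay channel and ∫L dt the integrated luminosity.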
Style APA, Harvard, Vancouver, ISO itp.
32

Tida, Tsilanizara. "Convection turbulente dans le cas d'un jet parietal plan d'eau". Nantes, 1988. http://www.theses.fr/1988NANT2004.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
33

CIRIOLO, VINCENZO. "Study of the Higgs boson associated production with a vector boson in the Higgs boson diphoton decay channel with the CMS detector". Doctoral thesis, Università degli Studi di Milano-Bicocca, 2019. http://hdl.handle.net/10281/241085.

Pełny tekst źródła
Streszczenie:
Questa tesi di dottorato presenta lo studio della produzione associata di un bosone di Higgs con un bosone vettore (VH) in collisioni protone-protone, utilizzando i dati raccolti ad un'energia nel centro di massa delle collisioni pari a 13 TeV con l'esperimento CMS nel 2016. Questo studio è ristretto allo stato finale in cui il bosone di Higgs decade in due fotoni. Dopo un'introduzione riguardante il contesto teorico e sperimentale che motiva questo studio (Capitolo 1) e una descrizione generale dell'esperimento CMS (Capitolo 2), il lavoro sperimentale necessario per ottenere eccellenti prestazioni dal calorimetro elettromagnetico di CMS è descritto in dettaglio (Capitolo 3). Il metodo di analisi dei dati e i risultati sono descritti nel Capitolo 4, includendo la discussione dei passi di ottimizzazione dell'analisi che sono stati ritenuti utili per gli sviluppi futuri. In particolare, sono descritti il canale in cui il bosone vettore decade in due quark e quello in cui il bosone W decade leptonicamente. I risultati dell'analisi sono inclusi in un articolo recentemente sottomesso per la pubblicazione su una rivista internazionale e rappresentano i primi risultati della misura della produzione VH nel canale H→ γγ con dati raccolti nel Run 2 di LHC.
This Ph.D. thesis presents a study of the associated production of the Higgs boson with a vector boson (VH) in proton-proton collisions, using data collected at a center-of-mass energy of 13 TeV with the CMS experiment in 2016, corresponding to an integrated luminosity of 35.9 fb⁻¹. The study is restricted to final states where the Higgs boson decays into two photons (H→ γγ). After an introduction on the experimental and theoretical landscape that motivates this study (Chapter 1) and an overview of the CMS experiment (Chapter 2), the experimental work performed to ensure the best quality of the CMS ECAL reconstruction is described in detail (Chapter 3). The analysis method and the results are then described (Chapter 4), including the discussion of optimization steps that were identified and will be relevant for future developments. In particular, the channels with the vector boson decaying into a pair of quarks and with the W boson decaying into a lepton and a neutrino are described. This analysis, included in a paper recently submitted for publication, represents the first result on the measurement of the VH production process in the H→ γγ final state with the LHC Run 2 data.
Style APA, Harvard, Vancouver, ISO itp.
34

Gómez, Gutiérrez Anna. "Contaminants orgànics persistents a la conca mediterrània. El cas del delta de l'Ebre". Doctoral thesis, Universitat Autònoma de Barcelona, 2008. http://hdl.handle.net/10803/5811.

Pełny tekst źródła
Streszczenie:
Els contaminants orgànics persistents (COPs) són compostos tòxics, persistents, bioacumulables i capaços de ser transportats per l'atmosfera a tot el planeta, incloses aquelles zones remotes on mai s'han produït ni emprat. Aquesta tesi estudia la contaminació dels ecosistemes aquàtics mediterranis per una selecció de COPs: els bifenils policlorats (PCBs), el diclorodifeniltricloroetà (DDT) i els seus derivats (DDE i DDD), els hexaclorociclohexans (HCHs) i l'hexaclorobenzè (HCB).
Els resultats d'aquesta tesi s'han organitzat en tres capítols. En el primer d'ells s'estudia l'estat de la contaminació i la dinàmica dels COPs en el delta de l'Ebre. L'anàlisi de les mostres recollides (novembre de 2002 - octubre de 2003) mostrà com encara es produeixen aportacions no controlades de DDT al delta. Aquestes entrades poden ser la conseqüència de l'arrossegament de partícules enriquides mobilitzades des de diferents indrets de la conca i/o el resultat de les activitats agrícoles. La variabilitat estacional i geogràfica del lindà també indica l'existència d'aportacions recents. D'altra banda, les crescudes del riu provoquen un important augment de la descàrrega de COPs, bàsicament a la fase particulada, a causa de l'augment de l'escolament, la lixiviació, l'arrossegament de sòls i la remobilització dels sediments (ex. material dipositat als embassaments). Així, els períodes d'avinguda són claus en el còmput total de la descàrrega de COPs a la Mediterrània.
En el segon dels capítols s'estudià el perfil vertical estratificat del riu a la seva desembocadura com un exemple dels estuaris de falca salina desenvolupats a la Mediterrània. La falca salina a l'Ebre provoca un canvi en la dinàmica vertical de diferents variables fisicoquímiques, de la matèria orgànica i dels COPs. La floculació induïda pel canvi salí vertical provoca un augment de la quantitat de partícules en suspensió a la falca salina i la barreja de partícules de diversos orígens (orgànic i mineral). Aquest material es barreja a la vegada amb el material marí i les partícules suspeses pateixen una disminució general de les concentracions de COPs. Tanmateix, la falca salina actua com una zona de retenció dels COPs ja que el seu moviment lent i la baixa renovació de l'aigua fan que els contaminants es mantinguin a l'estuari. D'altra banda, tot i que la interfície vertical entre l'aigua dolça - falca salina juga un paper molt important en la transformació i l'acumulació de la matèria orgànica, no s'han detectat màxims molts evidents de COPs en aquesta zona.
Finalment, l'últim capítol presenta una visió més global i s'analitza l'abast i el risc ecològic de la utilització dels COPs a tota la conca mediterrània. Es realitzà una recopilació i valoració de la informació existent sobre COPs als sediments marins superficials com a indicadors i integradors del nivell de contaminació de la conca. Tot i les limitacions trobades en el recull d'informació (manca de dades per les costes est i sud de la conca, inexistència de procediments de mostreig i d'anàlisi normalitzats i manca de guies de qualitat ecotoxicològica) es van observar algunes tendències. Des del punt de vista geogràfic, s'ha demostrat com la contaminació per COPs als sediments mediterranis és un problema localment important en algunes zones urbanes/industrials, àrees de descàrrega de rius i zones semitancades (ports i llacs costaners). Malgrat això, les concentracions i el seu risc ecològic associat decreixen ràpidament en les zones de mar obert. Temporalment, tot i la gran variabilitat, s'observa una disminució general de les concentracions de COPs des dels anys setanta fins a l'actualitat.
Persistent Organic Pollutants (POPs) are compounds which have been shown to be toxic, persistent, bioaccumulative and susceptible to atmospheric transport to remote areas where they have never been produced or used. This Thesis deals with the contamination of the Mediterranean Sea by a selection of POPs: hexachlorobenzene (HCB), hexachlorocyclohexanes (HCHs), dichlorodiphenyltrichloroethane (DDT) and its derivatives (DDE and DDD) and polychlorinated biphenyls (PCBs).
Results of this Thesis have been organized into three different chapters. The first chapter is focused on the contamination by POPs and their dynamics in the Ebro Delta. The analysis of collected samples (November 2002 - October 2003) has shown that even today, some DDT inputs are occurring in the Ebro Delta. These inputs can be related to the mobilization of enriched particles from several places in the basin and/or to agricultural activities. In the case of lindane, the seasonal and spatial variability also seems to indicate the existence of recent inputs of this pesticide in the area. On the other hand, rises of the river flow lead to a significant increase in POP transport downstream, basically in the particulate phase, as a result of the increase in runoff, lixiviation, dragging of soils and the mobilization of sediments (e.g. sediment stored in the dams). Flood episodes are thus contributing greatly to the total amounts of POPs annually discharged by the river into the Mediterranean.
In the second chapter, the stratified vertical profile of the river in the mouth was studied as an example of the salt wedge estuaries developed in the Mediterranean. The salt wedge in the Ebro produces a great change in the vertical profiles of various physicochemical variables, as well as the concentrations of organic matter and POPs. Flocculation caused by the salinity gradient leads to an increase of suspended organic matter in the salt wedge and the mixing of particles from different sources (organic and mineral). In the salt wedge, this material is also mixed with marine suspended particulate matter and the concentrations of POPs in the suspended solids decrease. However, the salt wedge entraps a large amount of POPs due to the low renewal levels and lack of movement and, thus, keeps the pollutants in the estuary. Although the interface between fresh and salty waters has a crucial role in the transformation and accumulation of organic matter, the results do not show important maxima of POPs in this region.
Finally, in the last chapter, from a more general standpoint, an assessment of POPs contamination and ecological risk in the Mediterranean was conducted, using the superficial sediments as indicators. In spite of all the limitations on gathering the existing information (lack of data in the eastern and southern Mediterranean, non-existence of standardised sampling and analytical procedures and ecotoxicological quality standards), some trends can be discerned. Geographically, it has been proved that POPs pollution in the Mediterranean sediments is a local problem and important in some urban and industrial locations, river discharge areas and semi-enclosed zones (harbours and coastal lagoons). In spite of this, the concentrations and their ecological risk are falling rapidly in open sea areas. Temporally, despite the great variability of data, a general decrease of POPs concentrations since the 1970s can be seen.
Style APA, Harvard, Vancouver, ISO itp.
35

Martelli, Arabella. "Première mesure de la production de WZ avec le détecteur CMS au LHC". Phd thesis, Ecole Polytechnique X, 2012. http://pastel.archives-ouvertes.fr/pastel-00671521.

Pełny tekst źródła
Streszczenie:
This thesis presents the first measurement of WZ production with the CMS detector at the LHC using the leptonic decay modes. The response of the electromagnetic calorimeter (ECAL) is studied through the measurement of the stopping power of muons in the ECAL lead tungstate. Electrons are validated with the first data so that they can be used in all analyses. With cosmic-ray data, the stopping power of muons crossing the lead tungstate of the electromagnetic calorimeter was measured over the momentum range 5-1000 GeV/c. The result, compatible with expectations, allowed the ECAL energy scale, previously determined with a 120 GeV/c electron beam, to be validated in the sub-GeV region, in agreement at the level of 1.004 +0.002/−0.003 (stat.) ± 0.019 (syst.). The data from the first LHC collisions allowed the verification of the electron reconstruction algorithms, in particular the determination of track seeds in the innermost part of the tracker. The algorithms were optimized on electrons from W boson decays and fully validated with about 14 pb⁻¹ of data. The physics measurement is the production cross section of associated WZ bosons in proton-proton collisions at √s = 7 TeV. The clear signature of the decay into leptons allows an efficient signal extraction and background rejection for each channel considered. The first event was observed with 36 pb⁻¹ of data. With 1.09 fb⁻¹, the WZ production cross section was measured for the first time at √s = 7 TeV, σ(WZ) = 19.11 +3.30/−2.53 (stat.) ± 1.10 (syst.) ± 1.15 (lumi.) pb, and found to be in agreement with the Standard Model prediction (18.57 +0.75/−0.58 pb at NLO).
Style APA, Harvard, Vancouver, ISO itp.
36

KIM, Geun-Beom. "Recherche des sélectrons, neutralinos et squarks dans le cadre du modèle GMSB avec le détecteur CMS. Etude de la compression sans pertes de données provenant du calorimètre électromagnétique". Phd thesis, Université Louis Pasteur - Strasbourg I, 2001. http://tel.archives-ouvertes.fr/tel-00001120.

Pełny tekst źródła
Streszczenie:
The subject of this thesis is the search for selectrons, neutralinos and squarks within the GMSB (Gauge Mediated Supersymmetry Breaking) model with the CMS detector, and the study of lossless data compression. With a proton-bunch crossing frequency of 40 MHz, about 10^9 proton-proton interactions per second take place at the nominal luminosity of 10^34 cm⁻² s⁻¹. The design of the CMS trigger and data acquisition system imposes very restrictive conditions on the output data. The average output event rate must be below 100 kHz. The average size of an event coming from the electromagnetic calorimeter must be below 100 kilobytes. This thesis summarizes to what extent an additional reduction of the data volume can be applied to the data thus obtained, using lossless data compression techniques. The major goals of the LHC physics programme are to understand the origin of particle masses and to test the hypothesis of the existence of a fundamental symmetry between fermions and bosons, supersymmetry (SUSY). In this thesis we show how supersymmetric particles can be searched for experimentally at the LHC within the framework of GMSB (Gauge Mediated SUSY Breaking) models.
Style APA, Harvard, Vancouver, ISO itp.
37

Plestina, Roko. "Evidence pour une nouvelle particule semblable au boson de Higgs du modèle standard et se désintégrant en quatre leptons dans l'expérience CMS". Phd thesis, Ecole Polytechnique X, 2013. http://pastel.archives-ouvertes.fr/pastel-00827458.

Pełny tekst źródła
Streszczenie:
This thesis presents the evidence obtained in the CMS experiment for a new boson in the H→ZZ channel and the contribution to the discovery of this new boson, at a mass close to 125 GeV, with the CMS experiment at CERN. The measurement of its properties is reviewed. The results are obtained from an inclusive analysis of the H→ZZ→4ℓ channel, i.e. where each of the Z bosons decays into a pair of leptons (ℓ), electrons or muons. The Higgs boson search covers all mass hypotheses in the range 110 < mH < 1000 GeV. The analysis uses the proton-proton collision data recorded by the CMS detector at the LHC collider, corresponding to integrated luminosities of 5.1 fb⁻¹ at √s = 7 TeV and 12.2 fb⁻¹ at √s = 8 TeV. The new boson is observed with a statistical significance above the expected background of 4.5 standard deviations. The signal strength μ, normalized to the expectation for the Standard Model Higgs boson, is measured to be μ = 0.80 +0.35/−0.28 at 126 GeV. A precise measurement of the mass of the new boson has been performed, giving 126.2 ± 0.6 (stat) ± 0.2 (syst) GeV. The hypothesis of a scalar 0+ boson is in agreement with the observation. The experimental data disfavour the pseudoscalar 0− hypothesis with a CLs of 2.4%. No other significant excess is observed, and 95% confidence level upper exclusion limits are obtained for the ranges 113-116 GeV and 129-720 GeV, while the expected exclusion range in the absence of the Higgs boson is 118-670 GeV. For this thesis, particular emphasis was placed on lepton isolation. Lepton isolation is one of the key observables on the road to the discovery. At the same time, isolation is very sensitive to the pile-up conditions of the LHC machine. This thesis establishes a robust method that allows the effect of pile-up on isolation to be marginalized. The method is now used across the various CMS analyses. Particular attention was also given to the measurement of the efficiencies of lepton identification, isolation and impact-parameter requirements directly from data, using leptonic Z boson decays. These measurements were used for the final corrections applied to leptons in the computation of the statistical significance of the excess of four-lepton events.
Style APA, Harvard, Vancouver, ISO itp.
38

Troska, Jan Kevin. "Radiation-hard optoelectronic data transfer for the CMS tracker". Thesis, Imperial College London, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.313621.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
39

Ravat, O. "Etude du calorimètre électromagnétique de l'expérience CMS et recherche de boson de Higgs neutres dans le canal de production associée". Phd thesis, Université Claude Bernard - Lyon I, 2004. http://tel.archives-ouvertes.fr/tel-00011561.

Pełny tekst źródła
Streszczenie:
The quest for the Higgs boson, witness of electroweak symmetry breaking, is one of the main goals of the CMS detector, which will be installed at the future proton-proton collider LHC. In this thesis, the lead tungstate crystal electromagnetic calorimeter is put to use.

On the one hand, the performance of the calorimeter was evaluated using the data taken during the test beams of summer 2003, and the choices concerning the different readout electronics were made.

On the other hand, the capabilities of the electromagnetic calorimeter were used to evaluate the observation potential of the Standard Model Higgs boson in the so-called associated production channel: with a mass between 115 and 150 GeV, it decays into two photons and is produced in association with a W or Z boson decaying into electrons or muons. The presence of the charged lepton(s), which makes it possible to locate the interaction vertex, improves the photon-photon mass measurement with respect to the inclusive channel. This signal as well as the numerous backgrounds were simulated and reconstructed. The analysis of the events makes it possible to confirm the observation of the Standard Model Higgs boson in the associated production channel. A study within the MSSM framework is also performed.
Style APA, Harvard, Vancouver, ISO itp.
40

Mocellin, Paolo. "CCS and EOR hazard analysis. Experimental investigation and modeling of multiphase CO2 pressurized releases". Doctoral thesis, Università degli studi di Padova, 2017. http://hdl.handle.net/11577/3422290.

Pełny tekst źródła
Streszczenie:
CO2 handling operations are gaining importance in the worldwide discussion, mainly because of the large-scale strategies currently being developed to tackle Global Warming. In this sense, CO2 is considered one of the most important GHGs (Greenhouse Gases), since it is emitted in large quantities from anthropogenic sources. These emissions arise mainly from the combustion of fossil fuels and biomass in the power generation, gasification, industrial process, natural gas processing, petroleum refining, building and transport sectors. The deployment of the Carbon Capture and Storage (CCS) technique is currently under analysis as a means of limiting these emissions and thus the atmospheric concentration of CO2. The CCS chain is subdivided into three main systems: the capture and compression system, the transport system and the storage system. This technology therefore involves handling the CO2 in order to store it in a reservoir, instead of allowing its release to the atmosphere, where it contributes to Climate Change. An essential element of the CCS chain is therefore the transportation of large amounts of captured CO2 to the storage site. This is usually achieved by means of large infrastructures, mainly consisting of pressurized pipelines that may typically be several hundred kilometers long. Given the significant amounts of CO2 involved, and because this substance is an asphyxiant at high concentrations, the safety of CO2 pipelines is of primary importance for the public acceptance of CCS as a viable means of tackling the impact of Global Warming. This work therefore starts from the observation that central to the QRA (Quantitative Risk Assessment) procedure for a pressurized pipeline carrying CO2 is the accurate prediction of the flow parameters of an accidental release induced by a rupture. From this point of view, there is naturally the possibility of pipeline rupture, which may be driven by many factors such as human errors, component failures, corrosion, earthquakes and many other external interferences. As regards the modeling of CO2 releases, the lack of a comprehensive model able to handle all the phenomena expected to take place during a rapid depressurization should be underlined. In the case of a CO2 pipeline, the model must be capable of handling multiphase flows, also at sonic conditions, as well as the peculiar thermodynamic behavior that may involve the occurrence of the solid phase. The main target is to have a comprehensive, reliable and easily applicable (from a QRA perspective) model to be used in assessing hazard profiles related to the rapid depressurization of a CO2 pipeline. In this sense, available models lack the full dynamics and the formation and sublimation of solid CO2, except for sophisticated and detailed CFD (Computational Fluid Dynamics) simulations that are substantially inappropriate for facing emergencies and for the investigation of general scenarios. Model availability is also still limited by the lack of experimental data needed to drive the model formulation and its validation. The few accessible experimental series give contradictory results as regards the peculiar CO2 behavior under transient pressure fields and under choked-flow conditions, thus requiring further investigation. Starting from these considerations, the present study aims to cover the existing gaps concerning the availability of a descriptive model.
The study is introduced by a detailed theoretical investigation of specific phenomena the CO2 undergoes when subjected to a rapid depressurization. Examples of these may include the occurrence of multiple choked flow states depending on the established thermodynamic equilibrium (liquid-vapor, solid-vapor) and the investigation of specific thermodynamic pathways that are linked to the solid phase appearance. A section pertaining to the experimental campaign follows. In this sense the data availability is enhanced by the collection of self-made data series especially linked to moderate pressures (up to 65-70 barg). The experimental arrangement allows for the investigation of different charging states, matching the operative CO2 conditions. Collected data are then used to support the model development in the assessment of specific discharge and thermodynamic parameters that are needed to close the mathematical structure. The proposed model, that is the main target of this investigation, is illustrated and discussed in a specific section with details on the adopted assumptions, the mathematical structure and the numerical approach. The structure covers all main mechanisms related to the rapid CO2 depressurization including phase change mechanisms (boiling, solidification and sublimation), heat transfer and friction effects as well as a reliable thermodynamic description of the expansion pathway to atmospheric conditions. Results are first checked against the collected laboratory scale experimental data especially in what concerning the prediction of the main depressurization variables requested by the QRA procedure (stagnation and orifice pressure and temperature profiles, mass flow rate evolution, total discharge time). A model extension oriented to large scale geometries and different orifice sizes and operative conditions is then proposed. Main discharge parameters are analyzed and their sensitivity to variations in the domain and orifice sizes is assessed with specific light on the relative amounts of dense phase produced and the total discharge time. The mathematical model is then supplemented of some features concerning non-equilibrium phenomena that do not allow for a mere thermodynamic description. Among these the behavior in the spinodal region, the structural entropy increase in the passage across the triple point and the irreversibilities in the phase change mechanisms are investigated. The target is to firstly check if the additional complication is reasonable from the QRA perspective to then move to variations in the release parameters that emerge because of this extended approach. A final section is dedicated to the model application to some real existing CCS and EOR (Ehnanced Oil Recovery) projects worldwide making available the main discharge parameters and profiles for different pipeline lengths and orifice sizes. A specific investigation on the solid CO2 is performed in addition to the collection of useful comprehensive parameters to be used in QRA studies.
The present work stems from the need to fill some existing gaps in Quantitative Risk Assessment (QRA) applied to CO2 handling infrastructures. The recent spread of complex systems for CO2 capture and storage and for the flushing of depleted oil wells has raised the problem of guaranteeing adequate safety standards during pipeline transport, given the asphyxiant character of the substance handled. Preliminary and operational QRA applied to CO2 suffers from the lack of ad-hoc models and of supporting experimental data needed to develop suitable computational tools describing rapid depressurization phenomena. CO2 is indeed characterized by a peculiar thermodynamic behavior that, during depressurization, may lead to the appearance of liquid and solid phases, making the traditional models currently available unsuitable and inaccurate. To address these shortcomings, the research work involved the collection of experimental data on rapid CO2 depressurizations and the formulation of a comprehensive model for QRA purposes. The experimental apparatus made it possible to investigate the mechanical and thermal transients that follow a pressurized release, as well as to shed light on the competing heat-transfer mechanisms that determine the resulting expansion transformation. Data on liquid charges also highlighted the peculiarities arising from phase-change mechanisms in determining the pressure and temperature profiles inside the vessel and at the orifice. To support model development, both the discharge coefficient and the details of the expansion transformation were derived as a function of the vessel charging conditions. The proposed model consists of a complex system of balance equations combined with specific correlations describing the heat-transfer and phase-transition mechanisms which, in the case of CO2, may involve boiling, solidification and sublimation. Dedicated equations of state for CO2 contribute to a robust description of the phenomenon. Particular attention was paid to the preliminary investigation of the fluid dynamics of flows subject to significant thermal gradients and to the peculiar behavior of CO2 under choked-flow conditions. The quantitative and qualitative agreement with the collected experimental data is good. Moreover, the model made it possible to derive details on the degree of reversibility of the expansion paths and on the main parameters required in QRA, such as the pressure and temperature profiles, the evolution of the mass flow rate released by the system and the total discharge time. The extension of the model to real-scale domains addressed the influence of the operating conditions, of the pipeline size and of the discharge-hole size on the evolution of the depressurization phenomenon. The results shed light on the conditions that determine the appearance of the solid phase during the release, often neglected by traditional models. Several simulations showed that both the operating conditions and the orifice size are the key parameters governing the phase-change processes and the expansion modes.
Moreover, the analysis made it possible to establish under which conditions the whole modelling can be simplified by assuming isothermal behavior and negligible heat transfer, showing that very long pipelines with small holes satisfy these requirements. Finally, refining the model by introducing concepts related both to non-equilibrium phenomena and to the irreversible processes accompanying phase transitions showed that, for moderate pressures and small pipelines, this is entirely superfluous. In the other cases, instead, the adoption of this supplementary approach reveals important deviations in the estimate of both the thermal dynamics and the total discharge time.
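Since the abstract repeatedly refers to QRA outputs such as the mass flow rate evolution and the total discharge time, a minimal, heavily simplified sketch may help fix ideas. It is not the thesis model: it assumes an ideal gas at constant temperature discharging through a choked orifice, and every numerical value (orifice area, vessel volume, discharge coefficient, initial state) is invented for illustration; the multiphase and solid-CO2 physics that the work actually addresses is deliberately left out.

```python
# Illustrative vessel-blowdown sketch (NOT the thesis model): isothermal ideal gas,
# choked orifice flow. All parameter values below are assumptions for demonstration.
import numpy as np

R = 8.314      # J/(mol K)
M = 0.044      # kg/mol, CO2 molar mass
gamma = 1.3    # heat-capacity ratio, taken constant (rough value for gaseous CO2)
Cd = 0.85      # discharge coefficient (assumed)
A = 1e-4       # orifice area, m^2 (assumed)
V = 10.0       # vessel volume, m^3 (assumed)
T = 280.0      # gas temperature, K, held constant for simplicity
P0 = 60e5      # initial pressure, Pa (assumed)
P_atm = 1.013e5

def choked_mass_flow(P):
    """Sonic (choked) mass flow rate through the orifice for an ideal gas at pressure P."""
    rho = P * M / (R * T)
    return Cd * A * np.sqrt(gamma * rho * P * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (gamma - 1.0)))

# Explicit Euler integration of dm/dt = -mdot, with P = m R T / (M V)
dt, t = 0.1, 0.0
m = P0 * V * M / (R * T)
P = P0
crit = ((gamma + 1.0) / 2.0) ** (gamma / (gamma - 1.0))  # critical pressure ratio
while P > crit * P_atm:          # flow stays choked above the critical ratio
    m -= choked_mass_flow(P) * dt
    P = m * R * T / (M * V)
    t += dt

print(f"choked phase ends after ~{t:.0f} s at P ~ {P/1e5:.1f} bar")
```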
Style APA, Harvard, Vancouver, ISO itp.
41

Regnard, Simon. "Measurements of Higgs boson properties in the four-lepton final state at sqrt(s) = 13 TeV with the CMS experiment at the LHC". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLX081/document.

Pełny tekst źródła
Streszczenie:
This thesis presents a study of Higgs boson production in proton-proton collisions at sqrt(s) = 13 TeV recorded with the CMS detector at the CERN Large Hadron Collider (LHC), exploiting the decay channel into a pair of Z bosons that in turn decay into pairs of electrons or muons (H->ZZ->4l, l = e,mu). This work takes place in the context of the beginning of Run II of the LHC, a new data-taking period that started in 2015 after a two-year shutdown. The restart is marked by an increase of the centre-of-mass energy from 8 TeV to 13 TeV and a reduction of the proton bunch spacing from 50 ns to 25 ns. These new parameters both increase the luminosity and place unprecedented constraints on the triggering, reconstruction and analysis of pp collision data. A significant effort is therefore devoted to improving and re-optimizing the CMS trigger system for Run II, with emphasis on electron reconstruction and selection and on the preparation of multi-lepton trigger paths that preserve a maximal efficiency for the H->ZZ->4l channel. Secondly, the offline electron and muon selection algorithms are optimized and their efficiencies are measured in data, while the selection logic for four-lepton candidates is improved. In order to extract rare Higgs boson production modes such as vector-boson fusion, VH ("Higgsstrahlung") production and ttH associated production, a new classification of the selected events into exclusive categories is introduced, based on discriminants using matrix-element calculations and jet flavour tagging. The results of the analysis of the first 13 TeV data are presented for data sets recorded in 2015 and early 2016, corresponding to integrated luminosities of 2.8 fb-1 and 12.9 fb-1, respectively. The Higgs boson is independently rediscovered at the new energy. The signal strength relative to the standard model prediction, the mass and the decay width of the boson are measured, together with a set of parameters describing the contributions of its main expected production modes. All results are in good agreement with the standard model predictions for a 125 GeV Higgs boson within the measurement uncertainties, which are dominated by their statistical component with the current data set. Finally, another resonance decaying to four leptons is searched for at high mass, and no significant excess is observed.
This thesis reports a study of Higgs boson production in proton-proton collisions at sqrt(s) = 13 TeV recorded with the CMS detector at the CERN Large Hadron Collider (LHC), exploiting the decay channel into a pair of Z bosons that in turn decay into pairs of electrons or muons (H->ZZ->4l, l = e,mu). This work is carried out in the context of the beginning of Run II of the LHC, a new data-taking period that started in 2015, following a two-year-long shutdown. This restart is marked by an increase of the centre-of-mass energy from 8 TeV to 13 TeV, and a narrowing of the spacing of proton bunches from 50 ns to 25 ns. These new parameters both increase the luminosity and set new constraints on the triggering, reconstruction and analysis of pp collision events. Therefore, considerable effort is devoted to the improvement and reoptimization of the CMS trigger system for Run II, focusing on the reconstruction and selection of electrons and on the preparation of multilepton trigger paths that preserve a maximal efficiency for the H->ZZ->4l channel. Secondly, the offline algorithms for electron and muon selection are optimized and their efficiencies are measured in data, while the selection logic of four-lepton candidates is improved. In order to extract rare production modes of the Higgs boson such as vector boson fusion, VH associated production and ttH associated production, a new classification of selected events into exclusive categories is introduced, using discriminants based on matrix-element calculations and jet flavour tagging. Results of the analysis of the first 13 TeV data are presented for two data sets recorded in 2015 and early 2016, corresponding to integrated luminosities of 2.8 fb-1 and 12.9 fb-1, respectively. A standalone rediscovery of the Higgs boson in the four-lepton channel is achieved at the new energy. The signal strength relative to the standard model prediction, the mass and decay width of the boson, and a set of parameters describing the contributions of its main predicted production modes are measured. All results are in good agreement with standard model expectations for a 125 GeV Higgs boson within the uncertainties, which are dominated by their statistical component with the current data set. Finally, a search for an additional high-mass resonance decaying to four leptons is performed, and no significant excess is observed.
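As an aside, the signal-strength measurement mentioned above amounts to maximizing a likelihood in which the expected signal yield is scaled by a parameter mu. A toy sketch of such a binned Poisson fit is given below; the bin contents are invented numbers and the code is in no way the CMS statistical machinery.

```python
# Toy signal-strength fit: maximize a binned Poisson likelihood over an m4l-like spectrum.
# Yields and pseudo-data are invented for illustration only.
import numpy as np
from scipy.optimize import minimize_scalar

signal     = np.array([0.2, 1.5, 4.0, 1.5, 0.2])   # expected SM Higgs yields per bin (made up)
background = np.array([1.0, 1.1, 1.2, 1.1, 1.0])   # expected continuum yields (made up)
observed   = np.array([1, 3, 6, 2, 1])             # pseudo-data counts (made up)

def nll(mu):
    """Negative log-likelihood (constant terms dropped) for expectation mu*S + B."""
    expected = mu * signal + background
    return np.sum(expected - observed * np.log(expected))

fit = minimize_scalar(nll, bounds=(0.0, 5.0), method="bounded")
mu_hat = fit.x

# Approximate 68% interval: points where -lnL rises by 0.5 above the minimum
scan = np.linspace(0.0, 5.0, 501)
inside = scan[np.array([nll(m) for m in scan]) - fit.fun < 0.5]
print(f"mu_hat = {mu_hat:.2f}  (68% interval ~ [{inside.min():.2f}, {inside.max():.2f}])")
```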
Style APA, Harvard, Vancouver, ISO itp.
42

Brun, Hugues. "La reconstruction et l'identification des photons dans l'expérience CMS au LHC : applications à la recherche de bosons de Higgs dans le canal H $\rightarrow \gamma\gamma$". Phd thesis, Université Claude Bernard - Lyon I, 2012. http://tel.archives-ouvertes.fr/tel-00916276.

Pełny tekst źródła
Streszczenie:
The Standard Model of particle physics successfully describes the experimental data. The origin of the mass of the W and Z bosons is explained by the Higgs mechanism, which breaks the gauge symmetry of the electroweak interaction. This mechanism, however, predicts the existence of a particle, called the Higgs boson, which has not been observed so far. This particle is searched for at the LHC, in particular by the ATLAS and CMS experiments. The first results using LHC data exclude, at the 95% confidence level, a Higgs boson with the Standard Model cross section between 128 and 600 GeV/c$^2$, and earlier LEP results excluded a Higgs boson lighter than 114.4 GeV/c$^2$. In the remaining mass range, the decay of the Higgs boson into two photons is the ideal channel for the search because, despite its small branching ratio (a few per mille) and thanks to its clean final state, it produces a narrow resonance in the invariant-mass spectrum of diphoton events. The way a photon is reconstructed in CMS is first described, together with the understanding of this reconstruction obtained from the first LHC data. Because of the narrow width of the Higgs resonance at low mass, particular attention must be paid to the photon energy resolution; the corrections applied to this energy are therefore studied. Then, since neutral pions decaying into two photons are the main background to photons in the data, we show how the shape of the energy deposit in the CMS electromagnetic calorimeter can be used, with an artificial neural network, to discriminate these neutral pions from genuine photons. Quantum chromodynamics is the source of a large number of diphoton events, which form the bulk of the background to the Higgs boson decay. Measuring the cross section of these processes and their kinematics also helps in understanding the Standard Model. The possibility of using the discriminating power of the neural network to measure the number of diphoton events in the data has been studied. Neutral mesons are also a background to photons coming from Higgs boson decays. The improvement of the identification obtained with a cut on the neural-network output has therefore been evaluated: the consequence of this improvement in terms of exclusion limits is presented using the first 1.6 fb$^{-1}$ of the 2011 data recorded by the CMS experiment.
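To illustrate the kind of neural-network discriminant described above (prompt photon versus neutral pion, based on calorimeter shower-shape information), here is a toy sketch using scikit-learn. The two input variables, their distributions and the network configuration are assumptions for demonstration only, not the CMS implementation.

```python
# Toy photon / pi0 discrimination with a small MLP. Variable names, shapes and
# the network architecture are invented; this is not the thesis neural network.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
# prompt photons: narrower, more compact energy deposits (made-up distributions)
photons = np.column_stack([rng.normal(0.009, 0.001, n), rng.normal(0.95, 0.03, n)])
# pi0 -> gamma gamma: slightly wider, less compact deposits (made-up distributions)
pions   = np.column_stack([rng.normal(0.012, 0.002, n), rng.normal(0.88, 0.05, n)])

X = np.vstack([photons, pions])
y = np.concatenate([np.ones(n), np.zeros(n)])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
score = clf.predict_proba(X_test)[:, 1]   # network output used as the discriminant
print("ROC AUC:", round(roc_auc_score(y_test, score), 3))
```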
Style APA, Harvard, Vancouver, ISO itp.
43

Pujantell, Albós Josep Antoni. "Les manifestacions del canvi global en àrees de muntanya mediterrània. Un cas d’estudi al Baix Montseny". Doctoral thesis, Universitat Autònoma de Barcelona, 2012. http://hdl.handle.net/10803/117591.

Pełny tekst źródła
Streszczenie:
Global change is a global-scale process whose complexity makes it necessary to study it at multiple scales, including the local one. The Mediterranean basin is a particularly appropriate region for such a study: historically it has seen an important interaction between society and environment, responsible for its current diversity of landscapes. From the nineteenth century onwards, and especially during the twentieth century, the mountain areas of Catalonia and of the Mediterranean in general have undergone great socio-economic changes, derived from the transition from a rural society to an industrial one, which have resulted in the abandonment of dispersed settlement and its associated primary activities in favour of industrialized urban areas. These changes are regarded as responsible for important socio-ecological transformations, acting as driving forces of land-use and land-cover change and of landscape transformation. The biogeographical location of the study area, between the Montnegre and Montseny ranges, is very appropriate for the study of global change at the local scale. The duality between the dispersed settlement of the study area, linked to primary activities, and the town of Sant Celoni, historically linked to craft and commercial activities, is of great interest for studying the industrialization process. The analysis of these processes starts from a holistic and interdisciplinary historical perspective that allows the socio-economic and biophysical components of socio-ecological systems to be integrated within a broad time frame. The quantification and cartographic representation of land-use and land-cover changes between 1956 and 2010 has been one of the central axes of this work, complemented with data from the fiscal documentation (amillaraments) available for the study area in the mid-nineteenth century. The cartographic results obtained for 1956 and 2010 have made it possible to analyse the changes in landscape configuration produced by land-use and land-cover change. The most outstanding process is the marked decrease in agricultural covers, responsible for a loss of open spaces and associated with the regression of primary activities. By contrast, the covers associated with residential and industrial uses have grown markedly, as a result of the industrial and demographic growth of Sant Celoni and of a suburbanization process at the scale of the Barcelona metropolitan region. One of the consequences of the regression of primary activities is the loss of the associated socio-ecological heritage, the result of the interaction between human beings and the environment. This work studies this heritage, linked to a specific spatial and temporal frame and inscribed in a particular worldview, which the industrialization process has weakened and transformed. Starting from this context, the main objective of this thesis is to describe and analyse the components of the Global Change process at a local scale, taking the municipality of Sant Celoni as a case study.
Global change is a global-scale process whose complexity requires study at multiple scales, including the local one. The Mediterranean region is particularly suitable for such a study, because historically there has been a significant interaction between society and environment, responsible for its current landscape diversity. Since the nineteenth century, and especially during the twentieth century, the mountain areas of Catalonia and the Mediterranean have experienced major socio-economic changes, related to the transition from an agrarian society to an industrial one, which have caused the abandonment of the dispersed farmhouses and their associated agrarian activities in favor of industrialized urban areas. These changes are considered to be responsible for important socio-ecological transformations, acting as driving forces of land-use and land-cover change as well as of landscape transformation. The biogeographical location of the study area, between the Montseny and the Montnegre mountains, is well suited to the study of global change at a local level. The duality between the dispersed farmhouses and the town of Sant Celoni, historically linked to craftwork and trade, is of great interest for studying this industrialization process. The analysis of these processes requires a holistic and interdisciplinary historical perspective, allowing the socio-economic and biophysical components of socio-ecological systems to be integrated. The quantification and cartographic representation of changes in land use and land cover between 1956 and 2010 was one of the central aims of this thesis, and the results have been compared with data from tax documentation (amillaraments) available for the study area in the mid-nineteenth century. The maps obtained for 1956 and 2010 made it possible to analyze changes in land use and land cover, and changes in landscape configuration. The most important process detected is the significant reduction of cropland, related to the regression of the primary sector and responsible for the loss of open spaces. Conversely, land covers associated with residential and industrial uses have grown as a result of the industrialization and population growth of Sant Celoni, as well as of suburbanization processes at the metropolitan-region scale. One consequence of the decline of primary activities is the loss of their associated socio-ecological heritage, which results from the interaction between humans and the environment. In this study, this heritage is considered to be linked to a particular spatial and temporal context, and embedded in a particular worldview, weakened and transformed by industrialization. In this context, the main objective of this thesis is to describe and analyze the components of Global Change at a local scale, taking the municipality of Sant Celoni as a case study.
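The land-use and land-cover change quantification described above is essentially a cross-tabulation of two classified maps. A minimal sketch of such a transition matrix is given below; the class codes and the tiny example grids standing in for the 1956 and 2010 maps are entirely hypothetical.

```python
# Toy land-cover transition matrix between two dates. The class codes and grids
# are invented; real work would use the full classified rasters.
import numpy as np
import pandas as pd

classes = {1: "forest", 2: "cropland", 3: "urban"}
lc_1956 = np.array([[2, 2, 1],
                    [2, 1, 1],
                    [2, 2, 1]])
lc_2010 = np.array([[1, 3, 1],
                    [1, 1, 1],
                    [3, 3, 1]])

labels = sorted(classes)
matrix = pd.DataFrame(0, index=[classes[c] for c in labels],
                      columns=[classes[c] for c in labels])
for c_old in labels:
    for c_new in labels:
        matrix.loc[classes[c_old], classes[c_new]] = int(
            np.sum((lc_1956 == c_old) & (lc_2010 == c_new)))

# Rows = 1956 class, columns = 2010 class; off-diagonal cells are the changes.
print(matrix)
```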
Style APA, Harvard, Vancouver, ISO itp.
44

Chauveau, Eric. "Analyse, par modélisation, de la sélectivité réactionnelle dans les procédés d'électrolyse pulsée : cas d'un dépôt métallique". Grenoble INPG, 1987. http://www.theses.fr/1987INPG0141.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
45

DONATO, SILVIO. "Search for the Standard Model Higgs boson decaying to b quarks with the CMS experiment". Doctoral thesis, Scuola Normale Superiore, 2017. http://hdl.handle.net/11384/85894.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
46

SORRENTINO, GIULIA. "Misura della sezione d'urto di produzione Zγ nel canale Z → νν e limiti sugli accoppiamenti di gauge neutri tripli anomali in collisioni protone-protone a √s = 13 TeV con l'esperimento CMS a LHC". Doctoral thesis, Università degli Studi di Trieste, 2023. https://hdl.handle.net/11368/3041027.

Pełny tekst źródła
Streszczenie:
The study of the associated production of two or more bosons is a powerful and effective test of the electroweak sector of the Standard Model (SM). Processes of this kind are indeed sensitive to possible new-physics contributions, which can be measured as discrepancies with respect to the SM predictions. Using an Effective Field Theory (EFT), such effects can be parameterized and studied in the form of anomalous gauge couplings. EFTs do not depend on a specific theoretical model and therefore, in searches for new physics, have greater exploratory power than specific beyond-the-SM theories. The associated production of a Z boson and a photon, in the Z→νν decay channel, plays a particularly important role: the final state of this process consists of a single photon, since the presence of neutrinos cannot be directly measured in a detector like CMS. The same final state is, however, shared by many other searches for dark matter in proton-proton collisions, in which pairs of possible dark-matter particles, leaving no signal in the detectors, could mimic the presence of neutrinos. In this kind of analysis the Z(→ νν)γ processes are the main source of background events, and a precise measurement of their production is therefore necessary, not only for its ability to test the SM but also for the fundamental contribution it would provide to dark-matter searches. The Z(→ νν)γ production cross section was first measured by the experiments installed at the LEP and Tevatron colliders, using electron-positron and proton-antiproton events respectively. The most recent results were obtained at the LHC by the CMS and ATLAS experiments, in proton-proton collisions at centre-of-mass energies of 8 TeV (for an integrated luminosity of boh.1 fb−1) and 13 TeV (for an integrated luminosity of 36.1 fb−1) respectively. For this work, the data collected by the CMS experiment during the whole LHC Run 2 at a centre-of-mass energy of 13 TeV were analysed, for an integrated luminosity of 137.6 fb-1. Multivariate techniques were implemented to identify the signal photon and, for the first time, photons detected in the forward regions of the electromagnetic calorimeter were also included in the phase space of interest. Their inclusion was made possible by a new algorithm, specifically developed for this analysis, that allows the rejection of beam-halo events, which would otherwise constitute the dominant, and irreducible, background in the forward regions. A beam-halo event occurs when a proton interacts with the walls of the vacuum pipes or with residual gas molecules inside them, producing a flux of muons travelling parallel to the beam line that can give rise to electromagnetic showers similar to those produced by photons. The expected cross section was computed by maximizing the likelihood function obtained from the distribution of the photon transverse momentum, and the impact of the uncertainties on the measurement was estimated. The expected limits on the anomalous gauge couplings hZ3 and hZ4 were extracted by parameterizing the signal with a quadratic function of hZ3 and hZ4.
The expected results provide limits that are more stringent by one order of magnitude than the expected limits obtained in the previous CMS analysis at 8 TeV, and by a factor varying between 1.4 and 2 than the expected limits obtained in the 13 TeV ATLAS analysis.
The study of multiboson production is a powerful and effective way to probe the electroweak sector of the Standard Model (SM). Multiboson processes are indeed sensitive to possible contributions beyond the SM (BSM) that would manifest themselves as deviations from the SM predictions, for example by enhancing the event yield in specific kinematic regions. These new-physics effects can be parameterized in terms of anomalous gauge couplings using the framework of Effective Field Theories (EFT), which has the great advantage of being model-independent and thus provides more exploratory power than a specific BSM theory. Among all the multiboson processes that can be explored, the associated production of a photon and a Z boson decaying into two neutrinos is of particular interest. Given that the neutrinos cannot be directly detected in a general-purpose detector like CMS, the signature of Z(→ νν)γ production consists of a single, isolated photon. The same monophoton signature is shared by several physics analyses that search for dark matter at hadron colliders, where pairs of dark matter candidates mimic the presence of neutrinos and where the SM Z(→ νν)γ processes therefore represent the most important background. A careful and precise measurement of Z(→ νν)γ production then becomes compelling, both for its strong capability of testing the SM and for the contribution it would provide to a wide range of fundamental physics searches. The Z(→ νν)γ cross section has been measured, and searches for anomalous gauge couplings have been performed, first by experiments operating at LEP and Tevatron, in electron-positron and proton-antiproton collisions respectively. The most recent results are those obtained at the LHC by the CMS and ATLAS collaborations: the CMS analysis was carried out at a centre-of-mass energy of 8 TeV, while the latest ATLAS results exploit the data collected during 2016 (with a luminosity of 36.1 fb−1) at a centre-of-mass energy of 13 TeV. The measurement of the Z(→ νν)γ cross section and the evaluation of the limits on the anomalous couplings presented in this work are performed using all the data collected by the CMS experiment during the whole Run 2 of the LHC, at a centre-of-mass energy of 13 TeV and for a total luminosity of 137.6 fb−1. Multivariate techniques have been implemented to identify the signal photon and, in addition, photons detected in the forward region of the CMS detector are taken into account for the very first time when defining the phase space. The inclusion of forward photons is made possible by a new algorithm, specifically developed for this analysis, that rejects beam-halo events, the otherwise dominant and irreducible background when looking at the monophoton signature in the forward region. A beam-halo event occurs when a proton interacts with residual gas molecules in the vacuum chamber or with the wall of the beam pipe, giving rise to a flow of secondary muons travelling along the beam line that can lead to electromagnetic showers similar to those produced by photons. The expected cross section is evaluated by performing a maximum-likelihood fit to the distribution of the transverse momentum of the photon, and the impact of the uncertainties is estimated. The expected limits on the anomalous triple neutral gauge couplings hZ3 and hZ4 are extracted by parameterizing the signal as a quadratic function of hZ3 and hZ4.
The expected results provide limits that are more stringent by one order of magnitude than the expected limits of the previous CMS analysis, and by a factor of about 1.4 to 2 than the expected limits of the 13 TeV ATLAS analysis.
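A rough sketch of the statistical idea behind the anomalous-coupling limits may be useful: the signal yield in the high-pT tail is written as a quadratic function of the coupling, and a likelihood scan defines the allowed interval. The coefficients, the background yield and the single-bin setup below are invented for illustration and do not reproduce the analysis.

```python
# Toy expected-limit scan on one anomalous coupling h3, with an invented quadratic
# yield parameterization and a one-bin Poisson likelihood. All numbers are made up.
import numpy as np

b_exp = 8.0            # expected SM yield in the tail bin (made up)
a, b = 50.0, 4.0e4     # linear and quadratic EFT coefficients (made up)
n_obs = 8              # Asimov-like observation set equal to the SM expectation

def yield_eft(h3):
    """Total expected yield as a quadratic function of the anomalous coupling."""
    return b_exp + a * h3 + b * h3 ** 2

def q(h3):
    """Twice the delta negative log-likelihood relative to the SM point h3 = 0."""
    mu, mu0 = yield_eft(h3), yield_eft(0.0)
    return 2.0 * ((mu - n_obs * np.log(mu)) - (mu0 - n_obs * np.log(mu0)))

scan = np.linspace(-0.05, 0.05, 2001)
allowed = scan[np.array([q(x) for x in scan]) < 3.84]   # 95% CL for one parameter
print(f"expected 95% CL interval on h3: [{allowed.min():.4f}, {allowed.max():.4f}]")
```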
Style APA, Harvard, Vancouver, ISO itp.
47

Ripp-Baudot, Isabelle. "Etude de détecteurs gazeux à micropistes pour le trajectographe de CMS et mesure de la masse du boson W dans l'expérience DELPHI". Habilitation à diriger des recherches, Université Louis Pasteur - Strasbourg I, 2004. http://tel.archives-ouvertes.fr/tel-00386060.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
48

Sarrazin-Baudoux, Christine. "Etude du mixage ionique dans un système à grande limite de solubilité : cas du Cuivre-Nickel, caractérisation de l'adhérence de ces revêtements sur substrat acier". Poitiers, 1987. http://www.theses.fr/1987POIT2305.

Pełny tekst źródła
Streszczenie:
Development of an ion-beam mixing technique combining the evaporation of successive thin films with ion implantation. Study of the ion mixing of Cu-Ni multilayers. Evidence of two main diffusion phenomena as a function of temperature. Analysis of the phases formed during ion mixing. Application of the ion-mixing technique as a deposition process.
Style APA, Harvard, Vancouver, ISO itp.
49

Cadamuro, Luca. "Search for Higgs boson pair production in the bbtautau decay channel with the CMS detector at the LHC". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLX059/document.

Pełny tekst źródła
Streszczenie:
This thesis presents a search for Higgs boson pair (HH) production using proton-proton collision data at sqrt(s) = 13 TeV recorded with the CMS detector at the CERN LHC. Events in which the two Higgs bosons decay into a pair of b quarks and a pair of tau leptons (HH -> bb tau+tau-) are used to explore both the resonant and nonresonant production mechanisms. The measurement of HH production is experimentally difficult because of the small cross section predicted by the standard model of particle physics. Its search is nevertheless motivated by the information it can reveal about the nature of electroweak symmetry breaking: HH production gives access to the trilinear self-coupling of the Higgs boson and, consequently, to the shape of the scalar potential. Moreover, it is sensitive to the presence of physics beyond the standard model. The presence of new resonances decaying to HH and of anomalous Higgs boson couplings are both investigated in this work. Tau leptons play a leading role in this search, and a considerable effort has been devoted to improving their selection efficiency in the trigger system of the CMS experiment. In particular, the Level-1 (L1) trigger was upgraded to cope with the new collision conditions of LHC Run II, marked by an increase of the centre-of-mass energy and of the instantaneous luminosity. This new trigger system makes it possible to develop a dedicated algorithm for the reconstruction of tau leptons decaying into hadrons (tauh) and a neutrino. This algorithm is based on an advanced dynamic energy-clustering technique and uses dedicated background-rejection criteria. Its development, implementation and commissioning for the LHC restart are presented here. The performance of the algorithm is first evaluated with a simulation and then measured with CMS data. Its excellent performance is an essential element of the search for HH production. The investigation of the HH -> bb tau+tau- process uses the three decay channels of the tau+tau- system with at least one tauh in the final state. The event selection and categorization are designed to optimize the sensitivity of the search, and multivariate analysis techniques are employed to separate the signal from the background. The results are presented using an integrated luminosity of 35.9 fb-1. They are compatible, within the experimental uncertainties, with the standard model predictions for the backgrounds. Upper limits on resonant HH production are evaluated and interpreted in the context of the minimal supersymmetric standard model. Upper limits on nonresonant production constrain the parameter space of anomalous Higgs boson couplings. The observed and expected upper limits correspond to about 30 and 25 times the standard model prediction, respectively. They represent one of the most sensitive results on HH production obtained so far at the LHC. Finally, the prospects for observing HH production at the LHC are discussed: the results obtained above are extrapolated to an integrated luminosity of 3000 fb-1 under different assumptions on the detector and analysis performance.
This thesis describes a search for Higgs boson pair (HH) production using proton-proton collision data collected at sqrt(s) = 13 TeV with the CMS experiment at the CERN LHC. Events with one Higgs boson decaying into two bottom quarks and the other decaying into two tau leptons (HH -> bb tau+tau-) are explored to investigate both resonant and nonresonant production mechanisms. The measurement of HH production is experimentally challenging because of the tiny cross section predicted by the standard model of particle physics (SM). However, this process can reveal invaluable information on the nature of electroweak symmetry breaking by giving access to the Higgs boson trilinear self-coupling and, consequently, to the shape of the scalar potential itself. Moreover, HH production is sensitive to the presence of physics beyond the SM. Both the presence of new states decaying to HH and of anomalous Higgs boson couplings are investigated in this work. Tau leptons have a key role in this search and considerable effort has been devoted to ensuring a high efficiency in their selection by the trigger system of the CMS experiment. In particular, the CMS Level-1 (L1) trigger was upgraded to face the increase in the centre-of-mass energy and instantaneous luminosity expected for the LHC Run II operations. The upgrade opened up the possibility of developing an efficient, dedicated algorithm to reconstruct tau leptons decaying to hadrons (tauh) and a neutrino. The tau algorithm implements a sophisticated dynamic energy-clustering technique and dedicated background-rejection criteria. Its development, optimisation, implementation, and commissioning for the LHC restart are presented. The algorithm performance is initially demonstrated using a simulation and subsequently measured with the data collected with the CMS experiment. The excellent performance achieved is an essential element of the search for HH production. The investigation of the HH -> bb tau+tau- process explores the three decay modes of the tau+tau- system with at least one tauh object in the final state. A dedicated event selection and categorisation is developed and optimised to enhance the sensitivity, and multivariate techniques are applied for the first time to these final states to separate the signal from the background. Results are derived using an integrated luminosity of 35.9 fb-1. They are found to be consistent, within uncertainties, with the SM background predictions. Upper limits on resonant production are set and interpreted in the context of the minimal supersymmetric standard model. Upper limits on nonresonant production constrain the parameter space for anomalous Higgs boson couplings. The observed and expected upper limits are about 30 and 25 times the SM prediction, respectively, corresponding to one of the most stringent limits set so far at the LHC. Finally, prospects for future measurements of HH production at the LHC are evaluated by extrapolating the current results to an integrated luminosity of 3000 fb-1 under different detector and analysis performance scenarios.
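For the extrapolation to 3000 fb-1 mentioned at the end, a common back-of-the-envelope assumption is that a statistics-dominated expected limit scales roughly as 1/sqrt(L). The snippet below applies only this naive scaling to the quoted expected limit; the thesis extrapolation instead relies on detailed detector and analysis performance scenarios.

```python
# Naive stats-only extrapolation of an expected upper limit with integrated luminosity.
# This is an assumption for illustration, not the procedure used in the thesis.
import math

limit_now = 25.0    # expected limit in units of the SM cross section (from the text)
lumi_now = 35.9     # fb-1
lumi_hl = 3000.0    # fb-1, HL-LHC target

limit_hl = limit_now * math.sqrt(lumi_now / lumi_hl)
print(f"stats-only extrapolated expected limit at {lumi_hl:.0f} fb-1: ~{limit_hl:.1f} x SM")
```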
Style APA, Harvard, Vancouver, ISO itp.
50

Houchu, Ludovic. "Identification des leptons tau dans leurs désintégrations hadroniques et recherche de particules supersymétriques se désintégrant en leptons tau avec le détecteur CMS au LHC". Phd thesis, Université Louis Pasteur - Strasbourg I, 2008. http://tel.archives-ouvertes.fr/tel-00391963.

Pełny tekst źródła
Streszczenie:
The subject of this thesis is the search for supersymmetric particles decaying into tau leptons with the CMS experiment at the LHC collider. The analysis is based on simulated data of proton-proton collisions at a centre-of-mass energy of √s = 14 TeV. The supersymmetric model adopted is mSUGRA. The events of interest contain at least two tau leptons, large missing transverse energy, and at least two high-momentum hadronic jets originating from quark fragmentation. In a test sample of data, processes other than those of the Standard Model are brought to light. Beforehand, work on the identification of tau leptons in their hadronic decay modes is carried out: a method is introduced to discriminate between hadronic tau jets and jets originating from a quark or a gluon, using reconstructed photon or neutral-pion candidates and a test on a ratio of pseudo-likelihoods.
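The "ratio of pseudo-likelihoods" mentioned above can be illustrated with a toy example: one-dimensional probability densities are built for tau jets and for quark/gluon jets from a single discriminating variable, and their ratio is used as the test statistic. The variable, its shapes and all numbers below are invented and are not the thesis implementation.

```python
# Toy likelihood-ratio discriminant for tau jets versus quark/gluon jets,
# built from histograms of one invented variable.
import numpy as np

rng = np.random.default_rng(1)
# e.g. number of reconstructed photon/pi0 candidates near the leading track (made-up shapes)
tau_jets = rng.poisson(1.0, 20000)   # tau jets: few nearby candidates
qcd_jets = rng.poisson(4.0, 20000)   # quark/gluon jets: more nearby candidates

bins = np.arange(0, 15)
pdf_tau, _ = np.histogram(tau_jets, bins=bins, density=True)
pdf_qcd, _ = np.histogram(qcd_jets, bins=bins, density=True)

def likelihood_ratio(n_candidates):
    """L_tau / L_qcd for one jet; larger values favour the tau hypothesis."""
    i = min(n_candidates, len(pdf_tau) - 1)
    return (pdf_tau[i] + 1e-9) / (pdf_qcd[i] + 1e-9)

for n in (0, 2, 5):
    print(n, "candidates -> ratio", round(likelihood_ratio(n), 3))
```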
Style APA, Harvard, Vancouver, ISO itp.
